Two papers to be published at ACL 2016

Release date: 2016-06-08

Two papers from the team of Associate Professor Zhao Hai of the Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering (IICE) will be published as long papers at the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), to be held in Berlin. ACL is the premier conference in natural language processing. As a CCF-A tier conference, its acceptance rate ranged from 19% to 26% between 2009 and 2015; this year, 240 long papers were accepted.

 

Details of the two papers follow. Both first authors are first-year graduate students supervised by Assoc. Prof. Zhao.

 

1. "Neural Word Segmentation Learning for Chinese"

Authors: Deng Cai, Hai Zhao*

Abstract: Most previous approaches to Chinese word segmentation formalize this problem as a character-based sequence labeling task, so that only contextual information within fixed-size local windows and simple interactions between adjacent tags can be captured. In this paper, we propose a novel neural network model which thoroughly eliminates context windows and can utilize complete segmentation history. Our model employs a gated combination neural network over characters to produce vector representations of word candidates, which are then given to a long short-term memory (LSTM) language scoring model. Experiments on benchmark datasets show that, unlike most existing approaches, our models achieve performance competitive with previous state-of-the-art methods without relying on feature engineering.
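The sketch below is a minimal, hedged illustration (not the authors' code) of the two components the abstract names: a gated combination network that composes a span of character embeddings into a word-candidate vector, and an LSTM that scores the resulting word sequence. The exact gating scheme, layer sizes, and the simple sum-of-scores sentence score are assumptions made for illustration.

import torch
import torch.nn as nn

class GatedCombination(nn.Module):
    """Compose a span of character embeddings into one word-candidate vector (illustrative gating)."""
    def __init__(self, dim):
        super().__init__()
        self.reset = nn.Linear(2 * dim, dim)   # reset gate over (state, char)
        self.update = nn.Linear(2 * dim, dim)  # update gate over (state, char)
        self.cand = nn.Linear(2 * dim, dim)    # candidate combination

    def forward(self, chars):                   # chars: (span_len, dim)
        state = chars[0]
        for c in chars[1:]:
            pair = torch.cat([state, c], dim=-1)
            r = torch.sigmoid(self.reset(pair))
            z = torch.sigmoid(self.update(pair))
            h = torch.tanh(self.cand(torch.cat([r * state, c], dim=-1)))
            state = (1 - z) * state + z * h     # gated interpolation
        return state                            # word-candidate vector

class SegmentationScorer(nn.Module):
    """Score one candidate segmentation with an LSTM over word-candidate vectors."""
    def __init__(self, n_chars, dim):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, dim)
        self.combine = GatedCombination(dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, char_ids, spans):          # spans: list of (start, end) word spans
        chars = self.char_emb(char_ids)          # (sent_len, dim)
        words = torch.stack([self.combine(chars[s:e]) for s, e in spans])
        out, _ = self.lstm(words.unsqueeze(0))   # links words through the segmentation history
        return self.score(out).sum()             # sentence-level segmentation score

# Toy usage: score one candidate segmentation of a 4-character sentence.
model = SegmentationScorer(n_chars=100, dim=32)
sent = torch.tensor([5, 17, 42, 8])
print(model(sent, spans=[(0, 2), (2, 4)]).item())

In the paper, scores of this kind are compared across alternative segmentations of the same sentence; decoding and training over the candidate space are omitted here.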




2. "Probabilistic Graph-based Dependency Parsing with Convolutional Neural Network"

Authors: Zhisong Zhang, Hai Zhao*, Lianhui Qin

Abstract: This paper presents neural probabilistic parsing models that explore up to third-order graph-based parsing with maximum likelihood training criteria. Two neural network extensions are exploited for performance improvement. First, a convolutional layer that absorbs the influences of all words in a sentence is used, so that sentence-level information can be effectively captured. Second, a linear layer is added to integrate neural models of different orders, trained with the perceptron method. The proposed parsers are evaluated on the English and Chinese Penn Treebanks and obtain competitive accuracies.
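As a hedged illustration of the first extension only, the sketch below (not the authors' implementation) scores head-modifier arcs on top of a convolutional layer that runs over the whole sentence, and trains with a maximum-likelihood criterion simplified to a softmax over candidate heads per modifier. Layer sizes, names, and this first-order head-selection simplification are assumptions; the higher-order models and the perceptron-trained integration layer are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvArcScorer(nn.Module):
    """First-order arc scorer with a sentence-wide convolutional layer (illustrative)."""
    def __init__(self, n_words, dim=64, window=3):
        super().__init__()
        self.emb = nn.Embedding(n_words, dim)
        # convolution over word positions, so each representation sees sentence-level context
        self.conv = nn.Conv1d(dim, dim, kernel_size=window, padding=window // 2)
        self.head_mlp = nn.Linear(dim, dim)
        self.mod_mlp = nn.Linear(dim, dim)

    def forward(self, word_ids):                      # word_ids: (sent_len,)
        x = self.emb(word_ids).t().unsqueeze(0)       # (1, dim, sent_len)
        h = torch.relu(self.conv(x)).squeeze(0).t()   # (sent_len, dim), sentence-aware
        heads = self.head_mlp(h)                      # candidate-head representations
        mods = self.mod_mlp(h)                        # modifier representations
        return mods @ heads.t()                       # (sent_len, sent_len) arc scores

# One maximum-likelihood training step on a toy sentence (index 0 acts as ROOT).
model = ConvArcScorer(n_words=1000)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
words = torch.tensor([1, 7, 3, 9, 2])
gold_heads = torch.tensor([0, 0, 1, 1, 3])            # gold head index for each word
optim.zero_grad()
scores = model(words)
loss = F.cross_entropy(scores, gold_heads)            # negative log-likelihood of gold arcs
loss.backward()
optim.step()
print(float(loss))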

