by Zhemin Zhu, Djoerd Hiemstra, and Peter Apers
Sequence labeling has wide applications in natural language processing and speech processing, but popular sequence labeling models suffer from known problems: hidden Markov models (HMMs) are generative and cannot encode transition features; conditional Markov models (CMMs) suffer from the label bias problem; and training conditional random fields (CRFs) can be expensive. In this paper, we propose Linear Co-occurrence Rate Networks (L-CRNs) for sequence labeling, which avoid these problems. The factors of L-CRNs can be locally normalized and trained separately, which leads to a simple and efficient training method. Experimental results on real-world natural language processing datasets show that L-CRNs reduce training time by orders of magnitude while achieving results very competitive with CRFs.
The paper will be presented at the International Conference on Statistical Language and Speech Processing (SLSP) in Grenoble, France, on October 14-16, 2014.
Our C++ implementation of L-CRNs and the datasets used in the paper are available on GitHub.
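To give a flavor of the local normalization claim above, here is a minimal, hypothetical C++ sketch (not the code from our repository) of how locally normalized unary and pairwise co-occurrence rate factors can be estimated by simple counting. It assumes the standard definition of the co-occurrence rate, CR(a, b) = P(a, b) / (P(a) P(b)), omits the conditioning on the observation sequence for brevity, and uses made-up toy data:

```cpp
// Sketch only: estimating locally normalized L-CRN-style factors by counting.
// Unary factors are plain label probabilities P(a); pairwise factors are
// co-occurrence rates CR(a, b) = P(a, b) / (P(a) * P(b)). Toy data is hypothetical.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Toy training data: labeled sequences (e.g., simplified POS tags).
    std::vector<std::vector<std::string>> sequences = {
        {"DET", "NOUN", "VERB"},
        {"DET", "NOUN", "VERB", "NOUN"},
        {"NOUN", "VERB", "DET", "NOUN"},
    };

    std::map<std::string, double> unary;                          // counts of single labels
    std::map<std::pair<std::string, std::string>, double> pairs;  // counts of adjacent label pairs
    double n_labels = 0, n_pairs = 0;

    for (const auto& seq : sequences) {
        for (size_t i = 0; i < seq.size(); ++i) {
            unary[seq[i]] += 1; n_labels += 1;
            if (i + 1 < seq.size()) {
                pairs[{seq[i], seq[i + 1]}] += 1; n_pairs += 1;
            }
        }
    }

    // Each factor is normalized locally, so estimation reduces to counting:
    // P(a) = count(a)/N, P(a,b) = count(a,b)/M, CR(a,b) = P(a,b)/(P(a)P(b)).
    auto P = [&](const std::string& a) { return unary[a] / n_labels; };
    auto CR = [&](const std::string& a, const std::string& b) {
        auto it = pairs.find({a, b});
        double pab = (it == pairs.end() ? 0.0 : it->second / n_pairs);
        return pab / (P(a) * P(b));
    };

    // Score a candidate label sequence under the factorization
    //   score(y) = prod_i P(y_i) * prod_i CR(y_i, y_{i+1}).
    std::vector<std::string> y = {"DET", "NOUN", "VERB"};
    double score = 1.0;
    for (size_t i = 0; i < y.size(); ++i) {
        score *= P(y[i]);
        if (i + 1 < y.size()) score *= CR(y[i], y[i + 1]);
    }
    std::cout << "score = " << score << "\n";
}
```

Because every factor here is a locally normalized probability (or a ratio of such probabilities), estimation decomposes into independent counting problems with no global normalization over whole sequences, which is where the training speed-up over CRFs comes from.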