25. Computing the Forward Probability
2015/04/30
\alpha[t][k][z] = \sum_{j=1}^{t-k} \sum_{r=0}^{Z} P(c^{t}_{t-k+1} \mid c^{t-k}_{t-k-j+1}, z)\, P(z \mid r)\, \alpha[t-k][j][r]
[Figure: word lattice for the sentence "BOS 諸 行 無 常 の 響 EOS", showing candidate words of length 2 (諸行, 行無, 無常, 常の, の響) and length 3 (諸行無, 行無常, 無常の, 常の響); axes: time (character position), word length, and POS index.]
α[6][1][1] → α[響][1]
In other words, by marginalizing the forward probabilities we obtain P(響, 1).
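The recursion on this slide can be sketched in code. The following is a minimal illustration, not the authors' implementation: `word_prob` and `tag_trans` are hypothetical stand-in distributions, tags are 0-indexed here (the slide uses r = 0 for the BOS tag), and no normalization is attempted.

```python
from collections import defaultdict

def forward(sentence, max_len, num_tags, word_prob, tag_trans):
    """alpha[t, k, z]: forward probability that the first t characters
    end with a word of length k whose POS tag is z (t, k are 1-indexed)."""
    T = len(sentence)
    alpha = defaultdict(float)
    for t in range(1, T + 1):
        for k in range(1, min(max_len, t) + 1):
            word = sentence[t - k:t]
            for z in range(num_tags):
                if t - k == 0:
                    # Word starts the sentence: previous context is BOS.
                    # (Here tag 0 plays the role of the slide's r = 0.)
                    alpha[t, k, z] = word_prob(word, "<BOS>", z) * tag_trans(z, 0)
                    continue
                total = 0.0
                # Marginalize over the previous word's length j and tag r,
                # exactly as in the double sum of the recursion above.
                for j in range(1, min(max_len, t - k) + 1):
                    prev = sentence[t - k - j:t - k]
                    for r in range(num_tags):
                        total += (word_prob(word, prev, z)
                                  * tag_trans(z, r)
                                  * alpha[t - k, j, r])
                alpha[t, k, z] = total
    return alpha

# Toy usage with purely illustrative distributions:
sent = "諸行無常の響"
Z, L = 2, 3
wp = lambda w, prev, z: 0.1 ** len(w)   # shorter words more probable
tt = lambda z, r: 1.0 / Z               # uniform tag transitions
alpha = forward(sent, L, Z, wp, tt)
# Marginalizing the tag gives the quantity discussed above:
# P(響 ends the string as a length-1 word) ∝ Σ_z alpha[6, 1, z]
p = sum(alpha[len(sent), 1, z] for z in range(Z))
```

Summing `alpha[6, 1, z]` over z corresponds to the marginalization step: the forward table already sums over all segmentations and taggings of the prefix.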
37. References
• Miaohong Chen, et al. 2014. A Joint Model for Unsupervised Chinese Word Segmentation. In Proceedings of EMNLP 2014, pages 854–863.
• Sharon Goldwater, et al. A Fully Bayesian Approach to Unsupervised Part-of-Speech Tagging. In Proceedings of ACL 2007, pages 744–751.
• Sharon Goldwater, et al. Contextual Dependencies in Unsupervised Word Segmentation. In Proceedings of ACL/COLING 2006, pages 673–680.
• Matthew J. Johnson, et al. Bayesian Nonparametric Hidden Semi-Markov Models. Journal of Machine Learning Research, 14:673–701.
• Pierre Magistry, et al. Can MDL Improve Unsupervised Chinese Word Segmentation? In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 2–10.
• Daichi Mochihashi, et al. Bayesian Unsupervised Word Segmentation with Nested Pitman-Yor Language Modeling. In Proceedings of ACL-IJCNLP 2009, pages 100–108.
• Yee Whye Teh. A Bayesian Interpretation of Interpolated Kneser-Ney. Technical Report TRA2/06, School of Computing, NUS.
• Valentin Zhikov, et al. An Efficient Algorithm for Unsupervised Word Segmentation with Branching Entropy and MDL. In Proceedings of EMNLP 2010, pages 832–842.