Author: Toshiyuki Tanaka, Shiro Ikeda, Shun-ichi Amari
Abstract: We report a perturbation analysis of the decoding error of the belief propagation decoder for Gallager codes. The analysis is based on information geometry and shows that the principal term of the decoding error at equilibrium comes from the m-embedding curvature of the log-linear submanifold spanned by the estimated pseudoposteriors: one for the full marginal and K for the partial posteriors, each of which takes a single check into account, where K is the number of checks in the Gallager code. We then show that the principal error term vanishes when the parity-check matrix of the code is so sparse that no two columns have overlap greater than 1.
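The sparsity condition in the abstract (no two columns of the parity-check matrix overlap in more than one row) is equivalent to the Tanner graph of the code containing no 4-cycles. A minimal sketch of how one might test this condition on a small binary matrix follows; the function name and the example matrices are illustrative, not taken from the paper.

```python
from itertools import combinations

def column_overlap_ok(H):
    """Check the sparsity condition from the abstract: no two columns of the
    binary parity-check matrix H (given as a list of rows over {0, 1}) are
    both 1 in more than one row.  Equivalently, the Tanner graph of the code
    has no 4-cycles."""
    n_cols = len(H[0])
    for i, j in combinations(range(n_cols), 2):
        overlap = sum(row[i] & row[j] for row in H)
        if overlap > 1:
            return False
    return True

# Each pair of columns shares at most one check: condition holds.
H_good = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
]

# Columns 0 and 1 share two checks (rows 0 and 1): condition violated.
H_bad = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
]

print(column_overlap_ok(H_good))  # True
print(column_overlap_ok(H_bad))   # False
```

Under the paper's result, only codes whose parity-check matrices pass this test have a vanishing principal error term; the quadratic pairwise scan is fine for illustration, though for large matrices one would count overlaps via a sparse product of H with its transpose instead.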
[1] R. G. Gallager, Low Density Parity Check Codes, Ph. D. Thesis, Mass. Inst. Tech., 1960.
[2] R. J. McEliece, D. J. C. MacKay, and J. Cheng, “Turbo decoding as an instance of Pearl's `belief propagation' algorithm,” IEEE J. Select. Areas Commun., vol. 16, no. 2, pp. 140–152, 1998.
[3] D. J. C. MacKay, “Good error-correcting codes based on very sparse matrices,” IEEE Trans. Inform. Theory, vol. 45, no. 2, pp. 399–431, 1999.
[4] D. J. Thouless, P. W. Anderson, and R. G. Palmer, “Solution of `Solvable model of a spin glass',” Phil. Mag., vol. 35, no. 3, pp. 593–601, 1977.
[5] T. Murayama, Y. Kabashima, D. Saad, and R. Vicente, “Statistical physics of regular low-density parity-check error-correcting codes,” Phys. Rev. E, vol. 62, no. 2, pp. 1577–1591, 2000.
[6] S. Amari and H. Nagaoka (Transl. by D. Harada), Methods of Information Geometry, Translations of Mathematical Monographs, vol. 191, American Math. Soc., 2000.
[7] Y. Kabashima and D. Saad, “The TAP approach to intensive and extensive connectivity systems,” in M. Opper and D. Saad (eds.), Advanced Mean Field Methods — Theory and Practice, The MIT Press, 2001, pp. 65–84.
[8] S. Ikeda, T. Tanaka, and S. Amari, “Information geometrical framework for analyzing belief propagation decoder,” in T. G. Dietterich et al. (eds.), Advances in Neural Information Processing Systems, vol. 14 (this volume), The MIT Press, 2002.
[9] S. Ikeda, T. Tanaka, and S. Amari, “Information geometry of turbo codes and low-density parity-check codes,” submitted to IEEE Trans. Inform. Theory, 2001.
[10] H. J. Kappen and F. B. Rodriguez, “Efficient learning in Boltzmann machines using linear response theory,” Neural Computation, vol. 10, no. 5, pp. 1137–1156, 1998.
[11] H. J. Kappen and F. B. Rodriguez, “Boltzmann machine learning using mean field theory and linear response correction,” in M. I. Jordan et al. (eds.), Advances in Neural Information Processing Systems, vol. 10, The MIT Press, 1998, pp. 280–286.
[12] T. Tanaka, “A theory of mean field approximation,” in M. S. Kearns et al. (eds.), Advances in Neural Information Processing Systems, vol. 11, The MIT Press, 1999, pp. 351–357.
[13] T. Tanaka, “Information geometry of mean-field approximation,” Neural Computation, vol. 12, no. 8, pp. 1951–1968, 2000.
[14] J. S. Yedidia, “An idiosyncratic journey beyond mean field theory,” in M. Opper and D. Saad (eds.), Advanced Mean Field Methods — Theory and Practice, The MIT Press, 2001, pp. 21–35.
[15] H. J. Kappen and W. J. Wiegerinck, “Mean field theory for graphical models,” in M. Opper and D. Saad (eds.), Advanced Mean Field Methods — Theory and Practice, The MIT Press, 2001, pp. 37–49.
[16] S. Amari, S. Ikeda, and H. Shimokawa, “Information geometry of α-projection in mean field approximation,” in M. Opper and D. Saad (eds.), Advanced Mean Field Methods — Theory and Practice, The MIT Press, 2001, pp. 241–257.
[17] T. Tanaka, “Information geometry of mean-field approximation,” in M. Opper and D. Saad (eds.), Advanced Mean Field Methods — Theory and Practice, The MIT Press, 2001, pp. 259–273.
[18] T. J. Richardson and R. L. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 599–618, 2001.