NIPS 2000, paper 59
Source: pdf
Author: Cynthia Archer, Todd K. Leen
Abstract: We establish a principled framework for adaptive transform coding. Transform coders are often constructed by concatenating an ad hoc choice of transform with suboptimal bit allocation and quantizer design. Instead, we start from a probabilistic latent variable model in the form of a mixture of constrained Gaussian mixtures. From this model we derive a transform coding algorithm, which is a constrained version of the generalized Lloyd algorithm for vector quantizer design. A byproduct of our derivation is the introduction of a new transform basis, which unlike other transforms (PCA, DCT, etc.) is explicitly optimized for coding. Image compression experiments show adaptive transform coders designed with our algorithm improve compressed image signal-to-noise ratio up to 3 dB compared to global transform coding and 0.5 to 2 dB compared to other adaptive transform coders.
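As a rough illustration of the pipeline the abstract describes (and not the authors' algorithm: the learned, coding-optimal basis is replaced here by a fixed rotation, and the adaptive bit allocation by a single uniform scalar quantizer), a transform coder applies a transform, quantizes the coefficients, and inverts the transform to reconstruct:

```python
import math

def transform_code(x, theta, step):
    """Toy transform coder for a 2-D point: rotate into the transform
    basis (a fixed rotation by `theta`, standing in for a learned
    coding-optimal basis), quantize each coefficient with a uniform
    scalar quantizer of width `step`, then rotate back to reconstruct."""
    c, s = math.cos(theta), math.sin(theta)
    # Forward transform (rotation into the coding basis).
    y = (c * x[0] + s * x[1], -s * x[0] + c * x[1])
    # Uniform scalar quantization of each transform coefficient.
    q = tuple(step * round(v / step) for v in y)
    # Inverse transform gives the reconstruction.
    return (c * q[0] - s * q[1], s * q[0] + c * q[1])

x = (1.0, 2.0)
xhat = transform_code(x, theta=math.pi / 4, step=0.5)
err = math.dist(x, xhat)
```

With a step of 0.5, each coefficient is off by at most 0.25, so the reconstruction error is bounded by 0.25 * sqrt(2); choosing a better basis (the point of the paper) concentrates signal energy so the same bit budget yields a smaller error.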
[1] Mark A. Kramer. Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal, 37(2):233-243, February 1991.
[2] David DeMers and Garrison Cottrell. Non-linear dimensionality reduction. In Giles, Hanson, and Cowan, editors, Advances in Neural Information Processing Systems 5, San Mateo, CA, 1993. Morgan Kaufmann.
[3] Nanda Kambhatla and Todd K. Leen. Fast non-linear dimension reduction. In Cowan, Tesauro, and Alspector, editors, Advances in Neural Information Processing Systems 6, pages 152-159. Morgan Kaufmann, February 1994.
[4] G. Hinton, M. Revow, and P. Dayan. Recognizing handwritten digits using mixtures of linear models. In Tesauro, Touretzky, and Leen, editors, Advances in Neural Information Processing Systems 7, pages 1015-1022. MIT Press, 1995.
[5] Robert D. Dony and Simon Haykin. Optimally adaptive transform coding. IEEE Transactions on Image Processing, 4(10):1358-1370, 1995.
[6] M. Tipping and C. Bishop. Mixtures of probabilistic principal component analyzers. Neural Computation, 11(2):443-483, 1999.
[7] C. Archer and T.K. Leen. Optimal dimension reduction and transform coding with mixture principal components. In Proceedings of the International Joint Conference on Neural Networks, July 1999.
[8] Steve Nowlan. Soft Competitive Adaptation: neural network learning algorithms based on fitting statistical mixtures. PhD thesis, School of Computer Science, Carnegie Mellon University, 1991.
[9] Y. Linde, A. Buzo, and R.M. Gray. An algorithm for vector quantizer design. IEEE Transactions on Communications, 28(1):84-95, January 1980.
[10] S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982.
[11] Eve A. Riskin. Optimal bit allocation via the generalized BFOS algorithm. IEEE Transactions on Information Theory, 37(2):400-402, 1991.
[12] A. Gersho and R. Gray. Vector Quantization and Signal Compression. Kluwer Academic, 1992.