
NIPS 2006, Paper 175: Simplifying Mixture Models through Function Approximation


Source: pdf

Author: Kai Zhang, James T. Kwok

Abstract: The finite mixture model is a powerful tool in many statistical learning problems. In this paper, we propose a general, structure-preserving approach to reducing its model complexity, which can bring significant computational benefits in many applications. The basic idea is to group the original mixture components into compact clusters and then minimize an upper bound on the approximation error between the original and simplified models. By adopting the L2 norm as the distance measure between mixture models, we derive closed-form solutions that are more robust and reliable than those based on KL-type distance measures. Moreover, the complexity of our algorithm is only linear in the sample size and dimensionality. Experiments on density estimation and clustering-based image segmentation demonstrate its outstanding performance in terms of both speed and accuracy.
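The closed-form L2 computation the abstract alludes to rests on the Gaussian product identity ∫ N(x; μ1, Σ1) N(x; μ2, Σ2) dx = N(μ1; μ2, Σ1 + Σ2), which turns every inner product between two Gaussian mixtures into a finite sum of Gaussian evaluations, whereas the KL divergence between Gaussian mixtures has no closed form. The sketch below is not the authors' implementation; the function names gmm_cross_term and gmm_l2_distance_sq and the toy mixtures are illustrative, showing only how the squared L2 distance between an original mixture and a simplified surrogate can be evaluated exactly.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_cross_term(w_f, mu_f, cov_f, w_g, mu_g, cov_g):
    # <f, g> = sum_ij a_i b_j N(mu_i; m_j, S_i + T_j), from the Gaussian
    # product identity; exact, no sampling or numerical quadrature.
    total = 0.0
    for a, mu, S in zip(w_f, mu_f, cov_f):
        for b, m, T in zip(w_g, mu_g, cov_g):
            total += a * b * multivariate_normal.pdf(mu, mean=m, cov=S + T)
    return total

def gmm_l2_distance_sq(f, g):
    # ||f - g||_2^2 = <f, f> - 2 <f, g> + <g, g>, each term closed form.
    return (gmm_cross_term(*f, *f)
            - 2.0 * gmm_cross_term(*f, *g)
            + gmm_cross_term(*g, *g))

# Toy example: a 3-component mixture f and a coarser 1-component
# surrogate g (hypothetical numbers, for illustration only).
f = ([0.3, 0.4, 0.3],
     [np.zeros(2), np.ones(2), 2.0 * np.ones(2)],
     [np.eye(2)] * 3)
g = ([1.0], [np.ones(2)], [2.0 * np.eye(2)])
print(gmm_l2_distance_sq(f, g))
```

Each term is a double sum over mixture components, so the cost depends only on the component counts, not on the sample size, which is consistent with the abstract's claim that L2-based simplification avoids the sampling or iterative bounding that KL-based criteria typically require.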


reference text

[1] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002.

[2] T. Feder and D. Greene. Optimal algorithms for approximate clustering. In Proceedings of the ACM Symposium on Theory of Computing, pages 434–444, 1988.

[3] K. Fukunaga and L. Hostetler. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on Information Theory, 21:32–40, 1975.

[4] A. Gersho and R.M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Publishers, Boston, 1992.

[5] J. Goldberger and S. Roweis. Hierarchical clustering of a mixture model. In Advances in Neural Information Processing Systems 17, pages 505–512, 2005.

[6] B. Han, D. Comaniciu, Y. Zhu, and L. Davis. Incremental density approximation and kernel-based Bayesian filtering for object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 638–644, 2004.

[7] A.J. Izenman. Recent developments in nonparametric density estimation. Journal of the American Statistical Association, 86(413):205–224, 1991.

[8] T. Kanungo, D.M. Mount, N.S. Netanyahu, C.D. Piatko, R. Silverman, and A.Y. Wu. An efficient k-means clustering algorithm: Analysis and implementation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):881–892, 2002.

[9] A.W. Moore. Very fast EM-based mixture model clustering using multiresolution kd-trees. In Advances in Neural Information Processing Systems 11, pages 543–549, 1998.

[10] D.M. Mount and S. Arya. ANN: A library for approximate nearest neighbor searching. In Proceedings of the Center for Geometric Computing Second Annual Fall Workshop on Computational Geometry (available from http://www.cs.umd.edu/~mount/ANN), 1997.

[11] E. Parzen. On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33:1065–1075, 1962.

[12] K. Popat and R.W. Picard. Cluster-based probability model and its application to image and texture processing. IEEE Transactions on Image Processing, 6(2):268–284, 1997.

[13] E.B. Sudderth, A. Torralba, W.T. Freeman, and A.S. Willsky. Describing visual scenes using transformed Dirichlet processes. In Advances in Neural Information Processing Systems 19, 2006.

[14] P. Vincent and Y. Bengio. Manifold Parzen windows. In Advances in Neural Information Processing Systems 15, 2003.