nips nips2003 nips2003-66 nips2003-66-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Max Welling, Christopher Williams, Felix V. Agakov
Abstract: Principal components analysis (PCA) is one of the most widely used techniques in machine learning and data mining. Minor components analysis (MCA) is less well known, but can also play an important role in the presence of constraints on the data distribution. In this paper we present a probabilistic model for “extreme components analysis” (XCA) which at the maximum likelihood solution extracts an optimal combination of principal and minor components. For a given number of components, the log-likelihood of the XCA model is guaranteed to be greater than or equal to that of the probabilistic models for PCA and MCA. We describe an efficient algorithm to solve for the globally optimal solution. For log-convex spectra we prove that the solution consists of principal components only, while for log-concave spectra the solution consists of minor components. In general, the solution admits a combination of both. In experiments we explore the properties of XCA on some synthetic and real-world datasets.
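To make the abstract's idea concrete, the following is a minimal Python sketch of the kind of search it describes: candidate solutions retain k principal (largest-variance) and d - k minor (smallest-variance) directions of the sample covariance, and the best-scoring split is kept. The enumeration over splits and the PPCA-style scoring rule (retained eigenvalues fitted exactly, left-out eigenvalues replaced by their arithmetic mean) are illustrative assumptions standing in for the XCA likelihood derived in the paper, and the function name extreme_components_sketch is hypothetical.

    # Sketch only: the scoring rule below is a stand-in assumption,
    # not the XCA objective from the paper.
    import numpy as np

    def extreme_components_sketch(X, d):
        """Search over splits of d retained directions into k principal and
        d - k minor components; return the best split under a stand-in
        Gaussian log-likelihood (assumes d < number of data dimensions)."""
        N, D = X.shape
        C = np.cov(X, rowvar=False)            # sample covariance
        evals, evecs = np.linalg.eigh(C)       # eigenvalues, ascending
        order = np.argsort(evals)[::-1]        # reorder: principal first
        evals, evecs = evals[order], evecs[:, order]

        best = None
        for k in range(d + 1):                 # k principal + (d - k) minor
            keep = list(range(k)) + list(range(D - (d - k), D))
            rest = [i for i in range(D) if i not in keep]
            lam_keep, lam_rest = evals[keep], evals[rest]
            # Stand-in score: exact fit on retained directions, shared (mean)
            # variance on the left-out directions -- an assumption, not the
            # XCA criterion itself.
            ll = -0.5 * (np.sum(np.log(lam_keep))
                         + len(lam_rest) * np.log(np.mean(lam_rest))
                         + D * (1 + np.log(2 * np.pi)))
            if best is None or ll > best[0]:
                best = (ll, k, evecs[:, keep])
        return best  # (score, number of principal components, retained directions)

Usage: score, k, V = extreme_components_sketch(X, d=3) returns the stand-in score, the number of principal components in the best split, and the retained directions; k = d corresponds to a pure PCA-like solution and k = 0 to a pure MCA-like one.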
[1] G.E. Hinton. Products of experts. In Proceedings of the International Conference on Artificial Neural Networks, volume 1, pages 1–6, 1999.
[2] J.G. Proakis and D.G. Manolakis. Digital Signal Processing: Principles, Algorithms and Applications. Macmillan, 1992.
[3] S.T. Roweis. EM algorithms for PCA and SPCA. In Advances in Neural Information Processing Systems, volume 10, pages 626–632, 1997.
[4] M.E. Tipping and C.M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611–622, 1999.
[5] M. Welling, R.S. Zemel, and G.E. Hinton. A tractable probabilistic model for projection pursuit. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2003. Accepted for publication.
[6] C.K.I. Williams and F.V. Agakov. Products of Gaussians and probabilistic minor components analysis. Neural Computation, 14(5):1169–1182, 2002.
[7] H. Zhu, C. K. I. Williams, R. J. Rohwer, and M. Morciniec. Gaussian regression and optimal finite dimensional linear models. In C. M. Bishop, editor, Neural Networks and Machine Learning. Springer-Verlag, Berlin, 1998.