NIPS 2011, Paper 70
Author: Ioannis A. Gkioulekas, Todd Zickler
Abstract: We propose an approach for linear unsupervised dimensionality reduction, based on the sparse linear model that has been used to probabilistically interpret sparse coding. We formulate an optimization problem for learning a linear projection from the original signal domain to a lower-dimensional one in a way that approximately preserves, in expectation, pairwise inner products in the sparse domain. We derive solutions to the problem, present nonlinear extensions, and discuss relations to compressed sensing. Our experiments using facial images, texture patches, and images of object categories suggest that the approach can improve our ability to recover meaningful structure in many classes of signals.
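The core idea in the abstract can be illustrated with a small numerical sketch: given signals generated from a dictionary with sparse codes, find a linear projection whose projected-domain inner products approximate the inner products of the sparse codes. The sketch below is an illustrative least-squares surrogate, not the paper's derived solution; all variable names, sizes, and the rank-truncation step are assumptions made for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (sizes are illustrative, not from the paper):
# n signals of dimension d, sparse codes of dimension k, target dimension m.
d, k, n, m = 10, 15, 40, 4

D = rng.standard_normal((d, k))                                # dictionary (assumed given)
A = rng.standard_normal((k, n)) * (rng.random((k, n)) < 0.1)   # sparse codes (~10% nonzero)
X = D @ A + 0.01 * rng.standard_normal((d, n))                 # observed signals

# Goal: a projection P (m x d) such that (P x_i)^T (P x_j) ~= a_i^T a_j.
# Surrogate used here: fit a symmetric matrix M with X^T M X ~= A^T A in
# least squares, then take a rank-m PSD factor M ~= P^T P.
G_target = A.T @ A
# With row-major vec (numpy ravel), vec(X^T M X) = (X^T kron X^T) vec(M).
K = np.kron(X.T, X.T)
vecM, *_ = np.linalg.lstsq(K, G_target.ravel(), rcond=None)
M = vecM.reshape(d, d)
M = 0.5 * (M + M.T)                                            # symmetrize

# Rank-m factor from the top eigenpairs of M (clipping negatives to keep PSD).
w, V = np.linalg.eigh(M)
idx = np.argsort(w)[::-1][:m]
P = np.sqrt(np.clip(w[idx], 0, None))[:, None] * V[:, idx].T

Y = P @ X                                                      # m-dimensional embedding
err = np.linalg.norm(Y.T @ Y - G_target) / np.linalg.norm(G_target)
print(Y.shape, float(err))
```

The relative Gram-matrix error `err` measures how well pairwise inner products in the sparse domain survive the projection; the paper's formulation instead preserves them in expectation under the sparse linear model's prior.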