NIPS 2006, paper 75
Authors: Honglak Lee, Alexis Battle, Rajat Raina, Andrew Y. Ng
Abstract: Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
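The alternating scheme the abstract describes is straightforward to prototype. Below is a minimal sketch of that two-step loop, assuming Python with NumPy and scikit-learn; it is not the paper's method: the L1-regularized code step here uses scikit-learn's coordinate-descent Lasso rather than the paper's feature-sign search, and the L2-constrained basis step is approximated by an unconstrained least-squares update followed by projection onto the norm ball rather than the paper's Lagrange-dual solver. All function and parameter names are illustrative.

import numpy as np
from sklearn.linear_model import Lasso

def sparse_coding(X, n_bases=64, lam=0.1, c=1.0, n_iter=20, seed=0):
    """Approximately minimize ||X - B S||_F^2 + lam * ||S||_1
    subject to ||B[:, j]||_2^2 <= c, by alternating over codes S and bases B.
    X is (n_features, n_samples); returns B and S."""
    rng = np.random.default_rng(seed)
    n_features, _ = X.shape
    B = rng.standard_normal((n_features, n_bases))
    B /= np.linalg.norm(B, axis=0)  # start from unit-norm bases

    for _ in range(n_iter):
        # Step 1: L1-regularized least squares for the codes, bases fixed.
        # sklearn's Lasso minimizes (1/2m)||y - Xw||^2 + alpha*||w||_1 with
        # m = n_features here, so alpha = lam / (2*n_features) matches lam.
        lasso = Lasso(alpha=lam / (2 * n_features), fit_intercept=False,
                      max_iter=2000)
        lasso.fit(B, X)              # one Lasso fit per column of X
        S = lasso.coef_.T            # (n_bases, n_samples)

        # Step 2: basis update. The paper solves the L2-constrained least
        # squares problem exactly via its Lagrange dual; this sketch uses a
        # plain least-squares update followed by projection onto the ball.
        B = X @ S.T @ np.linalg.pinv(S @ S.T + 1e-8 * np.eye(n_bases))
        norms = np.linalg.norm(B, axis=0)
        B /= np.maximum(norms / np.sqrt(c), 1.0)  # shrink only oversized columns

    return B, S

On whitened natural-image patches this loop tends to recover localized, oriented bases, though without the large speedups that the paper's specialized solvers contribute.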
[1] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[2] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311–3325, 1997.
[3] M. S. Lewicki and T. J. Sejnowski. Learning overcomplete representations. Neural Computation, 12(2), 2000.
[4] B. A. Olshausen. Sparse coding of time-varying natural images. Journal of Vision, 2(7):130, 2002.
[5] B. A. Olshausen and D. J. Field. Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14(4), 2004.
[6] M. P. Sceniak, M. J. Hawken, and R. Shapley. Visual spatial characterization of macaque V1 neurons. Journal of Neurophysiology, 85(5):1873–1887, 2001.
[7] J. R. Cavanaugh, W. Bair, and J. A. Movshon. Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. Journal of Neurophysiology, 88(5):2530–2546, 2002.
[8] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning. In NIPS Workshop on Learning when test and training inputs have different distributions, 2006.
[9] A. Y. Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML, 2004.
[10] Y. Censor and S. A. Zenios. Parallel Optimization: Theory, Algorithms and Applications. Oxford University Press, 1997.
[11] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[12] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2), 2004.
[13] S. Perkins and J. Theiler. Online feature selection using grafting. In ICML, 2003.
[14] A. Hyvärinen, P. O. Hoyer, and M. O. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001.
[15] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London B, 265:359–366, 1998.