
167 nips-2006-Recursive ICA


Source: pdf

Author: Honghao Shan, Lingyun Zhang, Garrison W. Cottrell

Abstract: Independent Component Analysis (ICA) is a popular method for extracting independent features from visual data. However, because it is a fundamentally linear technique, there is always nonlinear residual redundancy that ICA does not capture. Hence there have been many attempts to create a hierarchical version of ICA, but so far none of these approaches offers a natural way to be applied more than once. Here we show that there is a relatively simple technique that transforms the absolute values of the outputs of a previous application of ICA into a normal distribution, to which ICA may be applied again. This results in a recursive ICA algorithm that may be applied any number of times in order to extract higher-order structure from previous layers.
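The procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's FastICA and a simple empirical-CDF Gaussianization of the absolute activations (the paper fits a parametric distribution instead); the function names `gaussianize_abs` and `recursive_ica` are hypothetical.

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.decomposition import FastICA

def gaussianize_abs(s):
    """Map |s| through its empirical CDF, then the inverse normal CDF.

    A nonparametric stand-in for the paper's transform: the result is
    approximately normally distributed per component, so ICA can be
    applied to it again.
    """
    a = np.abs(s)
    u = (rankdata(a, axis=0) - 0.5) / a.shape[0]  # empirical CDF in (0, 1)
    return norm.ppf(u)

def recursive_ica(X, n_layers=2, n_components=8, seed=0):
    """Alternate ICA and Gaussianization for n_layers rounds."""
    layers, data = [], X
    for _ in range(n_layers):
        ica = FastICA(n_components=n_components, random_state=seed)
        s = ica.fit_transform(data)      # linear ICA layer
        layers.append(ica)
        data = gaussianize_abs(s)        # expose residual nonlinear structure
    return layers, data
```

Each round discards the sign of the previous layer's outputs, so subsequent layers model dependencies among activation magnitudes (variance structure) rather than the raw linear features.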


reference text

[1] Anthony J. Bell and Terrence J. Sejnowski. The ‘independent components’ of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.

[2] Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.

[3] Michael S. Lewicki. Efficient coding of natural sounds. Nature Neuroscience, 5(4):356–363, 2002.

[4] Odelia Schwartz and Eero P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, 2001.

[5] Aapo Hyvärinen and Patrik O. Hoyer. A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41(18):2413–2423, 2001.

[6] Martin J. Wainwright and Eero P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In Advances in Neural Information Processing Systems, volume 12, pages 855–861, Cambridge, MA, May 2000. MIT Press.

[7] Yan Karklin and Michael S. Lewicki. A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals. Neural Computation, 17(2):397–423, 2005.

[8] Simon Osindero, Max Welling, and Geoffrey E. Hinton. Topographic product models applied to natural scene statistics. Neural Computation, 18:381–414, 2005.

[9] Eva M. Finney, Ione Fine, and Karen R. Dobkins. Visual stimuli activate auditory cortex in the deaf. Nature Neuroscience, 4:1171–1173, 2001.

[10] Horace B. Barlow. Redundancy reduction revisited. Network: Computation in Neural Systems, 12:241–253, 2001.

[11] Horace B. Barlow. Possible principles underlying the transformation of sensory messages. In Walter A. Rosenblith, editor, Sensory Communication, pages 217–234. MIT Press, Cambridge, MA, USA, 1961.

[12] Michael S. Lewicki and Bruno A. Olshausen. A probabilistic framework for the adaptation and comparison of image codes. Journal of the Optical Society of America A, 16(7):1587–1601, 1999.

[13] Yee Whye Teh, Max Welling, Simon Osindero, and Geoffrey E. Hinton. Energy-based models for sparse overcomplete representations. Journal of Machine Learning Research, 4:1235–1260, 2003.

[14] David Attwell and Simon B. Laughlin. An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow and Metabolism, 21(10):1133–1145, 2001.

[15] Aapo Hyvärinen and Erkki Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7):1483–1492, 1997.

[16] David J. Field. What is the goal of sensory coding? Neural Computation, 6(4):559–601, 1994.

[17] Kai-Sheng Song. A globally convergent and consistent method for estimating the shape parameter of a generalized Gaussian distribution. IEEE Transactions on Information Theory, 52(2):510–527, 2006.

[18] Yan Karklin and Michael S. Lewicki. Learning higher-order structures in natural images. Network: Computation in Neural Systems, 14:483–499, 2003.