
176 nips-2009-On Invariance in Hierarchical Models


Source: pdf

Author: Jake Bouvrie, Lorenzo Rosasco, Tomaso Poggio

Abstract: A goal of central importance in the study of hierarchical models for object recognition – and indeed the mammalian visual cortex – is that of understanding quantitatively the trade-off between invariance and selectivity, and how invariance and discrimination properties contribute towards providing an improved representation useful for learning from data. In this work we provide a general group-theoretic framework for characterizing and understanding invariance in a family of hierarchical models. We show that by taking an algebraic perspective, one can provide a concise set of conditions which must be met to establish invariance, as well as a constructive prescription for meeting those conditions. Analyses in specific cases of particular relevance to computer vision and text processing are given, yielding insight into how and when invariance can be achieved. We find that the minimal intrinsic properties of a hierarchical model needed to support a particular invariance can be clearly described, thereby encouraging efficient computational implementations.
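The mechanism behind the abstract's central idea can be illustrated with a minimal sketch (this is an assumption-laden toy, not the paper's construction): if a finite group G acts on the inputs, then pooling a template response over the orbit of an input under G yields a signature that is invariant, because applying any group element to the input merely permutes the orbit and leaves the pooled value unchanged. Here G is taken to be the cyclic translation group acting on short sequences, and the names `cyclic_shift`, `signature`, and `template` are hypothetical.

```python
# Toy illustration of invariance via pooling over a group orbit
# (hypothetical example, not the construction from the paper).

def cyclic_shift(x, k):
    """Action of the cyclic translation group Z_n on a sequence."""
    k %= len(x)
    return x[k:] + x[:k]

def signature(x, template):
    """Pool a template response over the orbit of x under Z_n.

    The response f(g . x) is the correlation of the shifted input
    with a fixed template; taking the max over the whole orbit makes
    the result independent of which orbit element we started from.
    """
    def response(y):
        return sum(a * b for a, b in zip(y, template))
    return max(response(cyclic_shift(x, k)) for k in range(len(x)))

x = [1.0, 2.0, 0.0, -1.0]
template = [1.0, 0.5, 0.0, 0.0]

# Invariance check: the signature of x equals the signature of h . x
# for every group element h, since h only reorders the orbit.
assert all(
    signature(cyclic_shift(x, k), template) == signature(x, template)
    for k in range(len(x))
)
```

The same argument applies verbatim to any finite group action and any pooling function that is symmetric in its arguments (max, sum, mean), which is the flavor of "concise conditions for invariance" the abstract refers to.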


reference text

[1] M. Artin. Algebra. Prentice-Hall, 1991.

[2] J. Bouvrie, L. Rosasco, and T. Poggio. Supplementary material for “On Invariance in Hierarchical Models”. NIPS, 2009. Available online: http://cbcl.mit.edu/publications/ps/978_supplement.pdf.

[3] K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cyb., 36:193–202, 1980.

[4] G.E. Hinton and R.R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.

[5] D.H. Hubel and T.N. Wiesel. Receptive fields and functional architecture of monkey striate cortex. J. Physiol., 195:215–243, 1968.

[6] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. of the IEEE, 86(11):2278–2324, November 1998.

[7] H. Lee, R. Grosse, R. Ranganath, and A. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the Twenty-Sixth International Conference on Machine Learning, 2009.

[8] B.W. Mel. SEEMORE: Combining color, shape, and texture histogramming in a neurally inspired approach to visual object recognition. Neural Comput., 9:777–804, 1997.

[9] T. Serre, A. Oliva, and T. Poggio. A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences, 104:6424–6429, 2007.

[10] T. Serre, L. Wolf, S. Bileschi, M. Riesenhuber, and T. Poggio. Robust object recognition with cortex-like mechanisms. IEEE Trans. on Pattern Analysis and Machine Intelligence, 29:411–426, 2007.

[11] S. Smale, L. Rosasco, J. Bouvrie, A. Caponnetto, and T. Poggio. Mathematics of the neural response. Foundations of Computational Mathematics, June 2009. Available online, DOI:10.1007/s10208-009-9049-1.

[12] H. Wersing and E. Körner. Learning optimized features for hierarchical models of invariant object recognition. Neural Comput., 15(7):1559–1588, July 2003.