
151 nips-2005-Pattern Recognition from One Example by Chopping



Author: Francois Fleuret, Gilles Blanchard

Abstract: We investigate learning the appearance of an object from a single image of it. Instead of using a large number of pictures of the object to be recognized, we use a labeled reference database of pictures of other objects to learn invariance to noise and to variations in pose and illumination. This acquired knowledge is then used to predict whether two pictures of new objects, which do not appear in the training pictures, actually display the same object. We propose a generic scheme called chopping to address this task. It relies on hundreds of random binary splits of the training set, each chosen to keep together all images of any given object. These splits are extended to the complete image space with a simple learning algorithm. Given two images, the responses of the split predictors are combined with a Bayesian rule into a posterior probability of similarity. Experiments with the COIL-100 database and with a database of 150 degraded LaTeX symbols compare our method to classical learning with several examples of the positive class and to a direct learning of the similarity.
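The pipeline described in the abstract (random identity-preserving splits, per-split predictors, naive-Bayes combination of their agreement) can be sketched on toy data. This is a minimal illustration, not the paper's implementation: the centroid-based linear classifier, the synthetic Gaussian "images", and the fixed agreement probabilities `p_agree_same` / `p_agree_diff` are all stand-in assumptions for the learned components used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: n_objects identities, several noisy "views" of each.
n_objects, views, dim, n_splits = 20, 5, 10, 100
prototypes = rng.normal(size=(n_objects, dim))
X = np.repeat(prototypes, views, axis=0) + 0.1 * rng.normal(size=(n_objects * views, dim))
y = np.repeat(np.arange(n_objects), views)

# 1. Random binary splits of the label set: every view of a given object
#    lands on the same side, so each split respects object identity.
splits = rng.integers(0, 2, size=(n_splits, n_objects))  # side of each object
targets = splits[:, y]                                   # (n_splits, n_samples)

# 2. Extend each split to the whole image space with a simple predictor.
#    Here: a centroid-difference linear classifier (a stand-in for the
#    paper's learned split predictors).
def fit_split(X, t):
    c1, c0 = X[t == 1].mean(0), X[t == 0].mean(0)
    w = c1 - c0
    b = -w @ (c1 + c0) / 2
    return w, b

predictors = [fit_split(X, t) for t in targets]

def split_responses(x):
    """Binary response of every split predictor on one image."""
    return np.array([int(w @ x + b > 0) for w, b in predictors])

# 3. Naive-Bayes combination: if the two images show the same object, the
#    split responses should mostly agree; for different objects they agree
#    roughly by chance. (The two probabilities below are assumed, not
#    estimated from data as in the paper.)
def posterior_same(x1, x2, p_agree_same=0.9, p_agree_diff=0.5, prior=0.5):
    agree = split_responses(x1) == split_responses(x2)
    log_odds = (np.where(agree, np.log(p_agree_same), np.log(1 - p_agree_same)).sum()
                - np.where(agree, np.log(p_agree_diff), np.log(1 - p_agree_diff)).sum()
                + np.log(prior / (1 - prior)))
    return 1 / (1 + np.exp(-np.clip(log_odds, -30, 30)))

# Two noisy views of one *unseen* object vs. a view of another unseen object.
a, b = rng.normal(size=dim), rng.normal(size=dim)
same = posterior_same(a + 0.1 * rng.normal(size=dim),
                      a + 0.1 * rng.normal(size=dim))
diff = posterior_same(a, b)
```

Note that the split predictors never saw the two test objects; the method only relies on the predictors responding consistently to different views of the same novel object, which is exactly the invariance transferred from the reference database.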


References

[1] Y. Bengio and M. Monperrus. Non-local manifold tangent learning. In Advances in Neural Information Processing Systems 17, pages 129–136. MIT press, 2005.

[2] T. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, 1995.

[3] A. Ferencz, E. Learned-Miller, and J. Malik. Learning hyper-features for visual identification. In Advances in Neural Information Processing Systems 17, pages 425–432. MIT Press, 2004.

[4] A. Ferencz, E. Learned-Miller, and J. Malik. Building a classification cascade for visual identification from one example. In International Conference on Computer Vision (ICCV), 2005.

[5] M. Fink. Object classification from a single example utilizing class relevance metrics. In Advances in Neural Information Processing Systems 17, pages 449–456. MIT Press, 2005.

[6] F. Fleuret. Fast binary feature selection with conditional mutual information. Journal of Machine Learning Research, 5:1531–1555, November 2004.

[7] F. Li, R. Fergus, and P. Perona. A Bayesian approach to unsupervised one-shot learning of object categories. In Proceedings of ICCV, volume 2, page 1134, 2003.

[8] E. G. Miller, N. E. Matsakis, and P. A. Viola. Learning from one example through shared densities on transforms. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, volume 1, pages 464–471, 2000.

[9] S. A. Nene, S. K. Nayar, and H. Murase. Columbia Object Image Library (COIL-100). Technical Report CUCS-006-96, Columbia University, 1996.

[10] T. Sejnowski and C. Rosenberg. Parallel networks that learn to pronounce English text. Complex Systems, 1:145–168, 1987.

[11] P. Simard, Y. Le Cun, and J. Denker. Efficient pattern recognition using a new transformation distance. In S. Hanson, J. Cowan, and C. Giles, editors, Advances in Neural Information Processing Systems 5, pages 50–68. Morgan Kaufmann, 1993.

[12] S. Thrun and L. Pratt, editors. Learning to learn. Kluwer, 1997.