
NIPS 2002, Paper 40: Bayesian Models of Inductive Generalization


Source: pdf

Author: Neville E. Sanjana, Joshua B. Tenenbaum

Abstract: We argue that human inductive generalization is best explained in a Bayesian framework, rather than by traditional models based on similarity computations. We go beyond previous work on Bayesian concept learning by introducing an unsupervised method for constructing flexible hypothesis spaces, and we propose a version of the Bayesian Occam’s razor that trades off priors and likelihoods to prevent under- or over-generalization in these flexible spaces. We analyze two published data sets on inductive reasoning as well as the results of a new behavioral study that we have carried out.
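The core machinery the abstract describes, Bayesian concept learning in which a size-principle likelihood is traded off against a complexity-penalizing prior, can be sketched in a few lines. The hypothesis space, category names, and prior below are illustrative assumptions chosen for exposition, not the paper's actual taxonomically derived clusters:

```python
import numpy as np

# A minimal sketch of Bayesian generalization with a size-principle
# likelihood and an Occam-style prior/likelihood trade-off.
# Hypotheses, members, and the prior are hypothetical examples.

# Each hypothesis is a candidate extension of the novel property.
hypotheses = {
    "horses": {"horse"},
    "equids": {"horse", "zebra"},
    "large_herbivores": {"horse", "zebra", "cow", "elephant"},
    "all_mammals": {"horse", "zebra", "cow", "elephant", "dolphin", "mouse"},
}

def prior(h):
    # Penalize larger (less specific) hypotheses; an illustrative choice.
    return 1.0 / len(hypotheses[h])

def likelihood(examples, h):
    # Size principle: n examples sampled independently from a hypothesis
    # of size |h| have probability (1/|h|)^n, so the smallest hypothesis
    # that still covers the data is favored increasingly as n grows.
    ext = hypotheses[h]
    if not all(x in ext for x in examples):
        return 0.0
    return (1.0 / len(ext)) ** len(examples)

def posterior(examples):
    scores = {h: prior(h) * likelihood(examples, h) for h in hypotheses}
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

def generalize(examples, query):
    # Probability the query item shares the property: posterior mass of
    # all hypotheses whose extension contains it.
    post = posterior(examples)
    return sum(p for h, p in post.items() if query in hypotheses[h])

# More horse-only examples concentrate mass on the smallest covering
# hypothesis, so generalization to "cow" tightens rather than spreading:
print(generalize(["horse"], "cow"))
print(generalize(["horse", "horse", "horse"], "cow"))
```

Running the two queries shows generalization to "cow" shrinking as identical premises accumulate, which is the guard against over-generalization that the abstract attributes to the Bayesian Occam's razor.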


References

[1] S. Atran. Classifying nature across cultures. In An Invitation to Cognitive Science, volume 3. MIT Press, 1995.

[2] R. Duda, P. Hart, and D. Stork. Pattern Classification. Wiley, New York, NY, 2001.

[3] E. Heit. A Bayesian analysis of some forms of induction. In Rational Models of Cognition. Oxford University Press, 1998.

[4] T. Landauer and S. Dumais. A solution to Plato’s problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104:211–240, 1997.

[5] T. Mitchell. Machine Learning. McGraw-Hill, Boston, MA, 1997.

[6] D. Osherson, E. Smith, O. Wilkie, A. López, and E. Shafir. Category-based induction. Psychological Review, 97(2):185–200, 1990.

[7] N. Sanjana and J. Tenenbaum. Capturing property-based similarity in human concept learning. In Sixth International Conference on Cognitive and Neural Systems, 2002.

[8] S. Sloman. Feature-based induction. Cognitive Psychology, 25:231–280, 1993.

[9] P. Smyth. Clustering using Monte Carlo cross-validation. In Second International Conference on Knowledge Discovery and Data Mining, 1996.

[10] J. Tenenbaum. Rules and similarity in concept learning. In S. Solla, T. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 59–65. MIT Press, 2000.

[11] J. Tenenbaum and F. Xu. Word learning as Bayesian inference. In Proceedings of the 22nd Annual Conference of the Cognitive Science Society, 2000.

Figure 3 (plot not reproduced; axis tick data omitted): Model predictions (y-axis) plotted against human confirmation scores (x-axis). Each column shows the results for a particular model (Bayes, Max-Similarity, Sum-Similarity). Each row is a different inductive generalization experiment (General: mammals, n=3; Specific: horse, n=2; Specific: horse, n=1,2,3), where n indicates the number of examples (premises) in the stimuli. Each panel reports a correlation ρ between model predictions and human scores; the extracted values range from −0.33 to 0.97 but cannot be reliably attributed to individual panels here.
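The ρ values in Figure 3 summarize model fit as a correlation between model predictions and human confirmation scores. A minimal sketch of computing such a score, using made-up data and assuming Pearson's correlation (the figure does not specify which variant is used):

```python
import numpy as np

# Hypothetical values for illustration only; not the paper's data.
model_predictions = np.array([0.12, 0.35, 0.48, 0.61, 0.80, 0.93])
human_scores      = np.array([0.10, 0.30, 0.55, 0.58, 0.76, 0.95])

# Pearson correlation between model predictions and human judgments.
rho = np.corrcoef(model_predictions, human_scores)[0, 1]
print(f"rho = {rho:.2f}")
```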