NIPS 2008, Paper 248
Authors: Ilya Sutskever, Geoffrey E. Hinton
Abstract: We describe a way of learning matrix representations of objects and relationships. The goal of learning is to allow multiplication of matrices to represent symbolic relationships between objects and symbolic relationships between relationships; the latter is the main novelty of the method. We demonstrate that this leads to excellent generalization in two different domains: modular arithmetic and family relationships. We show that the same system can learn first-order propositions such as (2, 5) ∈ +3 or (Christopher, Penelope) ∈ has wife, and higher-order propositions such as (3, +3) ∈ plus and (+3, −3) ∈ inverse or (has husband, has wife) ∈ higher oppsex. We further demonstrate that the system understands how higher-order propositions are related to first-order ones by showing that it can correctly answer questions about first-order propositions involving the relations +3 or has wife even though it has not been trained on any first-order examples involving these relations.
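To make the mechanism in the abstract concrete, here is a minimal sketch, not the authors' implementation: every symbol (digit, first-order relation, or higher-order relation) is assigned an n x n matrix, a proposition (a, b) ∈ R is modeled by requiring M_R M_a ≈ M_b, and training uses a plain squared-error objective with per-epoch normalization. The objective, the normalization, and all names and hyperparameters below are illustrative assumptions; the paper's actual training criterion and its held-out-relation experiment are not reproduced here.

import numpy as np

# Sketch (assumed details, not the paper's objective): every symbol gets
# an n x n matrix, and a proposition (a, b) in R is scored by how close
# M_R @ M_a is to M_b under the Frobenius norm.
rng = np.random.default_rng(0)
n = 4
symbols = [str(d) for d in range(10)] + ["+1", "+3", "plus"]
M = {s: rng.normal(size=(n, n)) for s in symbols}
for s in symbols:
    M[s] /= np.linalg.norm(M[s])  # start on the unit Frobenius sphere

# First-order facts: (a, b) in +k iff (a + k) mod 10 == b.
facts = [(str(a), "+1", str((a + 1) % 10)) for a in range(10)]
facts += [(str(a), "+3", str((a + 3) % 10)) for a in range(10)]
# Higher-order facts take exactly the same triple form, (k, plus, +k),
# because relations are themselves matrices.
facts += [("1", "plus", "+1"), ("3", "plus", "+3")]

lr = 0.05
for epoch in range(1000):
    for a, r, b in facts:
        err = M[r] @ M[a] - M[b]               # residual of R A ~ B
        g_r, g_a = err @ M[a].T, M[r].T @ err  # gradients of 0.5*||err||^2
        M[r] -= lr * g_r
        M[a] -= lr * g_a
        M[b] += lr * err                       # gradient w.r.t. B is -err
    for s in symbols:
        M[s] /= np.linalg.norm(M[s])  # renormalizing rules out the
                                      # degenerate all-zeros solution

# Answer a first-order query by nearest-neighbour matching:
# which digit does +3 map 2 to? (5, if training has succeeded)
pred = M["+3"] @ M["2"]
best = min(range(10), key=lambda d: np.linalg.norm(pred - M[str(d)]))
print("(2, ?) in +3 ->", best)

The point the abstract highlights is visible in the shape of the data: the higher-order facts about plus enter the same loss, in the same triple format, as the first-order facts about +1 and +3, because a relation's representation is just another matrix.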
[1] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155, 2003.
[2] L.A.A. Doumas, J.E. Hummel, and C.M. Sandhofer. A theory of the discovery and predication of relational concepts. Psychological Review, 115(1):1, 2008.
[3] G.E. Hinton. Learning distributed representations of concepts. Proceedings of the Eighth Annual Conference of the Cognitive Science Society, pages 1–12, 1986.
[4] J.E. Hummel and K.J. Holyoak. A symbolic-connectionist theory of relational inference and generalization. Psychological Review, 110(2):220–264, 2003.
[5] R. Memisevic and G.E. Hinton. Unsupervised learning of image transformations. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[6] T.M. Mitchell. The need for biases in learning generalizations. Readings in Machine Learning. Morgan Kaufmann, 1991.
[7] S. Muggleton and L. De Raedt. Inductive logic programming: Theory and methods. Journal of Logic Programming, 19–20:629–679, 1994.
[8] R.C. O’Reilly. The LEABRA model of neural interactions and learning in the neocortex. PhD thesis, Carnegie Mellon University, 1996.
[9] A. Paccanaro. Learning distributed representations of relational data using linear relational embedding. PhD thesis, University of Toronto, 2002.
[10] A. Paccanaro and G.E. Hinton. Learning distributed representations of concepts using linear relational embedding. IEEE Transactions on Knowledge and Data Engineering, 13(2):232–245, 2001.
[11] R.P.N. Rao and D.H. Ballard. Development of localized oriented receptive fields by learning a translation-invariant code for natural images. Network: Computation in Neural Systems, 9(2):219–234, 1998.
[12] D.E. Rumelhart, G.E. Hinton, and J.L. McClelland. A general framework for parallel distributed processing. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1, pages 45–76. MIT Press, 1986.
[13] J.B. Tenenbaum and W.T. Freeman. Separating style and content with bilinear models. Neural Computation, 12(6):1247–1283, 2000.