nips2001-53-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Wheeler Ruml
Abstract: If the promise of computational modeling is to be fully realized in higher-level cognitive domains such as language processing, principled methods must be developed to construct the semantic representations used in such models. In this paper, we propose the use of an established formalism from mathematical psychology, additive clustering, as a means of automatically constructing binary representations for objects using only pairwise similarity data. However, existing methods for the unsupervised learning of additive clustering models do not scale well to large problems. We present a new algorithm for additive clustering, based on a novel heuristic technique for combinatorial optimization. The algorithm is simpler than previous formulations and makes fewer independence assumptions. Extensive empirical tests on both human and synthetic data suggest that it is more effective than previous methods and that it also scales better to larger problems. By making additive clustering practical, we take a significant step toward scaling connectionist models beyond hand-coded examples.
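The abstract does not reproduce the model itself, so the following minimal sketch illustrates the standard additive clustering (ADCLUS) formulation of Shepard and Arabie (1979) that the paper builds on: the similarity between two objects is modeled as the summed weights of the discrete, possibly overlapping clusters containing both objects, plus an additive constant. The function names and the toy data are illustrative assumptions, not the paper's algorithm or code.

```python
# Sketch of the ADCLUS model: s_hat[i, j] = sum_k w[k] * F[i, k] * F[j, k] + c,
# where F is a binary (n_objects x n_clusters) membership matrix and w >= 0.
import numpy as np

def model_similarity(memberships, weights, constant):
    """Reconstruct the modeled similarity matrix F diag(w) F^T + c."""
    F = memberships.astype(float)
    return F @ np.diag(weights) @ F.T + constant

def variance_accounted_for(similarity, memberships, weights, constant):
    """Proportion of variance in the off-diagonal similarities explained
    by the model -- the fit statistic commonly reported for ADCLUS models."""
    n = similarity.shape[0]
    mask = ~np.eye(n, dtype=bool)            # ignore self-similarities
    s = similarity[mask]
    s_hat = model_similarity(memberships, weights, constant)[mask]
    return 1.0 - np.sum((s - s_hat) ** 2) / np.sum((s - s.mean()) ** 2)

if __name__ == "__main__":
    # Toy example: 4 objects, two overlapping clusters {0, 1, 2} and {2, 3}.
    F = np.array([[1, 0],
                  [1, 0],
                  [1, 1],
                  [0, 1]])
    w = np.array([0.6, 0.3])
    c = 0.1
    S = model_similarity(F, w, c)
    print(S)
    print(variance_accounted_for(S, F, w, c))  # 1.0 on this noise-free data
```

Fitting the model means searching the combinatorial space of binary membership matrices (and the associated nonnegative weights) to maximize this variance accounted for; the paper's contribution is a heuristic for that search, which is not reproduced here.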
References:
Arabie, Phipps and J. Douglas Carroll. 1980. MAPCLUS: A mathematical programming approach to fitting the ADCLUS model. Psychometrika, 45(2):211–235, June.
Baluja, Shumeet. 1997. Genetic algorithms and explicit search statistics. In Michael C. Mozer, Michael I. Jordan, and Thomas Petsche, editors, NIPS 9.
Boese, Kenneth D., Andrew B. Kahng, and Sudhakar Muddu. 1994. A new adaptive multi-start technique for combinatorial global optimizations. Operations Research Letters, 16:101–113.
Carroll, J. Douglas and Phipps Arabie. 1983. INDCLUS: An individual differences generalization of the ADCLUS model and the MAPCLUS algorithm. Psychometrika, 48(2):157–169, June.
Chaturvedi, Anil and J. Douglas Carroll. 1994. An alternating combinatorial optimization approach to fitting the INDCLUS and generalized INDCLUS models. Journal of Classification, 11:155–170.
Clouse, Daniel S. and Garrison W. Cottrell. 1996. Discrete multi-dimensional scaling. In Proceedings of the 18th Annual Conference of the Cognitive Science Society, pp. 290–294.
Hojo, Hiroshi. 1983. A maximum likelihood method for additive clustering and its applications. Japanese Psychological Research, 25(4):191–201.
Kernighan, B. and S. Lin. 1970. An efficient heuristic procedure for partitioning graphs. The Bell System Technical Journal, 49(2):291–307, February.
Kiers, Henk A. L. 1997. A modification of the SINDCLUS algorithm for fitting the ADCLUS and INDCLUS models. Journal of Classification, 14(2):297–310.
Lee, Michael D. In press. A simple method for generating additive clustering models with limited complexity. Machine Learning.
Mechelen, I. Van and G. Storms. 1995. Analysis of similarity data and Tversky's contrast model. Psychologica Belgica, 35(2–3):85–102.
Noelle, David C., Garrison W. Cottrell, and Fred R. Wilms. 1997. Extreme attraction: On the discrete representation preference of attractor networks. In M. G. Shafto and P. Langley, editors, Proceedings of the 19th Annual Conference of the Cognitive Science Society, p. 1000.
Ruml, Wheeler, J. Thomas Ngo, Joe Marks, and Stuart Shieber. 1996. Easily searched encodings for number partitioning. Journal of Optimization Theory and Applications, 89(2).
Shepard, Roger N. and Phipps Arabie. 1979. Additive clustering: Representation of similarities as combinations of discrete overlapping properties. Psychological Review, 86(2):87–123, March.
Stark, Philip B. and Robert L. Parker. 1995. Bounded-variable least-squares: An algorithm and applications. Computational Statistics, 10:129–141.
Tenenbaum, Joshua B. 1996. Learning the structure of similarity. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, NIPS 8.