NIPS 2008, Paper 96
Authors: Bernhard Nessler, Michael Pfeiffer, Wolfgang Maass
Abstract: Uncertainty is omnipresent when we perceive or interact with our environment, and the Bayesian framework provides computational methods for dealing with it. Mathematical models for Bayesian decision making typically require data structures that are hard to implement in neural networks. This article shows that even the simplest and experimentally best supported type of synaptic plasticity, Hebbian learning, in combination with a sparse, redundant neural code, can in principle learn to infer optimal Bayesian decisions. We present a concrete Hebbian learning rule operating on log-probability ratios. Modulated by reward signals, this Hebbian plasticity rule also provides a new perspective for understanding how Bayesian inference could support fast reinforcement learning in the brain. In particular, we show that recent experimental results by Yang and Shadlen [1] on reinforcement learning of probabilistic inference in primates can be modeled in this way.
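The core idea of the abstract — that weights storing log-probability ratios turn a Bayes-optimal decision into a simple weighted sum — can be illustrated with a toy sketch. This is not the paper's exact plasticity rule; it is a minimal stand-in in which weights are estimated from co-activation counts (a Hebbian-flavored statistic) for binary features that are assumed conditionally independent given the class, so the summed log-odds reproduce the Bayes decision:

```python
import math
import random

random.seed(0)

# Hypothetical toy setup (not the paper's model): two classes, binary
# features conditionally independent given the class label.
N_FEATURES = 8
P_PRIOR = 0.5  # P(class = 1)
p_given = [[random.uniform(0.1, 0.9) for _ in range(N_FEATURES)]
           for _ in range(2)]  # P(feature_i = 1 | class)

def sample(cls):
    return [1 if random.random() < p_given[cls][i] else 0
            for i in range(N_FEATURES)]

# Hebbian-style counting: track how often each feature value co-occurs
# with each class label (Laplace-smoothed); weights below are the
# resulting log-probability ratios.
counts = [[[1.0, 1.0] for _ in range(N_FEATURES)] for _ in range(2)]
n_cls = [2.0, 2.0]

for _ in range(5000):
    cls = 1 if random.random() < P_PRIOR else 0
    x = sample(cls)
    n_cls[cls] += 1
    for i, xi in enumerate(x):
        counts[cls][i][xi] += 1

def log_odds(x):
    """Learned log P(class=1|x)/P(class=0|x) as a weighted sum."""
    s = math.log(n_cls[1] / n_cls[0])  # prior log-odds
    for i, xi in enumerate(x):
        s += (math.log(counts[1][i][xi] / n_cls[1])
              - math.log(counts[0][i][xi] / n_cls[0]))
    return s

# Compare the learned decisions against the exact Bayes classifier,
# which uses the true generative parameters.
trials, agree = 2000, 0
for _ in range(trials):
    cls = 1 if random.random() < P_PRIOR else 0
    x = sample(cls)
    exact = 0.0
    for i, xi in enumerate(x):
        p1 = p_given[1][i] if xi else 1 - p_given[1][i]
        p0 = p_given[0][i] if xi else 1 - p_given[0][i]
        exact += math.log(p1 / p0)
    if (log_odds(x) > 0) == (exact > 0):
        agree += 1
print(f"agreement with exact Bayes decision: {agree / trials:.3f}")
```

With enough samples the counted log-odds converge to the true ones, so the learned weighted-sum decision agrees with the exact Bayes decision on nearly all inputs; the paper's contribution is showing that a local Hebbian rule can reach equivalent log-ratio weights.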
[1] T. Yang and M. N. Shadlen. Probabilistic reasoning by neurons. Nature, 447:1075–1080, 2007.
[2] R. P. N. Rao. Neural models of Bayesian belief propagation. In K. Doya, S. Ishii, A. Pouget, and R. P. N. Rao, editors, Bayesian Brain, pages 239–267. MIT Press, 2007.
[3] C. M. Bishop. Pattern Recognition and Machine Learning. Springer (New York), 2006.
[4] S. Denève. Bayesian spiking neurons I, II. Neural Computation, 20(1):91–145, 2008.
[5] A. Sandberg, A. Lansner, K. M. Petersson, and O. Ekeberg. A Bayesian attractor network with incremental learning. Network: Computation in Neural Systems, 13:179–194, 2002.
[6] D. Roth. Learning in natural language. In Proc. of IJCAI, pages 898–904, 1999.
[7] D. O. Hebb. The Organization of Behavior. Wiley, New York, 1949.
[8] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[9] B. Nessler, M. Pfeiffer, and W. Maass. Journal version, in preparation, 2009.
[10] L. P. Sugrue, G. S. Corrado, and W. T. Newsome. Matching behavior and the representation of value in the parietal cortex. Science, 304:1782–1787, 2004.
[11] J. S. Ide and F. G. Cozman. Random generation of Bayesian networks. In Proceedings of the 16th Brazilian Symposium on Artificial Intelligence, pages 366–375, 2002.
[12] R. A. Rescorla and A. R. Wagner. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black and W. F. Prokasy, editors, Classical Conditioning II: Current Research and Theory, pages 64–99. Appleton-Century-Crofts, 1972.
[13] A. Y. Ng and M. I. Jordan. On discriminative vs. generative classifiers. NIPS, 14:841–848, 2002.