NIPS 2011, paper 24
Authors: Mijung Park, Greg Horwitz, Jonathan W. Pillow
Abstract: A sizeable literature has focused on the problem of estimating a low-dimensional feature space for a neuron’s stimulus sensitivity. However, comparatively little work has addressed the problem of estimating the nonlinear function from feature space to spike rate. Here, we use a Gaussian process (GP) prior over the infinite-dimensional space of nonlinear functions to obtain Bayesian estimates of the “nonlinearity” in the linear-nonlinear-Poisson (LNP) encoding model. This approach offers increased flexibility, robustness, and computational tractability compared to traditional methods (e.g., parametric forms, histograms, cubic splines). We then develop a framework for optimal experimental design under the GP-Poisson model using uncertainty sampling. This involves adaptively selecting stimuli according to an information-theoretic criterion, with the goal of characterizing the nonlinearity with as little experimental data as possible. Our framework relies on a method for rapidly updating hyperparameters under a Gaussian approximation to the posterior. We apply these methods to neural data from a color-tuned simple cell in macaque V1, characterizing its nonlinear response function in the 3D space of cone contrasts. We find that it combines cone inputs in a highly nonlinear manner. With simulated experiments, we show that optimal design substantially reduces the amount of data required to estimate these nonlinear combination rules.
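To make the two central ideas concrete, here is a minimal sketch of GP-based estimation of a response nonlinearity plus uncertainty sampling. This is an illustration only, not the paper's implementation: it uses a 1D feature space, an RBF kernel with fixed hyperparameters, and a Gaussian likelihood as a stand-in for the Poisson likelihood (the paper instead uses a Gaussian approximation to the Poisson posterior and updates hyperparameters online). All function names here are hypothetical.

```python
import numpy as np

def rbf_kernel(x1, x2, length=1.0, var=1.0):
    """Squared-exponential covariance between two 1D stimulus arrays."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """GP posterior mean and pointwise variance at test stimuli.

    Gaussian-likelihood regression; a stand-in for the GP-Poisson
    posterior the paper approximates with a Gaussian.
    """
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)          # for the posterior covariance
    var = np.diag(Kss - v.T @ v)
    return mean, var

def uncertainty_sample(candidates, x_train, y_train):
    """Uncertainty sampling: present next the candidate stimulus where
    the posterior over the nonlinearity is most uncertain."""
    _, var = gp_posterior(x_train, y_train, candidates)
    return candidates[np.argmax(var)]
```

In a simulated closed-loop experiment, one would alternate between calling `uncertainty_sample` to choose the next stimulus, recording the response, and refitting the posterior; posterior variance is largest far from previously tested stimuli, so the procedure concentrates trials where the nonlinearity is least constrained.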