
157 nips-2013-Learning Multi-level Sparse Representations



Author: Ferran Diego Andilla, Fred A. Hamprecht

Abstract: Bilinear approximation of a matrix is a powerful paradigm of unsupervised learning. In some applications, however, there is a natural hierarchy of concepts that ought to be reflected in the unsupervised analysis. For example, in the neuroscience image sequences considered here, there are the semantic concepts of pixel → neuron → assembly that should find their counterpart in the unsupervised analysis. Driven by this concrete problem, we propose a decomposition of the matrix of observations into a product of more than two sparse matrices, with the rank decreasing from lower to higher levels. In contrast to prior work, we allow for both hierarchical and heterarchical relations of lower-level to higher-level concepts. In addition, we learn the nature of these relations rather than imposing them. Finally, we describe an optimization scheme that makes it possible to optimize the decomposition over all levels jointly, rather than in a greedy level-by-level fashion. The proposed bilevel SHMF (sparse heterarchical matrix factorization) is the first formalism that allows one to simultaneously interpret a calcium imaging sequence in terms of the constituent neurons, their membership in assemblies, and the time courses of both neurons and assemblies. Experiments show that the proposed model fully recovers the structure from difficult synthetic data designed to imitate the experimental data. More importantly, bilevel SHMF yields plausible interpretations of real-world calcium imaging data.
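The greedy level-by-level baseline that the abstract contrasts with joint optimization can be sketched in a few lines: factor the observation matrix once, then factor the resulting coefficient matrix again at a lower rank, yielding a product of more than two nonnegative matrices. The sketch below is a minimal illustration under assumed simplifications (plain multiplicative-update NMF, no sparsity penalty, hypothetical function names), not the paper's SHMF algorithm, which optimizes all levels jointly and enforces structured sparsity.

```python
import numpy as np

def nmf(Y, rank, n_iter=200, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF: Y ~= W @ H with nonnegative factors."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Standard Lee-Seung updates; eps guards against division by zero.
        H *= (W.T @ Y) / (W.T @ W @ H + eps)
        W *= (Y @ H.T) / (W @ H @ H.T + eps)
    return W, H

def greedy_multilevel(Y, ranks):
    """Greedy level-by-level factorization: Y ~= W1 @ W2 @ ... @ H.

    Each level refactors the previous coefficient matrix at a lower rank,
    e.g. ranks = [n_neurons, n_assemblies] for pixel -> neuron -> assembly.
    """
    factors, H = [], Y
    for r in ranks:
        W, H = nmf(H, r)
        factors.append(W)
    return factors, H
```

Each matrix in `factors` maps one conceptual level to the next (pixels to neurons, neurons to assemblies), and the final `H` holds the top-level time courses; the point of the paper is that fitting these levels jointly, with sparsity on every factor, recovers this structure more reliably than the greedy pass shown here.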


References

[1] G. Bergqvist and E. G. Larsson. The Higher-Order Singular Value Decomposition: Theory and an Application. IEEE Signal Processing Magazine, 27(3):151–154, 2010.

[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.

[3] A. Cichocki and R. Zdunek. Multilayer nonnegative matrix factorization. Electronics Letters, 42:947–948, 2006.

[4] A. Cichocki, R. Zdunek, A. H. Phan, and S. Amari. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation. Wiley, 2009.

[5] F. Diego, S. Reichinnek, M. Both, and F. A. Hamprecht. Automated identification of neuronal activity from calcium imaging by sparse dictionary learning. In International Symposium on Biomedical Imaging, in press, 2013.

[6] W. Goebel and F. Helmchen. In vivo calcium imaging of neural network function. Physiology, 2007.

[7] C. Grienberger and A. Konnerth. Imaging calcium in neurons. Neuron, 2011.

[8] Q. Ho, J. Eisenstein, and E. P. Xing. Document hierarchies from text and links. In Proc. of the 21st Int. World Wide Web Conference (WWW 2012), pages 739–748. ACM, 2012.

[9] R. Jenatton, A. Gramfort, V. Michel, G. Obozinski, E. Eger, F. Bach, and B. Thirion. Multi-scale Mining of fMRI data with Hierarchical Structured Sparsity. SIAM Journal on Imaging Sciences, 5(3), 2012.

[10] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.

[11] J. Kerr and W. Denk. Imaging in vivo: watching the brain in action. Nature Review Neuroscience, 2008.

[12] H. Kim and H. Park. Nonnegative matrix factorization based on alternating nonnegativity constrained least squares and active set method. SIAM J. on Matrix Analysis and Applications, 2008.

[13] S. Kim and E. P. Xing. Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping. Ann. Appl. Stat., 2012.

[14] Y. Li and A. Ngom. The non-negative matrix factorization toolbox for biological data mining. Source Code for Biology and Medicine, 2013.

[15] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009.

[16] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online Learning for Matrix Factorization and Sparse Coding. Journal of Machine Learning Research, 2010.

[17] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and R. Jenatton. Sparse modeling software. http://spams-devel.gforge.inria.fr/.

[18] E. A. Mukamel, A. Nimmerjahn, and M. J. Schnitzer. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 2009.

[19] M. Protter and M. Elad. Image sequence denoising via sparse and redundant representations. IEEE Transactions on Image Processing, 18(1), 2009.

[20] S. Reichinnek, A. von Kameke, A. M. Hagenston, E. Freitag, F. C. Roth, H. Bading, M. T. Hasan, A. Draguhn, and M. Both. Reliable optical detection of coherent neuronal activity in fast oscillating networks in vitro. NeuroImage, 60(1), 2012.

[21] R. Rubinstein, M. Zibulevsky, and M. Elad. Double sparsity: Learning sparse dictionaries for sparse signal approximation. IEEE Transactions on Signal Processing, 2010.

[22] A. P. Singh and G. J. Gordon. A unified view of matrix factorization models. ECML PKDD, 2008.

[23] M. Sun and H. Van Hamme. A two-layer non-negative matrix factorization model for vocabulary discovery. In Symposium on machine learning in speech and language processing, 2011.

[24] Q. Sun, P. Wu, Y. Wu, M. Guo, and J. Lu. Unsupervised multi-level non-negative matrix factorization model: Binary data case. Journal of Information Security, 2012.

[25] J. Yang, Z. Wang, Z. Lin, X. Shu, and T. S. Huang. Bilevel sparse coding for coupled feature spaces. In CVPR’12, pages 2360–2367. IEEE, 2012.

[26] B. Zhao, L. Fei-Fei, and E. P. Xing. Online detection of unusual events in videos via dynamic sparse coding. In The Twenty-Fourth IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011.