
157 nips-2013-Learning Multi-level Sparse Representations


Source: pdf

Author: Ferran Diego Andilla, Fred A. Hamprecht

Abstract: Bilinear approximation of a matrix is a powerful paradigm of unsupervised learning. In some applications, however, there is a natural hierarchy of concepts that ought to be reflected in the unsupervised analysis. For example, in the neurosciences image sequence considered here, there are the semantic concepts of pixel → neuron → assembly that should find their counterpart in the unsupervised analysis. Driven by this concrete problem, we propose a decomposition of the matrix of observations into a product of more than two sparse matrices, with the rank decreasing from lower to higher levels. In contrast to prior work, we allow for both hierarchical and heterarchical relations of lower-level to higher-level concepts. In addition, we learn the nature of these relations rather than imposing them. Finally, we describe an optimization scheme that optimizes the decomposition over all levels jointly, rather than in a greedy level-by-level fashion. The proposed bilevel SHMF (sparse heterarchical matrix factorization) is the first formalism that makes it possible to simultaneously interpret a calcium imaging sequence in terms of the constituent neurons, their membership in assemblies, and the time courses of both neurons and assemblies. Experiments show that the proposed model fully recovers the structure from difficult synthetic data designed to imitate the experimental data. More importantly, bilevel SHMF yields plausible interpretations of real-world calcium imaging data.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 For example, in the neurosciences image sequence considered here, there are the semantic concepts of pixel → neuron → assembly that should find their counterpart in the unsupervised analysis. [sent-8, score-0.403]

2 Driven by this concrete problem, we propose a decomposition of the matrix of observations into a product of more than two sparse matrices, with the rank decreasing from lower to higher levels. [sent-9, score-0.171]

3 In contrast to prior work, we allow for both hierarchical and heterarchical relations of lower-level to higher-level concepts. [sent-10, score-0.217]

4 The proposed bilevel SHMF (sparse heterarchical matrix factorization) is the first formalism that makes it possible to simultaneously interpret a calcium imaging sequence in terms of the constituent neurons, their membership in assemblies, and the time courses of both neurons and assemblies. [sent-13, score-1.033]

5 More importantly, bilevel SHMF yields plausible interpretations of real-world calcium imaging data. [sent-15, score-0.35]

6 1 Introduction: This work was stimulated by a concrete problem, namely the decomposition of state-of-the-art 2D+time calcium imaging sequences as shown in Fig. 1. [sent-16, score-0.407]

7 Leveraging sparsity constraints seems natural, given that the neural activations are sparse in both space and time. [sent-19, score-0.132]

8 The experimentally achievable optical slice thickness still results in spatial overlap of cells, meaning that each pixel can show intensity from more than one neuron. [sent-20, score-0.127]

9 All neurons of an assembly are expected to fire at roughly the same time [20]. [sent-22, score-0.391]

10 A standard sparse decomposition of the set of vectorized images into a dictionary and a set of coefficients would not conform with prior knowledge that we have entities at three levels: the pixels, the neurons, and the assemblies, see Fig. [sent-23, score-0.332]

11 We propose a decomposition (Fig. 3) that allows enforcing (structured) sparsity constraints at each level, and admits both hierarchical and heterarchical relations between levels (Fig. [sent-27, score-0.346]

12 Figure 1: Left: frames from a calcium imaging sequence showing firing neurons that were recorded by an epi-fluorescence microscope. [sent-31, score-0.56]

13 The underlying biological aim motivating these experiments is to study the role of neuronal assemblies in memory consolidation. [sent-33, score-0.764]

14 1.1 Relation to Previous Work: The most important unsupervised data analysis methods, such as PCA, NMF/pLSA, ICA, cluster analysis, and sparse coding, can be written in terms of a bilinear decomposition of, or approximation to, a two-way matrix of raw data [22]. [sent-35, score-0.291]

15 These do not use structured sparsity constraints, but go beyond our approach in automatically estimating the appropriate number of levels using nonparametric Bayesian models. [sent-41, score-0.133]

16 [10] introduce structured sparsity constraints that we use to find dictionary basis functions representing single neurons. [sent-43, score-0.288]

17 In contrast, the method proposed here can infer either hierarchical (tree-structured) or heterarchical (directed acyclic graph) relations between entities at different levels. [sent-46, score-0.289]

18 This is a multi-stage procedure which iteratively decomposes the rightmost matrix of the decomposition that was previously found. [sent-48, score-0.16]

19 [21] proposed a novel dictionary structure where each basis function in a dictionary is a linear combination of a few elements from a fixed base dictionary. [sent-51, score-0.323]

20 The idea of dictionary learning is to find a decomposition X ≈ D [U0]^T; see Fig. [sent-67, score-0.197]

21 ΩD prevents the inflation of dictionary entries to compensate for small coefficients, and induces, if desired, additional structure on the learned basis functions [16]. [sent-71, score-0.185]
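
To make this concrete, here is a minimal numpy sketch of the coefficient update for a fixed dictionary D, assuming a plain squared loss with an elementwise l1 penalty (ISTA); the structured norms used in the paper are richer than this, so treat it as an illustration only. The step size 1/||D^T D||_2 is the standard Lipschitz choice for the smooth part.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_codes(X, D, lam=0.1, n_iter=200):
    """ISTA for U = argmin_U 0.5 * ||X - D U^T||_F^2 + lam * ||U||_1.
    X: (m, n) data; D: (m, q0) dictionary with unit-norm columns.
    Returns U: (n, q0), one row of coefficients per time step."""
    U = np.zeros((X.shape[1], D.shape[1]))
    step = 1.0 / np.linalg.norm(D.T @ D, 2)   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = (D @ U.T - X).T @ D            # gradient of the smooth data-fit term
        U = soft_threshold(U - step * grad, step * lam)
    return U
```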

22 Figure 2: Bottom left: shown are the temporal activation patterns of individual neurons U0 (lower level) and of assemblies of neurons U1 (upper level), linked by a heterarchical correspondence (axes: neuron/assembly id vs. time in frames). [sent-77, score-2.066]

23 Neurons D and assemblies are related by a bipartite graph A1, the estimation of which is a central goal of this work. [sent-78, score-0.633]

24 The signature of five neuronal assemblies (five columns of D A1) in the spatial domain is shown at the top. [sent-79, score-0.797]

25 The outlines in the bottom middle show the union of all neurons found in D, superimposed onto a maximum intensity projection across the background-subtracted raw image sequence. [sent-80, score-0.355]

26 The raw data comes from a mouse hippocampal slice, where single neurons can indeed be part of more than one assembly [20]. [sent-82, score-0.49]

27 We would like to find the following: • a dictionary D of q0 vectorized images comprising m pixels each. [sent-92, score-0.159]

28 • a matrix A1 indicating to what extent each of the q0 neurons is associated with any of the q1 neuronal assemblies. [sent-94, score-0.388]

29 In the following, we will refer to this matrix interchangeably as an assignment or an adjacency matrix. [sent-95, score-0.176]

30 • a coefficient matrix [U1]^T that encodes in its rows the temporal evolution (activation) of the q1 neuronal assemblies across n time steps. [sent-98, score-0.894]

31 • a coefficient matrix [U0]^T (Fig. 3(b)) that encodes in its rows the temporal activation of the q0 neuron basis functions across n time steps. [sent-100, score-0.27]

32 To illustrate, assume for the moment that only a single neuronal assembly is active at any given time. [sent-105, score-0.315]

33 Then all neurons associated with that assembly would follow an absolutely identical time course. [sent-106, score-0.412]

34 While it is expected that neurons from an assembly show similar activation patterns [20], this is something we want to glean from the data, and not absolutely impose. [sent-107, score-0.498]

35 In response, we introduce an auxiliary matrix U0 ≈ U1 [A1]^T showing the temporal activation pattern of individual neurons. [sent-108, score-0.185]
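
Putting the two levels together, the bilevel model couples X ≈ D [U0]^T with U0 ≈ U1 [A1]^T. As a rough, hypothetical illustration of joint (rather than level-by-level) optimization, and emphatically not the authors' algorithm (which uses structured sparsity norms), a naive proximal-gradient loop over all four factors might look like this:

```python
import numpy as np

def bilevel_factorization(X, q0, q1, lam=0.1, mu=1.0, n_iter=300, lr=1e-3, seed=0):
    """Crude proximal-gradient sketch of a bilevel sparse factorization:
    X ~ D @ U0.T with the coupling U0 ~ U1 @ A1.T (all factors sparse).
    Illustration only; NOT the optimization scheme of the paper."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    D  = rng.standard_normal((m, q0)) * 0.01
    U0 = rng.standard_normal((n, q0)) * 0.01
    A1 = np.abs(rng.standard_normal((q0, q1))) * 0.01   # non-negative memberships
    U1 = rng.standard_normal((n, q1)) * 0.01
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        R0 = D @ U0.T - X                # data-fit residual, shape (m, n)
        R1 = U0 - U1 @ A1.T              # coupling residual, shape (n, q0)
        D -= lr * (R0 @ U0)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)  # unit-norm columns
        U0 = shrink(U0 - lr * (R0.T @ D + mu * R1), lr * lam)
        A1 = np.maximum(shrink(A1 + lr * mu * (R1.T @ U1), lr * lam), 0.0)
        U1 = shrink(U1 + lr * mu * (R1 @ A1), lr * lam)
    return D, U0, A1, U1
```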

36 3 Trilevel and Multi-level Sparse Matrix Factorization: We now discuss the generalization to an arbitrary number of levels that may be relevant for applications other than calcium imaging. [sent-118, score-0.278]

37 Assume, first, that the relations between lower-level and higher-level concepts obey a strict inclusion hierarchy. [sent-123, score-0.161]

38 Such a forest can also be seen as a special case of an (L+1)-partite graph, with an adjacency matrix Al specifying the parents of each concept at level l − 1. [sent-126, score-0.133]

39 In that case, the relations between concepts can be expressed in terms of a concatenation of bipartite graphs that conform with a directed acyclic graph. [sent-129, score-0.186]
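
Once the adjacency matrices A1, ..., AL are estimated, concepts at any level can be mapped back to pixel space by propagating the dictionary through the chain of adjacency matrices, just as the columns of D A1 give the assembly signatures in Fig. 2. A small hypothetical numpy helper:

```python
import numpy as np

def level_signatures(D, adjacency):
    """Spatial signatures at every level of a multi-level decomposition.
    Level 0 is the dictionary D itself; level l uses D @ A1 @ ... @ Al.
    `adjacency` is a list [A1, ..., AL]. Heterarchical (DAG) relations simply
    mean a concept may have several parents, i.e. rows of Al with >1 nonzero."""
    sigs, S = [D], D
    for A in adjacency:
        S = S @ A                 # propagate membership one level up
        sigs.append(S)
    return sigs
```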

40 The formulation of Fig. 3(d) is a principled alternative to simpler approaches that would impose the relations between concepts, or estimate them separately using, for instance, clustering algorithms, and that would then find a sparse factorization subject to this structure. [sent-132, score-0.25]

41 Instead, we simultaneously estimate the relation between concepts at different levels, as well as find a sparse approximation to the raw data. [sent-133, score-0.233]

42 Indeed, it is possible to define convex norms that not only induce sparse solutions, but also favor non-zero patterns of a specific structure, such as sets of variables in a convex polygon with certain symmetry constraints [10]. [sent-140, score-0.144]

43 Following [5], we use such norms to bias towards neuron basis functions holding a single neuron only. [sent-141, score-0.245]
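
Such sparsity-inducing norms are usually handled through their proximal operators. As a minimal illustration of the mechanism (the plain group-l2 case, not the convex-polygon norms of [10] themselves), block soft-thresholding zeroes out entire groups of variables at once:

```python
import numpy as np

def prox_group_l2(v, groups, t):
    """Proximal operator of t * sum_g ||v_g||_2 (block soft-thresholding).
    Entire groups are shrunk to zero at once, which is the basic mechanism
    behind structured sparsity penalties."""
    out = v.copy()
    for g in groups:                       # g: index array of one group
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm == 0 else max(0.0, 1 - t / norm) * v[g]
    return out
```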

44 Methods: decomposition into neurons and their transients only. Cell Sorting [18] and Adina [5] focus only on the detection of cell centroids and of cell shapes, and on the estimation and analysis of calcium transient signals. [sent-146, score-0.576]

45 However, these methods provide no means to detect and identify neuronal co-activation. [sent-147, score-0.131]

46 The key idea is to decompose calcium imaging data into constituent signal sources, i. [sent-148, score-0.34]

47 In contrast, Adina relies on a matrix factorization based on sparse coding and dictionary learning [15], exploiting that neuronal activity is sparsely distributed in both space and time. [sent-152, score-0.538]

48 Without such a segmentation step, overlapping cells or those with highly correlated activity are often associated with the same basis function. [sent-154, score-0.235]

49 Decomposition into neurons, their transients, and assemblies of neurons. MNNMF+Adina: Here, we combine a multilayer extension of non-negative matrix factorization with the segmentation from Adina. [sent-156, score-1.072]

50 MNNMF [3] is a multi-stage procedure that iteratively decomposes the rightmost matrix of the decomposition that was previously found. [sent-157, score-0.16]

51 In the first stage, we decompose the calcium imaging data into spatial and temporal components, just like the methods cited above, but using NMF and a non-negative least squares loss function [12] as implemented in [14]. [sent-158, score-0.432]
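
A minimal sketch of such an alternating non-negative least-squares factorization, using scipy's nnls solver; this stands in for the cited implementation [12, 14] rather than reproducing it:

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(X, rank, n_iter=30, seed=0):
    """Alternating non-negative least squares: X ~ W @ H with W, H >= 0.
    A simple sketch in the spirit of [12], not the cited toolbox [14]."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = np.abs(rng.standard_normal((m, rank)))
    H = np.zeros((rank, n))
    for _ in range(n_iter):
        for j in range(n):                   # update H column by column
            H[:, j], _ = nnls(W, X[:, j])
        for i in range(m):                   # update W row by row via X[i,:] ~ W[i,:] @ H
            W[i, :], _ = nnls(H.T, X[i, :])
    return W, H
```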

52 We then use the segmentation from [5] to obtain single neurons in an updated dictionary D (see footnote 1). [sent-159, score-0.254]

53 Altogether, this procedure allows identifying neuronal assemblies and their temporal evolution. [sent-162, score-0.844]

54 However, the exact number of assemblies q1 must be defined a priori. [sent-163, score-0.633]

55 KSVDS+Adina allows estimating a sparse decomposition [21] X ≈ D A1 [U1]^T, provided that i) a dictionary of basis functions and ii) the exact number of assemblies are supplied as input. [sent-164, score-0.939]

56 We obtain good results when supplying the purged dictionary of single neurons (see footnote 1) resulting from Adina [5]. [sent-166, score-0.245]

57 SHMF (Sparse Heterarchical Matrix Factorization) in its bilevel formulation decomposes the raw data simultaneously into neuron basis functions D, a mapping A1 of these to assemblies, and the time courses of neurons U0 and of assemblies U1; see the equation in Fig. [sent-167, score-1.991]

58 In addition, we impose the l2-norm at the assembly level, Ω1. (Footnote 1: Without such a segmentation step, the dictionary atoms often comprise more than one neuron, and overall results (not shown) are poor.) [sent-170, score-0.418]

59 Exceptions arise only in the case of cells which both overlap in space and have high temporal correlation. [sent-174, score-0.174]

60 Finally, the numbers of neurons q0 and of neuronal assemblies q1 are set to generous upper bounds on the expected true numbers and, for simplicity, to the same value (here q0 = q1 = 60). [sent-178, score-0.971]

61 Since neuronal assemblies are still the subject of ongoing research, ground truth is not available for such real-world data. [sent-182, score-0.764]

62 The data is created by randomly selecting cell shapes from 36 different active cells extracted from real data, and placing them at different locations with an overlap of up to 30%. [sent-185, score-0.152]

63 Each assembly fires according to a dependent Poisson process, with transient shapes following a one-sided exponential decay with a scale of 500 to 800 ms, convolved with a Gaussian kernel with σ = 50 ms. [sent-187, score-0.266]
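
A hypothetical generator for one such transient, following the stated parameters; the frame duration is an assumption, since it is not given in this excerpt:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def synthetic_transient(n_frames, onset, frame_ms=100.0, scale_ms=650.0,
                        sigma_ms=50.0, amplitude=1.0):
    """One-sided exponential decay (scale in the stated 500-800 ms range)
    smoothed by a Gaussian kernel with sigma = 50 ms.
    frame_ms is an assumed frame duration; `onset` is given in frames."""
    t = np.arange(n_frames, dtype=float) * frame_ms
    trace = np.zeros(n_frames)
    mask = t >= onset * frame_ms
    trace[mask] = amplitude * np.exp(-(t[mask] - onset * frame_ms) / scale_ms)
    return gaussian_filter1d(trace, sigma=sigma_ms / frame_ms)
```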

64 The dependency is induced by eliminating all transients that overlap by more than 20%. [sent-188, score-0.145]

65 Within such a transient, the neurons associated with the assembly fire with a probability of 90% each. [sent-189, score-0.391]

66 The number of cells per assembly varies from 1 to 10, and we use five assemblies in all experiments. [sent-190, score-0.883]

67 By construction, the identity, location and activity patterns of all cells along with their membership in assemblies are known. [sent-193, score-0.784]

68 Identification of assemblies: First, we want to quantify the ability to correctly infer assemblies from an image sequence. [sent-196, score-1.327]

69 To that end, we compute the graph edit distance of the estimated assignments of neurons to assemblies, encoded in matrices A1 , to the known ground truth. [sent-197, score-0.251]

70 We count the number of false positive and false negative edges in the assignment graphs, where vertices (assemblies) are matched by minimizing the Hamming distance between binarized assignment matrices over all permutations. [sent-198, score-0.164]
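
Since only five assemblies are used, matching over all permutations is feasible. A brute-force sketch, assuming the estimated and true assignment matrices have the same number of columns (e.g. after padding with zero columns):

```python
import numpy as np
from itertools import permutations

def edge_errors(A_est, A_true, thresh=0.0):
    """Match estimated assemblies (columns) to ground truth over all
    permutations by minimizing the Hamming distance between the binarized
    assignment matrices, then count false positive / negative edges."""
    B_est = (A_est > thresh).astype(int)
    B_true = (A_true > thresh).astype(int)
    q = B_true.shape[1]
    best = None
    for perm in permutations(range(q)):        # feasible for small q (here q = 5)
        ham = np.sum(B_est[:, list(perm)] != B_true)
        if best is None or ham < best[0]:
            best = (ham, perm)
    P = B_est[:, list(best[1])]
    fp = int(np.sum((P == 1) & (B_true == 0)))  # spurious neuron-assembly edges
    fn = int(np.sum((P == 0) & (B_true == 1)))  # missed edges
    return fp, fn
```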

71 Accordingly, adjacency matrices A1 ∈ R^(q0×q1) were estimated for different numbers of assemblies q1 ∈ [3, 7]. [sent-200, score-0.671]

72 The methods described above give respectable performance in the task of inferring neuronal assemblies from nontrivial synthetic image sequences. [sent-206, score-0.842]

73 For the true number of assemblies (q1 = 5), Bilevel SHMF reaches a higher sensitivity than the alternative methods, with a median difference of 14%. [sent-207, score-0.675]

74 The methods described above also infer the temporal activity of all assemblies, U1. [sent-210, score-0.157]

75 We omit a comparison of these matrices for lack of a good metric that would also take into account the correctness of the assemblies themselves: a fine time course has little worth if its associated assembly is deficient, for instance by having lost some neurons with respect to ground truth. [sent-211, score-1.068]

76 Figure 4 (axes: sensitivity, precision): Performance on learning correct assignments of neurons to assemblies from nontrivial synthetic data with ground truth. [sent-212, score-0.88]

77 KSVDS+Adina and MNNMF+Adina require that the number of assemblies q1 be fixed in advance. [sent-213, score-0.633]

78 In contrast, bilevel SHMF estimates the number of assemblies given an upper bound. [sent-214, score-0.883]

79 Detection of calcium transients: While the detection of assemblies as evaluated above is completely new in the literature, we now turn to a better-studied problem [18, 5]: the detection of calcium transients of individual neurons. [sent-218, score-1.413]

80 To quantify transient detection performance, we compute the sensitivity and precision as in [20]. [sent-221, score-0.206]

81 Here, sensitivity is the ratio of correctly detected to all true neuronal activities, and precision is the ratio of correctly detected to all detected neuronal activities. [sent-222, score-0.41]
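
In code, with tp, fp and fn counted over matched events (a trivial hypothetical helper):

```python
def sensitivity_precision(tp, fp, fn):
    """Sensitivity = detected true events / all true events;
    precision   = detected true events / all detections (as in [20])."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, precision
```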

82 Figure 5: Sensitivity and precision of transient detection for individual neurons. [sent-225, score-0.164]

83 Methods that estimate both assemblies and neuron transients perform at least as well as their simpler counterparts that focus on the latter. [sent-226, score-0.838]

84 This is not self-evident, because a bilevel factorization could be expected to be more ill-posed than a single level factorization. [sent-230, score-0.375]

85 We make two observations: Firstly, it seems that using a bilevel representation with suitable regularization constraints also helps stabilize the activity estimates for single neurons. [sent-231, score-0.333]

86 Incidentally, the wide spread of both sensitivities and precisions results from the great variety of noise levels used in the simulations, and attests to the difficulty of part of the synthetic data sets. [sent-233, score-0.126]

87 Real Sequences: We have applied bilevel SHMF to epifluorescence data sets from mouse (C57BL6) hippocampal slice cultures. [sent-239, score-0.305]

88 As shown in Fig. 2, the method is able to distinguish overlapping cells and highly correlated cells, while at the same time estimating neuronal co-activation patterns (assemblies). [sent-241, score-0.249]

89 Exploiting spatio-temporal sparsity and convex cell-shape priors allows accurate inference of the transient events. [sent-242, score-0.204]

90 On the application side, the proposed method accomplishes the detection of neurons, assemblies, and their relations in a single framework, exploiting sparseness in the temporal and spatial domains in the process. [sent-247, score-0.83]

91 As shown in Fig. 6, this approach is able to reconstruct the raw data at both levels of representation, and to make plausible proposals for neuron and assembly identification. [sent-250, score-0.403]

92 Given the experimental importance of calcium imaging, automated methods in the spirit of the one described here can be expected to become an essential tool for the investigation of complex activation patterns in live neural tissue. [sent-251, score-0.332]

93 Automated identification of neuronal activity from calcium imaging by sparse dictionary learning. [sent-284, score-0.704]

94 Nonnegative matrix factorization based on alternating nonnegativity constrained least squares and active set method. [sent-330, score-0.153]

95 The non-negative matrix factorization toolbox for biological data mining. [sent-345, score-0.153]

96 Automated analysis of cellular signals from largescale calcium imaging data. [sent-378, score-0.319]

97 Reliable optical detection of coherent neuronal activity in fast oscillating networks in vitro. [sent-398, score-0.239]

98 A two-layer non-negative matrix factorization model for vocabulary discovery. [sent-416, score-0.153]

99 Unsupervised multi-level non-negative matrix factorization model: Binary data case. [sent-424, score-0.153]

100 Online detection of unusual events in videos via dynamic sparse coding. [sent-441, score-0.143]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('assemblies', 0.633), ('adina', 0.264), ('bilevel', 0.25), ('calcium', 0.219), ('neurons', 0.207), ('assembly', 0.184), ('shmf', 0.17), ('dictionary', 0.138), ('heterarchical', 0.132), ('mnnmf', 0.132), ('neuronal', 0.131), ('transients', 0.117), ('factorization', 0.103), ('imaging', 0.1), ('ksvds', 0.094), ('neuron', 0.088), ('transient', 0.082), ('temporal', 0.08), ('raw', 0.072), ('concepts', 0.069), ('cells', 0.066), ('sparse', 0.062), ('decomposition', 0.059), ('levels', 0.059), ('relations', 0.058), ('cell', 0.058), ('reichinnek', 0.057), ('activation', 0.055), ('detection', 0.054), ('activity', 0.054), ('matrix', 0.05), ('basis', 0.047), ('segmentation', 0.047), ('matrices', 0.044), ('supplemental', 0.044), ('jenatton', 0.044), ('cichocki', 0.043), ('sensitivity', 0.042), ('sparsity', 0.041), ('synthetic', 0.04), ('coef', 0.04), ('ponce', 0.039), ('intensity', 0.038), ('assignment', 0.038), ('draguhn', 0.038), ('purged', 0.038), ('trilevel', 0.038), ('adjacency', 0.038), ('image', 0.038), ('nmf', 0.037), ('inclusion', 0.034), ('frames', 0.034), ('courses', 0.033), ('zdunek', 0.033), ('spatial', 0.033), ('structured', 0.033), ('multilayer', 0.032), ('patterns', 0.031), ('mairal', 0.031), ('sorting', 0.031), ('conform', 0.031), ('relation', 0.03), ('bach', 0.03), ('rubinstein', 0.029), ('subordinate', 0.029), ('sequences', 0.029), ('constraints', 0.029), ('precision', 0.028), ('heidelberg', 0.028), ('overlap', 0.028), ('acyclic', 0.028), ('decomposes', 0.028), ('slice', 0.028), ('hippocampal', 0.027), ('unusual', 0.027), ('precisions', 0.027), ('automated', 0.027), ('impose', 0.027), ('hierarchical', 0.027), ('detected', 0.026), ('vivo', 0.025), ('bilinear', 0.024), ('ica', 0.024), ('unsupervised', 0.024), ('infer', 0.023), ('rightmost', 0.023), ('concept', 0.023), ('jointly', 0.023), ('norms', 0.022), ('false', 0.022), ('level', 0.022), ('entities', 0.021), ('diego', 0.021), ('vectorized', 0.021), ('constituent', 0.021), ('obozinski', 0.021), ('nonnegative', 0.021), ('overlapping', 0.021), ('absolutely', 0.021), ('formalism', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 157 nips-2013-Learning Multi-level Sparse Representations

Author: Ferran Diego Andilla, Fred A. Hamprecht


2 0.20249866 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke

Abstract: Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population-sizes for which population dynamics can be characterized—beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs. 1

3 0.19461201 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions

Author: Eftychios A. Pnevmatikakis, Liam Paninski

Abstract: We propose a compressed sensing (CS) calcium imaging framework for monitoring large neuronal populations, where we image randomized projections of the spatial calcium concentration at each timestep, instead of measuring the concentration at individual locations. We develop scalable nonnegative deconvolution methods for extracting the neuronal spike time series from such observations. We also address the problem of demixing the spatial locations of the neurons using rank-penalized matrix factorization methods. By exploiting the sparsity of neural spiking we demonstrate that the number of measurements needed per timestep is significantly smaller than the total number of neurons, a result that can potentially enable imaging of larger populations at considerably faster rates compared to traditional raster-scanning techniques. Unlike traditional CS setups, our problem involves a block-diagonal sensing matrix and a non-orthogonal sparse basis that spans multiple timesteps. We provide tight approximations to the number of measurements needed for perfect deconvolution for certain classes of spiking processes, and show that this number undergoes a “phase transition,” which we characterize using modern tools relating conic geometry to compressed sensing. 1

4 0.13744418 114 nips-2013-Extracting regions of interest from biological images with convolutional sparse block coding

Author: Marius Pachitariu, Adam M. Packer, Noah Pettit, Henry Dalgleish, Michael Hausser, Maneesh Sahani

Abstract: Biological tissue is often composed of cells with similar morphologies replicated throughout large volumes and many biological applications rely on the accurate identification of these cells and their locations from image data. Here we develop a generative model that captures the regularities present in images composed of repeating elements of a few different types. Formally, the model can be described as convolutional sparse block coding. For inference we use a variant of convolutional matching pursuit adapted to block-based representations. We extend the KSVD learning algorithm to subspaces by retaining several principal vectors from the SVD decomposition instead of just one. Good models with little cross-talk between subspaces can be obtained by learning the blocks incrementally. We perform extensive experiments on simulated images and the inference algorithm consistently recovers a large proportion of the cells with a small number of false positives. We fit the convolutional model to noisy GCaMP6 two-photon images of spiking neurons and to Nissl-stained slices of cortical tissue and show that it recovers cell body locations without supervision. The flexibility of the block-based representation is reflected in the variability of the recovered cell shapes. 1

5 0.13395652 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

Author: Jasper Snoek, Richard Zemel, Ryan P. Adams

Abstract: Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interaction. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data. 1

6 0.12432623 321 nips-2013-Supervised Sparse Analysis and Synthesis Operators

7 0.11048979 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

8 0.10470752 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

9 0.091811806 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

10 0.08570455 210 nips-2013-Noise-Enhanced Associative Memories

11 0.079769477 251 nips-2013-Predicting Parameters in Deep Learning

12 0.079106957 121 nips-2013-Firing rate predictions in optimal balanced networks

13 0.076705076 64 nips-2013-Compete to Compute

14 0.072131172 186 nips-2013-Matrix factorization with binary components

15 0.065830603 5 nips-2013-A Deep Architecture for Matching Short Texts

16 0.064977586 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

17 0.06254182 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

18 0.057043757 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems

19 0.055938341 349 nips-2013-Visual Concept Learning: Combining Machine Vision and Bayesian Generalization on Concept Hierarchies

20 0.052178249 65 nips-2013-Compressive Feature Learning


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.15), (1, 0.101), (2, -0.084), (3, -0.038), (4, -0.164), (5, -0.071), (6, -0.067), (7, -0.049), (8, -0.024), (9, 0.027), (10, 0.014), (11, 0.038), (12, 0.06), (13, -0.007), (14, -0.044), (15, -0.044), (16, -0.026), (17, -0.059), (18, -0.024), (19, 0.003), (20, -0.016), (21, -0.001), (22, 0.014), (23, -0.021), (24, 0.024), (25, -0.062), (26, 0.016), (27, 0.016), (28, 0.04), (29, 0.011), (30, -0.077), (31, 0.044), (32, 0.017), (33, 0.186), (34, -0.017), (35, -0.003), (36, 0.14), (37, -0.029), (38, 0.067), (39, -0.051), (40, -0.1), (41, 0.001), (42, 0.108), (43, 0.077), (44, -0.141), (45, -0.085), (46, -0.028), (47, -0.116), (48, 0.128), (49, 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93072557 157 nips-2013-Learning Multi-level Sparse Representations

Author: Ferran Diego Andilla, Fred A. Hamprecht


2 0.78706467 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke


3 0.74739558 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions

Author: Eftychios A. Pnevmatikakis, Liam Paninski


4 0.6326341 114 nips-2013-Extracting regions of interest from biological images with convolutional sparse block coding

Author: Marius Pachitariu, Adam M. Packer, Noah Pettit, Henry Dalgleish, Michael Hausser, Maneesh Sahani


5 0.54470724 121 nips-2013-Firing rate predictions in optimal balanced networks

Author: David G. Barrett, Sophie Denève, Christian K. Machens

Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1

6 0.53665155 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

7 0.52334839 210 nips-2013-Noise-Enhanced Associative Memories

8 0.52279359 86 nips-2013-Demixing odors - fast inference in olfaction

9 0.51643759 350 nips-2013-Wavelets on Graphs via Deep Learning

10 0.49262923 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

11 0.48400766 321 nips-2013-Supervised Sparse Analysis and Synthesis Operators

12 0.46763301 64 nips-2013-Compete to Compute

13 0.45605078 214 nips-2013-On Algorithms for Sparse Multi-factor NMF

14 0.43660194 186 nips-2013-Matrix factorization with binary components

15 0.43644491 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

16 0.4240433 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

17 0.41454715 354 nips-2013-When in Doubt, SWAP: High-Dimensional Sparse Recovery from Correlated Measurements

18 0.40105808 202 nips-2013-Multiclass Total Variation Clustering

19 0.37808812 205 nips-2013-Multisensory Encoding, Decoding, and Identification

20 0.36660305 329 nips-2013-Third-Order Edge Statistics: Contour Continuation, Curvature, and Cortical Connections


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(16, 0.027), (33, 0.09), (34, 0.087), (41, 0.018), (49, 0.061), (56, 0.069), (70, 0.42), (85, 0.042), (89, 0.026), (93, 0.057), (95, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.920986 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems

Author: Hesham Mostafa, Lorenz. K. Mueller, Giacomo Indiveri

Abstract: We present a recurrent neuronal network, modeled as a continuous-time dynamical system, that can solve constraint satisfaction problems. Discrete variables are represented by coupled Winner-Take-All (WTA) networks, and their values are encoded in localized patterns of oscillations that are learned by the recurrent weights in these networks. Constraints over the variables are encoded in the network connectivity. Although there are no sources of noise, the network can escape from local optima in its search for solutions that satisfy all constraints by modifying the effective network connectivity through oscillations. If there is no solution that satisfies all constraints, the network state changes in a seemingly random manner and its trajectory approximates a sampling procedure that selects a variable assignment with a probability that increases with the fraction of constraints satisfied by this assignment. External evidence, or input to the network, can force variables to specific values. When new inputs are applied, the network re-evaluates the entire set of variables in its search for states that satisfy the maximum number of constraints, while being consistent with the external input. Our results demonstrate that the proposed network architecture can perform a deterministic search for the optimal solution to problems with non-convex cost functions. The network is inspired by canonical microcircuit models of the cortex and suggests possible dynamical mechanisms to solve constraint satisfaction problems that can be present in biological networks, or implemented in neuromorphic electronic circuits. 1

2 0.84231144 84 nips-2013-Deep Neural Networks for Object Detection

Author: Christian Szegedy, Alexander Toshev, Dumitru Erhan

Abstract: Deep Neural Networks (DNNs) have recently shown outstanding performance on image classification tasks [14]. In this paper we go one step further and address the problem of object detection using DNNs, that is not only classifying but also precisely localizing objects of various classes. We present a simple and yet powerful formulation of object detection as a regression problem to object bounding box masks. We define a multi-scale inference procedure which is able to produce high-resolution object detections at a low cost by a few network applications. State-of-the-art performance of the approach is shown on Pascal VOC. 1

same-paper 3 0.83265048 157 nips-2013-Learning Multi-level Sparse Representations

Author: Ferran Diego Andilla, Fred A. Hamprecht


4 0.82797438 16 nips-2013-A message-passing algorithm for multi-agent trajectory planning

Author: Jose Bento, Nate Derbinsky, Javier Alonso-Mora, Jonathan S. Yedidia

Abstract: We describe a novel approach for computing collision-free global trajectories for p agents with specified initial and final configurations, based on an improved version of the alternating direction method of multipliers (ADMM). Compared with existing methods, our approach is naturally parallelizable and allows for incorporating different cost functionals with only minor adjustments. We apply our method to classical challenging instances and observe that its computational requirements scale well with p for several cost functionals. We also show that a specialization of our algorithm can be used for local motion planning by solving the problem of joint optimization in velocity space. 1

5 0.80719709 90 nips-2013-Direct 0-1 Loss Minimization and Margin Maximization with Boosting

Author: Shaodan Zhai, Tian Xia, Ming Tan, Shaojun Wang

Abstract: We propose a boosting method, DirectBoost, a greedy coordinate descent algorithm that builds an ensemble classifier of weak classifiers through directly minimizing empirical classification error over labeled training examples; once the training classification error is reduced to a local coordinatewise minimum, DirectBoost runs a greedy coordinate ascent algorithm that continuously adds weak classifiers to maximize any targeted arbitrarily defined margins until reaching a local coordinatewise maximum of the margins in a certain sense. Experimental results on a collection of machine-learning benchmark datasets show that DirectBoost gives better results than AdaBoost, LogitBoost, LPBoost with column generation and BrownBoost, and is noise tolerant when it maximizes an n′ th order bottom sample margin. 1

6 0.74820811 56 nips-2013-Better Approximation and Faster Algorithm Using the Proximal Average

7 0.72264546 15 nips-2013-A memory frontier for complex synapses

8 0.62106729 121 nips-2013-Firing rate predictions in optimal balanced networks

9 0.61778486 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

10 0.59146225 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

11 0.5792706 162 nips-2013-Learning Trajectory Preferences for Manipulators via Iterative Improvement

12 0.54570174 22 nips-2013-Action is in the Eye of the Beholder: Eye-gaze Driven Model for Spatio-Temporal Action Localization

13 0.53811502 136 nips-2013-Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream

14 0.53319854 64 nips-2013-Compete to Compute

15 0.52939749 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively

16 0.51084524 255 nips-2013-Probabilistic Movement Primitives

17 0.50326002 114 nips-2013-Extracting regions of interest from biological images with convolutional sparse block coding

18 0.49668774 86 nips-2013-Demixing odors - fast inference in olfaction

19 0.49554521 163 nips-2013-Learning a Deep Compact Image Representation for Visual Tracking

20 0.49457037 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity