nips nips2013 nips2013-49 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski
Abstract: With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron-to-neuron connectivity. In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time on stimulating the neurons whose synaptic strength is most ambiguous; therefore, we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron-to-neuron connectivity. [sent-8, score-0.59]
2 In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. [sent-9, score-0.382]
3 The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. [sent-10, score-0.801]
4 We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. [sent-11, score-0.292]
5 Due to the technical challenges and sparsity of these systems, it is important to focus experimental time on stimulating the neurons whose synaptic strength is most ambiguous; therefore, we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. [sent-12, score-1.371]
6 1 Introduction A major goal of neuroscience is the mapping of neural microcircuits at the scale of hundreds to thousands of neurons [1]. [sent-13, score-0.448]
7 By mapping, we specifically mean determining which neurons synapse onto each other and with what weight. [sent-14, score-0.284]
8 In this paper, we specifically address the mapping experiment in which a set of putative presynaptic neurons are optically stimulated while an electrophysiological trace is recorded from a designated postsynaptic neuron. [sent-16, score-0.91]
9 For example, while it has been shown that multiple neurons can be stimulated simultaneously [4, 5], successful mapping experiments have thus far only stimulated a single neuron per trial, which increases experimental time [2, 3, 6]. [sent-19, score-0.966]
10 Stimulating multiple neurons simultaneously and with high accuracy requires well-tuned hardware, and even then some level of stimulus uncertainty may remain. [sent-20, score-0.421]
11 In this paper, we address these issues by developing a procedure for sparse Bayesian inference and information-based experimental design which can reconstruct neural microcircuits accurately and quickly despite the issues listed above. [sent-24, score-0.372]
12 At each trial n = 1, . . . , N , the experimenter stimulates R of K possible presynaptic neurons. [sent-30, score-0.239]
13 We represent the chosen set of neurons for each trial with the binary vector zn ∈ {0, 1}K , which has a one in each of the R entries corresponding to the stimulated neurons on that trial. [sent-31, score-1.11]
14 One of the difficulties of optical stimulation lies in the experimenter's inability to reliably stimulate a specific neuron: the stimulus may fail to fire the target neuron, or it may engage other nearby neurons. [sent-32, score-0.936]
15 In general, this is because optical excitation does not stimulate a single point in space but rather has a point spread function that depends on the hardware and the biological tissue. [sent-33, score-0.374]
16 To complicate matters further, each neuron has a different rheobase (a measure of how much current is needed to generate an action potential) and expression level of the optogenetic protein. [sent-34, score-0.299]
17 While some work has shown that it may be possible to stimulate exact sets of neurons, this setup requires very specific hardware and fine tuning [4, 5]. [sent-35, score-0.276]
18 In addition, even if a neuron fires, there is some probability that synaptic transmission will not occur. [sent-36, score-0.524]
19 Because these events are difficult or impossible to observe, we model this uncertainty by introducing a second binary vector xn ∈ {0, 1}K denoting the neurons that actually release neurotransmitter in trial n. [sent-37, score-0.564]
20 The conditional distribution of xn given zn can be chosen by the experimenter to match their hardware settings and understanding of synaptic transmission rates in their preparation. [sent-38, score-0.753]
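For illustration, a minimal sketch of one plausible choice for this conditional is given below. The function and parameter names (sample_fired, p_stim, p_release) are hypothetical placeholders rather than values from the paper, and off-target activation from the point spread function is omitted for brevity.

```python
import numpy as np

def sample_fired(z_n, p_stim=0.9, p_release=0.8, rng=None):
    """Sample x_n given z_n: each stimulated neuron fires and releases
    neurotransmitter independently with probability p_stim * p_release.
    Off-target activation is ignored in this simplified sketch."""
    rng = np.random.default_rng() if rng is None else rng
    p = z_n * p_stim * p_release                  # per-neuron success probability
    return (rng.random(z_n.shape) < p).astype(int)

# Example: stimulate neurons 3 and 7 out of K = 10.
z_n = np.zeros(10, dtype=int)
z_n[[3, 7]] = 1
x_n = sample_fired(z_n)
```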
21 2 Sparse connectivity Numerous studies have collected data to estimate both connection probabilities and synaptic weight distributions as a function of distance and cell identity [2, 3, 6, 7, 8, 9, 10, 11, 12]. [sent-40, score-0.597]
22 Generally, the data show that connectivity is sparse and that most synaptic weights are small with a heavy tail of strong connections. [sent-41, score-0.45]
23 To capture the sparsity of neural connectivity, we place a “spike-and-slab” prior on the synaptic weights wk [13, 14, 15], for each presynaptic neuron k = 1, . . . , K. [sent-42, score-0.843]
24 Note that we do not need to restrict the “slab” distributions (the conditional distributions of wk given that wk is nonzero) to the traditional Gaussian choice, and in fact each weight can have its own parameters. [sent-46, score-0.239]
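As a concrete illustration, here is a small sketch of drawing weights from such a prior; the slab means and standard deviations are hypothetical and would in practice be set per neuron from cell type and distance.

```python
import numpy as np

def sample_spike_and_slab(alpha, mu, sigma, rng=None):
    """Draw w_k from a spike-and-slab prior: w_k = 0 with probability
    1 - alpha_k (the "spike"); otherwise w_k ~ N(mu_k, sigma_k^2) (the
    "slab"). Each neuron can carry its own slab parameters."""
    rng = np.random.default_rng() if rng is None else rng
    gamma = (rng.random(alpha.shape) < alpha).astype(int)  # connectivity indicators
    return gamma * rng.normal(mu, sigma), gamma

K = 100
alpha = np.full(K, 0.1)   # sparse connectivity: roughly 10% of neurons connected
w, gamma = sample_spike_and_slab(alpha, np.zeros(K), np.ones(K))
```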
25 3 Postsynaptic response In our model, a subthreshold response is measured from a designated postsynaptic neuron. [sent-49, score-0.419]
26 The postsynaptic response for each synaptic event in a given trial can be modeled using an appropriate template function fk (·) for each presynaptic neuron k. [sent-51, score-1.26]
27 For this paper we use an alpha function to model the shape of each neuron’s contribution to the postsynaptic current, parameterized by time constants τk which define the rise and decay time. [sent-52, score-0.257]
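The sketch below uses one common single-time-constant alpha-function parameterization; the paper's τk may instead parameterize rise and decay separately, so treat this form as illustrative.

```python
import numpy as np

def alpha_psc(t, tau, w=1.0):
    """Alpha-function template f_k: w * (t / tau) * exp(1 - t / tau) for
    t >= 0 and zero before the event; it peaks at t = tau with height w."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, w * (t / tau) * np.exp(1.0 - t / tau), 0.0)

t = np.arange(200)                         # time in samples
psc = alpha_psc(t - 20, tau=15.0, w=-5.0)  # event with amplitude -5 pA starting at t = 20
```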
28 As with the synaptic weight priors, the template functions could be designed based on the cells’ identities. [sent-53, score-0.309]
29 1 A cell's identity can be general, such as excitatory or inhibitory, or more specific, such as VIP- or PV-interneurons. [sent-54, score-0.379]
30 These identities can be established by driving the optogenetic channel with a particular promoter unique to that cell type or by coexpressing markers for various cell types along with the optogenetic channel. [sent-55, score-0.337]
31 [Figure 1 panel residue: presynaptic weights (weight vs. neuron k); locations of presynaptic neurons and stimuli; postsynaptic current trace (current [pA] vs. time [samples]).] Figure 1: A schematic of the model experiment. [sent-56, score-0.655]
32 The left figure shows the relative location of 100 presynaptic neurons; inhibitory neurons are shown in yellow, and excitatory neurons in purple. [sent-57, score-0.903]
33 Neurons marked with a black outline have a nonzero connectivity to the postsynaptic neuron (shown as a blue star, in the center). [sent-58, score-0.653]
34 The true connectivity weights are shown on the upper right, with blue vertical lines marking the five neurons which were actually fired as a result of this stimulus. [sent-60, score-0.506]
35 The resulting time series postsynaptic current trace is shown in the bottom right. [sent-61, score-0.299]
36 The connected neurons which fired are circled in red, the triangle and star marking their weights and corresponding postsynaptic events in the plots at right. [sent-62, score-0.627]
37 The onset of each postsynaptic response may be jittered such that each event starts at some time dnk after t = 0, where the delays could be distributed conditionally on the parameters of the stimulation and cells. [sent-63, score-0.363]
38 To infer the marginal distribution of the synaptic weights, one can use standard Bayesian methods such as Gibbs sampling or variational inference, both of which are discussed below. [sent-69, score-0.502]
39 An example set of neurons and connectivity weights, along with the set of stimuli and postsynaptic current trace for a single trial, is shown in Figure 1. [sent-70, score-0.838]
40 1 Charge as synaptic strength To reduce the space over which we perform inference, we collapse the variables wk and τk into a single variable ck = Σt wk fk (t − dnk , τk ), which quantifies the charge transfer during the synaptic event and can be used to define the strength of a connection. [sent-77, score-1.108]
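A small sketch of this collapse follows, restating the illustrative alpha-function template from above so the snippet is self-contained; the event duration T is an arbitrary choice.

```python
import numpy as np

def alpha_psc(t, tau, w=1.0):
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, w * (t / tau) * np.exp(1.0 - t / tau), 0.0)

def charge_transfer(w_k, tau_k, d_nk, T=200):
    """c_k = sum over t of w_k * f_k(t - d_nk, tau_k): the total charge
    moved by one synaptic event, used as the strength of the connection."""
    t = np.arange(T)
    return alpha_psc(t - d_nk, tau_k, w_k).sum()

c_k = charge_transfer(w_k=-5.0, tau_k=15.0, d_nk=20)
```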
41 p(y|X, c) = ∏n N(yn | xn·c, ν²) (3). We found that naïve MCMC sampling over the posterior of w, τ , γ, X, and D insufficiently explored the support, and inference was unsuccessful. [sent-80, score-0.227]
42 We approximate the prior over c as a spike-and-slab with Gaussian slabs where the slabs could be truncated if the cells’ excitatory or inhibitory identity is known. [sent-87, score-0.394]
43 Each xnk can be sampled by computing the odds ratio, and following [15] we draw each ck , γk from the joint distribution p(ck , γk |Z, y, X, {cj , γj |j ≠ k}) by first sampling γk from p(γk |Z, y, X, {cj |j ≠ k}), then ck from p(ck |Z, y, X, {cj |j ≠ k}, γk ). [sent-88, score-0.3]
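The per-coordinate step has the standard spike-and-slab regression form. The sketch below assumes the simplified likelihood of Equation 3 with zero-mean Gaussian slabs of variance sigma0^2, shared hyperparameters, and X held fixed for the sweep (the paper also samples X via the odds ratios for each xnk); it is a sketch, not the paper's exact sampler.

```python
import numpy as np

def gibbs_sweep(y, X, c, gamma, alpha0, sigma0, nu, rng):
    """One Gibbs sweep over (gamma_k, c_k) for y ~ N(Xc, nu^2 I) with
    c_k ~ gamma_k * N(0, sigma0^2) and gamma_k ~ Bernoulli(alpha0).
    For each k: form the residual excluding neuron k, compute the
    conditional slab posterior N(mu, s2), sample gamma_k from its odds
    ratio, then sample c_k from the slab if gamma_k = 1."""
    for k in range(X.shape[1]):
        c[k] = 0.0
        r = y - X @ c                            # residual without neuron k
        s2 = 1.0 / (1.0 / sigma0**2 + X[:, k] @ X[:, k] / nu**2)
        mu = s2 * (X[:, k] @ r) / nu**2
        log_bf = 0.5 * (np.log(s2 / sigma0**2) + mu**2 / s2)  # marginal likelihood ratio
        p1 = 1.0 / (1.0 + np.exp(-log_bf) * (1.0 - alpha0) / alpha0)
        gamma[k] = rng.random() < p1
        c[k] = rng.normal(mu, np.sqrt(s2)) if gamma[k] else 0.0
    return c, gamma
```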
44 This means that we must be able to perform inference of the posterior as well as choose the next stimulus extremely quickly. [sent-92, score-0.326]
45 To achieve this decrease in runtime, we approximate the posterior distribution of c and γ using a variational approach [16]. [sent-94, score-0.314]
46 The use of variational inference for spike-and-slab regression models has been explored in [17, 18], and we follow their methods with some minor changes. [sent-95, score-0.271]
47 As is the case with fully-factorized variational distributions, updating the posterior involves an iterative algorithm which cycles through the parameters for each factor. [sent-102, score-0.314]
48 Therefore, since the product of a spike-and-slab and a Gaussian is still a spike-and-slab, if we stimulate only one neuron at each trial then this posterior is also spike-and-slab, and the variational approximation becomes exact in this limit. [sent-105, score-0.957]
49 We Monte Carlo approximate this integral in a manner similar to the approach used for integrating over the hyperparameters in [17]; however, here we further approximate by sampling over potential stimuli xnk from p(xnk = 1|zn ). [sent-107, score-0.24]
50 In practice we will see this approximation suffices for experimental design, with the overall variational approach performing nearly as well for posterior weight reconstruction as Gibbs sampling from the true posterior. [sent-108, score-0.488]
51 4 Optimal experimental design The preparations needed to perform these types of experiments tend to be short-lived, and indeed, the very act of collecting data — that is, stimulating and probing cells — can compromise the health of the preparation further. [sent-109, score-0.338]
52 We are thus strongly motivated to optimize the experimental design: to choose the optimal subset of neurons zn to stimulate at each trial to minimize N , the overall number of trials required for good inference. [sent-112, score-1.075]
53 Since the previous trials {(z1 , y1 ), . . . , (zn−1 , yn−1 )} are fixed and yn is dependent on the stimulus zn , our problem is reduced to choosing the optimal next stimulus, denoted z∗n , in expectation over yn : z∗n = arg maxzn Eyn |zn [I(θ; D)] = arg minzn Eyn |zn [H(θ|D)] (7). [sent-120, score-1.121]
54 5 Experimental design procedure The optimization described in Section 4 entails performing a combinatorial optimization over zn , where for each zn we consider an expectation over all possible yn . [sent-121, score-1.16]
55 1 Computing the objective function The variational posterior distribution of ck , γk can be used to characterize our general objective function described in Section 4. [sent-125, score-0.493]
56 We define the cost function J to be the right-hand side of Equation 7, J ≡ Eyn |zn [H(c, γ|D)] (8), such that the optimal next stimulus z∗n can be found by minimizing J. [sent-126, score-0.408]
57 J ≈ Σk Eyn |zn [ H[γk ] + αk,n H[ck |γk = 1] ] (10). Here, we have introduced additional notation, using αk,n , µk,n , and sk,n to refer to the parameters of the variational posterior distribution given the data through trial n. [sent-129, score-0.556]
58 Intuitively, we see that Equation 10 represents a balance between minimizing the sparsity pattern entropy H[γk ] of each neuron and minimizing the weight entropy H[ck |γk = 1], weighted by the probability αk that the presynaptic neuron is connected. [sent-130, score-0.847]
59 In the algorithm's behavior, we see that when the probability that a neuron is connected increases, we spend time stimulating it to reduce the uncertainty in the corresponding nonzero slab distribution. [sent-132, score-0.514]
60 For any particular candidate zn , this can be Monte Carlo approximated by first sampling yn from the posterior distribution p(yn |zn , c, Dn−1 ), where c is drawn from the variational posterior inferred at trial n − 1. [sent-134, score-1.051]
61 Each sampled yn may be used to estimate the variational parameters αk,n and sk,n with which we evaluate H[ck , γk ]; we average over these evaluations of the entropy from each sample to compute an estimate of J in Eq. [sent-135, score-0.393]
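A sketch of this Monte Carlo estimate is given below. The entropy uses the decomposition H[γk] + αk H[ck | γk = 1] described around Equation 10, and sample_response and refit_posterior are hypothetical stand-ins for the model's posterior-predictive sampler and the variational update.

```python
import numpy as np

def spike_and_slab_entropy(alpha, s2):
    """Per-neuron entropy H[gamma_k] + alpha_k * H[c_k | gamma_k = 1]
    for a spike-and-slab posterior with Gaussian slab variance s2."""
    a = np.clip(alpha, 1e-12, 1 - 1e-12)
    h_gamma = -(a * np.log(a) + (1 - a) * np.log(1 - a))
    h_slab = 0.5 * np.log(2.0 * np.pi * np.e * s2)
    return h_gamma + a * h_slab

def expected_entropy(z, sample_response, refit_posterior, L=10):
    """Monte Carlo estimate of J = E_{y_n|z_n}[H(c, gamma | D)]: draw L
    hypothetical responses for stimulus z, refit the variational
    posterior for each, and average the resulting entropies."""
    total = 0.0
    for _ in range(L):
        y = sample_response(z)
        alpha, s2 = refit_posterior(z, y)
        total += spike_and_slab_entropy(alpha, s2).sum()
    return total / L
```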
62 Once we have chosen z∗n , we execute the actual trial and run the variational inference procedure on the full data to obtain the updated variational posterior parameters αk,n , µk,n , and sk,n , which are needed for optimization. [sent-137, score-1.066]
63 Once the experiment has concluded, Gibbs sampling can be run, though we found only a limited gain when comparing Gibbs sampling to variational inference. [sent-138, score-0.274]
64 It is not feasible to evaluate the right-hand side of equation 10 for every zn because as K grows there is a combinatorial explosion of possible stimuli. [sent-141, score-0.239]
65 To avoid an exhaustive search over possible zn , we adopt a greedy approach for choosing which R of the K locations to stimulate. [sent-142, score-0.239]
66 First, we rank the K neurons based on an approximation of the objective function. [sent-143, score-0.315]
67 To do this, we propose K hypothetical stimuli z̃k , each all zeros except the kth entry, which is set to 1 — that is, we examine only the K stimuli which represent stimulating a single location. [sent-144, score-0.315]
68 We then set z∗n,k = 1 for the R neurons corresponding to the z̃k which give the smallest values of the objective function, and set all other entries of z∗n to zero. [sent-145, score-0.334]
69 We found that the neurons selected by a brute-force approach are most likely to be the neurons that the greedy selection process chooses (see Figure 1 in the Appendix). [sent-146, score-0.598]
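A sketch of the greedy selection step follows; score_single is a hypothetical stand-in for the approximate expected-entropy objective evaluated on a single-neuron stimulus.

```python
import numpy as np

def greedy_stimulus(K, R, score_single):
    """Rank all K single-neuron stimuli by the approximate objective
    and stimulate the R lowest-scoring (most informative) neurons
    together on the next trial."""
    scores = np.array([score_single(k) for k in range(K)])
    z_star = np.zeros(K, dtype=int)
    z_star[np.argsort(scores)[:R]] = 1
    return z_star

# Example with a toy scoring function (placeholder only).
z_next = greedy_stimulus(K=500, R=8, score_single=lambda k: (k * 0.618) % 1.0)
```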
70 For each of the K proposed stimuli z̃k , to approximate the expected entropy we must compute the variational posterior for M samples of [X1:n−1 x̃n ] and L samples of yn (where x̃n is the random variable corresponding to p(x̃n |z̃n )). [sent-148, score-0.754]
71 Therefore we run the variational inference procedure on the full data on the order of O(M KL) times at each trial. [sent-149, score-0.271]
72 As the system size grows, running the variational inference procedure this many times becomes intractable because the number of iterations needed for the coordinate ascent algorithm to converge depends on the correlations between the rows of X. [sent-150, score-0.338]
73 Note that the stronger dependence here is on R; when R = 1 the variational parameter updates become exact and independent across the neurons, and therefore no coordinate ascent is necessary and the runtime becomes linear in K. [sent-152, score-0.234]
74 We therefore take one last measure to speed up the optimization process by implementing an online Bayesian approach to updating the variational posterior (in the stimulus selection phase only). [sent-153, score-0.511]
75 Since the variational posterior of ck and γk takes the same form as the prior distribution, we can use the posterior from trial n − 1 as the prior at trial n, allowing us to effectively summarize the previous data. [sent-154, score-1.175]
76 In this online setting, when we stimulate only one neuron, only the parameters of that specific neuron change. [sent-155, score-0.431]
77 If during optimization we temporarily assume that x̃n = z̃k , this results in explicit updates for each variational parameter, with no coordinate ascent iterations required. [sent-156, score-0.284]
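A sketch of such an explicit update for a single neuron: assuming the trial's summarized response is y = ck + noise with noise variance ν², the previous spike-and-slab posterior (αk, µk, s²k) acts as the prior and the update is closed form. This illustrates the R = 1 case and is not the paper's exact update equations.

```python
import numpy as np

def gaussian_pdf(y, mean, var):
    return np.exp(-(y - mean)**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def online_single_neuron_update(y, alpha, mu, s2, nu2):
    """Closed-form update for (gamma_k, c_k) after observing y = c_k + noise
    on a trial where only neuron k is assumed to fire. The slab update is
    the usual Gaussian conjugate step; the odds of gamma_k = 1 are
    reweighted by the marginal likelihoods under the connected
    (y ~ N(mu, s2 + nu2)) and unconnected (y ~ N(0, nu2)) hypotheses."""
    s2_new = 1.0 / (1.0 / s2 + 1.0 / nu2)
    mu_new = s2_new * (mu / s2 + y / nu2)
    odds = (alpha / (1.0 - alpha)) * gaussian_pdf(y, mu, s2 + nu2) / gaussian_pdf(y, 0.0, nu2)
    return odds / (1.0 + odds), mu_new, s2_new

alpha_new, mu_new, s2_new = online_single_neuron_update(y=-4.0, alpha=0.1, mu=0.0, s2=4.0, nu2=1.0)
```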
78 The combined accelerations described in this section result in a speed up of several orders of magnitude which allows the full inference and optimization procedure to be run in real time, running at approximately one second per trial in our computing environment for K = 500, R = 8. [sent-158, score-0.315]
79 We chose to parallelize over M, which distributes the sampling of X and the running of variational inference for each sample. [sent-160, score-0.309]
80 The heavy red and blue lines indicate the results when running the Gibbs sampler at that point in the experiment, and the thinner magenta and cyan lines indicate the results from variational inference. [sent-182, score-0.23]
81 6 Experiments and results We ran our inference and optimal experimental design algorithm on data sets generated from the model described in Section 2. [sent-187, score-0.261]
82 Baseline results are shown in Figure 2, over a range of values for stimulations per trial R and baseline postsynaptic noise levels ν. [sent-189, score-0.499]
83 The results here use an informative prior, where we assume the excitatory or inhibitory identity is known, and we set individual prior connectivity probabilities for each neuron based on that neuron’s identity and distance from the postsynaptic cell. [sent-190, score-0.911]
84 We choose to let X be unobserved and let the stimuli Z produce Gaussian ellipsoids which excite neurons that are located nearby. [sent-191, score-0.403]
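A sketch of this stimulus model follows; the ellipsoid is simplified here to an isotropic Gaussian falloff, and sigma and p_max are illustrative values rather than those used in the simulations.

```python
import numpy as np

def firing_probabilities(targets, locs, sigma=15.0, p_max=0.95):
    """Probability that each of K neurons fires given R stimulus
    locations: each stimulus excites a neuron with probability decaying
    as a Gaussian of distance, and a neuron fires if any stimulus
    recruits it."""
    d2 = ((locs[:, None, :] - targets[None, :, :])**2).sum(axis=-1)  # K x R squared distances
    p_each = p_max * np.exp(-d2 / (2.0 * sigma**2))
    return 1.0 - np.prod(1.0 - p_each, axis=1)

rng = np.random.default_rng(0)
locs = rng.uniform(0.0, 200.0, size=(100, 2))      # 100 neurons in a 200 x 200 field
p_fire = firing_probabilities(locs[[3, 7]], locs)  # stimuli aimed at neurons 3 and 7
```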
85 The optimal procedure was able to achieve reconstruction quality equivalent to a random stimulation paradigm in significantly fewer trials when the number of stimuli per trial and the response noise were in an experimentally realistic range (R = 4 and ν = 2. [sent-194, score-0.685]
86 As the number of stimuli per trial R increases, we start to see improved weight estimates and faster convergence, but a decrease in the relative benefit of optimal design; the random approach “catches up” to the optimal approach as R becomes large. [sent-197, score-0.468]
87 [Figure 3 axis residue: results plotted against trial n, 0 to 800.] Figure 3: The results of inference and optimal design (A) with a single spike-and-slab prior for all connections (prior connection probability of . [sent-209, score-0.281]
88 Finally, we see that we are still able to recover the synaptic strengths when we use a more general prior, as in Figure 3A, where we placed a single spike-and-slab prior across all the connections. [sent-216, score-0.348]
89 Since we assumed the cells’ identities were unknown, we used a zero-centered Gaussian for the slab and a prior connection probability of . [sent-217, score-0.215]
90 While we allow for stimulus uncertainty, it will likely soon be possible to stimulate multiple neurons with high accuracy. [sent-219, score-0.594]
91 The algorithms proposed by [23] are based on computing a maximum a posteriori (MAP) estimate of the weights w; note that to pursue the optimal Bayesian experimental design methods proposed here, it is necessary to compute (or approximate) the full posterior distribution, not just the MAP estimate. [sent-222, score-0.352]
92 In the simulated experiments of [23], stimulating roughly 30 of 500 neurons per trial is found to be optimal; extrapolating from Fig. [sent-226, score-0.672]
93 First, an inference algorithm that performs well on the full model, recovering the synaptic weights, the time constants, and the delays, would allow us to avoid compressing the responses to scalar values and to recover more information about the system. [sent-231, score-0.408]
94 Also, it may be necessary to improve the noise model, as we currently assume that there are no spontaneous synaptic events, which would confound the determination of each connection's strength. [sent-232, score-0.266]
95 Nadal, “What can we learn from synaptic weight distributions? [sent-291, score-0.309]
96 Yuste, “Stereotyped position of local synaptic targets in neocortex,” Science, vol. [sent-307, score-0.266]
97 Reyes, “Spatial profile of excitatory and inhibitory synaptic connectivity in mouse primary auditory cortex,” The Journal of Neuroscience, vol. [sent-315, score-0.575]
98 Markram, “A synaptic organizing principle for cortical neuronal groups,” Proceedings of the National Academy of Sciences, vol. [sent-323, score-0.266]
99 Chklovskii, “Highly nonrandom features of synaptic connectivity in local cortical circuits. [sent-334, score-0.402]
100 Stephens, “Scalable variational inference for bayesian variable selection in regression, and its accuracy in genetic association studies,” Bayesian Analysis, vol. [sent-368, score-0.341]
wordName wordTfidf (topN-words)
[('neurons', 0.284), ('synaptic', 0.266), ('postsynaptic', 0.257), ('trial', 0.242), ('zn', 0.239), ('neuron', 0.228), ('variational', 0.198), ('ck', 0.179), ('stimulate', 0.173), ('presynaptic', 0.162), ('stimulating', 0.146), ('stimulus', 0.137), ('connectivity', 0.136), ('stimulation', 0.134), ('stimuli', 0.119), ('nre', 0.118), ('posterior', 0.116), ('yuste', 0.115), ('slab', 0.108), ('hardware', 0.103), ('design', 0.102), ('yn', 0.102), ('wk', 0.098), ('dnk', 0.094), ('eyn', 0.094), ('entropy', 0.093), ('inhibitory', 0.089), ('excitatory', 0.084), ('microcircuits', 0.083), ('xnk', 0.083), ('cell', 0.081), ('gibbs', 0.077), ('experimenter', 0.077), ('inference', 0.073), ('optogenetic', 0.071), ('slabs', 0.071), ('delays', 0.069), ('subthreshold', 0.062), ('stimulated', 0.061), ('neocortex', 0.054), ('experimental', 0.054), ('trials', 0.051), ('zk', 0.05), ('weights', 0.048), ('hirtz', 0.047), ('realistically', 0.047), ('spines', 0.047), ('ynt', 0.047), ('neuroscience', 0.045), ('weight', 0.043), ('trace', 0.042), ('meth', 0.042), ('prior', 0.041), ('sensing', 0.04), ('bayesian', 0.04), ('reconstruction', 0.039), ('fk', 0.039), ('marking', 0.038), ('neocortical', 0.038), ('quartiles', 0.038), ('sampling', 0.038), ('identity', 0.038), ('columbia', 0.038), ('xn', 0.038), ('cj', 0.037), ('cells', 0.036), ('charge', 0.036), ('putative', 0.036), ('ascent', 0.036), ('mapping', 0.036), ('biological', 0.035), ('brooks', 0.034), ('response', 0.034), ('realistic', 0.034), ('circuits', 0.034), ('connection', 0.033), ('chklovskii', 0.033), ('identities', 0.033), ('grossman', 0.033), ('packer', 0.033), ('event', 0.032), ('nonzero', 0.032), ('optimal', 0.032), ('nat', 0.032), ('physiology', 0.032), ('cyan', 0.032), ('dendritic', 0.032), ('designated', 0.032), ('excitation', 0.032), ('dependent', 0.031), ('ak', 0.031), ('ny', 0.031), ('liam', 0.03), ('army', 0.03), ('entropies', 0.03), ('july', 0.03), ('issues', 0.03), ('selection', 0.03), ('online', 0.03), ('transmission', 0.03)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000005 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits
Author: Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski
Abstract: With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron-to-neuron connectivity. In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time on stimulating the neurons whose synaptic strength is most ambiguous; therefore, we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. 1
2 0.27663532 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity
Author: Christian Albers, Maren Westkott, Klaus Pawelzik
Abstract: Recent extensions of the Perceptron, such as the Tempotron and the Chronotron, suggest that this theoretical concept is highly relevant for understanding networks of spiking neurons in the brain. It is not known, however, how the computational power of the Perceptron might be accomplished by the plasticity mechanisms of real synapses. Here we prove that spike-timing-dependent plasticity having an anti-Hebbian form for excitatory synapses as well as a spike-timing-dependent plasticity of Hebbian shape for inhibitory synapses are sufficient for realizing the original Perceptron Learning Rule if these respective plasticity mechanisms act in concert with the hyperpolarisation of the post-synaptic neurons. We also show that with these simple yet biologically realistic dynamics Tempotrons and Chronotrons are learned. The proposed mechanism enables incremental associative learning from a continuous stream of patterns and might therefore underlie the acquisition of long-term memories in cortex. Our results underline that learning processes in realistic networks of spiking neurons depend crucially on the interactions of synaptic plasticity mechanisms with the dynamics of participating neurons.
3 0.25557739 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval
Author: Cristina Savin, Peter Dayan, Mate Lengyel
Abstract: It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate. 1
4 0.25300887 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data
Author: Jasper Snoek, Richard Zemel, Ryan P. Adams
Abstract: Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interaction. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data. 1
5 0.21885498 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit
Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke
Abstract: Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population-sizes for which population dynamics can be characterized—beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs. 1
6 0.18413562 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables
7 0.16776522 15 nips-2013-A memory frontier for complex synapses
8 0.16746035 121 nips-2013-Firing rate predictions in optimal balanced networks
9 0.15884729 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking
10 0.14879288 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles
11 0.1485471 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?
12 0.14235538 229 nips-2013-Online Learning of Nonparametric Mixture Models via Sequential Variational Approximation
13 0.13171151 210 nips-2013-Noise-Enhanced Associative Memories
14 0.12367879 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions
15 0.11048979 157 nips-2013-Learning Multi-level Sparse Representations
16 0.10969258 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels
17 0.10028106 64 nips-2013-Compete to Compute
18 0.099060684 187 nips-2013-Memoized Online Variational Inference for Dirichlet Process Mixture Models
19 0.098507538 205 nips-2013-Multisensory Encoding, Decoding, and Identification
20 0.097387135 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems
topicId topicWeight
[(0, 0.233), (1, 0.1), (2, -0.163), (3, -0.09), (4, -0.39), (5, 0.025), (6, 0.028), (7, -0.092), (8, 0.116), (9, 0.089), (10, 0.05), (11, 0.121), (12, 0.01), (13, 0.001), (14, 0.003), (15, -0.062), (16, -0.049), (17, -0.045), (18, -0.029), (19, -0.064), (20, 0.002), (21, -0.02), (22, 0.002), (23, 0.143), (24, -0.076), (25, 0.093), (26, 0.021), (27, -0.013), (28, 0.003), (29, 0.048), (30, 0.007), (31, 0.03), (32, -0.005), (33, -0.011), (34, 0.035), (35, -0.02), (36, 0.031), (37, 0.008), (38, -0.052), (39, 0.044), (40, -0.006), (41, -0.04), (42, 0.06), (43, 0.029), (44, 0.044), (45, 0.035), (46, -0.037), (47, 0.071), (48, 0.025), (49, -0.033)]
simIndex simValue paperId paperTitle
same-paper 1 0.93817848 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits
Author: Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski
Abstract: With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron-to-neuron connectivity. In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time on stimulating the neurons whose synaptic strength is most ambiguous; therefore, we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. 1
2 0.80315423 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity
Author: Christian Albers, Maren Westkott, Klaus Pawelzik
Abstract: Recent extensions of the Perceptron, such as the Tempotron and the Chronotron, suggest that this theoretical concept is highly relevant for understanding networks of spiking neurons in the brain. It is not known, however, how the computational power of the Perceptron might be accomplished by the plasticity mechanisms of real synapses. Here we prove that spike-timing-dependent plasticity having an anti-Hebbian form for excitatory synapses as well as a spike-timing-dependent plasticity of Hebbian shape for inhibitory synapses are sufficient for realizing the original Perceptron Learning Rule if these respective plasticity mechanisms act in concert with the hyperpolarisation of the post-synaptic neurons. We also show that with these simple yet biologically realistic dynamics Tempotrons and Chronotrons are learned. The proposed mechanism enables incremental associative learning from a continuous stream of patterns and might therefore underlie the acquisition of long-term memories in cortex. Our results underline that learning processes in realistic networks of spiking neurons depend crucially on the interactions of synaptic plasticity mechanisms with the dynamics of participating neurons.
3 0.76578921 86 nips-2013-Demixing odors - fast inference in olfaction
Author: Agnieszka Grabska-Barwinska, Jeff Beck, Alexandre Pouget, Peter Latham
Abstract: The olfactory system faces a difficult inference problem: it has to determine what odors are present based on the distributed activation of its receptor neurons. Here we derive neural implementations of two approximate inference algorithms that could be used by the brain. One is a variational algorithm (which builds on the work of Beck et al., 2012), the other is based on sampling. Importantly, we use a more realistic prior distribution over odors than has been used in the past: we use a “spike and slab” prior, for which most odors have zero concentration. After mapping the two algorithms onto neural dynamics, we find that both can infer correct odors in less than 100 ms. Thus, at the behavioral level, the two algorithms make very similar predictions. However, they make different assumptions about connectivity and neural computations, and make different predictions about neural activity. Thus, they should be distinguishable experimentally. If so, that would provide insight into the mechanisms employed by the olfactory system, and, because the two algorithms use very different coding strategies, that would also provide insight into how networks represent probabilities. 1
4 0.73455191 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval
Author: Cristina Savin, Peter Dayan, Mate Lengyel
Abstract: It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate. 1
5 0.69249189 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit
Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke
Abstract: Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population-sizes for which population dynamics can be characterized—beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs. 1
6 0.67982602 121 nips-2013-Firing rate predictions in optimal balanced networks
7 0.64705414 210 nips-2013-Noise-Enhanced Associative Memories
8 0.64462686 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking
9 0.63756669 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data
10 0.61004907 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively
11 0.59862632 15 nips-2013-A memory frontier for complex synapses
12 0.59112954 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables
13 0.55489653 205 nips-2013-Multisensory Encoding, Decoding, and Identification
14 0.55269849 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?
15 0.5359717 61 nips-2013-Capacity of strong attractor patterns to model behavioural and cognitive prototypes
16 0.53328323 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems
17 0.51213014 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models
18 0.46367034 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels
19 0.46273252 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions
20 0.45090726 157 nips-2013-Learning Multi-level Sparse Representations
topicId topicWeight
[(2, 0.018), (16, 0.047), (33, 0.136), (34, 0.111), (41, 0.036), (49, 0.071), (56, 0.079), (70, 0.083), (85, 0.03), (86, 0.214), (89, 0.069), (93, 0.027), (95, 0.015)]
simIndex simValue paperId paperTitle
same-paper 1 0.81033027 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits
Author: Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski
Abstract: With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron-to-neuron connectivity. In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time on stimulating the neurons whose synaptic strength is most ambiguous; therefore, we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. 1
2 0.79656613 4 nips-2013-A Comparative Framework for Preconditioned Lasso Algorithms
Author: Fabian L. Wauthier, Nebojsa Jojic, Michael Jordan
Abstract: The Lasso is a cornerstone of modern multivariate data analysis, yet its performance suffers in the common situation in which covariates are correlated. This limitation has led to a growing number of Preconditioned Lasso algorithms that pre-multiply X and y by matrices PX , Py prior to running the standard Lasso. A direct comparison of these and similar Lasso-style algorithms to the original Lasso is difficult because the performance of all of these methods depends critically on an auxiliary penalty parameter λ. In this paper we propose an agnostic framework for comparing Preconditioned Lasso algorithms to the Lasso without having to choose λ. We apply our framework to three Preconditioned Lasso instances and highlight cases when they will outperform the Lasso. Additionally, our theory reveals fragilities of these algorithms to which we provide partial solutions. 1
3 0.78796226 265 nips-2013-Reconciling "priors" & "priors" without prejudice?
Author: Remi Gribonval, Pierre Machart
Abstract: There are two major routes to address linear inverse problems. Whereas regularization-based approaches build estimators as solutions of penalized regression optimization problems, Bayesian estimators rely on the posterior distribution of the unknown, given some assumed family of priors. While these may seem radically different approaches, recent results have shown that, in the context of additive white Gaussian denoising, the Bayesian conditional mean estimator is always the solution of a penalized regression problem. The contribution of this paper is twofold. First, we extend the additive white Gaussian denoising results to general linear inverse problems with colored Gaussian noise. Second, we characterize conditions under which the penalty function associated to the conditional mean estimator can satisfy certain popular properties such as convexity, separability, and smoothness. This sheds light on some tradeoff between computational efficiency and estimation accuracy in sparse regularization, and draws some connections between Bayesian estimation and proximal optimization. 1
4 0.74174786 348 nips-2013-Variational Policy Search via Trajectory Optimization
Author: Sergey Levine, Vladlen Koltun
Abstract: In order to learn effective control policies for dynamical systems, policy search methods must be able to discover successful executions of the desired task. While random exploration can work well in simple domains, complex and highdimensional tasks present a serious challenge, particularly when combined with high-dimensional policies that make parameter-space exploration infeasible. We present a method that uses trajectory optimization as a powerful exploration strategy that guides the policy search. A variational decomposition of a maximum likelihood policy objective allows us to use standard trajectory optimization algorithms such as differential dynamic programming, interleaved with standard supervised learning for the policy itself. We demonstrate that the resulting algorithm can outperform prior methods on two challenging locomotion tasks. 1
5 0.71604347 121 nips-2013-Firing rate predictions in optimal balanced networks
Author: David G. Barrett, Sophie Denève, Christian K. Machens
Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1
6 0.71536309 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit
7 0.70366371 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking
8 0.70266104 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval
9 0.69969165 15 nips-2013-A memory frontier for complex synapses
10 0.6983223 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables
11 0.69573694 157 nips-2013-Learning Multi-level Sparse Representations
12 0.69414073 22 nips-2013-Action is in the Eye of the Beholder: Eye-gaze Driven Model for Spatio-Temporal Action Localization
13 0.69166905 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles
14 0.69142854 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions
15 0.69114244 56 nips-2013-Better Approximation and Faster Algorithm Using the Proximal Average
16 0.68992633 173 nips-2013-Least Informative Dimensions
17 0.68896347 64 nips-2013-Compete to Compute
18 0.68798947 310 nips-2013-Statistical analysis of coupled time series with Kernel Cross-Spectral Density operators.
19 0.68753177 16 nips-2013-A message-passing algorithm for multi-agent trajectory planning
20 0.68623531 303 nips-2013-Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis