nips nips2013 nips2013-121 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: David G. Barrett, Sophie Denève, Christian K. Machens
Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Firing rate predictions in optimal balanced networks Sophie Denève, Group for Neural Theory, École Normale Supérieure, Paris, France. sophie. [sent-1, score-0.376]
2 Abstract How are firing rates in a spiking network related to neural input, connectivity and network function? [sent-11, score-0.948]
3 This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. [sent-12, score-0.569]
4 However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. [sent-13, score-0.576]
5 We develop a new technique for calculating firing rates in optimal balanced networks. [sent-14, score-0.404]
6 These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. [sent-15, score-0.629]
7 We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. [sent-16, score-0.748]
8 Our firing rate calculation relates network firing rates directly to network input, connectivity and function. [sent-18, score-0.752]
9 A large, sometimes bewildering, diversity of firing rate responses to stimuli has since been observed [2], ranging from sigmoidal-shaped tuning curves [3, 4], to bump-shaped tuning curves [5], with much diversity in between [6]. [sent-21, score-0.748]
10 What is the computational role of these firing rate responses and how are firing rates determined by neuron dynamics, network connectivity and neural input? [sent-22, score-0.822]
11 However, most approaches have struggled to deal with the non-linearity of neural spike-generation mechanisms and the strong interaction between neurons as mediated through network connectivity. [sent-24, score-0.383]
12 These calculations have led to important insights into how neural network connectivity and input determine firing rates. [sent-29, score-0.455]
13 We develop a new technique for calculating firing rates, by directly identifying the non-linear structure of tightly balanced networks. [sent-31, score-0.395]
14 Balanced network theory has come to be regarded as the standard model of cortical activity [12, 13], accounting for a large proportion of observed activity through a dynamic balance of excitation and inhibition [14]. [sent-32, score-0.577]
15 Recently, it was found that tightly balanced networks are synonymous with efficient coding, in which a signal is represented optimally subject to metabolic costs [15]. [sent-33, score-0.514]
16 This observation allows us, here, to interpret balanced network activity as an optimisation algorithm. [sent-34, score-0.581]
17 We use this technique to calculate firing rates in a variety of balanced network models, thereby exploring the computational role and underlying network mechanisms of monotonic firing rate tuning curves, bump-shaped tuning curves and tuning curve inhomogeneity. [sent-36, score-1.669]
18 2 Optimal balanced network models We calculate firing rates in a balanced network consisting of N recurrently connected leaky integrate-and-fire neurons (Fig. [sent-37, score-1.165]
, s_N), where $s_i(t) = \sum_k \delta(t - t_i^k)$ is the spike train of neuron i with spike times $t_i^k$. [sent-52, score-0.566]
20 A spike is produced whenever the membrane potential $V_i$ exceeds the spiking threshold $T_i$ of neuron i. [sent-53, score-0.746]
21 The membrane potential has the following dynamics: $\frac{dV_i}{dt} = -\lambda V_i + \sum_{k=1}^{N} \Omega_{ik} s_k + \sum_{j=1}^{M} F_{ij} I_j$ (1), where λ is the neuron leak, $\Omega_{ik}$ is the connection strength from neuron k to neuron i and $F_{ij}$ is the connection strength from input j to neuron i [16]. [sent-55, score-1.047]
22 When a neuron spikes, the membrane potential is reset to Ri ≡ Ti + Ωii . [sent-56, score-0.364]
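The dynamics in equation 1, together with this reset rule, are straightforward to simulate with a forward-Euler scheme. The sketch below is a minimal illustration only, not the authors' code: the network size, leak, cost parameter β, random feedforward matrix F and the constant input are placeholder assumptions, and at most one neuron is allowed to spike per time step.

```python
import numpy as np

# Minimal forward-Euler simulation of the LIF network in Eqn. 1 (sketch;
# all parameter values and the random F are illustrative assumptions).
rng = np.random.default_rng(0)
N, M = 50, 2                       # neurons, signal dimensions
lam, beta, dt = 1.0, 1e-3, 1e-4    # leak, metabolic cost, Euler step
F = rng.normal(size=(N, M))
F /= np.linalg.norm(F, axis=1, keepdims=True)  # unit-norm decoding weights
Omega = -F @ F.T - beta * np.eye(N)            # optimal recurrent connectivity
T = -np.diag(Omega) / 2                        # thresholds T_i = -Omega_ii / 2

def simulate(x, steps=50_000):
    """Drive the network with a constant signal x and return firing rates."""
    V = np.zeros(N)
    counts = np.zeros(N)
    I = lam * x          # constant current whose leaky filtering equals x
    for _ in range(steps):
        V += dt * (-lam * V + F @ I)
        k = np.argmax(V - T)
        if V[k] > T[k]:            # threshold crossing -> spike of neuron k
            V += Omega[:, k]       # recurrent kick; Omega_kk resets neuron k
            counts[k] += 1
    return counts / (steps * dt)

rates = simulate(np.array([1.0, 0.5]))
```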
23 We are interested in networks where a balance of excitation and inhibition coincides with optimal signal representation. [sent-59, score-0.363]
24 Not all choices of network connectivity and spiking thresholds will give both [12, 13], but both can be achieved if certain conditions are satisfied. [sent-60, score-0.595]
25 $r_k = \int_0^{\infty} e^{-\lambda s}\, s_k(t-s)\, ds$ (4) and $x_j$ is a temporal filtering of the j-th input, $x_j = \int_0^{\infty} e^{-\lambda s}\, I_j(t-s)\, ds$. All the excitatory and inhibitory inputs received by neuron i are included in this summation (Eqn. [sent-64, score-0.353]
26 Now, we can use this expression to derive the conditions that connectivity must satisfy so that the network operates in an optimal balanced state. [sent-67, score-0.569]
27 In balanced networks, excitation and inhibition cancel to produce an input that is the same order of magnitude as the spiking threshold. [sent-68, score-0.664]
28 In tightly balanced networks, which we consider, this cancellation is so precise that Vi → 0 in the large network limit (for all active neurons) [15, 17, 18]. [sent-70, score-0.554]
29 This has two implications for our choice of network connectivity and spiking thresholds. [sent-73, score-0.595]
30 Secondly, the spiking threshold of each neuron must be chosen so that each spike acts to minimise the cost function. [sent-76, score-0.637]
31 Finally, Figure 1: Optimal balanced network example. [sent-79, score-0.394]
32 (A) Schematic of a balanced neural network providing an optimal spike-based representation x̂ of a signal x. [sent-80, score-0.554]
33 (B) A tightly balanced network can produce an output x̂1 (blue, top panel) that closely matches the signal x1 (black, top panel). [sent-81, score-0.616]
34 Population spiking activity is represented here using a raster plot (middle panel), where each spike is represented with a dot. [sent-82, score-0.455]
35 For a randomly chosen neuron (red, middle panel), we plot the total excitatory input (green, bottom panel) and the total inhibitory input (red, bottom panel). [sent-83, score-0.456]
36 The sum of excitation and inhibition (black, bottom panel) fluctuates about the spiking threshold (thin black line, bottom panel) indicating that this network is tightly balanced. [sent-84, score-0.803]
37 A spike is produced whenever this sum exceeds the spiking threshold. [sent-85, score-0.382]
38 (C) Firing rate tuning curves are measured during simulations of our balanced network. [sent-86, score-0.632]
39 Therefore, the spiking threshold for each neuron must be set to $T_k \equiv -\Omega_{kk}/2$, though this condition can be relaxed considerably if our loss function has an additional linear cost term (footnote 1). [sent-90, score-0.452]
40 We are interested in networks that are both tightly balanced and optimal. [sent-92, score-0.39]
41 This is an important result, because it relates balanced network dynamics to a neural computation. [sent-95, score-0.486]
42 Specifically, it allows us to interpret the spiking activity of our tightly balanced network as an algorithm that optimises a loss function (Eqn. [sent-96, score-0.901]
43 (8) The second term of equation 7 is a metabolic cost term that penalises neurons for spiking excessively, and the first term quantifies the difference between the signal value x and a linear read-out x̂, where x̂ is computed using the linear decoder Fᵀ (Eqn. [sent-103, score-0.572]
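A form of equation 7 consistent with this description, and with the quadratic metabolic cost implied by the connectivity $\Omega = -FF^{\mathsf{T}} - \beta I$ used later, is the following reconstruction (an assumption, since the displayed equation itself is not reproduced here):

```latex
E(\mathbf{r}) \;=\; \lVert \mathbf{x} - \hat{\mathbf{x}} \rVert^{2} \;+\; \beta \lVert \mathbf{r} \rVert^{2},
\qquad \hat{\mathbf{x}} \;=\; F^{\mathsf{T}} \mathbf{r},
```

where r is the vector of filtered spike trains from equation 4, and firing rates follow as f = λr (so that a spike, $r_i \to r_i + 1$, gives $f_i \to f_i + \lambda$, as in equation 9).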
44 Therefore, a network with this connectivity produces spike trains that optimise equation 7, thereby producing an output x̂ that is close to the signal value x. [sent-105, score-0.661]
45 We find that our network produces spike trains (Fig. [sent-108, score-0.353]
46 We measure firing rate tuning curves using a fixed value of x2 while varying x1 . [sent-114, score-0.416]
47 We use this signal because it can produce interesting, non-linear tuning curves (Fig. [sent-115, score-0.426]
48 In the next section, we will attempt to understand this tuning curve non-linearity by calculating firing rates analytically. [sent-117, score-0.456]
49 3 Firing rate analysis with quadratic programming Our goal is to calculate the firing rates f of all the neurons in these tightly balanced network models as a function of the network input, the recurrent network connectivity Ω, and the feedforward connectivity F. [sent-118, score-1.767]
50 On the surface, this may seem to be a difficult problem, because individual neurons have complicated non-linear integrate-and-fire dynamics and they interact strongly through network connectivity. [sent-119, score-0.399]
51 Then, we find that the optimal spiking thresholds for this network are given by Ti ≡ (−Ωii + bi )/2 ≥ −Ωii /2. [sent-127, score-0.42]
52 We can now calculate firing rates using this relationship and by exploiting the algorithmic nature of tightly balanced networks. [sent-133, score-0.566]
53 Therefore, the firing rates of our network are those that minimise E(f/λ), under the constraint that firing rates must be positive: $\{f_i\} = \arg\min_{f \geq 0} E(f/\lambda)$. [sent-136, score-0.497]
54 When both neurons are active, we can solve equation 10 exactly, to see that firing rates are related to network connectivity according to $f = -\lambda \Omega^{-1} F x$. [sent-146, score-0.696]
55 When one of the neurons becomes silent, the other neuron must compensate by adjusting its firing rate slope. [sent-147, score-0.461]
56 For example, when neuron 1 becomes silent, we have f1 = 0 and the firing rate of neuron 2 increases to $f_2 = \lambda F_2 x / (F_2 F_2^{\mathsf{T}} + \beta)$, where $F_2$ denotes the second row of F. [sent-148, score-0.504]
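In general, the constrained minimisation in equation 10 is a non-negative least-squares problem, so predicted tuning curves can be traced with an off-the-shelf solver. A sketch under the quadratic-loss reconstruction above; `predicted_rates` is a hypothetical helper name, not a function from the paper:

```python
import numpy as np
from scipy.optimize import nnls

def predicted_rates(F, x, lam=1.0, beta=1e-3):
    """f = lam * argmin_{r >= 0} ||x - F.T @ r||^2 + beta * ||r||^2.

    Stacking sqrt(beta)*I under F.T folds the ridge penalty into a single
    non-negative least-squares problem, solved exactly by scipy's nnls.
    """
    N = F.shape[0]
    A = np.vstack([F.T, np.sqrt(beta) * np.eye(N)])
    b = np.concatenate([x, np.zeros(N)])
    r, _ = nnls(A, b)
    return lam * r

# Trace tuning curves: fix x2 and sweep x1, as in the simulations above.
rng = np.random.default_rng(1)
F = rng.normal(size=(50, 2))
F /= np.linalg.norm(F, axis=1, keepdims=True)
x1_grid = np.linspace(-2.0, 2.0, 41)
tuning = np.stack([predicted_rates(F, np.array([x1, 0.5])) for x1 in x1_grid])
```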
57 Predicted firing rates closely match measured firing rates for both neurons, and for all signal values (right). [sent-153, score-0.368]
58 We also measure the firing rate trajectory (right panel) as the network evolves towards the minimum of the cost function E(x1 = 1) (blue cross, right panel), where neuron 2 is silent. [sent-157, score-0.472]
59 In general, we can interpret tuning curve shape to be the solution of a quadratic programming problem, which can be written as a piece-wise linear function $f = M(x)\, x$, where M(x) is a matrix whose entries depend on the region of signal space occupied by x. [sent-163, score-0.476]
60 For example, in the two-neuron system that we just discussed, the signal space is partitioned into three regions: one region where neuron 1 is active and where neuron 2 is silent, a second region where both neurons are active and a third region where neuron 1 is silent and neuron 2 is active (Fig. [sent-164, score-1.342]
61 The boundaries of these regions occur at points in signal space where an active neuron becomes silent (or where a silent neuron becomes active). [sent-167, score-0.786]
62 We can also use quadratic programming to describe the spiking dynamics underlying these nonlinear networks. [sent-169, score-0.376]
63 The step-size is λ because when neuron i spikes, $r_i \to r_i + 1$, according to equation 3, and therefore, $f_i \to f_i + \lambda$, according to equation 9. [sent-173, score-0.41]
64 Eventually, when the firing rate has decayed too far from the optimal solution, another spike is fired and the network moves closer to the optimum. [sent-178, score-0.402]
65 In this way, spiking dynamics can be interpreted as a quadratic programming algorithm. [sent-179, score-0.376]
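This interpretation can be written out as a greedy descent: filtered rates decay with the leak, and a neuron fires exactly when adding one spike to its filtered rate lowers the loss. A minimal sketch, again assuming the reconstructed quadratic loss rather than the paper's exact equation 7:

```python
import numpy as np

def greedy_spiking_descent(F, x, lam=1.0, beta=1e-3, dt=1e-4, steps=50_000):
    """Spiking as coordinate-wise descent on E(r) = ||x - F.T r||^2 + beta ||r||^2."""
    N = F.shape[0]
    r = np.zeros(N)                 # filtered spike trains (Eqn. 4)
    counts = np.zeros(N)
    for _ in range(steps):
        r *= 1.0 - lam * dt         # leaky decay between spikes
        err = x - F.T @ r
        # Loss change if neuron k fires (r_k -> r_k + 1):
        # dE_k = -2 F_k . err + ||F_k||^2 + beta * (2 r_k + 1)
        dE = -2.0 * (F @ err) + np.sum(F**2, axis=1) + beta * (2.0 * r + 1.0)
        k = np.argmin(dE)
        if dE[k] < 0:               # fire only if the spike lowers the loss
            r[k] += 1.0
            counts[k] += 1
    return counts / (steps * dt)    # empirical firing rates
```

Under this reconstruction the firing condition dE_k < 0 rearranges to a threshold crossing of the same form as in the text, $F_k(x - F^{\mathsf{T}}r) - \beta r_k > -\Omega_{kk}/2$.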
66 4 Analysing tuning curve shape with quadratic programming Now that we have a framework for relating firing rates to network connectivity and input, we can explore the computational function of tuning curve shapes and the network mechanisms that generate these tuning curves. [sent-183, score-1.484]
67 We will investigate systems that have monotonic tuning curves and systems that have bump-shaped tuning curves, which together constitute a large proportion of firing rate observations [2, 3, 4, 5]. [sent-184, score-0.653]
68 We begin by considering a system of monotonic tuning curves, similar to the examples that we have considered already, where the recurrent connectivity is given by $\Omega = -FF^{\mathsf{T}} - \beta I$. [sent-185, score-0.469]
69 In these systems, the recurrent connectivity and hence the tuning curve shape is largely determined by the form of the feedforward matrix F. [sent-186, score-0.475]
70 This matrix also determines the contribution of tuning curves to computational function, through its role as a linear decoder for signal representation (Eqn. [sent-187, score-0.454]
71 This system produces monotonically increasing and decreasing tuning curves (Fig. [sent-191, score-0.357]
72 3, blue tuning curves), and neurons with negative F values have negative firing rate slopes (Fig. [sent-194, score-0.519]
73 If the values of F are regularly spaced, then the tuning curves of individual neurons are regularly spaced, and, if we manipulate this regularity by adding some random noise to the connectivity, we obtain inhomogeneous and highly irregular tuning curves (Fig. [sent-196, score-0.99]
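This manipulation is easy to reproduce with the quadratic-programming predictor sketched earlier (`predicted_rates` is the same hypothetical helper, repeated here for completeness; the 0.1 noise level is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import nnls

def predicted_rates(F, x, lam=1.0, beta=1e-3):
    # Same NNLS predictor as in the earlier sketch.
    A = np.vstack([F.T, np.sqrt(beta) * np.eye(F.shape[0])])
    r, _ = nnls(A, np.concatenate([x, np.zeros(F.shape[0])]))
    return lam * r

N = 40
F_regular = np.linspace(-1.0, 1.0, N).reshape(N, 1)   # regularly spaced weights
F_noisy = F_regular + 0.1 * np.random.default_rng(2).normal(size=(N, 1))

x_grid = np.linspace(-2.0, 2.0, 81)
tuning_regular = np.stack([predicted_rates(F_regular, np.array([x])) for x in x_grid])
tuning_noisy = np.stack([predicted_rates(F_noisy, np.array([x])) for x in x_grid])
# tuning_noisy shows irregular slopes and onsets, while the read-out
# F.T @ (f / lam) stays close to x in both cases.
```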
74 This inhomogeneous monotonic tuning is reminiscent of tuning in many neural systems, including the oculomotor system [4]. [sent-199, score-0.606]
75 The oculomotor system represents eye position, using neurons with negative slopes to represent left-side eye positions and neurons with positive slopes to represent right-side eye positions. [sent-200, score-0.714]
76 (A) Each dot represents the contribution of a neuron to a signal representation (when the firing rate is 10 × 16 Hz) (1st column). [sent-202, score-0.416]
77 We simulate a network of neurons and measure firing rates (2nd column). [sent-204, score-0.482]
78 These measurements closely match our algorithmically predicted firing rates (3rd column), where each point in the 4th column represents the firing rate of an individual neuron for a given stimulus. [sent-205, score-0.485]
79 The representation error (bottom panels, column 2 and column 3) is similar to the network without connectivity noise. [sent-207, score-0.437]
80 Each dot represents the contribution of a neuron to a signal representation (when the firing rate is 20 × 16 Hz) (1st column). [sent-209, score-0.416]
81 This signal produces bump-shaped tuning curves (2nd column), which we can also predict accurately (3rd and 4th column). [sent-210, score-0.426]
82 Figure 4: Performance of quadratic programming in firing rate prediction (panels A and B; axis labels: leak, membrane potential noise η). [sent-211, score-0.357]
83 Now, we can use the relationship that we have developed between tuning curves and computational function to interpret oculomotor tuning as an attempt to represent eye positions optimally. [sent-218, score-0.719]
84 Bump-shaped tuning curves can be produced by networks representing circular variables x1 = cos θ, x2 = sin θ, where θ is the orientation of the signal (Fig. [sent-219, score-0.472]
85 As before, the tuning curves of individual neurons are regularly spaced if the values of F are regularly spaced. [sent-221, score-0.634]
86 If we add some noise to the connectivity F, the tuning curves become inhomogeneous and highly irregular. [sent-222, score-0.562]
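A circular version of the same construction, with preferred orientations spread evenly around the circle (again a sketch; the helper and all parameters are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import nnls

def predicted_rates(F, x, lam=1.0, beta=1e-3):
    # Same NNLS predictor as in the earlier sketches.
    A = np.vstack([F.T, np.sqrt(beta) * np.eye(F.shape[0])])
    r, _ = nnls(A, np.concatenate([x, np.zeros(F.shape[0])]))
    return lam * r

N = 40
phi = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred orientations
F = np.column_stack([np.cos(phi), np.sin(phi)])         # rows (cos phi_i, sin phi_i)
F += 0.1 * np.random.default_rng(3).normal(size=F.shape)  # connectivity noise

thetas = np.linspace(0.0, 2.0 * np.pi, 120)
tuning = np.stack([predicted_rates(F, np.array([np.cos(t), np.sin(t)]))
                   for t in thetas])  # columns are bump-shaped tuning curves
```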
87 The success of our algorithmic approach in calculating firing rates depends on the success of spiking networks in algorithmically optimising a cost function. [sent-226, score-0.532]
88 The resolution of this spiking algorithm is determined by the leak λ and membrane potential noise. [sent-227, score-0.435]
89 5 Discussion and Conclusions We have developed a new algorithmic technique for calculating firing rates in tightly balanced networks. [sent-233, score-0.532]
90 Identifying such relationships is a long-standing problem in systems neuroscience, largely because the mathematical language that we use to describe information representation is very different to the language that we use to describe neural network spiking statistics. [sent-236, score-0.486]
91 For tightly balanced networks, we have essentially solved this problem, by matching the firing rate statistics of neural activity to the structure of neural signal representation. [sent-237, score-0.671]
92 Previous studies have also interpreted firing rates to be the result of a constrained optimisation problem [21], but for a population coding model, not for a network of spiking neurons. [sent-239, score-0.691]
93 In a more recent study, a spiking network was used to solve an optimisation problem, although this network required positive and negative spikes, which is difficult to reconcile with biological spiking [22]. [sent-240, score-0.92]
94 The firing rate tuning curves that we calculate have allowed us to investigate poorly understood features of experimentally recorded tuning curves. [sent-241, score-0.655]
95 In particular, we have been able to evaluate the impact of tuning curve inhomogeneity on neural computation. [sent-242, score-0.381]
96 We find that tuning curve inhomogeneity is not necessarily noise because it does not necessarily harm signal representation. [sent-244, score-0.437]
97 Therefore, we propose that tuning curves are inhomogeneous simply because they can be. [sent-245, score-0.387]
98 Footnote 3: Membrane potential noise can be included in our network model by adding a Wiener process noise term to our membrane potential equation (Eqn. [sent-249, score-0.401]
99 (2006) Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. [sent-330, score-0.397]
100 (2012) Predictive coding of dynamical variables in balanced spiking networks. [sent-345, score-0.486]
wordName wordTfidf (topN-words)
[('ring', 0.605), ('spiking', 0.242), ('balanced', 0.216), ('neuron', 0.21), ('tuning', 0.2), ('network', 0.178), ('connectivity', 0.175), ('neurons', 0.167), ('panel', 0.144), ('spike', 0.14), ('rates', 0.137), ('curves', 0.132), ('tightly', 0.128), ('membrane', 0.124), ('silent', 0.12), ('excitation', 0.102), ('signal', 0.094), ('rate', 0.084), ('optimisation', 0.08), ('inhibition', 0.077), ('inhomogeneity', 0.075), ('hz', 0.073), ('activity', 0.073), ('machens', 0.069), ('curve', 0.068), ('firing', 0.068), ('slopes', 0.068), ('eye', 0.056), ('inhomogeneous', 0.055), ('dynamics', 0.054), ('regularly', 0.052), ('oculomotor', 0.051), ('calculating', 0.051), ('networks', 0.046), ('relationship', 0.046), ('minimise', 0.045), ('spikes', 0.045), ('balance', 0.044), ('neuroscience', 0.043), ('quadratic', 0.043), ('inhibitory', 0.043), ('prediction', 0.042), ('deneve', 0.041), ('excitatory', 0.04), ('leak', 0.039), ('equation', 0.039), ('calculate', 0.039), ('plos', 0.038), ('ti', 0.038), ('neural', 0.038), ('bottom', 0.038), ('calculations', 0.037), ('monotonic', 0.037), ('timescale', 0.037), ('kk', 0.037), ('programming', 0.037), ('ik', 0.037), ('ft', 0.036), ('sompolinsky', 0.035), ('trains', 0.035), ('fij', 0.034), ('boerlin', 0.034), ('champalimaud', 0.034), ('ckm', 0.034), ('linearising', 0.034), ('recurrently', 0.034), ('uctuate', 0.034), ('vreeswijk', 0.034), ('zotterman', 0.034), ('interpret', 0.034), ('receptive', 0.033), ('middle', 0.033), ('fi', 0.033), ('temporal', 0.033), ('recurrent', 0.032), ('active', 0.032), ('spaced', 0.031), ('uctuations', 0.031), ('fx', 0.03), ('normale', 0.03), ('rieure', 0.03), ('fft', 0.03), ('metabolic', 0.03), ('optimises', 0.03), ('optimising', 0.03), ('cortical', 0.03), ('potential', 0.03), ('representation', 0.028), ('vi', 0.028), ('coding', 0.028), ('ri', 0.028), ('thin', 0.028), ('column', 0.028), ('input', 0.027), ('population', 0.026), ('algorithmically', 0.026), ('generalise', 0.026), ('adrian', 0.026), ('dt', 0.026), ('system', 0.025)]
simIndex simValue paperId paperTitle
same-paper 1 1.0 121 nips-2013-Firing rate predictions in optimal balanced networks
Author: David G. Barrett, Sophie Denève, Christian K. Machens
Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1
2 0.31905791 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data
Author: Jasper Snoek, Richard Zemel, Ryan P. Adams
Abstract: Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interaction. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data. 1
3 0.23052126 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking
Author: David Carlson, Vinayak Rao, Joshua T. Vogelstein, Lawrence Carin
Abstract: With simultaneous measurements from ever increasing populations of neurons, there is a growing need for sophisticated tools to recover signals from individual neurons. In electrophysiology experiments, this classically proceeds in a two-step process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma process model. Importantly, we develop an online approximate inference scheme enabling real-time analysis, with performance exceeding the previous state-of-theart. Via exploratory data analysis—using data with partial ground truth as well as two novel data sets—we find several features of our model collectively contribute to our improved performance including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels. We hope to enable novel experiments simultaneously measuring many thousands of neurons and possibly adapting stimuli dynamically to probe ever deeper into the mysteries of the brain. 1
4 0.17724694 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles
Author: David Pfau, Eftychios A. Pnevmatikakis, Liam Paninski
Abstract: Recordings from large populations of neurons make it possible to search for hypothesized low-dimensional dynamics. Finding these dynamics requires models that take into account biophysical constraints and can be fit efficiently and robustly. Here, we present an approach to dimensionality reduction for neural data that is convex, does not make strong assumptions about dynamics, does not require averaging over many trials and is extensible to more complex statistical models that combine local and global influences. The results can be combined with spectral methods to learn dynamical systems models. The basic method extends PCA to the exponential family using nuclear norm minimization. We evaluate the effectiveness of this method using an exact decomposition of the Bregman divergence that is analogous to variance explained for PCA. We show on model data that the parameters of latent linear dynamical systems can be recovered, and that even if the dynamics are not stationary we can still recover the true latent subspace. We also demonstrate an extension of nuclear norm minimization that can separate sparse local connections from global latent dynamics. Finally, we demonstrate improved prediction on real neural data from monkey motor cortex compared to fitting linear dynamical models without nuclear norm smoothing. 1
5 0.17611752 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit
Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke
Abstract: Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population-sizes for which population dynamics can be characterized—beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs. 1
6 0.17375542 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity
7 0.16921237 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems
8 0.16746035 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits
9 0.1619112 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables
10 0.13822277 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions
11 0.12294561 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval
12 0.11696489 210 nips-2013-Noise-Enhanced Associative Memories
13 0.10979135 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?
14 0.10913055 51 nips-2013-Bayesian entropy estimation for binary spike train data using parametric prior knowledge
15 0.1009387 64 nips-2013-Compete to Compute
16 0.097871095 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively
17 0.096595049 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models
18 0.095120125 205 nips-2013-Multisensory Encoding, Decoding, and Identification
19 0.093580455 341 nips-2013-Universal models for binary spike patterns using centered Dirichlet processes
20 0.087653272 329 nips-2013-Third-Order Edge Statistics: Contour Continuation, Curvature, and Cortical Connections
topicId topicWeight
[(0, 0.168), (1, 0.095), (2, -0.133), (3, -0.114), (4, -0.381), (5, -0.072), (6, -0.065), (7, -0.095), (8, 0.033), (9, 0.065), (10, 0.044), (11, -0.022), (12, 0.043), (13, 0.007), (14, 0.035), (15, -0.034), (16, 0.014), (17, 0.042), (18, 0.02), (19, 0.032), (20, -0.001), (21, 0.01), (22, 0.05), (23, -0.032), (24, 0.025), (25, -0.043), (26, -0.022), (27, 0.022), (28, 0.068), (29, -0.028), (30, -0.08), (31, -0.068), (32, -0.015), (33, -0.043), (34, -0.021), (35, 0.018), (36, 0.012), (37, -0.016), (38, 0.038), (39, -0.032), (40, -0.025), (41, -0.016), (42, 0.005), (43, 0.047), (44, -0.009), (45, -0.008), (46, -0.017), (47, -0.01), (48, 0.009), (49, 0.036)]
simIndex simValue paperId paperTitle
same-paper 1 0.97174656 121 nips-2013-Firing rate predictions in optimal balanced networks
Author: David G. Barrett, Sophie Denève, Christian K. Machens
Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1
2 0.87641221 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data
Author: Jasper Snoek, Richard Zemel, Ryan P. Adams
Abstract: Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interaction. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data. 1
3 0.87350261 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit
Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke
Abstract: Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population-sizes for which population dynamics can be characterized—beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs. 1
4 0.75094497 205 nips-2013-Multisensory Encoding, Decoding, and Identification
Author: Aurel A. Lazar, Yevgeniy Slutskiy
Abstract: We investigate a spiking neuron model of multisensory integration. Multiple stimuli from different sensory modalities are encoded by a single neural circuit comprised of a multisensory bank of receptive fields in cascade with a population of biophysical spike generators. We demonstrate that stimuli of different dimensions can be faithfully multiplexed and encoded in the spike domain and derive tractable algorithms for decoding each stimulus from the common pool of spikes. We also show that the identification of multisensory processing in a single neuron is dual to the recovery of stimuli encoded with a population of multisensory neurons, and prove that only a projection of the circuit onto input stimuli can be identified. We provide an example of multisensory integration using natural audio and video and discuss the performance of the proposed decoding and identification algorithms. 1
5 0.74302942 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking
Author: David Carlson, Vinayak Rao, Joshua T. Vogelstein, Lawrence Carin
Abstract: With simultaneous measurements from ever increasing populations of neurons, there is a growing need for sophisticated tools to recover signals from individual neurons. In electrophysiology experiments, this classically proceeds in a two-step process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma process model. Importantly, we develop an online approximate inference scheme enabling real-time analysis, with performance exceeding the previous state-of-theart. Via exploratory data analysis—using data with partial ground truth as well as two novel data sets—we find several features of our model collectively contribute to our improved performance including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels. We hope to enable novel experiments simultaneously measuring many thousands of neurons and possibly adapting stimuli dynamically to probe ever deeper into the mysteries of the brain. 1
6 0.70428741 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?
7 0.66152936 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables
8 0.65309632 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity
9 0.64949089 210 nips-2013-Noise-Enhanced Associative Memories
10 0.63536155 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively
11 0.63294905 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits
12 0.62194574 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems
13 0.62010109 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models
14 0.56708193 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles
15 0.56363755 64 nips-2013-Compete to Compute
16 0.56164676 157 nips-2013-Learning Multi-level Sparse Representations
17 0.56103081 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions
18 0.5574919 86 nips-2013-Demixing odors - fast inference in olfaction
19 0.49804765 341 nips-2013-Universal models for binary spike patterns using centered Dirichlet processes
topicId topicWeight
[(9, 0.153), (16, 0.046), (33, 0.115), (34, 0.09), (36, 0.01), (41, 0.022), (49, 0.159), (56, 0.098), (70, 0.097), (85, 0.036), (89, 0.026), (93, 0.05), (95, 0.015)]
simIndex simValue paperId paperTitle
same-paper 1 0.88331407 121 nips-2013-Firing rate predictions in optimal balanced networks
Author: David G. Barrett, Sophie Denève, Christian K. Machens
Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1
2 0.81471157 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data
Author: Jasper Snoek, Richard Zemel, Ryan P. Adams
Abstract: Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interaction. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data. 1
3 0.80791157 266 nips-2013-Recurrent linear models of simultaneously-recorded neural populations
Author: Marius Pachitariu, Biljana Petreska, Maneesh Sahani
Abstract: Population neural recordings with long-range temporal structure are often best understood in terms of a common underlying low-dimensional dynamical process. Advances in recording technology provide access to an ever-larger fraction of the population, but the standard computational approaches available to identify the collective dynamics scale poorly with the size of the dataset. We describe a new, scalable approach to discovering low-dimensional dynamics that underlie simultaneously recorded spike trains from a neural population. We formulate the Recurrent Linear Model (RLM) by generalising the Kalman-filter-based likelihood calculation for latent linear dynamical systems to incorporate a generalised-linear observation process. We show that RLMs describe motor-cortical population data better than either directly-coupled generalised-linear models or latent linear dynamical system models with generalised-linear observations. We also introduce the cascaded generalised-linear model (CGLM) to capture low-dimensional instantaneous correlations in neural populations. The CGLM describes the cortical recordings better than either Ising or Gaussian models and, like the RLM, can be fit exactly and quickly. The CGLM can also be seen as a generalisation of a lowrank Gaussian model, in this case factor analysis. The computational tractability of the RLM and CGLM allow both to scale to very high-dimensional neural data. 1
4 0.80661899 323 nips-2013-Synthesizing Robust Plans under Incomplete Domain Models
Author: Tuan A. Nguyen, Subbarao Kambhampati, Minh Do
Abstract: Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task, thus real world agents have to plan with incomplete domain models. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In such cases, the goal should be to synthesize plans that are robust with respect to any known incompleteness of the domain. In this paper, we first introduce annotations expressing the knowledge of the domain incompleteness and formalize the notion of plan robustness with respect to an incomplete domain model. We then show an approach to compiling the problem of finding robust plans to the conformant probabilistic planning problem, and present experimental results with Probabilistic-FF planner. 1
5 0.80375236 221 nips-2013-On the Expressive Power of Restricted Boltzmann Machines
Author: James Martens, Arkadev Chattopadhya, Toni Pitassi, Richard Zemel
Abstract: This paper examines the question: What kinds of distributions can be efficiently represented by Restricted Boltzmann Machines (RBMs)? We characterize the RBM’s unnormalized log-likelihood function as a type of neural network, and through a series of simulation results relate these networks to ones whose representational properties are better understood. We show the surprising result that RBMs can efficiently capture any distribution whose density depends on the number of 1’s in their input. We also provide the first known example of a particular type of distribution that provably cannot be efficiently represented by an RBM, assuming a realistic exponential upper bound on the weights. By formally demonstrating that a relatively simple distribution cannot be represented efficiently by an RBM our results provide a new rigorous justification for the use of potentially more expressive generative models, such as deeper ones. 1
6 0.79839081 274 nips-2013-Relevance Topic Model for Unstructured Social Group Activity Recognition
7 0.79327166 70 nips-2013-Contrastive Learning Using Spectral Methods
8 0.79296941 131 nips-2013-Geometric optimisation on positive definite matrices for elliptically contoured distributions
9 0.79057837 13 nips-2013-A Scalable Approach to Probabilistic Latent Space Inference of Large-Scale Networks
10 0.77490801 303 nips-2013-Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis
11 0.76765794 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit
12 0.76202333 345 nips-2013-Variance Reduction for Stochastic Gradient Optimization
13 0.75438511 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking
14 0.7474854 157 nips-2013-Learning Multi-level Sparse Representations
15 0.74363512 64 nips-2013-Compete to Compute
16 0.73679608 94 nips-2013-Distributed $k$-means and $k$-median Clustering on General Topologies
17 0.73069465 22 nips-2013-Action is in the Eye of the Beholder: Eye-gaze Driven Model for Spatio-Temporal Action Localization
18 0.72639418 16 nips-2013-A message-passing algorithm for multi-agent trajectory planning
19 0.72536337 56 nips-2013-Better Approximation and Faster Algorithm Using the Proximal Average
20 0.72292012 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval