nips nips2013 nips2013-86 knowledge-graph by maker-knowledge-mining

86 nips-2013-Demixing odors - fast inference in olfaction


Source: pdf

Author: Agnieszka Grabska-Barwinska, Jeff Beck, Alexandre Pouget, Peter Latham

Abstract: The olfactory system faces a difficult inference problem: it has to determine what odors are present based on the distributed activation of its receptor neurons. Here we derive neural implementations of two approximate inference algorithms that could be used by the brain. One is a variational algorithm (which builds on the work of Beck et al., 2012); the other is based on sampling. Importantly, we use a more realistic prior distribution over odors than has been used in the past: we use a “spike and slab” prior, for which most odors have zero concentration. After mapping the two algorithms onto neural dynamics, we find that both can infer correct odors in less than 100 ms. Thus, at the behavioral level, the two algorithms make very similar predictions. However, they make different assumptions about connectivity and neural computations, and make different predictions about neural activity. Thus, they should be distinguishable experimentally. If so, that would provide insight into the mechanisms employed by the olfactory system, and, because the two algorithms use very different coding strategies, that would also provide insight into how networks represent probabilities.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Demixing odors — fast inference in olfaction Agnieszka Grabska-Barwińska, Gatsby Computational Neuroscience Unit, UCL agnieszka@gatsby. [sent-1, score-0.889]

2 The olfactory system faces a difficult inference problem: it has to determine what odors are present based on the distributed activation of its receptor neurons. [sent-13, score-1.27]

3 Importantly, we use a more realistic prior distribution over odors than has been used in the past: we use a “spike and slab” prior, for which most odors have zero concentration. [sent-18, score-1.703]

4 After mapping the two algorithms onto neural dynamics, we find that both can infer correct odors in less than 100 ms. [sent-19, score-0.843]

5 If so, that would provide insight into the mechanisms employed by the olfactory system, and, because the two algorithms use very different coding strategies, that would also provide insight into how networks represent probabilities. [sent-23, score-0.296]

6 For the olfactory system, the input spikes come from a few hundred different types of olfactory receptor neurons, and the problem is to infer which odors caused them. [sent-25, score-1.492]

7 As there are more than 10,000 possible odors, and more than one can be present at a time, the search space for mixtures of odors is combinatorially large. [sent-26, score-0.85]

8 Nevertheless, olfactory processing is fast: organisms can typically determine what odors are present in a few hundred ms. [sent-27, score-1.133]

9 Since our focus is on inference, not learning, we assume that the olfactory system has learned both the statistics of odors in the world and the mapping from those odors to olfactory receptor neuron activity. [sent-29, score-2.368]

10 We begin by introducing a generative model for spikes in a population of olfactory receptor neurons. [sent-38, score-0.407]

11 We simulate those equations and find that both the variational and sampling approaches work well, requiring less than 100 ms to converge to a reasonable solution. [sent-41, score-0.177]

12 It is known that each odor, by itself, activates a different subset of the olfactory receptor neurons, typically on the order of 10–30% [2]. [sent-47, score-0.386]

13 Here we assume, for simplicity, that activation is linear, so that the activity of odorant receptor neuron i, denoted r_i, is linearly related to the concentrations, c_j, of the various odors present in a given olfactory scene, plus some background rate, r0. [sent-48, score-1.636]

14 Assuming Poisson noise, the response distribution has the form P(r|c) = ∏_i [(r0 + Σ_j w_ij c_j)^{r_i} e^{−(r0 + Σ_j w_ij c_j)} / r_i!]. [sent-49, score-0.383]

15 In a nutshell, r_i is Poisson with mean r0 + Σ_j w_ij c_j. [sent-50, score-0.327]

16 With this prior, there is a finite probability that the concentration of any particular odor is zero. [sent-54, score-0.289]

17 This prior is much more realistic than a smooth one, as it allows only a small number of odors (out of ∼10,000) to be present in any given olfactory scene. [sent-55, score-1.151]

18 It is modeled by introducing a binary variable, s_j, which is 1 if odor j is present and 0 otherwise. [sent-56, score-0.403]

19 For simplicity we assume that odors are independent and statistically homogeneous, with the spike-and-slab prior P(c_j|s_j) = (1 − s_j)δ(c_j) + s_j Γ(c_j|α1, β1). (2.1) [sent-57, score-1.161]
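
As a concrete illustration of this generative model, here is a minimal sketch in Python/NumPy. All numerical values are hypothetical placeholders (in particular, p_act stands in for the 10–30% activation figure above; the Gamma parameters are not taken from the paper).

    import numpy as np

    rng = np.random.default_rng(0)

    N_receptors, N_odors = 40, 400
    pi = 3 / 400                 # prior probability that an odor is present
    alpha1, beta1 = 1.5, 0.1     # slab Gamma parameters (placeholders)
    r0, p_act = 1.0, 0.2         # background rate; fraction of receptors each odor drives

    # Binary weights: each odor activates a random subset of receptor types.
    w = (rng.random((N_receptors, N_odors)) < p_act).astype(float)

    # Spike-and-slab prior (Eq. (2.1)): c_j = 0 when absent,
    # c_j ~ Gamma(alpha1, beta1) when present (beta1 is a rate, so scale = 1/beta1).
    s = rng.random(N_odors) < pi
    c = np.where(s, rng.gamma(alpha1, 1 / beta1, N_odors), 0.0)

    # Poisson observations with mean r0 + sum_j w_ij c_j (the likelihood above).
    r = rng.poisson(r0 + w @ c)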

20 Because of the delta function in the prior, performing efficient variational inference in our model is difficult. [sent-61, score-0.167]

21 The approximate prior on c is (1 − s_j)Γ(c_j|α0, β0) + s_j Γ(c_j|α1, β1). (3.1) [sent-65, score-0.357]

22 The approximate prior allows absent odors to have nonzero concentration. [sent-67, score-0.9]

23 We can partially compensate for that by setting the background firing rate, r0, to zero, and choosing α0 and β0 such that the effective background firing rate (due to the small concentration when s_j = 0) is equal to r0; see Sec. [sent-68, score-0.245]

24 This distribution, denoted Q(c, s|r), was set to Q(c|s, r)Q(s|r), where Q(c|s, r) = ∏_j [(1 − s_j)Γ(c_j|α0j, β0j) + s_j Γ(c_j|α1j, β1j)]. (3.2) [sent-71, score-0.332]

25 To simplify those equations, we set α1 = α0 + 1, resulting in α0j = α0 + Σ_i r_i w_ij F_j(λ_j, α0j) / Σ_k w_ik F_k(λ_k, α0k) and L_j ≡ log[λ_j/(1 − λ_j)] = L0j + log(α0j/α0) + α0j log(β0j/β1j). (3.3) [sent-75, score-0.209]

26 The remaining two parameters, β0j and β1j, are fixed by our choice of weights and priors: β0j = β0 + Σ_i w_ij and β1j = β1 + Σ_i w_ij. [sent-80, score-0.182]

27 The corresponding neural dynamics are τ_ρ dρ_i/dt = r_i − ρ_i Σ_j w_ij F_j(λ_j, α0j) and τ_α dα0j/dt = α0 + F_j(λ_j, α0j) Σ_i w_ij ρ_i − α0j, with analogous first-order dynamics, with time constant τ_λ, driving L_j toward its fixed point in Eq. (3.3). [sent-85, score-0.223]
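
To make the structure of these dynamics concrete, here is a purely illustrative Euler discretization, continuing the NumPy sketch above. Two assumptions are made explicit in the comments: the excerpt does not define F_j, so it is taken here to be the expected concentration under Q (consistent with the posterior-mean readout quoted later), and α1j is taken to be α0j + 1, mirroring the choice α1 = α0 + 1 above; the time constants and step size are placeholders.

    def variational_step(r, w, rho, a0j, Lj, b0j, b1j, alpha0, L0,
                         dt=1e-3, tau_rho=0.01, tau_a=0.01, tau_L=0.01):
        lam = 1.0 / (1.0 + np.exp(-Lj))         # lambda_j = sigmoid(L_j)
        a1j = a0j + 1.0                         # assumption: alpha_1j = alpha_0j + 1
        # Assumed form of F_j: the posterior-mean concentration under Q.
        Fj = (1 - lam) * a0j / b0j + lam * a1j / b1j
        drho = (r - rho * (w @ Fj)) / tau_rho   # divisive-normalization dynamics
        da0 = (alpha0 + Fj * (w.T @ rho) - a0j) / tau_a
        dL = (L0 + np.log(a0j / alpha0) + a0j * np.log(b0j / b1j) - Lj) / tau_L
        return rho + dt * drho, a0j + dt * da0, Lj + dt * dL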

28 These dynamics might raise some concerns: (i) ρ and α are reciprocally and symmetrically connected; (ii) there are multiplicative interactions between F(λ_j, α0j) and ρ; and (iii) the neurons need to compute nontrivial nonlinearities, such as logarithms, exponentials, and a mixture of digamma functions. [sent-96, score-0.176]

29 However: (i) reciprocal and symmetric connectivity exists in the early olfactory processing system [4, 5, 6]; (ii) although multiplicative interactions are in general not easy for neurons, the divisive normalization (Eq. (3.5)) has been observed in the olfactory bulb [7]; and (iii) the nonlinearities in our algorithms are not extreme (the logarithm is applied only on the positive range, α0j > α0). [sent-97, score-0.371] [sent-99, score-0.399]

31 To sample efficiently from our model, we introduce a new set of variables, c̃_j, defined via c_j = c̃_j s_j. (3.6) [sent-108, score-0.706]

32 When written in terms of c̃_j rather than c_j, the likelihood becomes P(r|c̃, s) = ∏_i [(r0 + Σ_j w_ij c̃_j s_j)^{r_i} e^{−(r0 + Σ_j w_ij c̃_j s_j)} / r_i!]. (3.7) [sent-110, score-0.853]

33 Because the value of c̃_j is unconstrained when s_j = 0, we have complete freedom in choosing P(c̃_j|s_j = 0), the piece of the prior corresponding to the absence of odor j. [sent-113, score-0.608]

34 It is convenient to set it to the same prior we use when s_j = 1, which is Γ(c̃_j|α1, β1). [sent-114, score-0.191]

35 Note that this set of manipulations does not change the model: the likelihood doesn't change, since by definition c̃_j s_j = c_j; when s_j = 1, c̃_j is drawn from the correct prior; and when s_j = 0, c̃_j does not appear in the likelihood. [sent-120, score-1.232]

36 The former is standard: τ_c dc̃_j/dt = ∂ log P(c̃, s|r)/∂c̃_j + ξ(t) = (α1 − 1)/c̃_j − β1 + s_j Σ_i w_ij [r_i/(r0 + Σ_k w_ik c̃_k s_k) − 1] + ξ(t). (3.8) [sent-122, score-0.625]
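
A minimal Euler-Maruyama discretization of this Langevin update, continuing the sketch above. The noise scaling √(2 dt/τ_c) is the standard Langevin choice and is an assumption here (the excerpt does not specify ξ(t)), as is the small positive floor that keeps c̃ in the domain of the Gamma prior.

    def langevin_step(c_t, s, r, w, alpha1, beta1, r0, dt=1e-4, tau_c=0.01):
        rate = r0 + w @ (c_t * s)               # receptor drive, r0 + sum_k w_ik c~_k s_k
        grad = (alpha1 - 1) / c_t - beta1 + s * (w.T @ (r / rate - 1))
        noise = rng.normal(size=c_t.size) * np.sqrt(2 * dt / tau_c)  # assumed scaling
        return np.maximum(c_t + (dt / tau_c) * grad + noise, 1e-6)   # keep c~ positive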

37 This can be done by discretizing time into steps of length dt, and computing the update probability for each odor on each time step. [sent-125, score-0.25]

38 This is a valid Gibbs sampler only in the limit dt → 0, where no more than one odor can be updated per time step; that is the limit of interest here. [sent-126, score-0.294]

39 The update rule is T(s′_j|c̃, s, r) = ν0 dt P(s′_j|c̃, s, r) + (1 − ν0 dt) Δ(s′_j − s_j), (3.10) [sent-127, score-0.166]

40 where s′_j ≡ s_j(t + dt), s and c̃ are evaluated at time t, and Δ(s) is the Kronecker delta: Δ(s) = 1 if s = 0 and 0 otherwise. [sent-128, score-0.332]

41 Computing P(s_j = 1|c̃, s, r) is straightforward, and we find that P(s_j = 1|c̃, s, r) = 1/(1 + exp[−Φ_j]), where Φ_j = log[π/(1 − π)] + Σ_i r_i log[(r0 + Σ_{k≠j} w_ik c̃_k s_k + w_ij c̃_j)/(r0 + Σ_{k≠j} w_ik c̃_k s_k)] − c̃_j Σ_i w_ij. [sent-130, score-0.786]
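
The corresponding Gibbs updates can be sketched as follows, in the same setting as the earlier snippets; the helper names are hypothetical, and nu0 sets how often each presence variable is considered, per Eq. (3.10) above.

    def prob_present(j, c_t, s, r, w, r0, pi):
        """P(s_j = 1 | c~, s, r) via the sigmoid of Phi_j above."""
        base = r0 + w @ (c_t * s) - w[:, j] * c_t[j] * s[j]   # drive excluding odor j
        Phi = (np.log(pi / (1 - pi))
               + np.sum(r * np.log((base + w[:, j] * c_t[j]) / base))
               - c_t[j] * w[:, j].sum())
        return 1.0 / (1.0 + np.exp(-Phi))

    def gibbs_step(c_t, s, r, w, r0, pi, nu0, dt):
        # Each odor is considered with probability nu0*dt on this time step.
        for j in np.flatnonzero(rng.random(s.size) < nu0 * dt):
            s[j] = rng.random() < prob_present(j, c_t, s, r, w, r0, pi)
        return s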

42 Thus, as with the variational approach, we expect a biophysical model to introduce approximations and, therefore, to slightly degrade the quality of the inference. [sent-138, score-0.196]

43 For both algorithms, the odors were generated from the true prior, Eq. (2.1). [sent-152, score-0.829]

44 We modeled a small olfactory system, with 40 olfactory receptor types (compared to approximately 350 in humans and 1000 in mice [8]). [sent-155, score-0.682]

45 To keep the ratio of identifiable odors to receptor types similar to the one in humans [8], we assumed 400 possible odors, with 3 odors expected to be present in the scene (π = 3/400). [sent-156, score-1.786]

46 If an odor was present, its concentration was drawn from a Gamma distribution with α1 = 1. [sent-157, score-0.276]

47 Our remaining parameter, β0 , was set to ensure that, for the variational algorithm, the absent odors (those with sj = 0) contributed a background firing rate of r0 on average. [sent-171, score-1.152]

48 This average background rate is given by Σ_j w_ij ⟨c_j⟩ = p_c N_odors α0/β0. [sent-172, score-0.307]
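
Setting this expression equal to r0 and solving for β0 gives a one-line calibration; the numerical values below are hypothetical placeholders, reusing N_odors and r0 from the first snippet.

    # beta0 from r0 = p_c * N_odors * alpha0 / beta0 (all values placeholders)
    p_c, alpha0 = 0.2, 0.5
    beta0 = p_c * N_odors * alpha0 / r0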

49 Figure 2 shows how the inference process evolves over time for a typical set of odors and concentrations. [sent-182, score-0.867]

50 The top panel shows concentration, with variational inference on the left (where we plot the mean of the posterior distribution over concentration, (1 − λ_j)α0j(t)/β0j(t) + λ_j α1j(t)/β1j(t); see Eq. (3.2)) [sent-183, score-0.187]

51 and sampling on the right (where we plot c̃_j, the output of our Langevin sampler; see Eq. (3.8)). [sent-185, score-0.228]

52 The three colored lines correspond to the odors that were present. [Figure 2: left column, variational; right column, sampling; top panels, concentrations c(t); bottom panels, log-probabilities across the 400 odors.]

53 In Fig. 2 we plot the log-probability that each of the odors is present, λ_j(t). [sent-204, score-0.846]

54 The present odors quickly approach probabilities of 1; the absent odors all have probabilities below 10^-4 within about 200 ms. [sent-205, score-1.742]

55 The bottom right panel shows samples of s_j for all the odors, with dots denoting present odors (s_j(t) = 1) and blanks denoting absent odors (s_j(t) = 0). [sent-206, score-1.892]

56 Beyond about 500 ms, the true odors (the colored lines at the bottom) are on continuously, while for the odors that were not present, s_j is still occasionally 1, though relatively rarely. [sent-207, score-1.846]

57 In Fig. 3 we show the time course of the probability of odors when between 1 and 5 odors were presented. [sent-209, score-1.671]

58 Therefore, we plot the probability of the most likely non-presented odor (red), the average probability of the non-presented odors (green), and the probability of guessing the correct odors via simple template matching (dashed; see Fig. 3). [sent-215, score-2.012]

59 Although odors were inferred relatively rapidly (they exceeded template matching within 20 ms), there were almost always false positives. [sent-217, score-0.921]

60 Even with just one odor present, both algorithms consistently report the existence of another odor (red). [sent-218, score-0.474]

61 This problem diminishes with time if fewer odors are presented than the expected three, but it persists for more complex mixtures. [sent-219, score-0.829]

62 The false positives are in fact consistent with human behavior: humans have difficulty correctly identifying more than one odor in a mixture, with the most common problem being false positives [9]. [sent-220, score-0.308]

63 In Fig. 4, we show the log-probability, L (left), and the probability, λ (right), averaged across 400 scenes containing 3 odors (see Supplementary Fig.). [sent-223, score-0.829]

64 The probability of absent odors drops from 3/400 ≈ e^-5 (the prior) to e^-12 (the final inferred probability). [sent-225, score-0.92]

65 For the variational approach, this represents a drop in activity of 7 log units, comparable to the increase of about 5 log units for the present odors (whose probability is inferred to be near 1). [sent-226, score-1.118]

66 Thus, for the variational algorithm the average activity associated with the absent odors exhibits a large drop, whereas for the sampling-based approach the average activity associated with the absent odors starts small and stays small. [sent-228, score-2.008]

67 We introduced two algorithms for inferring odors from the activity of the odorant receptor neurons. [sent-229, score-1.073]

68 The two algorithms performed with striking similarity: they both inferred odors within about 100 ms and they both had about the same accuracy. [sent-232, score-0.916]

69 For variational inference, the log probabilities of concentration and presence/absence are related to the dynamical variables via log Q(c_j) ∼ α1j log c_j − β1j c_j. (5.1) [sent-237, score-0.624]

70 If we interpret α0j and L_j as firing rates, then these equations correspond to a linear probabilistic population code [10]: the log probability inferred by the approximate algorithm is linear in firing rate, with a parameter-dependent offset (the term −β1j c_j in Eq. (5.1)). [sent-240, score-0.289]

71 For the sampling-based algorithm, on the other hand, activity generates samples from the posterior; an average of those samples codes for the probability of an odor being present. [sent-243, score-0.348]

72 Thus, if the olfactory system uses variational inference, activity should code for log probability, whereas if it uses sampling, activity should code for probability. [sent-244, score-0.544]

73 [Figure 3: probability traces for scenes with 1, 2, and 4 odors, variational and sampling columns; horizontal axis, time (20–80 ms).]

75 Shaded areas represent 25th–75th percentile of values across 400 olfactory scenes. [sent-257, score-0.308]

76 (Template matching finds odors (the j's) that maximize the normalized dot product between the activity, r_i, and the weights, w_ij, associated with odor j; that is, it chooses the j's that maximize Σ_i r_i w_ij / (Σ_i r_i^2 Σ_i w_ij^2)^{1/2}. [sent-261, score-1.375]

77 The number of odors chosen by template matching was set to the number of odors presented.) [sent-262, score-1.852]
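
A minimal sketch of this baseline decoder, in the same setting as the earlier snippets (the function name is hypothetical):

    def template_match(r, w, n_present):
        """Normalized-dot-product (cosine) template matching over odors."""
        scores = (r @ w) / np.sqrt((r @ r) * (w ** 2).sum(axis=0))
        return np.argsort(scores)[-n_present:]   # indices of the best-matching odors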

78 [Figure 4: variational (log-probability, L) and sampling (probability, λ) panels for scenes with 3 odors.]

79 For the variational algorithm, the activity of the neurons codes for log probability (relative to some background to keep firing rates non-negative). [sent-268, score-0.302]

80 For this algorithm, the drop in probability of the non-presented odors from about e^-5 to e^-12 corresponds to a large drop in firing rate. [sent-269, score-0.92]

81 For the sampling-based algorithm, on the other hand, activity codes for probability, and there is almost no drop in activity. [sent-270, score-0.168]

82 One is to note that for the variational algorithm there is a large drop in the average activity of the neurons coding for the non-present odors (Fig. 4). [sent-272, score-1.103]

83 Unfortunately, it is not clear where exactly one would need to place an electrode to record a trace of this olfactory inference. [sent-282, score-0.292]

84 A good place to start would be the olfactory bulb, where odor representations have been studied extensively [12, 13, 14]. [sent-283, score-0.514]

85 We should also point out that although the olfactory bulb is a likely location for at least part of our two inference algorithms, both are sufficiently complicated that they may need to be performed by higher cortical structures, such as the anterior piriform cortex [18, 19]. [sent-288, score-0.413]

86 For instance, the generative model was very simple: we assumed that concentrations added linearly, that weights were binary (so that each odor activated a subset of the olfactory receptor neurons at a finite value, and did not activate the rest at all), and that noise was Poisson. [sent-291, score-0.731]

87 And we considered priors such that all odors were independent. [sent-293, score-0.842]

88 This too is unlikely to be true: for instance, the set of odors one expects in a restaurant is very different from the ones one expects in a toxic waste dump, consistent with the fact that responses in the olfactory bulb are modulated by task-relevant behavior [20]. [sent-294, score-1.203]

89 We have also focused solely on inference: we assumed that the network knew perfectly both the mapping from odors to odorant receptor neurons and the priors. [sent-296, score-1.062]

90 Finally, the neurons in our network had to implement relatively complicated nonlinearities (logs, exponentials, and digamma and quadratic functions), and they had to be reciprocally connected. [sent-298, score-0.182]

91 Dense representation of natural odorants in the mouse olfactory bulb. [sent-319, score-0.277]

92 Theoretical reconstruction of field potentials and dendrodendritic synaptic interactions in olfactory bulb. [sent-330, score-0.291]

93 Sparse incomplete representations: a potential role of olfactory granule cells. [sent-341, score-0.277]

94 The capacity of humans to identify odors in mixtures. [sent-358, score-0.848]

95 Spatio-temporal dynamics of odor representations in the mammalian olfactory bulb. [sent-398, score-0.532]

96 Robust odor coding via inhalation-coupled transient activity in the mammalian olfactory bulb. [sent-401, score-0.619]

97 Modeling the olfactory bulb and its neural oscillatory processings. [sent-407, score-0.346]

98 Sparse distributed representation of odors in a large-scale olfactory bulb circuit. [sent-419, score-1.175]

99 A model of olfactory adaptation and sensitivity enhancement in the olfactory bulb. [sent-425, score-0.554]

100 Odor representations in olfactory cortex: distributed rate coding and decorrelated population activity. [sent-431, score-0.317]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('odors', 0.829), ('olfactory', 0.277), ('odor', 0.237), ('cj', 0.18), ('sj', 0.166), ('receptor', 0.109), ('wij', 0.091), ('variational', 0.091), ('bulb', 0.069), ('activity', 0.068), ('odorant', 0.067), ('neurons', 0.057), ('ri', 0.056), ('ms', 0.055), ('concentrations', 0.051), ('absent', 0.046), ('drop', 0.039), ('wik', 0.039), ('concentration', 0.039), ('dt', 0.038), ('inference', 0.038), ('nonlinearities', 0.037), ('ring', 0.034), ('ppm', 0.034), ('divisive', 0.033), ('template', 0.032), ('inferred', 0.032), ('percentile', 0.031), ('fj', 0.031), ('sampling', 0.031), ('codes', 0.03), ('neuron', 0.03), ('reciprocally', 0.03), ('lj', 0.028), ('coffee', 0.027), ('langevin', 0.027), ('organisms', 0.027), ('apr', 0.026), ('prior', 0.025), ('digamma', 0.024), ('log', 0.023), ('nov', 0.022), ('agnieszka', 0.022), ('bacon', 0.022), ('cybern', 0.022), ('naoshige', 0.022), ('nodors', 0.022), ('olfaction', 0.022), ('shepherd', 0.022), ('lines', 0.022), ('panel', 0.022), ('mixtures', 0.021), ('population', 0.021), ('realistic', 0.02), ('background', 0.02), ('equations', 0.02), ('spike', 0.019), ('humans', 0.019), ('sampler', 0.019), ('posterior', 0.019), ('probabilities', 0.019), ('gamma', 0.019), ('coding', 0.019), ('mammalian', 0.018), ('raise', 0.018), ('sk', 0.018), ('jeff', 0.017), ('beck', 0.017), ('symmetrically', 0.017), ('slab', 0.017), ('latham', 0.017), ('plot', 0.017), ('behavioral', 0.017), ('system', 0.017), ('pc', 0.016), ('ucl', 0.016), ('logarithm', 0.016), ('nontrivial', 0.016), ('dashed', 0.016), ('collapses', 0.016), ('reciprocal', 0.016), ('matching', 0.015), ('biol', 0.015), ('electrode', 0.015), ('cortical', 0.015), ('expects', 0.014), ('alexandre', 0.014), ('complicated', 0.014), ('interactions', 0.014), ('dynamical', 0.014), ('ck', 0.014), ('correct', 0.014), ('connectivity', 0.014), ('degrade', 0.014), ('delta', 0.013), ('predictions', 0.013), ('priors', 0.013), ('probability', 0.013), ('false', 0.013), ('positives', 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 86 nips-2013-Demixing odors - fast inference in olfaction

Author: Agnieszka Grabska-Barwinska, Jeff Beck, Alexandre Pouget, Peter Latham

Abstract: The olfactory system faces a difficult inference problem: it has to determine what odors are present based on the distributed activation of its receptor neurons. Here we derive neural implementations of two approximate inference algorithms that could be used by the brain. One is a variational algorithm (which builds on the work of Beck et al., 2012); the other is based on sampling. Importantly, we use a more realistic prior distribution over odors than has been used in the past: we use a “spike and slab” prior, for which most odors have zero concentration. After mapping the two algorithms onto neural dynamics, we find that both can infer correct odors in less than 100 ms. Thus, at the behavioral level, the two algorithms make very similar predictions. However, they make different assumptions about connectivity and neural computations, and make different predictions about neural activity. Thus, they should be distinguishable experimentally. If so, that would provide insight into the mechanisms employed by the olfactory system, and, because the two algorithms use very different coding strategies, that would also provide insight into how networks represent probabilities.

2 0.08277759 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

Author: Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski

Abstract: With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron to neuron connectivity. In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time stimulating the neurons whose synaptic strength is most ambiguous, therefore we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. 1

3 0.068907581 121 nips-2013-Firing rate predictions in optimal balanced networks

Author: David G. Barrett, Sophie Denève, Christian K. Machens

Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1

4 0.061579105 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke

Abstract: Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population-sizes for which population dynamics can be characterized—beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs. 1

5 0.058214653 210 nips-2013-Noise-Enhanced Associative Memories

Author: Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney

Abstract: Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms allow reliable learning and recall of exponential numbers of patterns. Though these designs correct external errors in recall, they assume neurons compute noiselessly, in contrast to highly variable neurons in hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as internal noise is less than a specified threshold, error probability in the recall phase can be made exceedingly small. More surprisingly, we show internal noise actually improves performance of the recall phase. Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks. 1

6 0.056437198 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

7 0.054279484 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

8 0.050740335 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

9 0.049717676 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

10 0.040760696 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems

11 0.038380891 312 nips-2013-Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex

12 0.037400201 229 nips-2013-Online Learning of Nonparametric Mixture Models via Sequential Variational Approximation

13 0.035599779 357 nips-2013-k-Prototype Learning for 3D Rigid Structures

14 0.035499237 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

15 0.03438827 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity

16 0.033994131 211 nips-2013-Non-Linear Domain Adaptation with Boosting

17 0.033691227 61 nips-2013-Capacity of strong attractor patterns to model behavioural and cognitive prototypes

18 0.031770036 135 nips-2013-Heterogeneous-Neighborhood-based Multi-Task Local Learning Algorithms

19 0.031582166 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

20 0.030826241 51 nips-2013-Bayesian entropy estimation for binary spike train data using parametric prior knowledge


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.082), (1, 0.036), (2, -0.057), (3, -0.036), (4, -0.097), (5, 0.012), (6, 0.026), (7, -0.029), (8, 0.036), (9, 0.004), (10, 0.004), (11, 0.036), (12, 0.028), (13, 0.013), (14, -0.001), (15, -0.014), (16, -0.005), (17, -0.005), (18, -0.003), (19, -0.024), (20, -0.001), (21, -0.012), (22, -0.018), (23, 0.038), (24, -0.031), (25, -0.009), (26, 0.041), (27, -0.014), (28, 0.016), (29, 0.032), (30, -0.031), (31, 0.022), (32, 0.016), (33, 0.046), (34, -0.0), (35, -0.008), (36, 0.061), (37, -0.012), (38, -0.06), (39, -0.021), (40, -0.071), (41, 0.02), (42, -0.023), (43, 0.062), (44, -0.005), (45, -0.022), (46, -0.011), (47, 0.032), (48, 0.033), (49, 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.87706423 86 nips-2013-Demixing odors - fast inference in olfaction

Author: Agnieszka Grabska-Barwinska, Jeff Beck, Alexandre Pouget, Peter Latham

Abstract: The olfactory system faces a difficult inference problem: it has to determine what odors are present based on the distributed activation of its receptor neurons. Here we derive neural implementations of two approximate inference algorithms that could be used by the brain. One is a variational algorithm (which builds on the work of Beck et al., 2012); the other is based on sampling. Importantly, we use a more realistic prior distribution over odors than has been used in the past: we use a “spike and slab” prior, for which most odors have zero concentration. After mapping the two algorithms onto neural dynamics, we find that both can infer correct odors in less than 100 ms. Thus, at the behavioral level, the two algorithms make very similar predictions. However, they make different assumptions about connectivity and neural computations, and make different predictions about neural activity. Thus, they should be distinguishable experimentally. If so, that would provide insight into the mechanisms employed by the olfactory system, and, because the two algorithms use very different coding strategies, that would also provide insight into how networks represent probabilities.

2 0.70829225 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

Author: Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski

Abstract: With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron to neuron connectivity. In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time stimulating the neurons whose synaptic strength is most ambiguous, therefore we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. 1

3 0.64382839 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke

Abstract: Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population-sizes for which population dynamics can be characterized—beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs. 1

4 0.5810619 210 nips-2013-Noise-Enhanced Associative Memories

Author: Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney

Abstract: Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms allow reliable learning and recall of exponential numbers of patterns. Though these designs correct external errors in recall, they assume neurons compute noiselessly, in contrast to highly variable neurons in hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as internal noise is less than a specified threshold, error probability in the recall phase can be made exceedingly small. More surprisingly, we show internal noise actually improves performance of the recall phase. Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks. 1

5 0.57015061 157 nips-2013-Learning Multi-level Sparse Representations

Author: Ferran Diego Andilla, Fred A. Hamprecht

Abstract: Bilinear approximation of a matrix is a powerful paradigm of unsupervised learning. In some applications, however, there is a natural hierarchy of concepts that ought to be reflected in the unsupervised analysis. For example, in the neurosciences image sequence considered here, there are the semantic concepts of pixel → neuron → assembly that should find their counterpart in the unsupervised analysis. Driven by this concrete problem, we propose a decomposition of the matrix of observations into a product of more than two sparse matrices, with the rank decreasing from lower to higher levels. In contrast to prior work, we allow for both hierarchical and heterarchical relations of lower-level to higher-level concepts. In addition, we learn the nature of these relations rather than imposing them. Finally, we describe an optimization scheme that allows to optimize the decomposition over all levels jointly, rather than in a greedy level-by-level fashion. The proposed bilevel SHMF (sparse heterarchical matrix factorization) is the first formalism that allows to simultaneously interpret a calcium imaging sequence in terms of the constituent neurons, their membership in assemblies, and the time courses of both neurons and assemblies. Experiments show that the proposed model fully recovers the structure from difficult synthetic data designed to imitate the experimental data. More importantly, bilevel SHMF yields plausible interpretations of real-world Calcium imaging data. 1

6 0.52694792 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

7 0.51839697 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively

8 0.51638567 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

9 0.50701743 121 nips-2013-Firing rate predictions in optimal balanced networks

10 0.473638 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions

11 0.46183729 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity

12 0.46007463 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems

13 0.45918226 312 nips-2013-Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex

14 0.42567104 317 nips-2013-Streaming Variational Bayes

15 0.42450419 345 nips-2013-Variance Reduction for Stochastic Gradient Optimization

16 0.42264533 187 nips-2013-Memoized Online Variational Inference for Dirichlet Process Mixture Models

17 0.41993919 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

18 0.41795602 229 nips-2013-Online Learning of Nonparametric Mixture Models via Sequential Variational Approximation

19 0.40577519 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

20 0.39762205 234 nips-2013-Online Variational Approximations to non-Exponential Family Change Point Models: With Application to Radar Tracking


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.017), (16, 0.062), (33, 0.082), (34, 0.136), (41, 0.037), (49, 0.052), (51, 0.281), (56, 0.065), (70, 0.055), (85, 0.02), (89, 0.02), (93, 0.026), (95, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.73738432 86 nips-2013-Demixing odors - fast inference in olfaction

Author: Agnieszka Grabska-Barwinska, Jeff Beck, Alexandre Pouget, Peter Latham

Abstract: The olfactory system faces a difficult inference problem: it has to determine what odors are present based on the distributed activation of its receptor neurons. Here we derive neural implementations of two approximate inference algorithms that could be used by the brain. One is a variational algorithm (which builds on the work of Beck et al., 2012); the other is based on sampling. Importantly, we use a more realistic prior distribution over odors than has been used in the past: we use a “spike and slab” prior, for which most odors have zero concentration. After mapping the two algorithms onto neural dynamics, we find that both can infer correct odors in less than 100 ms. Thus, at the behavioral level, the two algorithms make very similar predictions. However, they make different assumptions about connectivity and neural computations, and make different predictions about neural activity. Thus, they should be distinguishable experimentally. If so, that would provide insight into the mechanisms employed by the olfactory system, and, because the two algorithms use very different coding strategies, that would also provide insight into how networks represent probabilities.

2 0.56762433 331 nips-2013-Top-Down Regularization of Deep Belief Networks

Author: Hanlin Goh, Nicolas Thome, Matthieu Cord, Joo-Hwee Lim

Abstract: Designing a principled and effective algorithm for learning deep architectures is a challenging problem. The current approach involves two training phases: a fully unsupervised learning followed by a strongly discriminative optimization. We suggest a deep learning strategy that bridges the gap between the two phases, resulting in a three-phase learning procedure. We propose to implement the scheme using a method to regularize deep belief networks with top-down information. The network is constructed from building blocks of restricted Boltzmann machines learned by combining bottom-up and top-down sampled signals. A global optimization procedure that merges samples from a forward bottom-up pass and a top-down pass is used. Experiments on the MNIST dataset show improvements over the existing algorithms for deep belief networks. Object recognition results on the Caltech-101 dataset also yield competitive results. 1

3 0.55807418 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

Author: Cristina Savin, Peter Dayan, Mate Lengyel

Abstract: It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate. 1

4 0.54612732 125 nips-2013-From Bandits to Experts: A Tale of Domination and Independence

Author: Noga Alon, Nicolò Cesa-Bianchi, Claudio Gentile, Yishay Mansour

Abstract: We consider the partial observability model for multi-armed bandits, introduced by Mannor and Shamir [14]. Our main result is a characterization of regret in the directed observability model in terms of the dominating and independence numbers of the observability graph (which must be accessible before selecting an action). In the undirected case, we show that the learner can achieve optimal regret without even accessing the observability graph before selecting an action. Both results are shown using variants of the Exp3 algorithm operating on the observability graph in a time-efficient manner. 1

5 0.53879368 121 nips-2013-Firing rate predictions in optimal balanced networks

Author: David G. Barrett, Sophie Denève, Christian K. Machens

Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1

6 0.53815883 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

7 0.53581262 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

8 0.53525966 15 nips-2013-A memory frontier for complex synapses

9 0.53502899 238 nips-2013-Optimistic Concurrency Control for Distributed Unsupervised Learning

10 0.53305119 148 nips-2013-Latent Maximum Margin Clustering

11 0.53295749 210 nips-2013-Noise-Enhanced Associative Memories

12 0.5299381 39 nips-2013-Approximate Gaussian process inference for the drift function in stochastic differential equations

13 0.52970225 143 nips-2013-Integrated Non-Factorized Variational Inference

14 0.52861595 187 nips-2013-Memoized Online Variational Inference for Dirichlet Process Mixture Models

15 0.52808881 347 nips-2013-Variational Planning for Graph-based MDPs

16 0.5277614 48 nips-2013-Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC

17 0.52776116 122 nips-2013-First-order Decomposition Trees

18 0.52661377 100 nips-2013-Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process Mixture

19 0.52655703 173 nips-2013-Least Informative Dimensions

20 0.52621764 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits