nips nips2013 nips2013-264 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Wenhao Zhang, Si Wu
Abstract: Psychophysical experiments have demonstrated that the brain integrates information from multiple sensory cues in a near Bayesian optimal manner. The present study proposes a novel mechanism to achieve this. We consider two reciprocally connected networks, mimicking the integration of heading direction information between the dorsal medial superior temporal (MSTd) and the ventral intraparietal (VIP) areas. Each network serves as a local estimator and receives an independent cue, either the visual or the vestibular, as direct input for the external stimulus. We find that positive reciprocal interactions can improve the decoding accuracy of each individual network as if it implements Bayesian inference from two cues. Our model successfully explains the experimental finding that both MSTd and VIP achieve Bayesian multisensory integration, though each of them only receives a single cue as direct external input. Our result suggests that the brain may implement optimal information integration distributively at each local estimator through the reciprocal connections between cortical regions. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract: Psychophysical experiments have demonstrated that the brain integrates information from multiple sensory cues in a near Bayesian optimal manner. [sent-8, score-0.35]
2 We consider two reciprocally connected networks, mimicking the integration of heading direction information between the dorsal medial superior temporal (MSTd) and the ventral intraparietal (VIP) areas. [sent-10, score-0.563]
3 Each network serves as a local estimator and receives an independent cue, either the visual or the vestibular, as direct input for the external stimulus. [sent-11, score-0.432]
4 We find that positive reciprocal interactions can improve the decoding accuracy of each individual network as if it implements Bayesian inference from two cues. [sent-12, score-0.607]
5 Our model successfully explains the experimental finding that both MSTd and VIP achieve Bayesian multisensory integration, though each of them only receives a single cue as direct external input. [sent-13, score-0.409]
6 Our result suggests that the brain may implement optimal information integration distributively at each local estimator through the reciprocal connections between cortical regions. [sent-14, score-0.631]
7 For instance, while walking, we perceive heading direction through either the visual cue (optic flow), or the vestibular cue generated by body movement, or both of them [1, 2]. [sent-16, score-1.092]
8 In order to achieve an accurate or improved representation of the input information, it is critical for the brain to integrate information from multiple sensory modalities. [sent-18, score-0.211]
9 Consider the task of inferring heading direction θ based on the visual and vestibular cues. [sent-20, score-0.604]
10 Suppose that with a single cue cl (l = vi, ve, corresponding to the visual and the vestibular cues, respectively), the estimate of heading direction satisfies the Gaussian distribution p(cl|θ), which has mean µl and variance σl². [sent-21, score-0.949]
11 Under the condition that noises from different cues are independent of each other, Bayes' theorem states that p(θ|cvi, cve) ∝ p(cvi|θ) p(cve|θ) p(θ), (1) where p(θ|cvi, cve) is the posterior distribution of the stimulus when both cues are presented, and p(θ) the prior distribution. [sent-22, score-0.961]
12 When the prior p(θ) is uniform, p(θ|cvi, cve) also satisfies a Gaussian distribution, with mean and variance given by µb = [σve²/(σvi² + σve²)] µvi + [σvi²/(σvi² + σve²)] µve, (2) and 1/σb² = 1/σvi² + 1/σve². (3) [sent-25, score-0.227]
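As a sanity check of Eqs. (2)-(3), the fused estimate can be computed directly. This is a minimal sketch; the function name and values are illustrative, not from the paper:

```python
def fuse_cues(mu_vi, var_vi, mu_ve, var_ve):
    """Posterior mean and variance of p(theta | c_vi, c_ve) under a flat
    prior: inverse variances add (Eq. (3)) and the mean is the
    reliability-weighted average of the single-cue means (Eq. (2))."""
    var_b = 1.0 / (1.0 / var_vi + 1.0 / var_ve)
    mu_b = var_b * (mu_vi / var_vi + mu_ve / var_ve)
    return mu_b, var_b
```

With equal cue variances the fused mean is the simple average of the two cue means and the fused variance is half the single-cue variance.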
13 In particular, in their model, the improved decoding accuracy after combining input cues (i. [sent-33, score-0.438]
14 Moreover, it is not clear where in the cortex this centralized network responsible for information integration is located. [sent-38, score-0.294]
15 In this work, we propose a novel mechanism to implement Bayesian information integration, which relies on the excitatory reciprocal interactions between local estimators, with each local estimator receiving an independent cue as external input. [sent-39, score-0.804]
16 Although our idea may be applicable to general cases, the present study focuses on two reciprocally connected networks, mimicking the integration of heading direction information between the dorsal medial superior temporal (MSTd) area and ventral intraparietal (VIP) area. [sent-40, score-0.563]
17 It is known that MSTd and VIP receive the visual and the vestibular cues as external input, respectively. [sent-41, score-0.756]
18 We model each network as a continuous attractor neural network (CANN), reflecting the property that neurons in MSTd and VIP are widely tuned to heading direction [10, 11]. [sent-42, score-0.638]
19 Interestingly, we find that with positive reciprocal interactions, both networks read out heading direction optimally in the Bayesian sense, despite the fact that each network only receives a single cue as direct external input. [sent-43, score-1.197]
20 This agrees well with the experimental finding that both MSTd and VIP integrate the visual and vestibular cues optimally [6, 7]. [sent-44, score-0.712]
21 Our result suggests that the brain may implement Bayesian information integration distributively at each local area through reciprocal connections between cortical regions. [sent-45, score-0.631]
22 2 The Model We consider two reciprocally connected networks, each of which receives the stimulus information from an independent sensory cue (see Fig. [sent-46, score-0.507]
23 Anatomical and fMRI data have revealed that there exist abundant reciprocal interactions between MSTd and VIP [12–14]. [sent-49, score-0.344]
24 Neurons in MSTd and VIP are tuned to heading direction, relying on the visual and the vestibular cues [10, 15]. [sent-50, score-0.808]
25 the heading direction) encoded by both networks, and the neuronal preferred stimuli are in the range of −π < θ ≤ π with periodic boundary condition. [sent-55, score-0.21]
26 Denote by Ul(θ, t), for l = 1, 2, the synaptic input at time t to the neurons with preferred stimulus θ in the l-th network. [sent-56, score-0.275]
27 The dynamics of Ul (θ, t) is determined by the recurrent inputs from other neurons in the same network, the reciprocal inputs from neurons in the other network, the external input Ilext (θ, t), and its own relaxation. [sent-57, score-0.823]
28 It is written as τ ∂Ul(θ, t)/∂t = −Ul(θ, t) + ρ Σm ∫ Wlm(θ, θ′) rm(θ′, t) dθ′ + Ilext(θ, t), for l, m = 1, 2, (4) where τ is the time constant of the synaptic current, typically on the order of 2-5 ms. [sent-58, score-0.203]
29 rl (θ, t) is the firing rate of neurons, which increases with the synaptic input but saturates when the synaptic input is sufficiently large. [sent-60, score-0.2]
30 The two networks are reciprocally connected and each of them forms a CANN. [sent-63, score-0.21]
31 Each disk represents an excitatory neuron with its preferred heading direction indicated by the arrow inside. [sent-64, score-0.312]
32 The gray disk in the middle of the network represents the inhibitory neuron pool which sums the total activities of excitatory neurons and generates divisive normalization (Eq. [sent-65, score-0.351]
33 Wlm(θ, θ′) denotes the connection from neurons θ′ in network m to neurons θ in network l. [sent-79, score-0.486]
34 W11 (θ, θ′ ) and W22 (θ, θ′ ) are the recurrent connections within the same network, and W12 (θ, θ′ ) and W21 (θ, θ′ ) the reciprocal connections between the networks. [sent-80, score-0.489]
35 We choose Jlm > 0, for l, m = 1, 2, implying excitatory recurrent and reciprocal neuronal interactions. [sent-85, score-0.392]
36 The external inputs to the two networks are given by Ilext(θ, t) = αl exp[−(θ − µl)²/(4 all²)] + ηl ξl(θ, t), (7) where µl denotes the stimulus value conveyed to network l by the corresponding sensory cue. [sent-87, score-0.519]
37 This can be understood as Ilext driving network l to be stable at µl when no reciprocal interaction or noise is present. [sent-88, score-0.457]
38 The noise term captures the uncertainty of the input information and induces fluctuations of the network state. [sent-90, score-0.208]
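The model above can be sketched as a minimal Euler integration of Eqs. (4) and (7) for two reciprocally coupled CANNs on a ring. The divisive-normalization firing rate, the population-vector decoder, and all parameter values below are illustrative assumptions (the paper's firing-rate equation is not fully reproduced in this extract); with the noise off, each bump settles at the cue position:

```python
import numpy as np

def simulate_coupled_canns(J_rc=0.3, J_rp=0.1, alpha=0.5, eta=0.0,
                           a=0.5, k=0.1, N=128, T=1500, dt=0.05,
                           tau=1.0, mu=(0.5, 0.5), seed=0):
    """Euler-integrate Eq. (4) for two coupled CANNs; decode each bump
    position as the population-vector angle. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
    # periodic distance between preferred stimuli
    d = (theta[:, None] - theta[None, :] + np.pi) % (2 * np.pi) - np.pi
    gauss = np.exp(-d ** 2 / (2 * a ** 2)) / (np.sqrt(2 * np.pi) * a)
    # W[l][m]: connections from network m to network l (Gaussian, Eq. (6)-like)
    W = [[J_rc * gauss, J_rp * gauss],
         [J_rp * gauss, J_rc * gauss]]
    U = [np.zeros(N), np.zeros(N)]
    for _ in range(T):
        r = []
        for u in U:
            up = np.maximum(u, 0.0) ** 2
            # assumed divisive normalization; with density rho = N/(2*pi),
            # rho * integral(U^2 dtheta) reduces to a plain sum
            r.append(up / (1.0 + k * up.sum()))
        for l in (0, 1):
            # external input, Eq. (7)
            I_ext = alpha * np.exp(-(theta - mu[l]) ** 2 / (4 * a ** 2))
            if eta:
                I_ext = I_ext + eta * rng.standard_normal(N)
            # rho * integral(W r dtheta') reduces to W @ r (rho*dtheta = 1)
            rec = W[l][0] @ r[0] + W[l][1] @ r[1]
            U[l] = U[l] + dt / tau * (-U[l] + rec + I_ext)
    z = [float(np.angle(np.sum(np.maximum(u, 0.0) * np.exp(1j * theta))))
         for u in U]
    return U, z
```

Running the sketch with both cues at 0.5 radians, both decoded bump positions converge near the cue value.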
39 The dynamics of uncoupled networks. It is instructive to first review the dynamics of the two networks without reciprocal connections (obtained by setting Wlm = 0 for l ≠ m in Eq. [sent-93, score-0.749]
40 In this case, the dynamics of each network is independent of the other. [sent-95, score-0.234]
41 Because of the translation-invariance of the recurrent connections Wll (θ, θ′ ), each network can support a continuous family of active stationary states even when the external input is removed [19]. [sent-96, score-0.45]
42 These attractor states are Gaussian-shaped bumps, given by Ũl(θ) = Ul0 exp[−(θ − zl)²/(4 all²)], (8) where zl is a free parameter representing the peak position of the bump, and Ul0 = [1 + (1 − Jc/Jll)^(1/2)] Jll/(4 all k √(2π)). [sent-97, score-0.36]
43 In response to external inputs, the bump position zl is interpreted as the population decoding result of the network. [sent-99, score-0.664]
44 It has been proven that for a strong transient or a weak constant input, the network bump will move toward, and stabilize at, the position having the maximum overlap with the noisy input, realizing the so-called template-matching operation [17, 18]. [sent-100, score-0.535]
45 For temporally fluctuating inputs, the bump position also fluctuates in time, and the variance of the bump position measures the network's decoding uncertainty. [sent-101, score-0.977]
46 The stationary states of a CANN form a continuous manifold on which the network is neutrally stable, i. [sent-102, score-0.181]
47 i.e., the network state can translate smoothly when the external input changes continuously [18, 20]. [sent-104, score-0.347]
48 Due to the special structure of a CANN, it has been proved that the dynamics of a CANN is dominated by a few motion modes, corresponding to distortions in the height, position and other higher order features of the Gaussian bump [19]. [sent-106, score-0.483]
49 In the weak input limit, it is enough to project the network dynamics onto the first few dominating motion modes and neglect the higher-order ones, which simplifies the network dynamics significantly. [sent-107, score-0.696]
50 The first two dominating motion modes we will use are the height mode, ϕ0(θ|z) = exp[−(θ − z)²/(4a²)], (9) and the position mode, ϕ1(θ|z) = [(θ − z)/a] exp[−(θ − z)²/(4a²)], (10) where a is the width of the basis functions, whose value is determined by the bump width the network holds. [sent-108, score-0.673]
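The position mode ϕ1 is, up to a constant factor, the derivative of the Gaussian bump with respect to its position, so projecting a perturbed bump onto ϕ1 reads out its shift. A sketch under that Gaussian-bump assumption (the function name is illustrative):

```python
import numpy as np

def estimate_position(U, theta, z0, a):
    """First-order position readout: for a unit-height Gaussian bump,
    dU/dz = phi_1/(2a), so the projection coefficient of (U - bump(z0))
    onto phi_1 equals shift/(2a)."""
    phi0 = np.exp(-(theta - z0) ** 2 / (4 * a ** 2))   # height mode, Eq. (9)
    phi1 = (theta - z0) / a * phi0                     # position mode, Eq. (10)
    coef = np.dot(U - phi0, phi1) / np.dot(phi1, phi1)
    return z0 + 2 * a * coef
```

For a bump displaced by a small amount from the reference position z0, the readout recovers the displacement to first order.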
51 When reciprocal connections are included, the dynamics of the two networks interact with each other. [sent-110, score-0.57]
52 The bump position of each network is no longer solely determined by its own input, but is also affected by the input to the other network, enabling both networks to integrate two sensory cues via reciprocal connections. [sent-111, score-1.319]
53 We consider the reciprocal connections Wlm(θ, θ′), for l ≠ m, to also be translation-invariant (Eq. [sent-112, score-0.312]
54 In the text below, we will consider the weak input limit and use a projection method to simplify the network dynamics. [sent-117, score-0.283]
55 The simplified model allows us to solve the network decoding performance analytically and gives us insight into how reciprocal connections help both networks integrate information optimally from independent cues. [sent-118, score-0.829]
56 i.e., J11 = J22 ≡ Jrc, J12 = J21 ≡ Jrp, and alm = a; and they receive the same mean input value and input strength, [sent-123, score-0.201]
57 i.e., ⟨ξ1 ξ2⟩ = 0, implying that the two cues are independent of each other given the stimulus. [sent-128, score-0.257]
58 Two networks receive the same external input, whose value jumps from −1 to 1 abruptly. [sent-133, score-0.225]
59 The network states move smoothly from the initial to the target position, and the main changes are in the height and position of the Gaussian bumps. [sent-134, score-0.372]
60 (C) The simplified network dynamics after projecting onto the two dominating motion modes. [sent-136, score-0.321]
61 (13)), we can get the necessary condition for the networks to hold self-sustained bump states, which is (see Supplemental information 2) Jrc + Jrp ≥ 2√2 (2π)^(1/4) √(ka/ρ). (16) [sent-149, score-0.352]
62 (16) It indicates that positive reciprocal interactions Jrp help the networks to retain attractor states. [sent-150, score-0.475]
63 To get a clear understanding of the effect of reciprocal connections, we decouple the dynamics of z1 and z2 by studying the dynamics of their difference, zd = z1 − z2, and their sum, zs = z1 + z2. [sent-151, score-0.677]
(14) and (15), we obtain τ dzd/dt = −[(α + 2J̃rp B)/A] zd + [2√2 η√a/((2π)^(1/4) A)] ϵd(t), (17) and τ dzs/dt = −(α/A) zs + (2α/A) µ + [2√2 η√a/((2π)^(1/4) A)] ϵs(t), (18) where ϵd(t) and ϵs(t) are independent Gaussian white noises re-organized from ξ1(t) and ξ2(t) (√2 ϵ = ξ1 ± ξ2). [sent-153, score-0.297]
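Eqs. (17)-(18) are Ornstein-Uhlenbeck processes: a larger restoring gain (which for zd grows with the reciprocal term J̃rp B) yields a smaller stationary variance. A quick Euler-Maruyama check; all coefficients below are illustrative, not fitted to the paper:

```python
import numpy as np

def ou_stationary_var(gain, noise, tau=1.0, dt=0.01, T=200_000, seed=1):
    """Simulate tau*dz/dt = -gain*z + noise*eps(t) with white noise eps
    and return the empirical stationary variance (burn-in discarded).
    The analytic value is noise**2 / (2 * gain * tau)."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(T)
    s = np.sqrt(dt)
    z = 0.0
    samples = np.empty(T)
    for t in range(T):
        z += dt / tau * (-gain * z) + (noise / tau) * s * xi[t]
        samples[t] = z
    return float(samples[T // 10:].var())
```

Increasing the gain (stronger coupling) shrinks the stationary variance, mirroring the effect of J̃rp on Var(zd).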
65 (20) indicates that the positive reciprocal connections J̃rp tend to decrease the variance of zd, i. [sent-156, score-0.205]
66 The decoding error of each network, measured by the variance of zl, is calculated to be (the two networks give the same result by symmetry) ⟨zl⟩ = µ, for l = 1, 2, (22) and Var(zl) = [Var(zd) + Var(zs)]/4 = [η²a/(√(2π) τ A)] [1/α + 1/(α + 2J̃rp B)]. (23) [sent-158, score-0.343]
67 We see that the network decoding is unbiased and that the errors tend to decrease with the reciprocal connection strength J̃rp (see the second term on the right-hand side of Eq. [sent-159, score-0.651]
68 It is easy to check in the extreme cases, assuming the bump shape is unchanged (which is not exactly true but is still a good indication), that the network decoding variance with vanishing reciprocal interaction (J̃rp = 0) is twice that with infinitely strong reciprocal interaction (J̃rp → ∞). [sent-161, score-1.229]
69 Thus, reciprocal connections between networks do provide an effective way to integrate information from independent input cues. [sent-162, score-0.599]
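The factor-of-two claim follows directly from the bracketed part of Eq. (23). A quick numerical check; the helper name and the lumped product J̃rp·B are illustrative:

```python
def var_shape(alpha, JB):
    """Bracketed factor of Eq. (23): 1/alpha + 1/(alpha + 2*JB),
    where JB stands for the product Jrp_tilde * B."""
    return 1.0 / alpha + 1.0 / (alpha + 2.0 * JB)
```

Setting JB = 0 gives 2/alpha while JB → ∞ gives 1/alpha, so the uncoupled decoding variance is twice the infinitely coupled one, and the variance decreases monotonically with the coupling.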
70 To further display the advantage of reciprocal connections, we also calculate the situation in which a single network receives both input cues. [sent-163, score-0.554]
71 This is equivalent to setting the external input to a single CANN to be Iext(x, t) = 2α exp[−(x − µ)²/(4a²)] + √2 η ξ(x, t) (see Eq. [sent-164, score-0.248]
72 In the weak input limit, the decoding errors in the general situation where the two networks are not symmetric can also be calculated (see Supplemental information 3): Var(z1) = 2a{[(J̃12 B2 α2 + J̃21 B1 α1 + α1 α2) A2/A1 + (J̃21 B1 + α2)²] η1² + (J̃12 B2)² η2²}/√(…). [sent-169, score-0.324]
73 Mimicking the experimental setting for exploring the integration of visual and vestibular cues in the inference of heading direction [6, 7], we apply three input conditions to two networks (see Fig. [sent-172, score-1.139]
74 3A), which are: • Only visual cue: α1 = α, α2 = 0; • Only vestibular cue: α1 = 0, α2 = α; • Combined cues: α1 = α, α2 = α. [sent-173, score-0.37]
75 In all three conditions, the noise amplitude is unchanged and the reciprocal connections are intact. [sent-176, score-0.411]
76 [Figure 3A: the three input conditions (visual cue only, vestibular cue only, both cues) delivered as I1ext to Net 1 (MSTd) and I2ext to Net 2 (VIP); panel B plots the bump position z1 and its distributions (z1|cvi), (z1|cve), (z1|cb).] [sent-177, score-0.911]
77 The bump position of network 1 fluctuates around the true stimulus value 0. [sent-189, score-0.555]
78 The right panel displays the bump position distributions under the three input conditions, from which we estimate the mean and variance of the decoding results. [sent-190, score-0.552]
79 Comparison of the network decoding results with two cues against the predictions of Bayesian inference. [sent-192, score-0.52]
80 Different combinations of the input strengths αl and the reciprocal connection strengths Jrp are chosen. [sent-194, score-0.487]
81 Considering the symmetric structures of two networks and ignoring the mild changes in the bump shape in the weak input limit, we can obtain from Eq. [sent-203, score-0.462]
82 We run the network dynamics under three input conditions for many trials, and calculate the means and variances of the bump positions in each condition. [sent-210, score-0.553]
83 Fig. 3B shows that the bump position fluctuations become narrower in the combined-cue input condition, indicating greater decoding accuracy. [sent-212, score-0.393] [sent-229, score-0.307]
84 Figure 4: The decoding mean of network 1 shifts toward the more reliable cue. [sent-219, score-0.355]
85 The color encodes the ratio of the input strengths to the two networks, α1/α2, which generates varied reliability for the two cues. [sent-220, score-0.234]
86 When the vestibular cue becomes more reliable than the visual one, the network estimate shifts toward the stimulus value µ2 conveyed by the vestibular cue. [sent-223, score-1.177]
88 We compare the result when both cues are presented with the prediction of Bayesian inference, obtained using Eqs. [sent-230, score-0.257]
89 Fig. 3C and D show that the two networks indeed achieve near Bayesian-optimal inference for a wide range of input amplitudes and reciprocal connection strengths. [sent-233, score-0.509]
90 The reliability of cues is quantified by their variance ratio, e. [sent-235, score-0.323]
91 e.g., (σvi)² < (σve)² means that the visual cue is more reliable than the vestibular one. [sent-237, score-0.656]
92 This property has been used as a criterion in experiments to check the implementation of Bayesian inference, called "reliability-based cue weighting" [23]. [sent-240, score-0.244]
93 To achieve different reliabilities of the cues, we adjust the input strength α1 and keep the other input parameters unchanged, mimicking the experimental finding that the firing rate of MT neurons, the stage preceding MSTd, increases with input coherence for their preferred stimuli [24]. [sent-242, score-0.357]
94 With varying input strength α1, and hence varied ratios Var(z1|cvi)/Var(z1|cve), we calculate the mean of the network decoding. [sent-243, score-0.245]
95 Fig. 4 shows that the decoded mean in the combined-cues condition indeed shifts toward the more reliable cue, agreeing with the experimental finding and the property of Bayesian inference. [sent-245, score-0.299]
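The reliability-based weighting behind Fig. 4 can be written directly from Eq. (2): the weight on the visual cue is 1/(1 + variance ratio). A small sketch, with illustrative names and values:

```python
def combined_mean_shift(mu_vi, mu_ve, var_ratio):
    """Bayesian combined mean when var_ratio = Var(z|c_vi)/Var(z|c_ve);
    the weight on the visual cue falls as its relative variance grows,
    so the estimate shifts toward the more reliable cue (cf. Figure 4)."""
    w_vi = 1.0 / (1.0 + var_ratio)
    return w_vi * mu_vi + (1.0 - w_vi) * mu_ve
```

At equal reliabilities the combined mean sits midway between the two cue values; as one cue's variance grows, the mean slides toward the other cue.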
96 We consider two networks which are reciprocally connected, and each of them is modeled as a CANN receiving the stimulus information from an independent cue. [sent-247, score-0.251]
97 Our network model may be regarded as mimicking the information integration on heading direction between the neural circuits in MSTd and VIP. [sent-248, score-0.561]
98 Experimental data have revealed that the two areas are densely and reciprocally connected and that neurons in both areas are widely tuned to heading direction, favoring our model assumptions. [sent-249, score-0.286]
99 We use a projection method to solve the network dynamics analytically in the weak input limit and gain insight into how positive reciprocal connections enable one network to effectively integrate information from the other. [sent-250, score-0.957]
100 It suggests that the brain can implement efficient information integration in a distributive manner through reciprocal connections between cortical regions. [sent-254, score-0.614]
wordName wordTfidf (topN-words)
[('reciprocal', 0.312), ('jrp', 0.299), ('vestibular', 0.282), ('cues', 0.257), ('bump', 0.256), ('cue', 0.244), ('mstd', 0.232), ('heading', 0.181), ('cvi', 0.166), ('vip', 0.166), ('var', 0.158), ('cve', 0.149), ('network', 0.145), ('integration', 0.119), ('decoding', 0.118), ('zd', 0.104), ('external', 0.102), ('zl', 0.101), ('angelaki', 0.1), ('cann', 0.1), ('deangelis', 0.1), ('networks', 0.096), ('dynamics', 0.089), ('visual', 0.088), ('reciprocally', 0.088), ('position', 0.087), ('zs', 0.083), ('ext', 0.083), ('neuroscience', 0.079), ('neurons', 0.079), ('connections', 0.073), ('bayesian', 0.067), ('stimulus', 0.067), ('height', 0.067), ('ilext', 0.066), ('jrc', 0.066), ('wlm', 0.066), ('mimicking', 0.063), ('input', 0.063), ('bumps', 0.059), ('iext', 0.059), ('integrate', 0.055), ('ul', 0.054), ('direction', 0.053), ('motion', 0.051), ('jc', 0.05), ('vi', 0.05), ('excitatory', 0.049), ('alm', 0.048), ('sensory', 0.048), ('weak', 0.047), ('noises', 0.046), ('brain', 0.045), ('distributively', 0.044), ('ve', 0.044), ('reliable', 0.042), ('inhibitory', 0.042), ('net', 0.04), ('reliability', 0.038), ('coupled', 0.038), ('strength', 0.038), ('implement', 0.038), ('connection', 0.038), ('synaptic', 0.037), ('strengths', 0.037), ('smoothly', 0.037), ('divisive', 0.036), ('pouget', 0.036), ('states', 0.036), ('dominating', 0.036), ('attractor', 0.035), ('receives', 0.034), ('inputs', 0.034), ('canns', 0.033), ('intraparietal', 0.033), ('jll', 0.033), ('jlm', 0.033), ('interactions', 0.032), ('dt', 0.032), ('modes', 0.031), ('recurrent', 0.031), ('centralized', 0.03), ('uctuations', 0.03), ('optimally', 0.03), ('cb', 0.029), ('cl', 0.029), ('shanghai', 0.029), ('multisensory', 0.029), ('ungerleider', 0.029), ('uncoupled', 0.029), ('preferred', 0.029), ('limit', 0.028), ('variance', 0.028), ('conveyed', 0.027), ('distributive', 0.027), ('psychophysical', 0.027), ('mechanism', 0.027), ('receive', 0.027), ('connected', 0.026), ('unchanged', 0.026)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999982 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively
Author: Wenhao Zhang, Si Wu
Abstract: Psychophysical experiments have demonstrated that the brain integrates information from multiple sensory cues in a near Bayesian optimal manner. The present study proposes a novel mechanism to achieve this. We consider two reciprocally connected networks, mimicking the integration of heading direction information between the dorsal medial superior temporal (MSTd) and the ventral intraparietal (VIP) areas. Each network serves as a local estimator and receives an independent cue, either the visual or the vestibular, as direct input for the external stimulus. We find that positive reciprocal interactions can improve the decoding accuracy of each individual network as if it implements Bayesian inference from two cues. Our model successfully explains the experimental finding that both MSTd and VIP achieve Bayesian multisensory integration, though each of them only receives a single cue as direct external input. Our result suggests that the brain may implement optimal information integration distributively at each local estimator through the reciprocal connections between cortical regions. 1
2 0.14241254 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems
Author: Hesham Mostafa, Lorenz. K. Mueller, Giacomo Indiveri
Abstract: We present a recurrent neuronal network, modeled as a continuous-time dynamical system, that can solve constraint satisfaction problems. Discrete variables are represented by coupled Winner-Take-All (WTA) networks, and their values are encoded in localized patterns of oscillations that are learned by the recurrent weights in these networks. Constraints over the variables are encoded in the network connectivity. Although there are no sources of noise, the network can escape from local optima in its search for solutions that satisfy all constraints by modifying the effective network connectivity through oscillations. If there is no solution that satisfies all constraints, the network state changes in a seemingly random manner and its trajectory approximates a sampling procedure that selects a variable assignment with a probability that increases with the fraction of constraints satisfied by this assignment. External evidence, or input to the network, can force variables to specific values. When new inputs are applied, the network re-evaluates the entire set of variables in its search for states that satisfy the maximum number of constraints, while being consistent with the external input. Our results demonstrate that the proposed network architecture can perform a deterministic search for the optimal solution to problems with non-convex cost functions. The network is inspired by canonical microcircuit models of the cortex and suggests possible dynamical mechanisms to solve constraint satisfaction problems that can be present in biological networks, or implemented in neuromorphic electronic circuits. 1
3 0.12422514 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval
Author: Cristina Savin, Peter Dayan, Mate Lengyel
Abstract: It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate. 1
4 0.097871095 121 nips-2013-Firing rate predictions in optimal balanced networks
Author: David G. Barrett, Sophie Denève, Christian K. Machens
Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1
5 0.09567669 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels
Author: Matjaz Jogan, Alan Stocker
Abstract: How do humans perceive the speed of a coherent motion stimulus that contains motion energy in multiple spatiotemporal frequency bands? Here we tested the idea that perceived speed is the result of an integration process that optimally combines speed information across independent spatiotemporal frequency channels. We formalized this hypothesis with a Bayesian observer model that combines the likelihood functions provided by the individual channel responses (cues). We experimentally validated the model with a 2AFC speed discrimination experiment that measured subjects’ perceived speed of drifting sinusoidal gratings with different contrasts and spatial frequencies, and of various combinations of these single gratings. We found that the perceived speeds of the combined stimuli are independent of the relative phase of the underlying grating components. The results also show that the discrimination thresholds are smaller for the combined stimuli than for the individual grating components, supporting the cue combination hypothesis. The proposed Bayesian model fits the data well, accounting for the full psychometric functions of both simple and combined stimuli. Fits are improved if we assume that the channel responses are subject to divisive normalization. Our results provide an important step toward a more complete model of visual motion perception that can predict perceived speeds for coherent motion stimuli of arbitrary spatial structure. 1
6 0.093830399 210 nips-2013-Noise-Enhanced Associative Memories
7 0.092560895 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?
8 0.085148975 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data
9 0.082289845 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables
10 0.081128165 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits
11 0.07560008 331 nips-2013-Top-Down Regularization of Deep Belief Networks
12 0.071152613 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit
13 0.069884084 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles
14 0.069152683 64 nips-2013-Compete to Compute
16 0.058698867 176 nips-2013-Linear decision rule as aspiration for simple decision heuristics
17 0.058422685 205 nips-2013-Multisensory Encoding, Decoding, and Identification
18 0.056161247 183 nips-2013-Mapping paradigm ontologies to and from the brain
19 0.049673058 334 nips-2013-Training and Analysing Deep Recurrent Neural Networks
20 0.047763385 108 nips-2013-Error-Minimizing Estimates and Universal Entry-Wise Error Bounds for Low-Rank Matrix Completion
topicId topicWeight
[(0, 0.117), (1, 0.058), (2, -0.104), (3, -0.058), (4, -0.141), (5, -0.053), (6, -0.016), (7, -0.051), (8, 0.015), (9, 0.022), (10, 0.055), (11, -0.0), (12, 0.03), (13, 0.01), (14, -0.03), (15, 0.02), (16, -0.051), (17, -0.017), (18, -0.056), (19, -0.053), (20, 0.042), (21, -0.052), (22, -0.011), (23, 0.072), (24, -0.035), (25, 0.039), (26, -0.019), (27, 0.042), (28, -0.007), (29, -0.027), (30, -0.009), (31, -0.099), (32, -0.065), (33, -0.062), (34, -0.037), (35, -0.086), (36, 0.012), (37, -0.045), (38, -0.029), (39, -0.049), (40, -0.067), (41, -0.007), (42, -0.143), (43, 0.074), (44, 0.039), (45, 0.016), (46, -0.066), (47, 0.008), (48, -0.027), (49, 0.012)]
simIndex simValue paperId paperTitle
same-paper 1 0.94794202 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively
Author: Wenhao Zhang, Si Wu
Abstract: Psychophysical experiments have demonstrated that the brain integrates information from multiple sensory cues in a near Bayesian optimal manner. The present study proposes a novel mechanism to achieve this. We consider two reciprocally connected networks, mimicking the integration of heading direction information between the dorsal medial superior temporal (MSTd) and the ventral intraparietal (VIP) areas. Each network serves as a local estimator and receives an independent cue, either the visual or the vestibular, as direct input for the external stimulus. We find that positive reciprocal interactions can improve the decoding accuracy of each individual network as if it implements Bayesian inference from two cues. Our model successfully explains the experimental finding that both MSTd and VIP achieve Bayesian multisensory integration, though each of them only receives a single cue as direct external input. Our result suggests that the brain may implement optimal information integration distributively at each local estimator through the reciprocal connections between cortical regions. 1
2 0.74968427 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems
Author: Hesham Mostafa, Lorenz K. Mueller, Giacomo Indiveri
Abstract: We present a recurrent neuronal network, modeled as a continuous-time dynamical system, that can solve constraint satisfaction problems. Discrete variables are represented by coupled Winner-Take-All (WTA) networks, and their values are encoded in localized patterns of oscillations that are learned by the recurrent weights in these networks. Constraints over the variables are encoded in the network connectivity. Although there are no sources of noise, the network can escape from local optima in its search for solutions that satisfy all constraints by modifying the effective network connectivity through oscillations. If there is no solution that satisfies all constraints, the network state changes in a seemingly random manner and its trajectory approximates a sampling procedure that selects a variable assignment with a probability that increases with the fraction of constraints satisfied by this assignment. External evidence, or input to the network, can force variables to specific values. When new inputs are applied, the network re-evaluates the entire set of variables in its search for states that satisfy the maximum number of constraints, while being consistent with the external input. Our results demonstrate that the proposed network architecture can perform a deterministic search for the optimal solution to problems with non-convex cost functions. The network is inspired by canonical microcircuit models of the cortex and suggests possible dynamical mechanisms to solve constraint satisfaction problems that can be present in biological networks, or implemented in neuromorphic electronic circuits.
3 0.69324064 210 nips-2013-Noise-Enhanced Associative Memories
Author: Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney
Abstract: Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms allow reliable learning and recall of exponential numbers of patterns. Though these designs correct external errors in recall, they assume neurons compute noiselessly, in contrast to highly variable neurons in hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as internal noise is less than a specified threshold, error probability in the recall phase can be made exceedingly small. More surprisingly, we show internal noise actually improves performance of the recall phase. Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
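The phenomenon the abstract describes (recall surviving noisy internal computation) can be illustrated with a generic Hopfield-style associative memory rather than the paper's structured-pattern, graph-based design; the sketch below stores a few patterns with a Hebbian rule and recalls one from a corrupted cue while Gaussian noise is injected into every neuron's input field. All parameters here (network size, load, noise level) are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                                   # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N                 # Hebbian outer-product weights
np.fill_diagonal(W, 0)

def recall(cue, sigma, steps=20):
    """Synchronous sign updates with noisy internal fields."""
    s = cue.copy()
    for _ in range(steps):
        h = W @ s + sigma * rng.standard_normal(N)  # internal computation noise
        s = np.where(h >= 0, 1, -1)
    return s

target = patterns[0]
cue = target.copy()
flip = rng.choice(N, size=30, replace=False)    # corrupt 15% of the bits
cue[flip] *= -1

overlap_noisy = (recall(cue, sigma=0.1) @ target) / N
print(overlap_noisy)  # close to 1: recall succeeds despite internal noise
```

At this low memory load the signal in each field dominates both crosstalk and the injected noise, so sub-threshold internal noise leaves recall essentially intact, which is the regime the paper characterizes analytically (the noise-improves-recall result is specific to their design and not reproduced by this toy).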
4 0.6606847 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables
Author: Zhuo Wang, Alan Stocker, Daniel Lee
Abstract: In many neural systems, information about stimulus variables is often represented in a distributed manner by means of a population code. It is generally assumed that the responses of the neural population are tuned to the stimulus statistics, and most prior work has investigated the optimal tuning characteristics of one or a small number of stimulus variables. In this work, we investigate the optimal tuning for diffeomorphic representations of high-dimensional stimuli. We analytically derive the solution that minimizes the L2 reconstruction loss. We compared our solution with other well-known criteria such as maximal mutual information. Our solution suggests that the optimal weights do not necessarily decorrelate the inputs, and the optimal nonlinearity differs from the conventional equalization solution. Results illustrating these optimal representations are shown for some input distributions that may be relevant for understanding the coding of perceptual pathways.
5 0.64123285 121 nips-2013-Firing rate predictions in optimal balanced networks
Author: David G. Barrett, Sophie Denève, Christian K. Machens
Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems.
6 0.63302249 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels
7 0.62008786 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?
9 0.56175232 86 nips-2013-Demixing odors - fast inference in olfaction
10 0.55631882 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits
11 0.52981812 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit
12 0.52717596 183 nips-2013-Mapping paradigm ontologies to and from the brain
13 0.51413405 205 nips-2013-Multisensory Encoding, Decoding, and Identification
14 0.51358896 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models
15 0.50926465 64 nips-2013-Compete to Compute
16 0.50304222 61 nips-2013-Capacity of strong attractor patterns to model behavioural and cognitive prototypes
17 0.48747396 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval
18 0.45351201 221 nips-2013-On the Expressive Power of Restricted Boltzmann Machines
19 0.44076762 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data
20 0.37647995 329 nips-2013-Third-Order Edge Statistics: Contour Continuation, Curvature, and Cortical Connections
topicId topicWeight
[(16, 0.035), (19, 0.012), (26, 0.351), (33, 0.097), (34, 0.116), (41, 0.032), (49, 0.044), (56, 0.088), (70, 0.069), (85, 0.019), (89, 0.029), (93, 0.016)]
simIndex simValue paperId paperTitle
same-paper 1 0.7808969 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively
Author: Wenhao Zhang, Si Wu
Abstract: Psychophysical experiments have demonstrated that the brain integrates information from multiple sensory cues in a near Bayesian optimal manner. The present study proposes a novel mechanism to achieve this. We consider two reciprocally connected networks, mimicking the integration of heading direction information between the dorsal medial superior temporal (MSTd) and the ventral intraparietal (VIP) areas. Each network serves as a local estimator and receives an independent cue, either the visual or the vestibular, as direct input for the external stimulus. We find that positive reciprocal interactions can improve the decoding accuracy of each individual network as if it implements Bayesian inference from two cues. Our model successfully explains the experimental finding that both MSTd and VIP achieve Bayesian multisensory integration, though each of them only receives a single cue as direct external input. Our result suggests that the brain may implement optimal information integration distributively at each local estimator through the reciprocal connections between cortical regions.
2 0.48564968 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval
Author: Cristina Savin, Peter Dayan, Mate Lengyel
Abstract: It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate.
3 0.48449594 15 nips-2013-A memory frontier for complex synapses
Author: Subhaneil Lahiri, Surya Ganguli
Abstract: An incredible gulf separates theoretical models of synapses, often described solely by a single scalar value denoting the size of a postsynaptic potential, from the immense complexity of molecular signaling pathways underlying real synapses. To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states. Moreover, theoretical considerations alone demand such an expansion; network models with scalar synapses assuming finite numbers of distinguishable synaptic strengths have strikingly limited memory capacity. This raises the fundamental question, how does synaptic complexity give rise to memory? To address this, we develop new mathematical theorems elucidating the relationship between the structural organization and memory properties of complex synapses that are themselves molecular networks. Moreover, in proving such theorems, we uncover a framework, based on first passage time theory, to impose an order on the internal states of complex synaptic models, thereby simplifying the relationship between synaptic structure and function.
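The payoff of modeling a synapse as a Markov chain over internal states, rather than a scalar, can be seen in a toy calculation (this is an illustrative birth-death chain, not the paper's framework): ongoing plasticity relaxes the state distribution toward equilibrium, and the second-largest eigenvalue modulus of the transition matrix sets how slowly the memory trace decays. Adding internal states slows that relaxation:

```python
import numpy as np

def birth_death_chain(M, p=0.1):
    """Transition matrix of a reflecting random walk over M internal
    synaptic states, hopping to each neighbour with probability p."""
    T = np.zeros((M, M))
    for i in range(M):
        if i > 0:
            T[i, i - 1] = p
        if i < M - 1:
            T[i, i + 1] = p
        T[i, i] = 1.0 - T[i].sum()   # remaining mass stays put
    return T

def relaxation_eigenvalue(T):
    """Second-largest eigenvalue modulus: governs the slowest decay mode,
    i.e. the longest-lived component of a stored memory trace."""
    lams = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return lams[1]

lam2 = relaxation_eigenvalue(birth_death_chain(2))   # scalar-like synapse
lam8 = relaxation_eigenvalue(birth_death_chain(8))   # complex synapse
print(lam2, lam8)  # the 8-state chain decays more slowly
```

For the 2-state chain the relaxation eigenvalue is 1 - 2p = 0.8, while the 8-state chain's slowest mode sits much closer to 1, so its memory trace persists longer under the same per-step plasticity rate.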
4 0.47991309 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit
Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke
Abstract: Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population sizes for which population dynamics can be characterized—beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs.
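The key idea (predicting correlations between neurons that were never recorded together, via a shared latent process) can be sketched in a deliberately simplified form. The paper fits a latent dynamical system with missing observations; here, as a shortcut, the latent is taken as directly observable and the observation noise variance is assumed known, so each neuron's loading can be estimated from its own "session" and the never-observed pairwise correlation predicted from the loadings:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 50000
x = rng.standard_normal(T)           # shared latent drive, unit variance
cA, cB = 1.5, -0.8                   # loadings of two neurons on the latent
yA = cA * x + rng.standard_normal(T) # neuron A, unit observation noise
yB = cB * x + rng.standard_normal(T) # neuron B, unit observation noise

# Pretend A and B were never imaged together: estimate each loading from
# a disjoint half of the data (its own "session").
cA_hat = np.cov(yA[: T // 2], x[: T // 2])[0, 1]
cB_hat = np.cov(yB[T // 2 :], x[T // 2 :])[0, 1]

# Predicted A-B correlation under the shared-latent model
# (unit latent variance and unit noise variance assumed known).
pred = cA_hat * cB_hat / np.sqrt((cA_hat**2 + 1) * (cB_hat**2 + 1))
emp = np.corrcoef(yA, yB)[0, 1]      # ground truth from the full recording
print(pred, emp)                     # the two values agree closely
```

Even though the two loadings were estimated from non-overlapping stretches of data, the model-based prediction matches the empirical correlation; the paper's contribution is doing this when the latent itself must be inferred from partial observations.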
5 0.47592637 121 nips-2013-Firing rate predictions in optimal balanced networks
Author: David G. Barrett, Sophie Denève, Christian K. Machens
Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems.
6 0.47368535 56 nips-2013-Better Approximation and Faster Algorithm Using the Proximal Average
7 0.47057122 86 nips-2013-Demixing odors - fast inference in olfaction
8 0.47053874 16 nips-2013-A message-passing algorithm for multi-agent trajectory planning
9 0.46724942 157 nips-2013-Learning Multi-level Sparse Representations
10 0.46718857 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits
11 0.46595392 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking
12 0.46516466 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables
13 0.46462318 173 nips-2013-Least Informative Dimensions
14 0.46249866 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles
15 0.46157634 239 nips-2013-Optimistic policy iteration and natural actor-critic: A unifying view and a non-optimality result
16 0.46152517 22 nips-2013-Action is in the Eye of the Beholder: Eye-gaze Driven Model for Spatio-Temporal Action Localization
17 0.46079367 101 nips-2013-EDML for Learning Parameters in Directed and Undirected Graphical Models
18 0.4603799 278 nips-2013-Reward Mapping for Transfer in Long-Lived Agents
19 0.45960203 148 nips-2013-Latent Maximum Margin Clustering