nips nips2013 nips2013-210 knowledge-graph by maker-knowledge-mining

210 nips-2013-Noise-Enhanced Associative Memories


Source: pdf

Author: Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney

Abstract: Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms allow reliable learning and recall of exponential numbers of patterns. Though these designs correct external errors in recall, they assume neurons compute noiselessly, in contrast to highly variable neurons in hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as internal noise is less than a specified threshold, error probability in the recall phase can be made exceedingly small. More surprisingly, we show internal noise actually improves performance of the recall phase. Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms allow reliable learning and recall of exponential numbers of patterns. [sent-12, score-0.702]

2 Though these designs correct external errors in recall, they assume neurons compute noiselessly, in contrast to highly variable neurons in hippocampus and olfactory cortex. [sent-13, score-1.205]

3 Here we consider associative memories with noisy internal computations and analytically characterize performance. [sent-14, score-0.957]

4 As long as internal noise is less than a specified threshold, error probability in the recall phase can be made exceedingly small. [sent-15, score-0.809]

5 More surprisingly, we show internal noise actually improves performance of the recall phase. [sent-16, score-0.778]

6 Hippocampus, olfactory cortex, and other brain regions are thought to operate as associative memories [1,2], having the ability to learn patterns from presented inputs, store a large number of patterns, and retrieve them reliably in the face of noisy or corrupted queries [3–5]. [sent-19, score-0.715]

7 Although such information storage and recall seemingly fall into the information-theoretic framework, where an exponential number of messages can be communicated reliably with a linear number of symbols, classical associative memory models could only store a linear number of patterns [4]. [sent-21, score-0.773]

8 Information-theoretic and associative memory models of storage have been used to predict experimentally measurable properties of synapses in the mammalian brain [10,11]. [sent-24, score-0.615]

9 But despite the fact that noise is present in the computational operations of the brain [12, 13], associative memory models with exponential capacities have assumed no internal noise in the computational nodes. [sent-25, score-1.312]

10 The purpose here is to model internal noise and study whether such associative memories still operate reliably. [sent-26, score-1.089]

11 Surprisingly, we find internal noise actually enhances recall performance, suggesting a functional role for variability in the brain. [sent-27, score-0.748]

12 In particular we consider a multi-level, graph code-based, associative memory model [9] and find that even if all components are noisy, the final error probability in recall can be made exceedingly small. [sent-28, score-0.615]

13 We characterize a threshold phenomenon and show how to optimize algorithm parameters when knowing statistical properties of internal noise. [sent-29, score-0.577]

14 Rather counterintuitively, the performance of the memory model improves in the presence of internal neural noise, as previously observed in stochastic resonance [13, 14]. [sent-30, score-0.709]

15 There are mathematical connections to perturbed simplex algorithms for linear programming [15], where internal noise pushes the algorithm out of local minima. [sent-31, score-0.657]

16 The benefit of internal noise has been noted previously in associative memory models with stochastic update rules, cf. [sent-32, score-1.082]

17 Second, and perhaps most importantly, pattern retrieval capacity in previous approaches decreases with internal noise, cf. [sent-36, score-0.727]

18 In those approaches, increasing internal noise helps correct more external errors, but it also reduces the number of memorizable patterns. [sent-39, score-1.168]

19 In our framework, internal noise does not affect pattern retrieval capacity (up to a threshold) but improves recall performance. [sent-40, score-1.042]

20 Finally, our noise model has bounded rather than Gaussian noise, and so a suitable network may achieve perfect recall despite internal noise. [sent-41, score-0.798]

21 Although direct comparison is difficult since notions of circuit complexity are different, our work also demonstrates that associative memory architectures constructed from unreliable components can store information reliably. [sent-43, score-0.495]

22 Building on the idea of structured pattern sets [20], our associative memory model [9] relies on the fact that all patterns to be learned lie in a low-dimensional subspace. [sent-44, score-0.657]

23 It first computes a weighted sum h = Σ_{i=1}^{n} w_i s_i + ζ, where w_i is the weight of the link from s_i and ζ is the internal noise, and then applies a nonlinear function f : R → S to h. [sent-51, score-0.463]
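
To make the neuron model concrete, here is a minimal Python sketch of the computation just described, assuming bounded uniform internal noise and a simple sign-like choice for the nonlinearity f; the function name, the noise distribution, and the specific f are illustrative assumptions rather than the paper's exact specification.

    import numpy as np

    def noisy_neuron(s, w, noise_bound, rng):
        # One noisy neuron: weighted sum of its inputs plus bounded internal
        # noise zeta, followed by a nonlinearity f mapping R to the state set S.
        zeta = rng.uniform(-noise_bound, noise_bound)  # internal noise (bounded, by assumption)
        h = np.dot(w, s) + zeta                        # h = sum_i w_i s_i + zeta
        return np.sign(h)                              # illustrative choice of f: R -> {-1, 0, +1}

    rng = np.random.default_rng(0)
    print(noisy_neuron(np.array([1.0, -1.0, 1.0]), np.array([0.5, 0.2, 0.3]), 0.1, rng))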

24 An associative memory is represented by a weighted bipartite graph, G, with pattern neurons and constraint neurons. [sent-52, score-0.908]

25 Figure 1: The proposed neural associative memory with overlapping clusters. [sent-106, score-0.472]

26 Thus, if pattern neuron x_j is connected to cluster i, then W_ij = 1; otherwise W_ij = 0. [sent-109, score-0.38]
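
A small sketch of how such a binary membership matrix could be built for overlapping clusters; the helper name and the example cluster layout are hypothetical.

    import numpy as np

    def cluster_membership(n, cluster_members):
        # Binary membership matrix: W[i, j] = 1 iff pattern neuron j is connected to cluster i.
        W = np.zeros((len(cluster_members), n), dtype=int)
        for i, members in enumerate(cluster_members):
            W[i, members] = 1
        return W

    # Three overlapping clusters over n = 6 pattern neurons (illustrative membership only)
    print(cluster_membership(6, [[0, 1, 2], [2, 3, 4], [4, 5, 0]]))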

27 Noise model: There are two types of noise in our model: external errors and internal noise. [sent-113, score-1.188]

28 As mentioned earlier, a neural network should be able to retrieve a memorized pattern x from its corrupted version x̂ due to external errors. [sent-114, score-0.677]

29 We assume the external error is an additive vector of size n, denoted by z and satisfying x̂ = x + z, whose entries assume values independently from {−1, 0, +1} with corresponding probabilities p_{−1} = p_{+1} = ε/2 and p_0 = 1 − ε. [sent-115, score-0.42]
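
This external-error model is easy to simulate; the following brief sketch uses eps to stand in for the error probability ε, and all names are chosen for illustration only.

    import numpy as np

    def external_errors(n, eps, rng):
        # Additive external error vector z with i.i.d. entries in {-1, 0, +1}:
        # P(z_k = -1) = P(z_k = +1) = eps / 2 and P(z_k = 0) = 1 - eps.
        return rng.choice([-1, 0, +1], size=n, p=[eps / 2, 1 - eps, eps / 2])

    rng = np.random.default_rng(1)
    x = np.ones(10, dtype=int)        # a stored pattern (placeholder values)
    z = external_errors(10, 0.2, rng)
    x_hat = x + z                     # corrupted query x_hat = x + z presented at recall
    print(z)
    print(x_hat)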

30 The realization of the external error on subpattern x^{(i)} is denoted z^{(i)}. [sent-116, score-0.504]

31 The goal of recall is to filter the external error z to obtain the desired pattern x as the correct states of the pattern neurons. [sent-122, score-0.929]

32 Rather surprisingly, we show that eliminating external errors is not only possible in the presence of internal noise, but that neural networks with moderate internal noise demonstrate better external noise resilience. [sent-125, score-2.315]

33 Recall algorithms: To efficiently deal with external errors, we use a combination of Alg. [sent-126, score-0.387]

34 Alg. 1 aims to correct at least a single external error in each cluster. [sent-130, score-0.502]

35 Alg. 2 exploits the overlaps: it helps clusters with external errors recover their correct states by using the reliable information from clusters that do not have external errors. [sent-133, score-1.171]

36 Alg. 1 performs a series of forward and backward iterations in each cluster G^{(ℓ)} to remove (at least) one external error from its input domain. [sent-139, score-0.561]

37 At each iteration, the pattern neurons locally decide whether to update their current state: if the amount of feedback received by a pattern neuron exceeds a threshold, the neuron updates its state, and otherwise remains as is. [sent-140, score-0.805]

38 With abuse of notation, let us denote the messages transmitted by pattern node i and constraint node j at round t by x_i(t) and y_j(t), respectively. [sent-141, score-0.377]

39 In round 0, pattern nodes are initialized with a pattern x sampled from the dataset X and perturbed by external errors z, i.e., x̂ = x + z. [sent-142, score-0.833]

40 In round t, the pattern and constraint neurons update their states using feedback from neighbors. [sent-146, score-0.477]

41 To minimize the effects of internal noise, we use the following update rule for pattern node i in cluster ℓ: x_i^{(ℓ)}(t + 1) = x_i^{(ℓ)}(t) − sign(g_i^{(ℓ)}(t)) if |g_i^{(ℓ)}(t)| ≥ ϕ, and x_i^{(ℓ)}(t + 1) = x_i^{(ℓ)}(t) otherwise (2). Note that the proposed algorithms also work with larger noise values, i.e. [sent-148, score-0.932]

42 3: Backward iteration: each neuron x_j^{(ℓ)} computes its feedback; if the cluster fails, the state of the pattern neurons connected to v^{(ℓ)} is reverted to their initial state. [sent-164, score-0.517]

43 1: for t = 1 → t_max do; 2: Forward iteration: calculate the input h_i^{(ℓ)} = Σ_{j=1}^{n} W_ij^{(ℓ)} x_j^{(ℓ)} + v_i for each constraint neuron y_i^{(ℓ)}; here ϕ is the update threshold and g_i^{(ℓ)}(t) = (sign(W^{(ℓ)}) · y^{(ℓ)}(t))_i / d_i + u_i. [sent-176, score-0.376]

44 Here y^{(ℓ)}(t) = [y_1^{(ℓ)}(t), …, y_m^{(ℓ)}(t)] is the vector of messages transmitted by the constraint neurons in cluster ℓ, d_i is the degree of pattern node i in cluster ℓ, and u_i is the random noise affecting pattern node i. [sent-180, score-1.192]

45 Basically, the term g_i^{(ℓ)}(t) reflects the (average) belief of the constraint nodes connected to pattern neuron i about its correct value. [sent-181, score-0.477]

46 Note this average belief is diluted by the internal noise of neuron i. [sent-183, score-0.794]

47 Similarly, x^{(ℓ)}(t) = [x_1^{(ℓ)}(t), …, x_n^{(ℓ)}(t)] is the vector of messages transmitted by the pattern neurons and v_i is the random noise affecting node i. [sent-189, score-0.733]
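
Putting the forward iteration, the backward feedback g_i, and update rule (2) together, here is a simplified single-round sketch for one cluster; the constraint-side thresholding with ψ, the uniform noise draws, and all variable names are assumptions rather than the authors' exact Alg. 1.

    import numpy as np

    def cluster_round(W, x, phi, psi, nu, upsilon, rng):
        # One forward/backward round inside a single cluster (simplified sketch).
        # W: m x n connectivity of the cluster, x: states of its n pattern neurons.
        m, n = W.shape
        # Forward iteration: constraint neurons form noisy inputs h_i = sum_j W_ij x_j + v_i
        v = rng.uniform(-nu, nu, size=m)               # internal noise at constraint neurons
        h = W @ x + v
        y = np.where(np.abs(h) < psi, 0, np.sign(h))   # assumed thresholding at the constraint side
        # Backward iteration: pattern neurons average the feedback, g_i = (sign(W)^T y)_i / d_i + u_i
        u = rng.uniform(-upsilon, upsilon, size=n)     # internal noise at pattern neurons
        d = np.maximum(W.astype(bool).sum(axis=0), 1)  # degree of each pattern neuron in the cluster
        g = (np.sign(W).T @ y) / d + u
        # Update rule (2): flip a pattern neuron only when the feedback is strong enough
        flip = np.abs(g) >= phi
        x_new = x.astype(float).copy()
        x_new[flip] -= np.sign(g[flip])
        return x_new, y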

48 Alg. 1 can correct one external error with high probability, but degrades terribly against two or more external errors. [sent-196, score-0.889]

49 Working independently, clusters cannot correct more than a few external errors, but their combined performance is much better. [sent-197, score-0.52]

50 As clusters overlap, they help each other in resolving external errors: a cluster whose pattern neurons are in their correct states can always provide truthful information to neighboring clusters. [sent-198, score-1.026]

51 Clusters either eliminate their external noise, in which case they keep their new states and can now help other clusters, or revert back to their original states. [sent-202, score-0.728]

52 The following lemma shows that if ϕ and ψ are chosen properly, then in the absence of external errors the constraints remain satisfied and internal noise cannot result in violations. [sent-206, score-1.216]

53 One can check whether a cluster has successfully eliminated its external errors (Step 4 of the algorithm) by merely verifying the satisfaction of all of its constraint nodes. [sent-211, score-0.686]
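
The peeling idea described above can be sketched as a sweep over overlapping clusters that keeps a cluster's new states only if all of its constraints end up satisfied, and otherwise reverts them. This builds on the cluster_round sketch above; the success test and the stopping logic are simplifying assumptions, not the paper's exact Alg. 2.

    import numpy as np

    def constraints_satisfied(W, x, psi):
        # A cluster is taken to be satisfied when every constraint neuron's
        # (noiseless) input falls below the firing threshold psi.
        return bool(np.all(np.abs(W @ x) < psi))

    def peel(clusters, x_hat, phi, psi, nu, upsilon, rng, max_sweeps=40):
        # Sequential sweep over overlapping clusters (sketch of the Alg. 2 idea).
        # clusters: list of (W, idx) pairs, idx giving the pattern-neuron indices of that cluster.
        # Reuses cluster_round() from the earlier sketch.
        x = x_hat.astype(float)
        for _ in range(max_sweeps):
            if all(constraints_satisfied(W, x[idx], psi) for W, idx in clusters):
                return x, True                          # every cluster satisfied: recall succeeded
            for W, idx in clusters:
                if constraints_satisfied(W, x[idx], psi):
                    continue                            # this cluster sees no remaining errors
                x_trial, _ = cluster_round(W, x[idx], phi, psi, nu, upsilon, rng)
                if constraints_satisfied(W, x_trial, psi):
                    x[idx] = x_trial                    # keep the new states; this cluster can now help others
                # otherwise revert: leave x[idx] unchanged for this sweep
        return x, False                                 # declare a recall error after max_sweeps sweeps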

54 In the absence of external errors, the probability that a constraint neuron (resp. [sent-213, score-0.587]

55 pattern neuron) in cluster ℓ makes a wrong decision due to its internal noise is given by π_0^{(ℓ)} = max(0, (ν − ψ)/ν) (resp. [sent-214, score-0.749]

56 However, an external error combined with internal noise may still push neurons to an incorrect state. [sent-218, score-1.306]

57 As shown in Thm. 2, a neural network with internal noise outperforms one without. [sent-220, score-0.754]

58 Let us define the fraction of errors corrected by the noiseless and noisy neural network (parametrized by υ and ν) after T iterations of Alg. [sent-221, score-0.468]

59 Let us choose ϕ and ψ so that π_0 = 0 and P_0 = 0; then, for the same realization of external errors, we have Λ*_{noisy} ≥ Λ*_{noiseless}. [sent-226, score-0.387]

60 These are realizations of external errors where the iterative Alg. [sent-233, score-0.531]

61 We show that the stopping set shrinks as we add internal noise. [sent-235, score-0.509]

62 In other words, we show that in the limit of T → ∞ the noisy network can correct any error pattern that can be corrected by the noiseless version and it can also get out of stopping sets that cause the noiseless network to fail. [sent-236, score-0.674]

63 Thus, the supposedly harmful internal noise will help Alg. [sent-237, score-0.657]

64 Thm. 2 suggests that the only possible downside of using a noisy network is its running time in eliminating external errors: the noisy neural network may need more iterations to achieve the same error-correction performance. [sent-240, score-0.78]

65 Thm. 2 indicates that noisy neural networks (under our model) outperform noiseless ones, but it does not specify the level of errors that such networks can correct. [sent-243, score-0.409]

66 To this end, let P_{c_i} be the average probability that a cluster can correct i external errors in its domain. [sent-245, score-0.705]

67 Alg. 2 can correct a linear fraction of external errors (in terms of n) with high probability. [sent-247, score-0.613]

68 Thm. 3 takes into account the contribution of all the P_{c_i} terms and, as we will see, their values change as we incorporate the effect of internal noise υ and ν. [sent-260, score-0.685]

69 Our results show that the maximum value of P_{c_i} does not occur when the internal noise is equal to zero, i.e. [sent-261, score-0.657]

70 υ = ν = 0, but instead when the neurons are contaminated with internal noise! [sent-263, score-0.692]

71 This finding suggests that even individual clusters are able to correct more errors in the presence of internal noise. [sent-266, score-0.74]

72 Figure 2: The value of P_{c_i} as a function of the pattern neurons' noise υ for i = 1, … [sent-293, score-0.574]

73 Alg. 2 is used and results are reported in terms of the Symbol Error Rate (SER) as the level of external error (ε) or internal noise (υ, ν) is changed; this involves counting the positions where the output of Alg. [sent-310, score-1.077]

74 Recall that υ and ν quantify the level of noise in pattern and constraint neurons, respectively. [sent-316, score-0.408]

75 A notable trend in Fig. 3 is the fact that internal noise helps in achieving better performance, as predicted by the theoretical analysis (Thm. [sent-323, score-0.657]

76 As we see, a moderate amount of internal noise at both pattern and constraint neurons improves performance. [sent-329, score-1.13]

77 Alg. 2 for correcting the external errors when ε is fixed to 0. [sent-337, score-0.595]

78 We stop whenever the algorithm corrects all external errors, or declare a recall error if the errors are not corrected within 40 iterations. [sent-339, score-0.877]
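
A sketch of this evaluation loop, counting the positions where the recalled pattern differs from the true one; recall_fn is a placeholder for any recall procedure (e.g. a wrapper around the peeling sketch above) and its interface is assumed.

    import numpy as np

    def symbol_error_rate(patterns, recall_fn, eps, rng, max_iters=40):
        # Monte Carlo estimate of the Symbol Error Rate (SER): the fraction of
        # pattern-neuron positions that are still wrong after the recall phase.
        wrong, total = 0, 0
        for x in patterns:
            z = rng.choice([-1, 0, 1], size=x.size, p=[eps / 2, 1 - eps, eps / 2])
            x_rec = recall_fn(x + z, max_iters)         # placeholder recall procedure
            wrong += int(np.count_nonzero(x_rec != x))  # positions differing from the true pattern
            total += x.size
        return wrong / total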

79 The amount of internal noise drastically affects the speed of Alg. [sent-344, score-0.657]

80 In Figs. 5 and 6b, observe that running time is more sensitive to noise at the constraint neurons than at the pattern neurons, and that the algorithms become slower as noise at the constraint neurons is increased. [sent-347, score-1.352]

81 In contrast, note that internal noise at the pattern neurons may improve the running time, as seen in Fig. [sent-348, score-1.037]

82 internal noise parameters at the pattern and constraint neurons, for ε = 0. [sent-376, score-1.1]

83 Figure 5: The effect of internal noise on the number of iterations of Alg. [sent-387, score-0.734]

84 For ε = 0.125, the noiseless decoder encounters stopping sets while the noisy decoder is still capable of correcting external errors; there we see that the optimal running time occurs when the neurons have a fair amount of internal noise. [sent-393, score-1.413]

85 In [23] we also provide results of a study for a slightly modified scenario where there is only internal noise and no external errors. [sent-394, score-1.044]

86 Thus, the internal noise can now cause neurons to make wrong decisions, even in the absence of external errors. [sent-396, score-1.273]

87 There, we witness the more familiar phenomenon where increasing the amount of internal noise results in worse performance. [sent-397, score-0.718]

88 Thus, in order to show that the pattern retrieval capacity is exponential in n, all we need to demonstrate is that there exists a training set X with C patterns of length n for which C ∝ a^{rn}, for some a > 1 and 0 < r. [sent-402, score-0.387]

89 (a) Effect of internal noise at the pattern neurons' side. [sent-418, score-1.037]

90 (b) Effect of internal noise at the constraint neurons' side. [sent-426, score-0.949]

91 Figure 6: The effect of internal noise on the number of iterations performed by Alg. [sent-427, score-0.734]

92 After all, brain regions modeled as associative memories, such as the hippocampus and the olfactory cortex, certainly do display internal noise [12, 13, 26]. [sent-440, score-1.122]

93 We found a threshold phenomenon for reliable operation, which manifests the tradeoff between the amount of internal noise and the amount of external noise that the system can handle. [sent-444, score-1.387]

94 In fact, we showed that internal noise actually improves the performance of the network in dealing with external errors, up to some optimal value. [sent-445, score-1.124]

95 The associative memory design developed herein uses thresholding operations in the message-passing algorithm for recall; as part of our investigation, we optimized these neural firing thresholds based on the statistics of the internal noise. [sent-447, score-0.962]

96 One key to our result is capturing this benefit of digital processing (thresholding to prevent the build-up of errors due to internal noise) as well as a modular architecture which allows us to correct a linear number of external errors (in terms of the pattern length). [sent-452, score-1.416]

97 This paper focused on recall; however, learning is the other critical stage of associative memory operation. [sent-453, score-0.425]

98 Indeed, information storage in nervous systems is said to be subject to storage (or learning) noise, in situ noise, and retrieval (or recall) noise [11, Fig. [sent-454, score-0.436]

99 It should be noted, however, that there is no essential loss in combining learning noise and in situ noise into what we have called external error herein, cf. [sent-456, score-0.845]

100 Going forward, it is of interest to investigate other neural information processing models that explicitly incorporate internal noise and see whether they provide insight into observed empirical phenomena. [sent-461, score-0.704]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('internal', 0.463), ('external', 0.387), ('associative', 0.295), ('neurons', 0.229), ('noise', 0.194), ('ser', 0.182), ('pattern', 0.151), ('errors', 0.144), ('neuron', 0.137), ('memories', 0.137), ('memory', 0.13), ('varshney', 0.111), ('pci', 0.102), ('cluster', 0.092), ('recall', 0.091), ('salavati', 0.084), ('shokrollahi', 0.084), ('subpattern', 0.084), ('subpatterns', 0.084), ('noiseless', 0.084), ('correct', 0.082), ('patterns', 0.081), ('storage', 0.078), ('karbasi', 0.074), ('hippocampus', 0.073), ('unreliable', 0.07), ('capacity', 0.064), ('final', 0.064), ('correcting', 0.064), ('constraint', 0.063), ('noisy', 0.062), ('phenomenon', 0.061), ('olfactory', 0.061), ('messages', 0.055), ('threshold', 0.053), ('hi', 0.052), ('clusters', 0.051), ('unsatis', 0.051), ('network', 0.05), ('iterations', 0.049), ('retrieval', 0.049), ('subspace', 0.048), ('wij', 0.047), ('neural', 0.047), ('stopping', 0.046), ('declare', 0.046), ('digital', 0.045), ('gi', 0.044), ('transmitted', 0.044), ('reliably', 0.043), ('synapses', 0.042), ('arn', 0.042), ('federale', 0.042), ('memorizable', 0.042), ('memorized', 0.042), ('noiselessly', 0.042), ('peeling', 0.042), ('sarpeshkar', 0.042), ('wmi', 0.042), ('bipartite', 0.04), ('correction', 0.04), ('decoder', 0.039), ('resonance', 0.039), ('overlaps', 0.038), ('graph', 0.038), ('gj', 0.037), ('contracted', 0.037), ('resilience', 0.037), ('revert', 0.037), ('situ', 0.037), ('sign', 0.036), ('brain', 0.036), ('networks', 0.036), ('reliable', 0.035), ('mammalian', 0.034), ('faulty', 0.034), ('states', 0.034), ('error', 0.033), ('stages', 0.033), ('node', 0.032), ('tmax', 0.032), ('polytechnique', 0.032), ('limt', 0.032), ('spielman', 0.032), ('lausanne', 0.032), ('corrected', 0.032), ('analog', 0.031), ('ui', 0.03), ('memorizing', 0.03), ('improves', 0.03), ('constraints', 0.028), ('exceedingly', 0.028), ('ecole', 0.028), ('effect', 0.028), ('vi', 0.028), ('convolutional', 0.028), ('wj', 0.027), ('degree', 0.027), ('amin', 0.027), ('herein', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000007 210 nips-2013-Noise-Enhanced Associative Memories

Author: Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney

Abstract: Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms allow reliable learning and recall of exponential numbers of patterns. Though these designs correct external errors in recall, they assume neurons compute noiselessly, in contrast to highly variable neurons in hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as internal noise is less than a specified threshold, error probability in the recall phase can be made exceedingly small. More surprisingly, we show internal noise actually improves performance of the recall phase. Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks. 1

2 0.17897303 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems

Author: Hesham Mostafa, Lorenz. K. Mueller, Giacomo Indiveri

Abstract: We present a recurrent neuronal network, modeled as a continuous-time dynamical system, that can solve constraint satisfaction problems. Discrete variables are represented by coupled Winner-Take-All (WTA) networks, and their values are encoded in localized patterns of oscillations that are learned by the recurrent weights in these networks. Constraints over the variables are encoded in the network connectivity. Although there are no sources of noise, the network can escape from local optima in its search for solutions that satisfy all constraints by modifying the effective network connectivity through oscillations. If there is no solution that satisfies all constraints, the network state changes in a seemingly random manner and its trajectory approximates a sampling procedure that selects a variable assignment with a probability that increases with the fraction of constraints satisfied by this assignment. External evidence, or input to the network, can force variables to specific values. When new inputs are applied, the network re-evaluates the entire set of variables in its search for states that satisfy the maximum number of constraints, while being consistent with the external input. Our results demonstrate that the proposed network architecture can perform a deterministic search for the optimal solution to problems with non-convex cost functions. The network is inspired by canonical microcircuit models of the cortex and suggests possible dynamical mechanisms to solve constraint satisfaction problems that can be present in biological networks, or implemented in neuromorphic electronic circuits. 1

3 0.15615876 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

Author: Cristina Savin, Peter Dayan, Mate Lengyel

Abstract: It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate. 1

4 0.15462418 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

Author: Jasper Snoek, Richard Zemel, Ryan P. Adams

Abstract: Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interaction. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data. 1

5 0.14662914 15 nips-2013-A memory frontier for complex synapses

Author: Subhaneil Lahiri, Surya Ganguli

Abstract: An incredible gulf separates theoretical models of synapses, often described solely by a single scalar value denoting the size of a postsynaptic potential, from the immense complexity of molecular signaling pathways underlying real synapses. To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states. Moreover, theoretical considerations alone demand such an expansion; network models with scalar synapses assuming finite numbers of distinguishable synaptic strengths have strikingly limited memory capacity. This raises the fundamental question, how does synaptic complexity give rise to memory? To address this, we develop new mathematical theorems elucidating the relationship between the structural organization and memory properties of complex synapses that are themselves molecular networks. Moreover, in proving such theorems, we uncover a framework, based on first passage time theory, to impose an order on the internal states of complex synaptic models, thereby simplifying the relationship between synaptic structure and function. 1

6 0.14486611 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

7 0.13171151 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

8 0.12362168 61 nips-2013-Capacity of strong attractor patterns to model behavioural and cognitive prototypes

9 0.11696489 121 nips-2013-Firing rate predictions in optimal balanced networks

10 0.11038092 64 nips-2013-Compete to Compute

11 0.11033179 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

12 0.10074054 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity

13 0.098726921 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

14 0.09406665 278 nips-2013-Reward Mapping for Transfer in Long-Lived Agents

15 0.093830399 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively

16 0.08570455 157 nips-2013-Learning Multi-level Sparse Representations

17 0.081346668 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

18 0.079910792 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

19 0.078258738 171 nips-2013-Learning with Noisy Labels

20 0.077515714 5 nips-2013-A Deep Architecture for Matching Short Texts


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.192), (1, 0.081), (2, -0.091), (3, -0.064), (4, -0.218), (5, -0.041), (6, -0.048), (7, -0.101), (8, -0.023), (9, 0.039), (10, 0.084), (11, 0.001), (12, 0.14), (13, 0.037), (14, -0.026), (15, 0.02), (16, -0.032), (17, 0.043), (18, 0.041), (19, -0.016), (20, 0.011), (21, -0.042), (22, 0.02), (23, 0.151), (24, -0.003), (25, 0.069), (26, -0.001), (27, 0.019), (28, -0.043), (29, -0.059), (30, 0.001), (31, -0.002), (32, 0.039), (33, 0.037), (34, -0.017), (35, -0.069), (36, 0.064), (37, -0.024), (38, 0.014), (39, -0.027), (40, -0.032), (41, -0.049), (42, -0.04), (43, 0.116), (44, -0.011), (45, -0.001), (46, -0.077), (47, -0.022), (48, 0.005), (49, 0.038)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96424079 210 nips-2013-Noise-Enhanced Associative Memories

Author: Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney

Abstract: Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms allow reliable learning and recall of exponential numbers of patterns. Though these designs correct external errors in recall, they assume neurons compute noiselessly, in contrast to highly variable neurons in hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as internal noise is less than a specified threshold, error probability in the recall phase can be made exceedingly small. More surprisingly, we show internal noise actually improves performance of the recall phase. Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks. 1

2 0.78901416 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems

Author: Hesham Mostafa, Lorenz. K. Mueller, Giacomo Indiveri

Abstract: We present a recurrent neuronal network, modeled as a continuous-time dynamical system, that can solve constraint satisfaction problems. Discrete variables are represented by coupled Winner-Take-All (WTA) networks, and their values are encoded in localized patterns of oscillations that are learned by the recurrent weights in these networks. Constraints over the variables are encoded in the network connectivity. Although there are no sources of noise, the network can escape from local optima in its search for solutions that satisfy all constraints by modifying the effective network connectivity through oscillations. If there is no solution that satisfies all constraints, the network state changes in a seemingly random manner and its trajectory approximates a sampling procedure that selects a variable assignment with a probability that increases with the fraction of constraints satisfied by this assignment. External evidence, or input to the network, can force variables to specific values. When new inputs are applied, the network re-evaluates the entire set of variables in its search for states that satisfy the maximum number of constraints, while being consistent with the external input. Our results demonstrate that the proposed network architecture can perform a deterministic search for the optimal solution to problems with non-convex cost functions. The network is inspired by canonical microcircuit models of the cortex and suggests possible dynamical mechanisms to solve constraint satisfaction problems that can be present in biological networks, or implemented in neuromorphic electronic circuits. 1

3 0.74484169 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke

Abstract: Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population-sizes for which population dynamics can be characterized—beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs. 1

4 0.73048425 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively

Author: Wenhao Zhang, Si Wu

Abstract: Psychophysical experiments have demonstrated that the brain integrates information from multiple sensory cues in a near Bayesian optimal manner. The present study proposes a novel mechanism to achieve this. We consider two reciprocally connected networks, mimicking the integration of heading direction information between the dorsal medial superior temporal (MSTd) and the ventral intraparietal (VIP) areas. Each network serves as a local estimator and receives an independent cue, either the visual or the vestibular, as direct input for the external stimulus. We find that positive reciprocal interactions can improve the decoding accuracy of each individual network as if it implements Bayesian inference from two cues. Our model successfully explains the experimental finding that both MSTd and VIP achieve Bayesian multisensory integration, though each of them only receives a single cue as direct external input. Our result suggests that the brain may implement optimal information integration distributively at each local estimator through the reciprocal connections between cortical regions. 1

5 0.72792226 121 nips-2013-Firing rate predictions in optimal balanced networks

Author: David G. Barrett, Sophie Denève, Christian K. Machens

Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1

6 0.72535437 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

7 0.67396086 61 nips-2013-Capacity of strong attractor patterns to model behavioural and cognitive prototypes

8 0.63415164 86 nips-2013-Demixing odors - fast inference in olfaction

9 0.62975371 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

10 0.62721431 15 nips-2013-A memory frontier for complex synapses

11 0.6161778 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity

12 0.58183807 157 nips-2013-Learning Multi-level Sparse Representations

13 0.58032215 64 nips-2013-Compete to Compute

14 0.56218064 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

15 0.51705551 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

16 0.51452368 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

17 0.50286496 205 nips-2013-Multisensory Encoding, Decoding, and Identification

18 0.48522389 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions

19 0.44878638 221 nips-2013-On the Expressive Power of Restricted Boltzmann Machines

20 0.4229461 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(16, 0.045), (33, 0.074), (34, 0.519), (41, 0.028), (49, 0.049), (56, 0.078), (70, 0.039), (85, 0.035), (89, 0.025), (93, 0.027), (95, 0.012)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.96368515 256 nips-2013-Probabilistic Principal Geodesic Analysis

Author: Miaomiao Zhang, P.T. Fletcher

Abstract: Principal geodesic analysis (PGA) is a generalization of principal component analysis (PCA) for dimensionality reduction of data on a Riemannian manifold. Currently PGA is defined as a geometric fit to the data, rather than as a probabilistic model. Inspired by probabilistic PCA, we present a latent variable model for PGA that provides a probabilistic framework for factor analysis on manifolds. To compute maximum likelihood estimates of the parameters in our model, we develop a Monte Carlo Expectation Maximization algorithm, where the expectation is approximated by Hamiltonian Monte Carlo sampling of the latent variables. We demonstrate the ability of our method to recover the ground truth parameters in simulated sphere data, as well as its effectiveness in analyzing shape variability of a corpus callosum data set from human brain images. 1

same-paper 2 0.96035242 210 nips-2013-Noise-Enhanced Associative Memories

Author: Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney

Abstract: Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms allow reliable learning and recall of exponential numbers of patterns. Though these designs correct external errors in recall, they assume neurons compute noiselessly, in contrast to highly variable neurons in hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as internal noise is less than a specified threshold, error probability in the recall phase can be made exceedingly small. More surprisingly, we show internal noise actually improves performance of the recall phase. Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks. 1

3 0.95719689 351 nips-2013-What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach

Author: Zhenwen Dai, Georgios Exarchakis, Jörg Lücke

Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we for the first time apply a model with non-linear feature superposition and explicit position encoding for patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very different from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex. 1

4 0.94274104 202 nips-2013-Multiclass Total Variation Clustering

Author: Xavier Bresson, Thomas Laurent, David Uminsky, James von Brecht

Abstract: Ideas from the image processing literature have recently motivated a new set of clustering algorithms that rely on the concept of total variation. While these algorithms perform well for bi-partitioning tasks, their recursive extensions yield unimpressive results for multiclass clustering tasks. This paper presents a general framework for multiclass total variation clustering that does not rely on recursion. The results greatly outperform previous total variation algorithms and compare well with state-of-the-art NMF approaches. 1

5 0.94236141 129 nips-2013-Generalized Random Utility Models with Multiple Types

Author: Hossein Azari Soufiani, Hansheng Diao, Zhenyu Lai, David C. Parkes

Abstract: We propose a model for demand estimation in multi-agent, differentiated product settings and present an estimation algorithm that uses reversible jump MCMC techniques to classify agents’ types. Our model extends the popular setup in Berry, Levinsohn and Pakes (1995) to allow for the data-driven classification of agents’ types using agent-level data. We focus on applications involving data on agents’ ranking over alternatives, and present theoretical conditions that establish the identifiability of the model and uni-modality of the likelihood/posterior. Results on both real and simulated data provide support for the scalability of our approach. 1

6 0.93594539 48 nips-2013-Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC

7 0.93078601 122 nips-2013-First-order Decomposition Trees

8 0.92261201 38 nips-2013-Approximate Dynamic Programming Finally Performs Well in the Game of Tetris

9 0.90247816 219 nips-2013-On model selection consistency of penalized M-estimators: a geometric theory

10 0.89839149 143 nips-2013-Integrated Non-Factorized Variational Inference

11 0.79655689 347 nips-2013-Variational Planning for Graph-based MDPs

12 0.79248023 346 nips-2013-Variational Inference for Mahalanobis Distance Metrics in Gaussian Process Regression

13 0.79126674 39 nips-2013-Approximate Gaussian process inference for the drift function in stochastic differential equations

14 0.76626843 86 nips-2013-Demixing odors - fast inference in olfaction

15 0.75624859 278 nips-2013-Reward Mapping for Transfer in Long-Lived Agents

16 0.75442529 148 nips-2013-Latent Maximum Margin Clustering

17 0.75133461 322 nips-2013-Symbolic Opportunistic Policy Iteration for Factored-Action MDPs

18 0.7491948 312 nips-2013-Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex

19 0.7437197 100 nips-2013-Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process Mixture

20 0.74015123 238 nips-2013-Optimistic Concurrency Control for Distributed Unsupervised Learning