nips nips2002 nips2002-11 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Wolfgang Maass, Thomas Natschläger, Henry Markram
Abstract: A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. We propose a new computational model that is based on principles of high dimensional dynamical systems in combination with statistical learning theory. It can be implemented on generic evolved or found recurrent circuitry.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. [sent-6, score-0.495]
2 It can be implemented on generic evolved or found recurrent circuitry. [sent-8, score-0.265]
3 1 Introduction Diverse real-time information processing tasks are carried out by neural microcircuits in the cerebral cortex whose anatomical and physiological structure is quite similar in many brain areas and species. [sent-9, score-0.317]
4 However, a model that could explain the potentially universal computational capabilities of such recurrent circuits of neurons has been missing. [sent-10, score-0.494]
5 Common models for the organization of computations, such as Turing machines or attractor neural networks, are not suitable, since cortical microcircuits carry out computations on continuous streams of inputs. [sent-11, score-0.42]
6 Furthermore, biological data prove that cortical microcircuits can support several real-time computational tasks in parallel, a fact that is inconsistent with most modeling approaches. [sent-13, score-0.323]
7 In addition, the components of biological neural microcircuits, neurons and synapses, are highly diverse [1] and exhibit complex dynamical responses on several temporal scales. [sent-14, score-0.294]
8 This makes them completely unsuitable as building blocks of computational models that require simple uniform components, such as virtually all models inspired by computer science or artificial neural nets. [sent-15, score-0.144]
9 Finally, computations in common computational models are partitioned into discrete steps, each of which requires convergence to some stable internal state, whereas the dynamics of cortical microcircuits appears to be continuously changing. [sent-16, score-0.448]
10 In this article we present a new conceptual framework for the organization of computations in cortical microcircuits that is not only compatible with all these constraints, but actually requires these biologically realistic features of neural computation. [sent-17, score-0.528]
11 Furthermore, like Turing machines, this conceptual approach is supported by theoretical results that prove the universality of the computational model, but for the biologically more relevant case of real-time computing on continuous input streams. [sent-18, score-0.254]
12 Plotted on the y-axis is the value of $\|x_u(t) - x_v(t)\|$, where $\|\cdot\|$ denotes the Euclidean norm and $x_u(t)$, $x_v(t)$ denote the liquid states at time $t$ for Poisson spike trains $u$ and $v$ as inputs, averaged over many pairs $u$, $v$ with the same input distance $d(u,v)$. [sent-33, score-0.617]
13 We as human observers may not be able to understand the “code” by which this information about the input is encoded in the current circuit state $x(t)$, but that is obviously not essential. [sent-39, score-0.252]
14 Essential is whether a readout neuron that has to extract such information at time $t$ for a specific task can accomplish this. [sent-40, score-0.435]
15 But this amounts to a classical pattern recognition problem, since the temporal dynamics of the input stream has been transformed by the recurrent circuit into a high-dimensional spatial pattern $x(t)$. [sent-41, score-0.431]
16 Furthermore, if this analog state $x(t)$ is sufficiently high dimensional and its dynamics is sufficiently complex, then it has embedded in it the states and transition functions of many concrete finite state machines. [sent-47, score-0.197]
17 An LSM consists of a filter $L^M$ that maps input streams $u(\cdot)$ onto streams $x(\cdot)$, where $x(t)$ may depend not just on $u(t)$, but in a quite arbitrary nonlinear fashion also on previous inputs $u(s)$, $s \leq t$ (in mathematical terminology this is written $x(t) = (L^M u)(t)$), and a (potentially memoryless) readout function $f^M$ that maps at any time $t$ the filter output $x(t)$ (i.e., the liquid state) into the target output. [sent-50, score-0.612]
18 The liquid state $x(t)$ is that part of the internal circuit state at time $t$ that is accessible to readout neurons. [sent-55, score-0.815]
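To make the filter/readout scheme above concrete, here is a minimal sketch of a liquid-state-machine pipeline in Python (NumPy only). The network size, random weight scales, and toy leaky-integrator dynamics are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 135, 500, 0.001           # neurons, time steps, step size (s) -- toy values
tau_m, tau_x = 0.03, 0.03            # membrane and liquid-filter time constants (30 ms)

W = rng.normal(0.0, 0.1, (N, N))     # random recurrent weights (assumed)
W_in = rng.normal(0.0, 1.0, (N, 4))  # input weights for 4 input spike trains

u = rng.random((T, 4)) < 0.02        # toy input spike trains
v = np.zeros(N)                      # membrane potentials
x = np.zeros(N)                      # liquid state: low-pass filtered spikes
states = np.zeros((T, N))

for t in range(T):
    spikes = (v > 1.0).astype(float)           # threshold crossings
    v = v * (1.0 - spikes)                     # reset neurons that spiked
    v += dt / tau_m * (-v) + W @ spikes + W_in @ u[t]
    x += dt / tau_x * (-x) + spikes            # 30 ms low-pass filter of spike trains
    states[t] = x                              # liquid state x(t), readable at any t
```

A memoryless readout then simply applies a fixed function to states[t] at each time step; with linear readouts this reduces to a regression problem (a sketch is given further below).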
19 An example where the input $u(\cdot)$ consists of 4 spike trains is shown in Fig. 2. [sent-56, score-0.384]
20 The generic microcircuit model (270 neurons) was drawn from the distribution discussed in section 3. [sent-58, score-0.803]
21 [Figure 2, readout target functions (panel labels): f1(t) = sum of rates of inputs 1&2 in the interval [t − 30 ms, t]; f2(t) = sum of rates of inputs 3&4 in the interval [t − 30 ms, t]; f3(t) = sum of rates of inputs 1–4 in the interval [t − 60 ms, t − 30 ms]; f4(t) = sum of rates of inputs 1–4 in the interval [t − 150 ms, t]; f5(t) = spike coincidences of inputs 1&3 in the interval [t − 20 ms, t]; f6(t), f7(t) = nonlinear combinations of the inputs.]
27 Input spike trains were randomly generated in such a way that at any time the input contained no information about preceding input more than 30 ms ago. [sent-74, score-0.904]
28 Firing rates were randomly drawn from the uniform distribution over [0 Hz, 80 Hz] every 30 ms, and input spike trains 1 and 2 were generated for the present 30 ms time segment as independent Poisson spike trains with this firing rate. [sent-75, score-1.266]
29 This process was repeated (with independent drawings of the firing rate and of the Poisson spike trains) for each 30 ms time segment. [sent-76, score-0.528]
30 Spike trains 3 and 4 were generated in the same way, but with independent drawings of another firing rate every 30 ms. [sent-77, score-0.182]
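The input-generation recipe just described is easy to reproduce. The sketch below follows the stated procedure (rates redrawn uniformly from [0 Hz, 80 Hz] every 30 ms; trains 1 and 2 share one rate, trains 3 and 4 another); the 1 ms simulation grid is my own assumption:

```python
import numpy as np

def make_inputs(duration=1.0, dt=0.001, seg=0.030, rng=np.random.default_rng(1)):
    """Generate 4 Poisson spike trains; rates r1 (trains 1&2) and r2
    (trains 3&4) are redrawn uniformly from [0, 80] Hz every 30 ms."""
    steps = int(duration / dt)
    spikes = np.zeros((steps, 4), dtype=bool)
    seg_steps = int(seg / dt)
    for start in range(0, steps, seg_steps):
        r1, r2 = rng.uniform(0.0, 80.0, size=2)    # fresh rates per segment
        rates = np.array([r1, r1, r2, r2])
        n = min(seg_steps, steps - start)
        spikes[start:start + n] = rng.random((n, 4)) < rates * dt
    return spikes

u = make_inputs()   # shape (1000, 4): one row per ms, True = spike
```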
31 Below the 4 input spike trains the target (dashed curves) and actual outputs (solid curves) of 7 linear readout neurons are shown in real-time (on the same time axis). [sent-79, score-0.997]
32 Since all readouts were linear units, whereas the last two target functions are nonlinear combinations, these nonlinear combinations are computed implicitly within the generic microcircuit model. [sent-81, score-0.857]
33 Since the readout neurons had a biologically realistic short time constant of just 30 ms, additional temporally integrated information had to be contained at any instant in the current firing state of the recurrent circuit (its “liquid state”). [sent-91, score-0.945]
34 3 The Generic Neural Microcircuit Model We used a randomly connected circuit consisting of leaky integrate-and-fire (I&F) neurons, 20% of which were randomly chosen to be inhibitory, as our generic neural microcircuit model. [sent-94, score-1.157]
35 1 Parameters were chosen to fit data from microcircuits in rat somatosensory cortex (based on [1], [4] and unpublished data from the Markram Lab). [sent-95, score-0.274]
36 3 We have shown in [5] that without such synaptic dynamics the computational power of these microcircuit models decays significantly. [sent-98, score-0.702]
37 The initial conditions of each neuron, i.e. the membrane voltage at time $t = 0$, were drawn randomly (uniform distribution) from the interval [13.5 mV, 15.0 mV]. [sent-101, score-0.271]
38 The “liquid state” of the recurrent circuit consisting of $n$ neurons was modeled by an $n$-dimensional vector $x(t)$ computed by applying a low-pass filter with a time constant of 30 ms to the spike trains generated by the neurons in the recurrent microcircuit. [sent-104, score-1.417]
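Computing the liquid state itself is a one-pass exponential filter. A sketch, assuming the circuit's spike trains are given as a binary (time × neurons) array on a 1 ms grid:

```python
import numpy as np

def liquid_state(spikes, dt=0.001, tau=0.030):
    """Low-pass filter each neuron's spike train with a 30 ms time
    constant; row t of the result is the n-dimensional state x(t)."""
    decay = np.exp(-dt / tau)
    x = np.zeros(spikes.shape, dtype=float)
    acc = np.zeros(spikes.shape[1])
    for t in range(spikes.shape[0]):
        acc = decay * acc + spikes[t]
        x[t] = acc
    return x
```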
39 Neuron parameters: membrane time constant 30 ms, absolute refractory period 3 ms (excitatory neurons), 2 ms (inhibitory neurons), threshold 15 mV (for a resting membrane potential assumed to be 0), reset voltage 13.5 mV. [sent-109, score-0.557]
40 We assumed that the neurons were located on the integer points of a 3-dimensional grid in space, with the probability of a synaptic connection from neuron $a$ to neuron $b$ given by $C \cdot e^{-(D(a,b)/\lambda)^2}$, where $D(a,b)$ is the Euclidean distance between neurons $a$ and $b$ and $\lambda$ is a spatial scale parameter. [sent-112, score-0.402]
41 Depending on whether $a$ and $b$ were excitatory (E) or inhibitory (I), the value of $C$ was 0.3 (EE), 0.2 (EI), 0.4 (IE), or 0.1 (II). [sent-113, score-0.12]
42 3 Depending on whether $a$ and $b$ were excitatory (E) or inhibitory (I), the mean values of the three dynamic-synapse parameters $U$, $D$, $F$ (with $D$, $F$ expressed in seconds) were chosen according to the data of [1]. [sent-118, score-0.152]
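The distance-dependent wiring rule can be sketched directly from the description above. The column dimensions, the λ value, and the connection-probability constants C below are assumptions chosen to match the reconstructed text, not verified parameters:

```python
import numpy as np
from itertools import product

def build_connections(shape=(15, 3, 3), lam=2.0, rng=np.random.default_rng(2)):
    """Place neurons on integer points of a 3D column and connect a -> b
    with probability C * exp(-(D(a, b) / lam)**2), C depending on E/I types."""
    pos = np.array(list(product(*map(range, shape))), dtype=float)
    n = len(pos)                                  # 15*3*3 = 135 neurons
    inhib = rng.random(n) < 0.2                   # 20% inhibitory
    C = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.4, (1, 1): 0.1}  # (pre, post), E=0, I=1
    D = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    conn = np.zeros((n, n), dtype=bool)
    for a in range(n):
        for b in range(n):
            if a != b:
                c = C[(int(inhib[a]), int(inhib[b]))]
                conn[a, b] = rng.random() < c * np.exp(-(D[a, b] / lam) ** 2)
    return conn, inhib
```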
43 In the case of input synapses, the amplitude parameter $A$ had a value of 18 nA if projecting onto an excitatory neuron and 9 nA if projecting onto an inhibitory neuron. [sent-133, score-0.304]
44 The postsynaptic current was modeled as an exponential decay $e^{-t/\tau_s}$, with $\tau_s = 3$ ms ($\tau_s = 6$ ms) for excitatory (inhibitory) synapses. [sent-135, score-0.275]
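The exponentially decaying postsynaptic current amounts to convolving each spike train with a short exponential kernel; a sketch using the time constants reconstructed above (3 ms excitatory, 6 ms inhibitory):

```python
import numpy as np

def psc_kernel(tau_s=0.003, dt=0.001, length=0.02):
    """Exponential PSC kernel exp(-t / tau_s); use tau_s=0.003 for
    excitatory and tau_s=0.006 for inhibitory synapses."""
    t = np.arange(0.0, length, dt)
    return np.exp(-t / tau_s)

# Synaptic current = spike train convolved with the kernel:
# current = np.convolve(spike_train, psc_kernel())[:len(spike_train)]
```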
45 The transmission delays between liquid neurons were chosen to be 1.5 ms for EE connections and 0.8 ms for the other connections. [sent-136, score-0.425]
46 Furthermore, we present a theoretical result which implies that within this framework the computational units of the system can be quite arbitrary, provided that sufficiently diverse units are available (see the separation property and approximation property discussed below). [sent-147, score-0.313]
47 Instead, sufficiently large and complex “found” circuits (such as the generic circuit used as the main building block for Fig. 2) tend to satisfy these requirements. [sent-149, score-0.383]
48 We say that this class of filters has the point-wise separation property if for any two input functions $u(\cdot)$, $v(\cdot)$ with $u(s) \neq v(s)$ for some $s \leq t$ there exists some filter $B$ in the class with $(Bu)(t) \neq (Bv)(t)$. [sent-152, score-0.174]
49 4 There exist completely different classes of filters that satisfy this point-wise separation property: the class of all delay lines, the class of all linear filters, and, biologically more relevant, models for dynamic synapses (see [6]). [sent-153, score-0.207]
50 Actually, we found that if the neural microcircuit model is not too small, it usually suffices to use linear readouts. [sent-158, score-0.606]
51 Thus the microcircuit automatically assumes “on the side” the computational role of a kernel for support vector machines. [sent-159, score-0.6]
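Training a linear readout on recorded liquid states is ordinary regularized linear regression (or, equivalently in spirit, an SVM with a linear kernel). A minimal ridge-regression sketch, assuming states has shape (T, n) and target holds the desired output value at each time step:

```python
import numpy as np

def train_linear_readout(states, target, reg=1e-3):
    """Solve for weights w (with bias) such that states @ w ~ target."""
    X = np.hstack([states, np.ones((states.shape[0], 1))])   # append bias column
    A = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ target)

def apply_readout(states, w):
    X = np.hstack([states, np.ones((states.shape[0], 1))])
    return X @ w
```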
52 Fig. 1B illustrates this separation for the case where the inputs $u$ and $v$ are Poisson spike trains and the filter is a generic neural microcircuit model. [sent-162, score-1.17]
53 It turns out that the difference between the liquid states scales roughly proportionally to the difference between the two input histories. [sent-163, score-0.265]
54 This appears to be desirable from the practical point of view, since it implies that saliently different input histories can be distinguished more easily, and in a more noise-robust fashion, by the readout. [sent-164, score-0.117]
55 We propose to use such evaluation of the separation capability of neural microcircuits as a new standard test for their computational capabilities. [sent-165, score-0.393]
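The proposed separation test is straightforward to run on any simulated circuit: drive it with input pairs at known distances and measure how far apart the resulting liquid states end up, as in Fig. 1B. A sketch, where run_circuit and the input-distance values are supplied by the user:

```python
import numpy as np

def separation_curve(run_circuit, input_pairs, t_index=-1):
    """For each (u, v, d_uv), return (d_uv, ||x_u(t) - x_v(t)||), where
    run_circuit maps an input to a (T, n) liquid-state trajectory.
    Averaging the state distances per input-distance bin reproduces the
    kind of curve shown in Fig. 1B."""
    out = []
    for u, v, d_uv in input_pairs:
        xu = run_circuit(u)[t_index]
        xv = run_circuit(v)[t_index]
        out.append((d_uv, float(np.linalg.norm(xu - xv))))
    return out
```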
56 In order to evaluate this somewhat surprising theoretical prediction, we use a well-studied computational benchmark task for which data have been made publicly available: the speech recognition task considered in [7] and [8]. [sent-168, score-0.145]
57 The task was to construct a network of I&F neurons that could recognize each of the 10 spoken words. [sent-173, score-0.374]
58 Each of the 500 input files had been encoded in the form of 40 spike trains, with at most one spike per spike train signaling onset, peak, or offset of activity in a particular frequency band. [sent-174, score-0.862]
59 The network constructed in [8] transformed the 40 input spike trains into linearly decaying input currents from 800 pools, each consisting of a “large set of closely similar unsynchronized neurons” [8]. [sent-178, score-0.687]
60 Each of the 800 currents was delivered to a separate pair of neurons, consisting of an excitatory neuron and an inhibitory neuron. [sent-179, score-0.36]
61 A particular achievement of this network (resulting from the smoothly and linearly decaying firing activity of the 800 pools of neurons) is that it is robust with regard to linear time-warping of the input spike pattern. [sent-181, score-0.436]
62 We tested our generic neural microcircuit model on the same task (in fact on exactly the same 500 input files). [sent-182, score-0.898]
63 A randomly chosen subset of 300 input files was used for training, the other 200 for testing. [sent-183, score-0.175]
64 The generic neural microcircuit model was drawn from the distribution described in section 3, hence from the same distribution as the circuit drawn for the completely different task discussed in Fig. 2. [sent-184, score-1.143]
65 It consisted of randomly connected I&F neurons located on the integer points of a column. [sent-185, score-0.271]
66 The synaptic weights of 10 linear readout neurons, which received inputs from the 135 I&F neurons in the circuit, were optimized (as for SVMs with linear kernels) to fire whenever the input encoded the corresponding spoken word. [sent-186, score-1.199]
67 Hence the whole circuit consisted of 145 I&F neurons, less than 1/30 of the size of the network constructed in [8] for the same task. [sent-187, score-0.288]
68 Nevertheless the average error achieved after training by these randomly generated generic microcircuit models was 0.14. [sent-188, score-0.808]
69 Measured in the same way, for the same word “one”, this is slightly better than the error of the 30 times larger network custom-designed for this task. [sent-189, score-0.126]
70 The score given is the average for 50 randomly drawn generic microcircuit models. [sent-190, score-0.873]
71 The network constructed in [8] required that each spike train contained at most one spike. [sent-194, score-0.354]
72 For the competition, the networks were allowed to be constructed especially for their task, but only one single pattern for each word could be used for setting the synaptic weights. [sent-202, score-0.154]
73 Since our microcircuit models were not prepared for this task, they had to be trained with substantially more examples. [sent-203, score-0.584]
74 8 If one assumes that each of the 800 “large” pools of neurons in that network would consist of just 5 neurons, it contains, together with the 800 pairs of excitatory and inhibitory neurons, 5600 neurons. [sent-204, score-0.298]
75 [Figure 3 panels: spike rasters for “one” (speaker 5), “one” (speaker 3), “five” (speaker 1), “eight” (speaker 4); rows labeled input (40 spike trains), microcircuit (135 neurons), and readout; x-axis: time [s].] [sent-205, score-1.223]
76 Figure 3: Application of our generic neural microcircuit model to the speech recognition task from [8]. [sent-211, score-0.893]
77 Second row: spiking response of the 135 I&F neurons in the neural microcircuit model. [sent-213, score-0.807]
78 Third row: output of an I&F neuron that was trained to fire as soon as possible when the word “one” was spoken, and as little as possible otherwise. [sent-214, score-0.133]
79 The network of [8] implements an algorithm that needs a few hundred ms of processing time between the end of the input pattern and the answer to the classification task (450 ms in the example of Fig. 2 in [8]). [sent-216, score-0.643]
80 In contrast, the readout neurons from the generic neural microcircuit were trained to provide their answer (through firing or non-firing) immediately when the input pattern ended. [sent-217, score-1.384]
81 As shown in Fig. 3, one can even train the readout neurons quite successfully to provide provisional answers long before the input pattern has ended (thereby implementing an “anytime” algorithm). [sent-219, score-0.63]
82 More precisely, each of the 10 linear readout neurons was trained to recognize the spoken word at any multiple of 20 ms while the word was spoken. [sent-220, score-0.932]
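Operationally, this "anytime" training amounts to labeling each 20 ms slice of the liquid state with the word identity and evaluating all word readouts at every slice. A sketch with hypothetical names (states, readout_weights):

```python
import numpy as np

def anytime_scores(states, readout_weights, dt=0.001, step=0.020):
    """Evaluate the 10 word readouts every 20 ms during the utterance.
    states: (T, n) liquid states; readout_weights: (n + 1, 10) linear
    weights with the bias in the last row. A provisional classification
    (argmax over words) is available at every slice, long before the
    word ends."""
    stride = int(step / dt)
    idx = np.arange(stride - 1, states.shape[0], stride)
    X = np.hstack([states[idx], np.ones((len(idx), 1))])
    return X @ readout_weights      # shape: (num_slices, 10)
```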
83 We also compared the noise robustness of the generic microcircuit models with that of [8], which had been constructed to be robust with regard to linear time warping of the input pattern. [sent-223, score-0.964]
84 Since no benchmark input data were available to calculate this noise robustness, we constructed such data by creating as templates 10 patterns, each consisting of 40 randomly drawn Poisson spike trains at 4 Hz over 0.5 s. [sent-224, score-0.716]
85 Noisy variations of these templates were created by first multiplying their time scale with a randomly drawn factor from [1/3, 3] (thereby allowing for a 9-fold time warp), and subsequently dislocating each spike by an amount drawn independently from a Gaussian distribution with mean 0 and SD 32 ms. [sent-226, score-0.56]
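This noise model (global linear time warp plus per-spike Gaussian jitter) is simple to reproduce. The warp-factor interval [1/3, 3] is inferred from the stated 9-fold warp range, and the log-uniform draw is my own assumption:

```python
import numpy as np

def noisy_variation(spike_times, rng=np.random.default_rng(3)):
    """spike_times: list of arrays of spike times (s), one per train.
    Scale the whole pattern by a factor from [1/3, 3], then jitter
    each spike with Gaussian noise of SD 32 ms."""
    warp = np.exp(rng.uniform(np.log(1 / 3), np.log(3)))   # log-uniform draw (assumed)
    return [np.sort(t * warp + rng.normal(0.0, 0.032, size=t.shape))
            for t in spike_times]
```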
86 These spike patterns were given as inputs to the same generic neural microcircuit models consisting of 135 I&F neurons as discussed before. [sent-227, score-1.324]
87 10 linear readout neurons were trained (with 1000 randomly drawn training examples) to recognize which of the 10 templates had been used to generate a particular input. [sent-228, score-0.731]
88 As a consequence of achieving this noise robustness generically, rather than by a construction tailored to a specific type of noise, we found that the same generic microcircuit models are also robust with regard to nonlinear time warp of the input. [sent-231, score-0.932]
89 For the case of nonlinear (sinusoidal) time warp, an average error of 0.2 (over 50 microcircuits) was achieved. [sent-232, score-0.123]
90 [Footnote 9 (garbled): each spike at time t was transformed into a sinusoidally time-warped spike time, with the warp frequency (in Hz) and phase drawn randomly from fixed intervals.] [sent-233, score-0.714]
92 A randomly generated microcircuit model has at least the same noise robustness as a circuit especially constructed to achieve that. [sent-238, score-0.868]
93 Whereas the network of [8] was only able to classify spike patterns consisting of at most one spike per spike train, a generic neural microcircuit model can classify spike patterns without that restriction. [sent-240, score-1.809]
94 Hence this microcircuit model appears to have quite universal capabilities for real-time computing on time-varying inputs. [sent-243, score-0.739]
95 6 Discussion We have presented a new conceptual framework for analyzing computations in generic neural microcircuit models that satisfies the biological constraints listed in section 1. [sent-244, score-0.887]
96 Thus for the first time one can take computer models of neural microcircuits, which can be made as realistic as one wants, and use them not just for demonstrating dynamic effects such as synchronization or oscillations, but to carry out demanding computations with these models. [sent-245, score-0.18]
97 Furthermore, our new conceptual framework for analyzing computations in neural circuits not only provides theoretical support for their seemingly universal capabilities for real-time computing, but also throws new light on key concepts such as neural coding. [sent-246, score-0.363]
98 Temporal information transformed into a spatial code by a neural network with realistic properties. [sent-259, score-0.159]
99 The “echo state” approach to analysing and training recurrent neural networks. [sent-264, score-0.133]
100 A new approach towards vision suggested by biologically realistic neural microcircuit models. [sent-321, score-0.704]
wordName wordTfidf (topN-words)
[('microcircuit', 0.558), ('readout', 0.298), ('microcircuits', 0.242), ('spike', 0.234), ('ms', 0.221), ('neurons', 0.201), ('liquid', 0.192), ('generic', 0.18), ('circuit', 0.16), ('trains', 0.15), ('lsm', 0.093), ('recurrent', 0.085), ('universal', 0.074), ('psfrag', 0.074), ('input', 0.073), ('randomly', 0.07), ('maass', 0.068), ('inhibitory', 0.066), ('replacements', 0.065), ('drawn', 0.065), ('inputs', 0.064), ('biologically', 0.064), ('synaptic', 0.063), ('state', 0.062), ('separation', 0.061), ('spoken', 0.059), ('interval', 0.058), ('computations', 0.057), ('neuron', 0.057), ('anytime', 0.056), ('ring', 0.056), ('turing', 0.055), ('excitatory', 0.054), ('synapses', 0.054), ('word', 0.05), ('suf', 0.05), ('capabilities', 0.049), ('fading', 0.049), ('ger', 0.049), ('natschl', 0.049), ('pools', 0.049), ('network', 0.048), ('hz', 0.048), ('neural', 0.048), ('lter', 0.047), ('stream', 0.045), ('diverse', 0.045), ('mv', 0.045), ('na', 0.045), ('histories', 0.044), ('sd', 0.044), ('templates', 0.044), ('ciently', 0.044), ('conceptual', 0.044), ('circuits', 0.043), ('preceding', 0.042), ('computational', 0.042), ('lters', 0.042), ('warp', 0.041), ('time', 0.041), ('constructed', 0.041), ('speaker', 0.041), ('nonlinear', 0.041), ('property', 0.04), ('dynamics', 0.039), ('task', 0.039), ('robustness', 0.039), ('cortical', 0.039), ('consisting', 0.039), ('les', 0.038), ('lsms', 0.037), ('readouts', 0.037), ('replacementspsfrag', 0.037), ('markram', 0.037), ('qp', 0.037), ('membrane', 0.037), ('realistic', 0.034), ('analog', 0.034), ('streams', 0.034), ('poisson', 0.033), ('coincidences', 0.032), ('drawings', 0.032), ('chosen', 0.032), ('regard', 0.032), ('computing', 0.031), ('train', 0.031), ('encoded', 0.03), ('whereas', 0.029), ('transformed', 0.029), ('units', 0.029), ('rates', 0.028), ('completely', 0.028), ('custom', 0.028), ('memory', 0.027), ('recognize', 0.027), ('quite', 0.027), ('virtually', 0.026), ('signaling', 0.026), ('trained', 0.026), ('speech', 0.025)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999994 11 nips-2002-A Model for Real-Time Computation in Generic Neural Microcircuits
Author: Wolfgang Maass, Thomas Natschläger, Henry Markram
Abstract: A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. We propose a new computational model that is based on principles of high dimensional dynamical systems in combination with statistical learning theory. It can be implemented on generic evolved or found recurrent circuitry.
2 0.25247934 76 nips-2002-Dynamical Constraints on Computing with Spike Timing in the Cortex
Author: Arunava Banerjee, Alexandre Pouget
Abstract: If the cortex uses spike timing to compute, the timing of the spikes must be robust to perturbations. Based on a recent framework that provides a simple criterion to determine whether a spike sequence produced by a generic network is sensitive to initial conditions, and numerical simulations of a variety of network architectures, we argue within the limits set by our model of the neuron, that it is unlikely that precise sequences of spike timings are used for computation under conditions typically found in the cortex.
3 0.24436805 171 nips-2002-Reconstructing Stimulus-Driven Neural Networks from Spike Times
Author: Duane Q. Nykamp
Abstract: We present a method to distinguish direct connections between two neurons from common input originating from other, unmeasured neurons. The distinction is computed from the spike times of the two neurons in response to a white noise stimulus. Although the method is based on a highly idealized linear-nonlinear approximation of neural response, we demonstrate via simulation that the approach can work with a more realistic, integrate-and-fire neuron model. We propose that the approach exemplified by this analysis may yield viable tools for reconstructing stimulus-driven neural networks from data gathered in neurophysiology experiments.
4 0.19883949 50 nips-2002-Circuit Model of Short-Term Synaptic Dynamics
Author: Shih-Chii Liu, Malte Boegershausen, Pascal Suter
Abstract: We describe a model of short-term synaptic depression that is derived from a silicon circuit implementation. The dynamics of this circuit model are similar to the dynamics of some present theoretical models of shortterm depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also depends on the presynaptic frequency. The equations describing the steady-state and transient responses of this synaptic model fit the experimental results obtained from a fabricated silicon network consisting of leaky integrate-and-fire neurons and different types of synapses. We also show experimental data demonstrating the possible computational roles of depression. One possible role of a depressing synapse is that the input can quickly bring the neuron up to threshold when the membrane potential is close to the resting potential.
5 0.16274172 102 nips-2002-Hidden Markov Model of Cortical Synaptic Plasticity: Derivation of the Learning Rule
Author: Michael Eisele, Kenneth D. Miller
Abstract: Cortical synaptic plasticity depends on the relative timing of pre- and postsynaptic spikes and also on the temporal pattern of presynaptic spikes and of postsynaptic spikes. We study the hypothesis that cortical synaptic plasticity does not associate individual spikes, but rather whole firing episodes, and depends only on when these episodes start and how long they last, but as little as possible on the timing of individual spikes. Here we present the mathematical background for such a study. Standard methods from hidden Markov models are used to define what “firing episodes” are. Estimating the probability of being in such an episode requires not only the knowledge of past spikes, but also of future spikes. We show how to construct a causal learning rule, which depends only on past spikes, but associates pre- and postsynaptic firing episodes as if it also knew future spikes. We also show that this learning rule agrees with some features of synaptic plasticity in superficial layers of rat visual cortex (Froemke and Dan, Nature 416:433, 2002).
6 0.16224514 154 nips-2002-Neuromorphic Bisable VLSI Synapses with Spike-Timing-Dependent Plasticity
7 0.1501026 43 nips-2002-Binary Coding in Auditory Cortex
8 0.13270867 180 nips-2002-Selectivity and Metaplasticity in a Unified Calcium-Dependent Model
9 0.12960166 187 nips-2002-Spikernels: Embedding Spiking Neurons in Inner-Product Spaces
10 0.12395098 116 nips-2002-Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior
11 0.1202294 129 nips-2002-Learning in Spiking Neural Assemblies
12 0.1143726 186 nips-2002-Spike Timing-Dependent Plasticity in the Address Domain
13 0.11195768 5 nips-2002-A Digital Antennal Lobe for Pattern Equalization: Analysis and Design
14 0.10763298 184 nips-2002-Spectro-Temporal Receptive Fields of Subthreshold Responses in Auditory Cortex
15 0.099547543 141 nips-2002-Maximally Informative Dimensions: Analyzing Neural Responses to Natural Signals
16 0.098740824 148 nips-2002-Morton-Style Factorial Coding of Color in Primary Visual Cortex
17 0.092505254 28 nips-2002-An Information Theoretic Approach to the Functional Classification of Neurons
18 0.084654309 103 nips-2002-How Linear are Auditory Cortical Responses?
19 0.083645098 23 nips-2002-Adaptive Quantization and Density Estimation in Silicon
20 0.080637917 160 nips-2002-Optoelectronic Implementation of a FitzHugh-Nagumo Neural Model
topicId topicWeight
[(0, -0.221), (1, 0.301), (2, 0.031), (3, -0.132), (4, 0.027), (5, 0.147), (6, 0.063), (7, -0.008), (8, 0.022), (9, -0.014), (10, 0.036), (11, 0.003), (12, -0.036), (13, -0.034), (14, -0.0), (15, -0.004), (16, -0.079), (17, -0.004), (18, -0.036), (19, 0.058), (20, -0.054), (21, 0.024), (22, -0.07), (23, 0.023), (24, 0.017), (25, -0.044), (26, 0.01), (27, -0.023), (28, -0.015), (29, 0.113), (30, -0.033), (31, -0.027), (32, -0.01), (33, 0.066), (34, 0.077), (35, 0.08), (36, 0.005), (37, 0.054), (38, -0.131), (39, 0.073), (40, 0.066), (41, -0.106), (42, 0.016), (43, -0.021), (44, -0.05), (45, -0.044), (46, -0.063), (47, -0.029), (48, 0.028), (49, 0.038)]
simIndex simValue paperId paperTitle
same-paper 1 0.95526415 11 nips-2002-A Model for Real-Time Computation in Generic Neural Microcircuits
Author: Wolfgang Maass, Thomas Natschläger, Henry Markram
Abstract: A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. We propose a new computational model that is based on principles of high dimensional dynamical systems in combination with statistical learning theory. It can be implemented on generic evolved or found recurrent circuitry.
2 0.86650145 76 nips-2002-Dynamical Constraints on Computing with Spike Timing in the Cortex
Author: Arunava Banerjee, Alexandre Pouget
Abstract: If the cortex uses spike timing to compute, the timing of the spikes must be robust to perturbations. Based on a recent framework that provides a simple criterion to determine whether a spike sequence produced by a generic network is sensitive to initial conditions, and numerical simulations of a variety of network architectures, we argue within the limits set by our model of the neuron, that it is unlikely that precise sequences of spike timings are used for computation under conditions typically found in the cortex.
3 0.81410575 171 nips-2002-Reconstructing Stimulus-Driven Neural Networks from Spike Times
Author: Duane Q. Nykamp
Abstract: We present a method to distinguish direct connections between two neurons from common input originating from other, unmeasured neurons. The distinction is computed from the spike times of the two neurons in response to a white noise stimulus. Although the method is based on a highly idealized linear-nonlinear approximation of neural response, we demonstrate via simulation that the approach can work with a more realistic, integrate-and-fire neuron model. We propose that the approach exemplified by this analysis may yield viable tools for reconstructing stimulus-driven neural networks from data gathered in neurophysiology experiments.
4 0.75744742 50 nips-2002-Circuit Model of Short-Term Synaptic Dynamics
Author: Shih-Chii Liu, Malte Boegershausen, Pascal Suter
Abstract: We describe a model of short-term synaptic depression that is derived from a silicon circuit implementation. The dynamics of this circuit model are similar to the dynamics of some present theoretical models of shortterm depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also depends on the presynaptic frequency. The equations describing the steady-state and transient responses of this synaptic model fit the experimental results obtained from a fabricated silicon network consisting of leaky integrate-and-fire neurons and different types of synapses. We also show experimental data demonstrating the possible computational roles of depression. One possible role of a depressing synapse is that the input can quickly bring the neuron up to threshold when the membrane potential is close to the resting potential.
5 0.66137409 5 nips-2002-A Digital Antennal Lobe for Pattern Equalization: Analysis and Design
Author: Alex Holub, Gilles Laurent, Pietro Perona
Abstract: Re-mapping patterns in order to equalize their distribution may greatly simplify both the structure and the training of classifiers. Here, the properties of one such map obtained by running a few steps of discrete-time dynamical system are explored. The system is called 'Digital Antennal Lobe' (DAL) because it is inspired by recent studies of the antennallobe, a structure in the olfactory system of the grasshopper. The pattern-spreading properties of the DAL as well as its average behavior as a function of its (few) design parameters are analyzed by extending previous results of Van Vreeswijk and Sompolinsky. Furthermore, a technique for adapting the parameters of the initial design in order to obtain opportune noise-rejection behavior is suggested. Our results are demonstrated with a number of simulations. 1
6 0.64875811 43 nips-2002-Binary Coding in Auditory Cortex
7 0.64630967 154 nips-2002-Neuromorphic Bisable VLSI Synapses with Spike-Timing-Dependent Plasticity
8 0.62525076 160 nips-2002-Optoelectronic Implementation of a FitzHugh-Nagumo Neural Model
9 0.60925001 102 nips-2002-Hidden Markov Model of Cortical Synaptic Plasticity: Derivation of the Learning Rule
10 0.58868325 180 nips-2002-Selectivity and Metaplasticity in a Unified Calcium-Dependent Model
11 0.54340947 186 nips-2002-Spike Timing-Dependent Plasticity in the Address Domain
12 0.48810259 129 nips-2002-Learning in Spiking Neural Assemblies
13 0.47174585 44 nips-2002-Binary Tuning is Optimal for Neural Rate Coding with High Temporal Resolution
14 0.45087075 187 nips-2002-Spikernels: Embedding Spiking Neurons in Inner-Product Spaces
15 0.43427995 23 nips-2002-Adaptive Quantization and Density Estimation in Silicon
16 0.42524159 60 nips-2002-Convergence Properties of Some Spike-Triggered Analysis Techniques
17 0.41828969 141 nips-2002-Maximally Informative Dimensions: Analyzing Neural Responses to Natural Signals
18 0.40432769 22 nips-2002-Adaptive Nonlinear System Identification with Echo State Networks
19 0.40297869 116 nips-2002-Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior
20 0.3989954 200 nips-2002-Topographic Map Formation by Silicon Growth Cones
topicId topicWeight
[(1, 0.012), (11, 0.029), (23, 0.028), (42, 0.055), (47, 0.204), (54, 0.118), (55, 0.054), (57, 0.011), (64, 0.014), (67, 0.02), (68, 0.1), (74, 0.081), (83, 0.022), (87, 0.012), (92, 0.022), (98, 0.137)]
simIndex simValue paperId paperTitle
same-paper 1 0.8479954 11 nips-2002-A Model for Real-Time Computation in Generic Neural Microcircuits
Author: Wolfgang Maass, Thomas Natschläger, Henry Markram
Abstract: A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. We propose a new computational model that is based on principles of high dimensional dynamical systems in combination with statistical learning theory. It can be implemented on generic evolved or found recurrent circuitry.
2 0.73212433 62 nips-2002-Coulomb Classifiers: Generalizing Support Vector Machines via an Analogy to Electrostatic Systems
Author: Sepp Hochreiter, Michael C. Mozer, Klaus Obermayer
Abstract: We introduce a family of classifiers based on a physical analogy to an electrostatic system of charged conductors. The family, called Coulomb classifiers, includes the two best-known support-vector machines (SVMs), the ν–SVM and the C–SVM. In the electrostatics analogy, a training example corresponds to a charged conductor at a given location in space, the classification function corresponds to the electrostatic potential function, and the training objective function corresponds to the Coulomb energy. The electrostatic framework provides not only a novel interpretation of existing algorithms and their interrelationships, but it suggests a variety of new methods for SVMs including kernels that bridge the gap between polynomial and radial-basis functions, objective functions that do not require positive-definite kernels, regularization techniques that allow for the construction of an optimal classifier in Minkowski space. Based on the framework, we propose novel SVMs and perform simulation studies to show that they are comparable or superior to standard SVMs. The experiments include classification tasks on data which are represented in terms of their pairwise proximities, where a Coulomb Classifier outperformed standard SVMs. 1
3 0.725353 76 nips-2002-Dynamical Constraints on Computing with Spike Timing in the Cortex
Author: Arunava Banerjee, Alexandre Pouget
Abstract: If the cortex uses spike timing to compute, the timing of the spikes must be robust to perturbations. Based on a recent framework that provides a simple criterion to determine whether a spike sequence produced by a generic network is sensitive to initial conditions, and numerical simulations of a variety of network architectures, we argue within the limits set by our model of the neuron, that it is unlikely that precise sequences of spike timings are used for computation under conditions typically found in the cortex.
4 0.71814048 73 nips-2002-Dynamic Bayesian Networks with Deterministic Latent Tables
Author: David Barber
Abstract: The application of latent/hidden variable Dynamic Bayesian Networks is constrained by the complexity of marginalising over latent variables. For this reason either small latent dimensions or Gaussian latent conditional tables linearly dependent on past states are typically considered in order that inference is tractable. We suggest an alternative approach in which the latent variables are modelled using deterministic conditional probability tables. This specialisation has the advantage of tractable inference even for highly complex non-linear/non-Gaussian visible conditional probability tables. This approach enables the consideration of highly complex latent dynamics whilst retaining the benefits of a tractable probabilistic model. 1
5 0.7166692 5 nips-2002-A Digital Antennal Lobe for Pattern Equalization: Analysis and Design
Author: Alex Holub, Gilles Laurent, Pietro Perona
Abstract: Re-mapping patterns in order to equalize their distribution may greatly simplify both the structure and the training of classifiers. Here, the properties of one such map obtained by running a few steps of discrete-time dynamical system are explored. The system is called 'Digital Antennal Lobe' (DAL) because it is inspired by recent studies of the antennallobe, a structure in the olfactory system of the grasshopper. The pattern-spreading properties of the DAL as well as its average behavior as a function of its (few) design parameters are analyzed by extending previous results of Van Vreeswijk and Sompolinsky. Furthermore, a technique for adapting the parameters of the initial design in order to obtain opportune noise-rejection behavior is suggested. Our results are demonstrated with a number of simulations. 1
6 0.70961976 141 nips-2002-Maximally Informative Dimensions: Analyzing Neural Responses to Natural Signals
7 0.70941865 51 nips-2002-Classifying Patterns of Visual Motion - a Neuromorphic Approach
8 0.70687687 102 nips-2002-Hidden Markov Model of Cortical Synaptic Plasticity: Derivation of the Learning Rule
9 0.70388889 50 nips-2002-Circuit Model of Short-Term Synaptic Dynamics
10 0.69965351 44 nips-2002-Binary Tuning is Optimal for Neural Rate Coding with High Temporal Resolution
11 0.69951874 28 nips-2002-An Information Theoretic Approach to the Functional Classification of Neurons
12 0.69877356 10 nips-2002-A Model for Learning Variance Components of Natural Images
13 0.69748235 127 nips-2002-Learning Sparse Topographic Representations with Products of Student-t Distributions
14 0.69588065 148 nips-2002-Morton-Style Factorial Coding of Color in Primary Visual Cortex
15 0.69513214 24 nips-2002-Adaptive Scaling for Feature Selection in SVMs
16 0.69474375 43 nips-2002-Binary Coding in Auditory Cortex
17 0.69338691 184 nips-2002-Spectro-Temporal Receptive Fields of Subthreshold Responses in Auditory Cortex
18 0.69282764 199 nips-2002-Timing and Partial Observability in the Dopamine System
19 0.69258624 123 nips-2002-Learning Attractor Landscapes for Learning Motor Primitives
20 0.68941689 41 nips-2002-Bayesian Monte Carlo