nips nips2003 nips2003-18 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Rock Z. Shi, Timothy K. Horiuchi
Abstract: Synapses are a critical element of biologically-realistic, spike-based neural computation, serving the role of communication, computation, and modification. Many different circuit implementations of synapse function exist with different computational goals in mind. In this paper we describe a new CMOS synapse design that separately controls quiescent leak current, synaptic gain, and time-constant of decay. This circuit implements part of a commonly-used kinetic model of synaptic conductance. We show a theoretical analysis and experimental data for prototypes fabricated in a commercially-available 1.5µm CMOS process.
Reference: text
sentIndex sentText sentNum sentScore
1 Many different circuit implementations of synapse function exist with different computational goals in mind. [sent-7, score-0.71]
2 In this paper we describe a new CMOS synapse design that separately controls quiescent leak current, synaptic gain, and time-constant of decay. [sent-8, score-0.721]
3 This circuit implements part of a commonly-used kinetic model of synaptic conductance. [sent-9, score-0.769]
4 There are perhaps as many different synapse circuit designs in use as there are brain areas being modeled. [sent-13, score-0.704]
5 This diversity of circuits reflects the diversity of the synapse’s computational function. [sent-14, score-0.127]
6 In many computations, a narrow, square pulse of current is all that is necessary to model the synaptic current. [sent-15, score-0.569]
7 In other situations, a longer post-synaptic current profile is desirable to extend the effects of extremely short spike durations (e. [sent-16, score-0.276]
8 Temporal summation or more complex forms of inter-spike interaction are also important areas of synaptic design that focus on the response to high-frequency stimulation. [sent-21, score-0.432]
9 Recent designs for fast-synaptic depression [6], [7], [8] and time-dependent plasticity [9], [10] are good examples of this where some type of memory is used to create interaction between incoming spikes. [sent-22, score-0.099]
10 Even simple summation of input current can be very important in address-event systems, where a common strategy to reduce hardware is to have a single synapse circuit mimic inputs from many different cells. [sent-23, score-0.814]
11 A very popular design for this purpose is the "current-mirror synapse" [4] that is used extensively in its original form or in new extended forms [6], [8] to expand the time course of current and to provide summation for high-frequency spiking. [sent-24, score-0.169]
12 This circuit is simple, compact, and stable, but couples the leak, part of the synaptic gain, and the decay "time-constant" in one control parameter. [sent-25, score-0.76]
13 Alternatively, the same components can be arranged to give the user manual control of the decay, producing a true exponential decay when operating in the subthreshold region (see Figure 7 (b) of [11]). [sent-27, score-0.258]
14 This circuit, however, does not provide good summation of multiple synaptic events. [sent-28, score-0.393]
15 In this paper we describe a new CMOS synapse circuit that utilizes current-mode feedback to produce a first-order dynamical system. [sent-29, score-0.357]
16 In the following sections, we describe the kinetic model of synaptic conductance, describe the circuit implementation and function, provide a theoretical analysis and finally compare our theory against testing results. [sent-30, score-0.769]
17 We also discuss the use of this circuit in various neuromorphic system contexts and conclude with a discussion of the circuit synthesis approach. [sent-31, score-0.814]
18 2 Proposed synapse model. We consider a network of spiking neurons, each of which is modeled by the integrate-and-fire model or the slightly more general Spike Response Model (e. [sent-32, score-0.429]
19 Synaptic function in such neural networks is often modeled as a time-varying current. [sent-35, score-0.057]
20 The functional form of this current could be a δ function, or a limited jump at the time of the spike followed by an exponential decay. [sent-36, score-0.344]
21 A more general and practical framework is the neurotransmitter kinetics description proposed by Destexhe et al. [sent-38, score-0.052]
22 This approach can synthesize a complete description of synaptic transmission, as well as give an analytic expression for a post-synaptic current in some simplified schemes. [sent-40, score-0.427]
23 For a two-state ligand-gated channel model, the neurotransmitter molecules, T, are taken to bind to post-synaptic receptors modeled by the first-order kinetic scheme [15]: $R + T \underset{\beta}{\overset{\alpha}{\rightleftharpoons}} TR^*$ (1), where R and TR* are the unbound and the bound forms of the post-synaptic receptor, respectively. [sent-41, score-0.209]
24 α and β are the forward and backward rate constants for transmitter binding. [sent-42, score-0.063]
25 In this model, the fraction of bound receptors, r, is described by the equation dr/dt = α[T](1 − r) − βr (2). If the transmitter concentration [T] can be modeled as a short pulse, then (2) is a first-order linear differential equation for r(t). [sent-43, score-0.115]
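To make the kinetic scheme concrete, here is a minimal sketch that integrates Equation (2) with forward Euler for a square transmitter pulse; the rate constants, pulse amplitude, and timing are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bound_receptor_fraction(t, alpha=2.0, beta=0.5, T_amp=1.0,
                            t_on=1.0, t_off=2.0):
    """Forward-Euler integration of Equation (2):
    dr/dt = alpha*[T]*(1 - r) - beta*r,
    with [T] a square pulse of amplitude T_amp between t_on and t_off.
    All parameter values are illustrative, not taken from the paper.
    """
    r = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        T = T_amp if t_on <= t[i - 1] < t_off else 0.0
        r[i] = r[i - 1] + dt * (alpha * T * (1.0 - r[i - 1]) - beta * r[i - 1])
    return r

t = np.arange(0.0, 10.0, 1e-3)   # time axis (ms, illustrative)
r = bound_receptor_fraction(t)   # rises during the pulse, decays after it
```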
26 We propose a synapse model that can be implemented by a CMOS circuit working in the subthreshold region. [sent-44, score-0.799]
27 In our synapse model, the action potential is modeled as a narrow digital pulse. [sent-47, score-0.425]
28 The pulse width is assumed to be a fixed value tpw; in practice, however, tpw may vary slightly from pulse to pulse. [sent-48, score-0.821]
29 Figure 1 illustrates the synaptic current response to a single pulse in such a model: 1. [sent-49, score-0.608]
30 A presynaptic spike occurs at tj; during the pulse, the post-synaptic current is modeled by isyn(t) = isyn(∞) + (isyn(tj) − isyn(∞)) e^{−(t−tj)/τr} (3). 2. [sent-50, score-1.568]
31 After the presynaptic pulse terminates at time tj + tpw, the post-synaptic current is modeled by isyn(t) = isyn(tj + tpw) e^{−(t−tj−tpw)/τd} (4). Figure 1: Synapse model, showing the synaptic current and the presynaptic pulse versus time, with the pulse spanning tj to tj + tpw. [sent-51, score-2.253]
32 The action potential (spike) is modeled as a pulse with width tpw . [sent-52, score-0.489]
33 The synapse is modeled as a first-order linear system, with the synaptic current response described by Equations (3) and (4). [sent-53, score-0.831]
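Equations (3) and (4) define a simple piecewise-exponential model that is easy to simulate. The sketch below steps the synaptic current through a train of presynaptic pulses; the parameter values (isyn(∞), τr, τd, tpw) and the function name are placeholders chosen for illustration.

```python
import numpy as np

def synapse_current(t, spike_times, i_inf=1.0, tau_r=5.0, tau_d=20.0, t_pw=1.0):
    """Piecewise-exponential synapse model of Equations (3) and (4).
    During a presynaptic pulse, isyn relaxes toward i_inf with time constant
    tau_r (Eq. 3); afterwards it decays toward zero with tau_d (Eq. 4).
    All times share one unit (e.g. ms); parameter values are illustrative.
    """
    i_syn = np.zeros_like(t)
    i_prev, t_prev = 0.0, t[0]
    active_until = -np.inf
    spikes = iter(sorted(spike_times))
    next_spike = next(spikes, None)
    for k, tk in enumerate(t):
        if next_spike is not None and tk >= next_spike:
            active_until = next_spike + t_pw   # pulse lasts t_pw
            next_spike = next(spikes, None)
        dt = tk - t_prev
        if tk < active_until:                  # Equation (3): rise toward i_inf
            i_prev = i_inf + (i_prev - i_inf) * np.exp(-dt / tau_r)
        else:                                  # Equation (4): decay toward zero
            i_prev = i_prev * np.exp(-dt / tau_d)
        i_syn[k] = i_prev
        t_prev = tk
    return i_syn
```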
34 3 CMOS circuit synthesis and analysis. 3.1 The synthesis approach. Lazzaro [11] presents a very simple, compact synapse circuit that has an exponentially-decaying synaptic current after each spike event. [sent-54, score-1.717]
35 The synaptic current always resets to the maximum current value during the spike and is not suitable for the summation of rapid bursts of spikes. [sent-55, score-0.743]
36 Another simple and widely used synapse is the current-mirror synapse that has its own set of practical problems related to the coupling of gain, time constant, and offset parameters. [sent-56, score-0.695]
37 Our circuit is synthesized by combining the clean exponential decay of Lazzaro's synapse with concepts from log-domain filtering [16], [17], converting the nonlinear characteristic of the current-mirror synapse into an externally-linear, time-invariant system [18]. [sent-57, score-1.263]
38 Figure 2: The proposed synapse circuit, built from transistors M1–M8 and capacitor C, with bias voltages Vw and Vτ, capacitor voltage vc, and output current isyn. [sent-58, score-0.823]
39 The pin “spkIn” receives the spike input with negative logic. [sent-59, score-0.255]
40 The input voltage Vw adjusts the weight of the synapse and the input voltage Vτ sets the time constant. [sent-62, score-0.535]
41 The bodies of NMOS transistors are connected to ground, and the bodies of PMOS transistors are connected to Vdd except for M3 . [sent-66, score-0.308]
42 3.2 Basic circuit description. The synapse circuit consists of eight transistors and one capacitor, as shown in Figure 2. [sent-68, score-1.199]
43 Input voltage spikes are applied through an inverter (not shown), onto the gate of the PMOS M1 . [sent-70, score-0.128]
44 Vτ sets the current through M7 that determines the time constant of the output synaptic current as will be shown later. [sent-71, score-0.54]
45 Vw controls the magnitude of the synaptic current, so it determines the synaptic weight. [sent-72, score-0.658]
46 The voltage on the capacitor is converted to a current by transistor M6 , sent through the current mirror M4 − M5 , and into the source follower M3 − M4 . [sent-73, score-0.543]
47 The drain current of M8, a scaled copy of the current through M6, provides an inhibitory synaptic current. [sent-74, score-0.189]
48 A simple PMOS transistor with the same gate voltage as M5 can provide an excitatory synaptic current. [sent-75, score-0.592]
49 3.3 Circuit analysis. We perform an analysis of the circuit by studying its response to a single spike. [sent-77, score-0.383]
50 We assume that all transistors are operating in saturation (vds > 4VT ). [sent-80, score-0.15]
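The analysis rests on the standard weak-inversion (subthreshold) saturation current, Ids = I0 (W/L) e^{κ vgs/VT}. A minimal encoding of this relation follows as a sketch; the values of I0 and κ are typical textbook assumptions, not the chip's extracted parameters.

```python
import numpy as np

def subthreshold_ids(vgs, I0=1e-15, kappa=0.7, W_over_L=1.0, VT=0.0258):
    """Weak-inversion NMOS saturation current (valid for vds > 4*VT):
    Ids = I0 * (W/L) * exp(kappa * vgs / VT).
    I0 and kappa here are typical values, not the measured chip parameters.
    """
    return I0 * W_over_L * np.exp(kappa * np.asarray(vgs) / VT)
```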
51 The PMOS source follower M3 − M4 is used as a level shifter. [sent-82, score-0.078]
52 A detailed discussion of the use of source followers in the subthreshold region can be found in [21]. [sent-83, score-0.124]
53 For simplicity, we assume a spike begins at time t = 0 and that the initial voltage on the capacitor C is vc(0). [sent-85, score-0.482]
54 4 Results. 4.1 Comparison of theory and measurement. We have fabricated a chip containing the basic synapse circuit as shown in Figure 2 through MOSIS in a commercially-available 1.5µm CMOS process. [sent-89, score-0.702]
55 In order to compare our theoretical prediction with chip measurement, we first estimate the two transistor parameters κ and I0 by measuring the drain currents from test transistors on the same chip. [sent-91, score-0.289]
56 The current measurements were performed with a Keithley 6517A electrometer. [sent-92, score-0.074]
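One simple way to extract κ and I0 from such measurements is a linear fit of ln(Ids) against vgs, since in subthreshold ln(Ids) = ln(I0·W/L) + κ·vgs/VT. The sketch below demonstrates this on synthetic data; the paper's exact fitting procedure is not specified above, so treat this as an assumed approach.

```python
import numpy as np

def fit_kappa_I0(vgs, ids, W_over_L=1.0, VT=0.0258):
    """Extract kappa and I0 from subthreshold measurements via the linear
    relation ln(Ids) = ln(I0 * W/L) + kappa * vgs / VT.
    """
    slope, intercept = np.polyfit(vgs, np.log(ids), 1)
    return slope * VT, np.exp(intercept) / W_over_L

# Synthetic check: generate data with known parameters and recover them.
vgs = np.linspace(0.3, 0.6, 20)
ids = 1e-15 * np.exp(0.7 * vgs / 0.0258) * (1 + 0.01 * np.random.randn(20))
kappa, I0 = fit_kappa_I0(vgs, ids)
print(kappa, I0)   # approximately 0.7 and 1e-15
```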
57 In estimating these two parameters, as well as in computing our model predictions, we estimate the effective transistor width for the wide transistors (e. [sent-99, score-0.291]
58 To illustrate the detailed time course, we used a large spike pulse width. [sent-117, score-0.399]
59 We used a very wide pulse to exaggerate the details in the time response. [sent-121, score-0.197]
60 Note that because the time constant is so large, isyn(t) rises almost linearly during the spike. [sent-122, score-0.433]
61 4.2 Tuning of synaptic strength and time constant. The synaptic time constant is solely determined by the leak current through transistor M7. [sent-126, score-1.053]
62 The synaptic strength is controlled by Vw (which is also coupled with Iτ ) as can be seen from (13). [sent-128, score-0.329]
63 In Figure 4, we present our test results that illustrate how the various time constants and synaptic strengths can be achieved. [sent-129, score-0.388]
64 Figure 4: Changing time constant τ and synaptic strength; panels (a) and (b) plot responses versus time (msec) for several bias values (legend entries such as 0.80 V). [sent-146, score-0.423]
65 In both (a) and (b), the spike pulse width is set to 1 msec. [sent-151, score-0.411]
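Equations (13) and (14) are not reproduced above, so the exact τ(Vτ) relation is unavailable here; a plausible log-domain form is τ = C·VT/(κ·Iτ), with Iτ = I0·e^{κVτ/VT} set by the bias on M7, in the spirit of [16], [17]. The sketch below uses that assumed form with illustrative device values.

```python
import numpy as np

def tau_from_vtau(v_tau, C=1e-12, I0=1e-15, kappa=0.7, VT=0.0258):
    """Assumed log-domain relation: tau = C*VT / (kappa*I_tau), where the
    leak current I_tau = I0*exp(kappa*v_tau/VT) is set by the bias on M7.
    C, I0, and kappa are illustrative; the paper's Equations (13)-(14)
    are not reproduced in this extract.
    """
    i_tau = I0 * np.exp(kappa * v_tau / VT)
    return C * VT / (kappa * i_tau)

for v in (0.30, 0.35, 0.40):   # larger Vtau -> larger leak -> shorter tau
    print(f"Vtau = {v:.2f} V  ->  tau = {tau_from_vtau(v):.2e} s")
```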
66 4.3 Spike train response. The exponential rise of the synaptic current during a spike naturally provides the summation and saturation of incoming spikes. [sent-153, score-0.782]
67 Figure 5 illustrates this behavior in response to an input spike train of fixed duration. [sent-154, score-0.241]
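Using the model sketch given after Figure 1, this behavior can be reproduced qualitatively: each pulse adds a diminishing increment as isyn approaches isyn(∞), and the current decays with τd after the train ends. The 1 msec pulse width and 15 msec period below follow the Figure 5 caption; all other values are assumptions.

```python
import numpy as np

# Reuses synapse_current() from the sketch after Figure 1.
t = np.arange(0.0, 250.0, 0.01)              # ms
spike_times = np.arange(10.0, 160.0, 15.0)   # 1 ms pulses, 15 ms period
i = synapse_current(t, spike_times, i_inf=1.0, tau_r=5.0, tau_d=20.0, t_pw=1.0)
# Successive pulses add smaller and smaller increments as isyn nears i_inf
# (summation with saturation); after the last pulse, isyn decays with tau_d.
```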
68 5 Discussion. We have proposed a new synapse model and a specific CMOS implementation of the model. [sent-155, score-0.332]
69 In our theoretical analysis, we have ignored all parasitic effects, which can play a significant role in the circuit behavior. [sent-156, score-0.397]
70 For example, as the source follower M3 − M4 provides the gate voltage of M2 , switching through M1 will affect the circuit behavior due to parasitic capacitance. [sent-157, score-0.603]
71 We emphasize that various circuit implementations can be designed; in particular, an implementation with lower glitch and higher speed would be preferred. [sent-158, score-0.688]
72 The synaptic model circuit we have described has a single time constant for both its rising and decaying phases, whereas the time course of biological synapses shows a faster rising phase and a much slower decaying phase. [sent-159, score-0.913]
73 The second time constant can, in principle, be implemented in our circuit by adding a parallel branch to M7 with some switching circuitry. [sent-160, score-0.407]
74 Biological synapses have been best modeled and fitted by an exponentially-decaying time course with different time constants for different types of synapse. [sent-161, score-0.208]
75 Our synapse circuit model captures this important characteristic of the biological synapse, providing an easily controlled exponential decay and a natural summation and saturation of the synaptic current. [sent-162, score-1.233]
76 By using a simple first-order linear model, our synapse circuit gives the circuit designer an analytically tractable function for use in large, complex, spiking neural network system design. [sent-163, score-1.06]
77 The current mirror synapse, in spite of its successful application, is simple and compact. [sent-164, score-0.159]
78 Figure 5: Response to spike train; traces show vSpkIn(t) (V), vc(t) (V), and iSyn(t) (×10^−8 A) versus time (msec), over 0–250 msec. [sent-169, score-0.202]
79 The spike pulse width is set to 1 msec, and the period to 15 msec. [sent-170, score-0.411]
80 Our linear synapse is achieved, however, at the cost of silicon area. [sent-174, score-0.409]
81 This is especially true when the circuit is utilized in an AER system, where the spike width can be less than a microsecond. [sent-175, score-0.202]
82 Because our linearity is achieved by employing the CMOS subthreshold current characteristic, working with very narrow pulses will mean the use of large transistor widths to get large charging currents. [sent-176, score-0.368]
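The speed constraint can be quantified with a back-of-envelope estimate: to move the capacitor voltage by ΔV within a pulse of width tpw, the charging current must be roughly I = C·ΔV/tpw. The capacitor value and voltage swing below are assumptions; the paper does not state them in this extract.

```python
C = 1e-12    # synapse capacitor, assumed 1 pF for illustration
dV = 0.2     # desired capacitor voltage swing per spike (V), assumed
for t_pw in (1e-3, 1e-6):          # 1 ms pulse vs. 1 us AER-style pulse
    I = C * dV / t_pw
    print(f"t_pw = {t_pw:.0e} s  ->  charging current ~ {I:.1e} A")
# ~2e-10 A for a 1 ms pulse stays comfortably subthreshold; ~2e-7 A for a
# 1 us pulse may leave weak inversion in a small device, hence the need
# for large transistor widths at high speed.
```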
83 We have identified a number of modifications that may allow the circuit to operate at much higher current levels and thus higher speed. [sent-177, score-0.418]
84 6 Conclusion. We have identified a need for more independent control of the synaptic gain, time-course, and leak parameters in CMOS synapses, and have demonstrated a prototype circuit that utilizes current-mode feedback to exhibit the same first-order dynamics that are utilized by Destexhe et al. [sent-178, score-1.116]
85 [14], [15] in their kinetic model of receptor-neurotransmitter binding, which gives a more efficient computational description of the synaptic conductance. [sent-179, score-0.503]
86 The specific implementation relies on the subthreshold exponential characteristic of the MOSFET and thus operates best at these current levels and slower speeds. [sent-180, score-0.239]
87 We thank MOSIS for fabrication services in support of our neuromorphic analog VLSI course and teaching laboratory. [sent-182, score-0.201]
88 Mortara, “A pulsed communication/computation framework for analog VLSI perceptive systems,” in Neuromorphic Systems Engineering, T. [sent-187, score-0.13]
89 Whatley, “A pulse-coded communications infrastructure for neuromorphic systems,” in Pulsed Neural Networks, W. [sent-195, score-0.08]
90 van Schaik, “A silicon representation of the Meddis inner hair cell model,” in Proceedings of the ICSC Symposia on Intelligent Systems & Application (ISA’2000), 2000, paper 1544-078. [sent-219, score-0.077]
91 Liu, “Modeling short-term synaptic depression in silicon,” Neural Computation, vol. [sent-223, score-0.374]
92 Watts, “A spike based learning neuron in analog VLSI,” in Advances in Neural Information Processing Systems, M. [sent-230, score-0.291]
93 Indiveri, “Neuromorphic bistable VLSI synapses with spike-timing-dependent plasticity,” in Advances in Neural Information Processing Systems, M. [sent-240, score-0.061]
94 Lazzaro, "Low-power silicon axons, neurons, and synapses," in Silicon Implementations of Pulse Coded Neural Networks, M. [sent-250, score-0.077]
95 Rall, “Distinguishing theoretical synaptic potentials computed for different soma-dendritic distributions of synaptic inputs,” J. [sent-263, score-0.658]
96 Sejnowski, “Synthesis of models for excitable membranes, synaptic transmission and neuromodulation using a common kinetic formalism,” Journal of Computational Neuroscience, vol. [sent-273, score-0.425]
97 [15] ——, “An efficient method for computing synaptic conductances based on a kinetic model of receptor binding,” Neural Computation, vol. [sent-276, score-0.455]
98 Seevinck, “Companding current-mode integrator: A new circuit principle for continuous time monolithic filters,” Electron. [sent-280, score-0.375]
99 Frey, "Exponential state space filters: A generic current mode design strategy," IEEE Trans. [sent-287, score-0.074]
100 Fellrath, "CMOS analog integrated circuits based on weak inversion operation," IEEE J. [sent-304, score-0.162]
wordName wordTfidf (topN-words)
[('isyn', 0.37), ('circuit', 0.344), ('synapse', 0.332), ('synaptic', 0.329), ('vw', 0.254), ('tpw', 0.223), ('spike', 0.202), ('pulse', 0.166), ('transistor', 0.135), ('vc', 0.121), ('cmos', 0.118), ('transistors', 0.113), ('vt', 0.104), ('subthreshold', 0.099), ('vlsi', 0.098), ('pmos', 0.096), ('kinetic', 0.096), ('analog', 0.089), ('voltage', 0.086), ('destexhe', 0.081), ('vspkin', 0.081), ('neuromorphic', 0.08), ('tj', 0.08), ('silicon', 0.077), ('current', 0.074), ('circuits', 0.073), ('summation', 0.064), ('synapses', 0.061), ('decay', 0.061), ('cvt', 0.061), ('iop', 0.061), ('lazzaro', 0.061), ('vgs', 0.061), ('leak', 0.06), ('modeled', 0.057), ('msec', 0.056), ('mirror', 0.054), ('follower', 0.053), ('parasitic', 0.053), ('pin', 0.053), ('vdd', 0.048), ('norwell', 0.048), ('synthesis', 0.046), ('presynaptic', 0.045), ('depression', 0.045), ('width', 0.043), ('capacitor', 0.042), ('gate', 0.042), ('bodies', 0.041), ('companding', 0.041), ('drain', 0.041), ('indiveri', 0.041), ('mosis', 0.041), ('nmos', 0.041), ('pulsed', 0.041), ('spkin', 0.041), ('vds', 0.041), ('spiking', 0.04), ('response', 0.039), ('ma', 0.038), ('saturation', 0.037), ('exponential', 0.037), ('narrow', 0.036), ('feb', 0.035), ('ids', 0.035), ('transmitter', 0.035), ('vbs', 0.035), ('implementations', 0.034), ('neglect', 0.032), ('fabrication', 0.032), ('mahowald', 0.032), ('constant', 0.032), ('time', 0.031), ('receptor', 0.03), ('binding', 0.03), ('dd', 0.03), ('rising', 0.03), ('characteristic', 0.029), ('receptors', 0.028), ('decaying', 0.028), ('conductance', 0.028), ('designs', 0.028), ('douglas', 0.028), ('neurotransmitter', 0.028), ('petsche', 0.028), ('constants', 0.028), ('gain', 0.028), ('diversity', 0.027), ('ion', 0.027), ('kluwer', 0.026), ('plasticity', 0.026), ('fabricated', 0.026), ('mozer', 0.026), ('control', 0.026), ('changing', 0.025), ('source', 0.025), ('utilizes', 0.025), ('working', 0.024), ('description', 0.024), ('dt', 0.023)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999976 18 nips-2003-A Summating, Exponentially-Decaying CMOS Synapse for Spiking Neural Systems
Author: Rock Z. Shi, Timothy K. Horiuchi
Abstract: Synapses are a critical element of biologically-realistic, spike-based neural computation, serving the role of communication, computation, and modification. Many different circuit implementations of synapse function exist with different computational goals in mind. In this paper we describe a new CMOS synapse design that separately controls quiescent leak current, synaptic gain, and time-constant of decay. This circuit implements part of a commonly-used kinetic model of synaptic conductance. We show a theoretical analysis and experimental data for prototypes fabricated in a commercially-available 1.5µm CMOS process. 1
2 0.34645614 183 nips-2003-Synchrony Detection by Analogue VLSI Neurons with Bimodal STDP Synapses
Author: Adria Bofill-i-petit, Alan F. Murray
Abstract: We present test results from spike-timing correlation learning experiments carried out with silicon neurons with STDP (Spike Timing Dependent Plasticity) synapses. The weight change scheme of the STDP synapses can be set to either weight-independent or weight-dependent mode. We present results that characterise the learning window implemented for both modes of operation. When presented with spike trains with different types of synchronisation the neurons develop bimodal weight distributions. We also show that a 2-layered network of silicon spiking neurons with STDP synapses can perform hierarchical synchrony detection. 1
3 0.29544115 93 nips-2003-Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons
Author: Thomas Natschläger, Wolfgang Maass
Abstract: We employ an efficient method using Bayesian and linear classifiers for analyzing the dynamics of information in high-dimensional states of generic cortical microcircuit models. It is shown that such recurrent circuits of spiking neurons have an inherent capability to carry out rapid computations on complex spike patterns, merging information contained in the order of spike arrival with previously acquired context information. 1
4 0.20706233 61 nips-2003-Entrainment of Silicon Central Pattern Generators for Legged Locomotory Control
Author: Francesco Tenore, Ralph Etienne-Cummings, M. A. Lewis
Abstract: We have constructed a second generation CPG chip capable of generating the necessary timing to control the leg of a walking machine. We demonstrate improvements over a previous chip by moving toward a significantly more versatile device. This includes a larger number of silicon neurons, more sophisticated neurons including voltage dependent charging and relative and absolute refractory periods, and enhanced programmability of neural networks. This chip builds on the basic results achieved on a previous chip and expands its versatility to get closer to a self-contained locomotion controller for walking robots. 1
5 0.19186626 129 nips-2003-Minimising Contrastive Divergence in Noisy, Mixed-mode VLSI Neurons
Author: Hsin Chen, Patrice Fleury, Alan F. Murray
Abstract: This paper presents VLSI circuits with continuous-valued probabilistic behaviour realized by injecting noise into each computing unit(neuron). Interconnecting the noisy neurons forms a Continuous Restricted Boltzmann Machine (CRBM), which has shown promising performance in modelling and classifying noisy biomedical data. The Minimising-Contrastive-Divergence learning algorithm for CRBM is also implemented in mixed-mode VLSI, to adapt the noisy neurons’ parameters on-chip. 1
6 0.18798892 10 nips-2003-A Low-Power Analog VLSI Visual Collision Detector
7 0.17645253 16 nips-2003-A Recurrent Model of Orientation Maps with Simple and Complex Cells
8 0.12578754 125 nips-2003-Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Model
9 0.11911687 27 nips-2003-Analytical Solution of Spike-timing Dependent Plasticity Based on Synaptic Biophysics
10 0.11121031 11 nips-2003-A Mixed-Signal VLSI for Real-Time Generation of Edge-Based Image Vectors
11 0.094551116 160 nips-2003-Prediction on Spike Data Using Kernel Algorithms
12 0.091658011 127 nips-2003-Mechanism of Neural Interference by Transcranial Magnetic Stimulation: Network or Single Neuron?
13 0.07989987 157 nips-2003-Plasticity Kernels and Temporal Statistics
14 0.06405361 13 nips-2003-A Neuromorphic Multi-chip Model of a Disparity Selective Complex Cell
15 0.061492968 185 nips-2003-The Doubly Balanced Network of Spiking Neurons: A Memory Model with High Capacity
16 0.059576131 45 nips-2003-Circuit Optimization Predicts Dynamic Networks for Chemosensory Orientation in Nematode C. elegans
17 0.055724278 184 nips-2003-The Diffusion-Limited Biochemical Signal-Relay Channel
18 0.04909331 139 nips-2003-Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression
19 0.0406624 79 nips-2003-Gene Expression Clustering with Functional Mixture Models
20 0.038552262 159 nips-2003-Predicting Speech Intelligibility from a Population of Neurons
topicId topicWeight
[(0, -0.135), (1, 0.108), (2, 0.432), (3, 0.107), (4, 0.184), (5, -0.026), (6, -0.258), (7, 0.01), (8, 0.066), (9, -0.019), (10, -0.084), (11, -0.098), (12, -0.134), (13, -0.039), (14, -0.104), (15, -0.131), (16, -0.041), (17, 0.017), (18, 0.036), (19, -0.116), (20, 0.087), (21, 0.136), (22, 0.086), (23, 0.001), (24, 0.09), (25, -0.058), (26, 0.051), (27, 0.002), (28, 0.095), (29, 0.06), (30, -0.053), (31, -0.015), (32, -0.044), (33, -0.016), (34, 0.022), (35, -0.044), (36, -0.009), (37, -0.038), (38, 0.058), (39, -0.063), (40, 0.004), (41, -0.039), (42, -0.026), (43, -0.017), (44, 0.011), (45, -0.029), (46, 0.02), (47, 0.041), (48, 0.005), (49, -0.037)]
simIndex simValue paperId paperTitle
same-paper 1 0.98170733 18 nips-2003-A Summating, Exponentially-Decaying CMOS Synapse for Spiking Neural Systems
Author: Rock Z. Shi, Timothy K. Horiuchi
Abstract: Synapses are a critical element of biologically-realistic, spike-based neural computation, serving the role of communication, computation, and modification. Many different circuit implementations of synapse function exist with different computational goals in mind. In this paper we describe a new CMOS synapse design that separately controls quiescent leak current, synaptic gain, and time-constant of decay. This circuit implements part of a commonly-used kinetic model of synaptic conductance. We show a theoretical analysis and experimental data for prototypes fabricated in a commercially-available 1.5µm CMOS process. 1
2 0.80425179 183 nips-2003-Synchrony Detection by Analogue VLSI Neurons with Bimodal STDP Synapses
Author: Adria Bofill-i-petit, Alan F. Murray
Abstract: We present test results from spike-timing correlation learning experiments carried out with silicon neurons with STDP (Spike Timing Dependent Plasticity) synapses. The weight change scheme of the STDP synapses can be set to either weight-independent or weight-dependent mode. We present results that characterise the learning window implemented for both modes of operation. When presented with spike trains with different types of synchronisation the neurons develop bimodal weight distributions. We also show that a 2-layered network of silicon spiking neurons with STDP synapses can perform hierarchical synchrony detection. 1
3 0.75283492 93 nips-2003-Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons
Author: Thomas Natschläger, Wolfgang Maass
Abstract: We employ an efficient method using Bayesian and linear classifiers for analyzing the dynamics of information in high-dimensional states of generic cortical microcircuit models. It is shown that such recurrent circuits of spiking neurons have an inherent capability to carry out rapid computations on complex spike patterns, merging information contained in the order of spike arrival with previously acquired context information. 1
4 0.71302062 61 nips-2003-Entrainment of Silicon Central Pattern Generators for Legged Locomotory Control
Author: Francesco Tenore, Ralph Etienne-Cummings, M. A. Lewis
Abstract: We have constructed a second generation CPG chip capable of generating the necessary timing to control the leg of a walking machine. We demonstrate improvements over a previous chip by moving toward a significantly more versatile device. This includes a larger number of silicon neurons, more sophisticated neurons including voltage dependent charging and relative and absolute refractory periods, and enhanced programmability of neural networks. This chip builds on the basic results achieved on a previous chip and expands its versatility to get closer to a self-contained locomotion controller for walking robots. 1
5 0.65312618 129 nips-2003-Minimising Contrastive Divergence in Noisy, Mixed-mode VLSI Neurons
Author: Hsin Chen, Patrice Fleury, Alan F. Murray
Abstract: This paper presents VLSI circuits with continuous-valued probabilistic behaviour realized by injecting noise into each computing unit(neuron). Interconnecting the noisy neurons forms a Continuous Restricted Boltzmann Machine (CRBM), which has shown promising performance in modelling and classifying noisy biomedical data. The Minimising-Contrastive-Divergence learning algorithm for CRBM is also implemented in mixed-mode VLSI, to adapt the noisy neurons’ parameters on-chip. 1
6 0.62563628 10 nips-2003-A Low-Power Analog VLSI Visual Collision Detector
7 0.51913321 27 nips-2003-Analytical Solution of Spike-timing Dependent Plasticity Based on Synaptic Biophysics
8 0.49498573 11 nips-2003-A Mixed-Signal VLSI for Real-Time Generation of Edge-Based Image Vectors
9 0.46737358 16 nips-2003-A Recurrent Model of Orientation Maps with Simple and Complex Cells
10 0.37887481 125 nips-2003-Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Model
11 0.2957198 127 nips-2003-Mechanism of Neural Interference by Transcranial Magnetic Stimulation: Network or Single Neuron?
12 0.27704614 157 nips-2003-Plasticity Kernels and Temporal Statistics
13 0.27539405 13 nips-2003-A Neuromorphic Multi-chip Model of a Disparity Selective Complex Cell
14 0.26849067 45 nips-2003-Circuit Optimization Predicts Dynamic Networks for Chemosensory Orientation in Nematode C. elegans
15 0.22434457 160 nips-2003-Prediction on Spike Data Using Kernel Algorithms
16 0.18860576 185 nips-2003-The Doubly Balanced Network of Spiking Neurons: A Memory Model with High Capacity
17 0.1868445 184 nips-2003-The Diffusion-Limited Biochemical Signal-Relay Channel
18 0.16537564 165 nips-2003-Reasoning about Time and Knowledge in Neural Symbolic Learning Systems
19 0.15821971 159 nips-2003-Predicting Speech Intelligibility from a Population of Neurons
20 0.14790975 79 nips-2003-Gene Expression Clustering with Functional Mixture Models
topicId topicWeight
[(0, 0.019), (11, 0.024), (30, 0.012), (35, 0.039), (45, 0.016), (53, 0.067), (59, 0.455), (63, 0.066), (69, 0.012), (71, 0.041), (76, 0.032), (85, 0.048), (91, 0.067)]
simIndex simValue paperId paperTitle
same-paper 1 0.88126403 18 nips-2003-A Summating, Exponentially-Decaying CMOS Synapse for Spiking Neural Systems
Author: Rock Z. Shi, Timothy K. Horiuchi
Abstract: Synapses are a critical element of biologically-realistic, spike-based neural computation, serving the role of communication, computation, and modification. Many different circuit implementations of synapse function exist with different computational goals in mind. In this paper we describe a new CMOS synapse design that separately controls quiescent leak current, synaptic gain, and time-constant of decay. This circuit implements part of a commonly-used kinetic model of synaptic conductance. We show a theoretical analysis and experimental data for prototypes fabricated in a commercially-available 1.5µm CMOS process. 1
2 0.71035576 185 nips-2003-The Doubly Balanced Network of Spiking Neurons: A Memory Model with High Capacity
Author: Yuval Aviel, David Horn, Moshe Abeles
Abstract: A balanced network leads to contradictory constraints on memory models, as exemplified in previous work on accommodation of synfire chains. Here we show that these constraints can be overcome by introducing a 'shadow' inhibitory pattern for each excitatory pattern of the model. This is interpreted as a doublebalance principle, whereby there exists both global balance between average excitatory and inhibitory currents and local balance between the currents carrying coherent activity at any given time frame. This principle can be applied to networks with Hebbian cell assemblies, leading to a high capacity of the associative memory. The number of possible patterns is limited by a combinatorial constraint that turns out to be P=0.06N within the specific model that we employ. This limit is reached by the Hebbian cell assembly network. To the best of our knowledge this is the first time that such high memory capacities are demonstrated in the asynchronous state of models of spiking neurons. 1 In trod u ction Numerous studies analyze the different phases of unstructured networks of spiking neurons [1, 2]. These networks with random connectivity possess a phase of asynchronous activity, the asynchronous state (AS), which is the most interesting one from the biological perspective, since it is similar to physiological data. Unstructured networks, however, do not hold information in their connectivity matrix, and therefore do not store memories. Binary networks with ordered connectivity matrices, or structured networks, and their ability to store and retrieve memories, have been extensively studied in the past [3-8]. Applicability of these results to biologically plausible neuronal models is questionable. In particular, models of spiking neurons are known to have modes of synchronous global oscillations. Avoiding such modes, and staying in an AS, is a major constraint on networks of spiking neurons that is absent in most binary neural networks. As we will show below, it is this constraint that imposes a limit on capacity in our model. Existing associative memory models of spiking neurons have not strived for maximal pattern capacity [3, 4, 8]. Here, using an integrate-and-fire model, we embed structured synaptic connections in an otherwise unstructured network and study the capacity limit of the system. The system is therefore macroscopically unstructured, but microscopically structured. The unstructured network model is based on Brunel's [1] balanced network of integrate-and-fire neurons. In his model, the network possesses different phases, one of which is the AS. We replace his unstructured excitatory connectivity by a semistructured one, including a super-position of either synfire chains or Hebbian cell assemblies. The existence of a stable AS is a fundamental prerequisite of the system. There are two reasons for that: First, physiological measurements of cortical tissues reveal an irregular neuronal activity and an asynchronous population activity. These findings match the properties of the AS. Second, in term of information content, the entropy of the system is the highest when firing probability is uniformly distributed, as in an AS. In general, embedding one or two patterns will not destabilize the AS. Increasing the number of embedded patterns, however, will eventually destabilize the AS, leading to global oscillations. 
In previous work [9], we have demonstrated that the cause of AS instability is correlations between neurons that result from the presence of structure in the network. The patterns, be it Hebbian cell assemblies (HCA) or pools occurring in synfire chains (SFC), have an important characteristic: neurons that are members of the same pattern (or pool) share a large portion of their inputs. This common input correlates neuronal activities both when a pattern is activated and when both neurons are influenced by random activity. If too many patterns are embedded in the network, too many neurons become correlated due to common inputs, leading to globally synchronized deviations from mean activity. A qualitative understanding of this state of affairs is provided by a simple model of a threshold linear pair of neurons that receive n excitatory common, and correlated, inputs, and K-n excitatory, as well as K inhibitory, non-common uncorrelated inputs. Thinking of these neurons as belonging to a pattern or a pool within a network, we can obtain an interesting self-consistent result by assuming the correlation of the pair of neurons to be also the correlation in their common correlated input (as is likely to be the case in a network loaded with HCA or SFC). We find then [9] that there exists a critical pattern size, n c , below which correlations decay but above which correlations are amplified. Furthermore, the following scaling was found to exist (1) nc = rc K . Implications of this model for the whole network are that: (i) rc is independent of N, the size of the network, (ii) below nc the AS is stable, and (iii) above nc the AS is unstable. Using extensive computer simulations we were able [9] to validate all these predictions. In addition, keeping n nmin, by the requirement that n excitatory post-synaptic potentials (PSPs), on average, drive a neuron across its threshold. Since N>K and typically N>>K, together with Eq. (1) it follows that N >> (n min / rc ) . Hence rc and nmin set the lower bound of the network's size, 2 above which it is possible to embed a reasonable number of patterns in the network without losing the AS. In this paper we propose a solution that enables small n min and large r values, which in turn enables embedding a large number of patterns in much smaller networks. This is made possible by the doubly-balanced construction to be outlined below. 2 The double-balance principle Counteracting the excitatory correlations with inhibitory ones is the principle that will allow us to solve the problem. Since we deal with balanced networks, in which the mean excitatory input is balanced by an inhibitory one, we note that this principle imposes a second type of balancing condition, hence we refer to it as the double- balance principle. In the following, we apply this principle by introducing synaptic connections between any excitatory pattern and its randomly chosen inhibitory pattern. These inhibitory patterns, which we call shadow patterns, are activated after the excitatory patterns fire, but have no special in-pattern connectivity or structured projections onto other patterns. The premise is that correlations evolved in the excitatory patterns will elicit correlated inhibitory activity, thus balancing the network's average correlation level. The size of the shadow pattern has to be small enough, so that the global network activity will not be quenched, yet large enough, so that the excitatory correlation will be counteracted. 
A balanced network that is embedded with patterns and their shadow patterns will be referred to as a doubly balanced network (DBN), to be contrasted with the singly balanced network (SBN) where shadow patterns are absent. 3 3.1 Application of the double balance principle. The Network We model neuronal activity with the Integrate and Fire [10] model. All neurons have the same parameters: τ = 10ms , τ ref = 2.5ms , C=250pF. PSPs are modeled by a delta function with fixed delay. The number of synapses on a neuron is fixed and set to KE excitatory synapses from the local network, KE excitatory synapses from external sources and KI inhibitory synapses from the local network. See Aviel et al [9] for details. All synapses of each group will be given fixed values. It is allowed for one pre-synaptic neuron to make more than one connection to one postsynaptic neuron. The network possesses NE excitatory neurons and N I ≡ γN E inhibitory neurons. Connectivity is sparse, ε = 0.1 ). K E N E = K I N I = ε , (we use A Poisson process with rate vext=10Hz models the external source. If a neuron of population y innervates a neuron of population x its synaptic strength J xy is defined as J xE ≡ J 0 K E , J xI ≡ − gJ 0 with J0=10, and g=5. Note that J xI = − g γ KI J xE , hence g γ controls the balance between the two populations. Within an HCA pattern the neurons have high connection probability with one another. Here it is achieved by requiring L of the synapses of a neuron in the excitatory pattern to originate from within the pattern. Similarly, a neuron in the inhibitory shadow pattern dedicates L of its synapses to the associated excitatory pattern. In a SFC, each neuron in an excitatory pool is fed by L neurons from the previous pool. This forms a feed forward connectivity. In addition, when shadow pools are present, each neuron in a shadow pool is fed by L neurons from its associated excitatory pool. In both cases L = C L K E , with CL=2.5. The size of the excitatory patterns (i.e. the number of neurons participating in a pattern) or pools, nE, is also chosen to be proportional to K E (see Aviel et al. 2003 [9]), nE ≡ Cn K E , where Cn varies. This is a suitable choice, because of the behavior of the critical nc of Eq. (1), and is needed for the meaningful memory activity (of the HCA or SFC) to overcome synaptic noise. ~ The size of a shadow pattern is defined as nI ≡ d nE . This leads to the factor d, representing the relative strength of inhibitory and excitatory currents, due to a pattern or pool, affecting a neuron that is connected to both: d≡ (2) Thus it fixes nI = d − J xI nI J xE nE = gJ 0 K E d gd . = J0 K I γ ( )n . In the simulations reported below d varied between 1 γ g E and 3. Wiring the network is done in two stages, first all excitatory patterns are wired, and then random connections are added, complying with the fixed number of synapses. A volley of w spikes, normally distributed over time with width of 1ms, is used to ignite a memory pattern. In the case of SFC, the first pool is ignited, and under the right conditions the volley propagates along the chain without fading away and without destabilizing the AS. 3.2 Results First we show that the AS remains stable when embedding HCAs in a small DBN, whereas global oscillations take place if embedding is done without shadow pools. Figure 1 displays clearly the sustained activity of an HCA in the DBN. The same principle also enables embedding of SFCs in a small network. 
This is to be contrasted with the conclusions drawn in Aviel et al [9], where it was shown that otherwise very large networks are necessary to reach this goal. Figure 1: HCAs are embedded in a balanced network without (left) and with (right) shadow patterns. P=300 HCAs of size nE=194 excitatory neurons were embedded in a network of NE=15,000 excitatory neurons. The eleventh pattern is externally ignited at time t=100ms. A raster plot of 200ms is displayed. Without shadow patterns the network exhibits global oscillations, but with shadow patterns the network exhibits only minute oscillations, enabling the activity of the ignited pattern to be sustained. The size of the shadow patterns is set according to Eq. (2) with d=1. Neurons that participate in more than one HCA may appear more than once on the raster plot, whose y-axis is ordered according to HCAs, and represents every second neuron in each pattern. Figure 2: SFCs embedded in a balanced network without (left) and with (right) shadow patterns. The first pool is externally ignited at time t=100ms. d=0.5. The rest of the parameters are as in Figure 1. Here again, without shadow pools, the network exhibits global oscillations, but with shadow pools it has only minute oscillation, enabling a stable propagation of the synfire wave. 3.3 Maximum Capacity In this section we show that, within our DBN, it is the fixed number of synapses (rather than dynamical constraints) that dictates the maximal number of patterns or pools P that may be loaded onto the network. Let us start by noting that a neuron of population x (E or I) can participate in at most m ≡ K E L patterns, hence N x m sets an upper bound on the number of neurons that participate in all patterns: P n x P ≤ m ⋅ N x . Next, defining α x ≡ , we find that Nx αx ≤ (3) m nx = K E CL K E nx To leading order in NE this turns into K E CL K E N = C C D −1 N − O αxNx = n L x E E D C K (4) x n E ( ) ( NE ) (g γ ) if x=I, or 1 for x=E. where Dx ≡ d Thus we conclude that synaptic combinatorial considerations lead to a maximal number of patterns P. If DI<1, including the case DI=0 of the SBN, the excitatory neurons determine the limit to be P = (C n C L ) N E . If, as is the case in our DBN, −1 DI>1, then ( γα I < α E P = C n C L DI ) −1 and the inhibitory neurons set the maximum value to NE . For example, setting Cn=3.5, CL=2.4, g=3 and d=3, in Eq. (4), we get P=0.06NE. In Figure 3 we use these parameters. The capacity of a DBN is compared to that of an SBN for different network sizes. The maximal load is defined by the presence of global oscillation strong enough to prohibit sustained activity of patterns. The DBN reaches the combinatorial limit, whereas the SBN does not increase with N and obviously does not reach its combinatorial limit. 1400 1200 Pmax 1000 DBN SBN DBN Upper Limit SBN Upper Limit 800 600 400 200 0 0 5000 10000 NE 15000 Figure 3: A balanced network maximally loaded with HCAs. Left: A raster plot of a maximally loaded DBN. P=408, NE=6,000. At time t=450ms, the seventh pattern is ignited for a duration of 10ms, leading to termination of another pattern's activity (upper stripe) and to sustained activity of the ignited pattern (lower stripe). Right: P(NE) as inferred from simulations of a SBN (
3 0.70383847 169 nips-2003-Sample Propagation
Author: Mark A. Paskin
Abstract: Rao–Blackwellization is an approximation technique for probabilistic inference that flexibly combines exact inference with sampling. It is useful in models where conditioning on some of the variables leaves a simpler inference problem that can be solved tractably. This paper presents Sample Propagation, an efficient implementation of Rao–Blackwellized approximate inference for a large class of models. Sample Propagation tightly integrates sampling with message passing in a junction tree, and is named for its simple, appealing structure: it walks the clusters of a junction tree, sampling some of the current cluster’s variables and then passing a message to one of its neighbors. We discuss the application of Sample Propagation to conditional Gaussian inference problems such as switching linear dynamical systems. 1
4 0.39730218 183 nips-2003-Synchrony Detection by Analogue VLSI Neurons with Bimodal STDP Synapses
Author: Adria Bofill-i-petit, Alan F. Murray
Abstract: We present test results from spike-timing correlation learning experiments carried out with silicon neurons with STDP (Spike Timing Dependent Plasticity) synapses. The weight change scheme of the STDP synapses can be set to either weight-independent or weight-dependent mode. We present results that characterise the learning window implemented for both modes of operation. When presented with spike trains with different types of synchronisation the neurons develop bimodal weight distributions. We also show that a 2-layered network of silicon spiking neurons with STDP synapses can perform hierarchical synchrony detection. 1
5 0.37501645 10 nips-2003-A Low-Power Analog VLSI Visual Collision Detector
Author: Reid R. Harrison
Abstract: We have designed and tested a single-chip analog VLSI sensor that detects imminent collisions by measuring radially expansive optic flow. The design of the chip is based on a model proposed to explain leg-extension behavior in flies during landing approaches. A new elementary motion detector (EMD) circuit was developed to measure optic flow. This EMD circuit models the bandpass nature of large monopolar cells (LMCs) immediately postsynaptic to photoreceptors in the fly visual system. A 16 × 16 array of 2-D motion detectors was fabricated on a 2.24 mm × 2.24 mm die in a standard 0.5-µm CMOS process. The chip consumes 140 µW of power from a 5 V supply. With the addition of wide-angle optics, the sensor is able to detect collisions around 500 ms before impact in complex, real-world scenes. 1
6 0.36796549 129 nips-2003-Minimising Contrastive Divergence in Noisy, Mixed-mode VLSI Neurons
7 0.36387354 16 nips-2003-A Recurrent Model of Orientation Maps with Simple and Complex Cells
8 0.33850694 61 nips-2003-Entrainment of Silicon Central Pattern Generators for Legged Locomotory Control
9 0.32716572 27 nips-2003-Analytical Solution of Spike-timing Dependent Plasticity Based on Synaptic Biophysics
10 0.3140581 125 nips-2003-Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Model
11 0.30803752 93 nips-2003-Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons
12 0.2949712 13 nips-2003-A Neuromorphic Multi-chip Model of a Disparity Selective Complex Cell
13 0.29365253 177 nips-2003-Simplicial Mixtures of Markov Chains: Distributed Modelling of Dynamic User Profiles
14 0.28844243 80 nips-2003-Generalised Propagation for Fast Fourier Transforms with Partial or Missing Data
15 0.28362176 101 nips-2003-Large Margin Classifiers: Convex Loss, Low Noise, and Convergence Rates
16 0.27051702 127 nips-2003-Mechanism of Neural Interference by Transcranial Magnetic Stimulation: Network or Single Neuron?
17 0.26044652 113 nips-2003-Learning with Local and Global Consistency
18 0.26042795 126 nips-2003-Measure Based Regularization
19 0.25988054 107 nips-2003-Learning Spectral Clustering
20 0.25868857 20 nips-2003-All learning is Local: Multi-agent Learning in Global Reward Games