nips nips2001 nips2001-49 knowledge-graph by maker-knowledge-mining

49 nips-2001-Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning


Source: pdf

Author: A. Bofill, D. P. Thompson, Alan F. Murray

Abstract: Experimental data has shown that synaptic strength modification in some types of biological neurons depends upon precise spike timing differences between presynaptic and postsynaptic spikes. Several temporally-asymmetric Hebbian learning rules motivated by this data have been proposed. We argue that such learning rules are suitable for analog VLSI implementation. We describe an easily tunable circuit to modify the weight of a silicon spiking neuron according to those learning rules. Test results from the fabrication of the circuit using a 0.6µm CMOS process are given.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Experimental data has shown that synaptic strength modification in some types of biological neurons depends upon precise spike timing differences between presynaptic and postsynaptic spikes. [sent-16, score-1.399]

2 Several temporally-asymmetric Hebbian learning rules motivated by this data have been proposed. [sent-17, score-0.087]

3 We argue that such learning rules are suitable for analog VLSI implementation. [sent-18, score-0.17]

4 We describe an easily tunable circuit to modify the weight of a silicon spiking neuron according to those learning rules. [sent-19, score-0.51]

5 Test results from the fabrication of the circuit using a 0.6µm CMOS process are given. [sent-20, score-0.224]

6 1 Introduction. Hebbian learning rules modify the weights of synapses according to correlations between activity at the input and the output of neurons. [sent-23, score-0.147]

7 Most artificial neural networks using Hebbian learning are based on pulse-rate correlations between continuous-valued signals; they reduce the neural spike trains to mean firing rates and thus precise timing does not carry information. [sent-24, score-0.459]

8 With this approach the spiking nature of biological neurons is just an efficient solution that evolution has produced to transmit analog information over an unreliable medium. [sent-25, score-0.217]
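
For contrast with the timing-based rules discussed next, a rate-based Hebbian update can be stated in one line. The Python sketch below is purely illustrative, with an arbitrary learning rate eta; it is not code from the paper.

def hebbian_rate_update(w, x_rate, y_rate, eta=0.01):
    """Classic rate-based Hebbian step: only the product of mean presynaptic
    and postsynaptic firing rates matters; spike timing plays no role."""
    return w + eta * x_rate * y_rate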

9 In recent years, recorded data have indicated that synaptic strength modifications are also induced by timing differences between pairs of presynaptic and postsynaptic spikes [1][2]. [sent-26, score-1.008]

10 A class of learning rules derived from these experimental data is illustrated in Figure 1 [2]-[4]. [sent-27, score-0.087]

11 The "causal/non-causal" basis of these Hebbian learning algorithms is present in all variants of this spike-timing dependent weight modification rule. [sent-28, score-0.224]

12 When the presynaptic spike arrives at the synapse a few milliseconds [Figure 1 panel labels: presynaptic spike (tpre), postsynaptic spike (tpost)] [sent-29, score-3.445]

13 Figure 1: Two temporally-asymmetric Hebbian learning rules drawing on experimental data (panels (a) and (b) plot ΔW against tpre - tpost). [sent-32, score-0.545]

14 The curves show the shape of the weight change (ΔW) for differences between the firing times of the presynaptic (tpre) and the postsynaptic (tpost) neurons. [sent-33, score-1.018]

15 When the presynaptic spike arrives at the synapse a few ms before the postsynaptic neuron fires, the weight of the synapse is increased. [sent-34, score-1.577]

16 If the postsynaptic neuron fires first, the weight is decreased. [sent-35, score-0.702]

17 before an output spike is generated, the synaptic efficiency increases. [sent-36, score-0.353]

18 In contrast, when the postsynaptic neuron fires first, the efficiency of the synapse is weakened. [sent-37, score-0.748]

19 Hence, only those synapses that receive spikes that appear to contribute to the generation of the postsynaptic spike are reinforced. [sent-38, score-0.865]

20 In [5] a similar learning rule based on spike-timing differences has been used to learn input sequence prediction in a recurrent network. [sent-39, score-0.023]

21 Studies reported in [4] indicate that the positive (potentiation) element of the learning curve must be smaller than the negative (depression) element to obtain stable competitive weight modification. [sent-40, score-0.179]
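
The window in Figure 1 is commonly modelled as a pair of exponentials, one on each side of tpre - tpost = 0. The sketch below is a minimal Python model of such a rule, not code from the paper; the amplitudes and time constant are illustrative values, with the potentiation amplitude chosen smaller than the depression amplitude in line with the stability requirement above.

import math

# Illustrative parameters (assumptions, not values from the paper):
# potentiation and depression amplitudes and a shared time constant.
A_PLUS, A_MINUS = 0.8, 1.0  # A_PLUS < A_MINUS keeps the potentiation area smaller
TAU = 20e-3                 # learning-window time constant, in seconds

def weight_change(t_pre, t_post):
    """Temporally asymmetric Hebbian rule: potentiate when the presynaptic
    spike precedes the postsynaptic one, depress otherwise."""
    dt = t_post - t_pre
    if dt > 0:  # causal pairing: pre fired before post, weight increases
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)  # non-causal pairing, weight decreases

# A presynaptic spike 5 ms before the postsynaptic spike potentiates;
# reversing the order depresses.
print(weight_change(0.0, 5e-3), weight_change(5e-3, 0.0))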

22 Pulse signal representation has been used extensively in hardware implementations of artificial neural networks [6] [7]. [sent-41, score-0.09]

23 Such systems use pulses as a mere technological solution to benefit from the robustness of binary signal transmission while making use of analog circuitry for the elementary computation units. [sent-42, score-0.2]

24 However, they do not exploit the relative timing differences between individual pulses to compute. [sent-43, score-0.161]

25 Also, analog hardware is not well-suited to the complexity of most artificial neural network algorithms. [sent-44, score-0.146]

26 An analog VLSI implementation of a similar, but more complex, spike-timing dependent learning rule can be found in [8]. [sent-46, score-0.137]

27 We describe a circuit that implements the spike-timing dependent weight change described above, along with test results from a fabricated chip. [sent-47, score-0.448]

28 We have focused on the implementation of the weight modification circuits, as VLSI spiking neurons with tunable membrane time constant and refractory period have already been proposed in [9] and [10]. [sent-48, score-0.38]

29 2 Learning circuit description. Figure 2 shows the weight change circuit and Figure 3 the form of the signals required to drive learning. [sent-49, score-0.665]

30 These driving signals are generated by the circuits described in Figure 4. [sent-50, score-0.241]

31 The voltage across the weight capacitor, Cw in Figure 2, is modified according to the spike-timing dependent weight change rule discussed above. [sent-51, score-0.42]

32 The weight change, ΔW, is defined as -ΔVw so that the leakage of the capacitor leads Vw in the direction of weight decay. [sent-52, score-0.231]

33 Figure 2: Weight change circuit. Figure 3: Stimulus for the weight change circuit (panels (a) and (b): up and down pulses around the postsynaptic spike). The weight change circuit of Figure 2 works as follows. [sent-54, score-1.78]

34 When a falling edge of either a postsynaptic or a presynaptic spike occurs, a short activation pulse is generated which causes Cdec to be charged to Vpeak through transistor N1. [sent-55, score-1.56]

35 The charge accumulated in Cdec will leak to ground at a rate set by Vdecay. The resulting voltage at the gate of N3 produces a current flowing through P2-P3-N4. [sent-56, score-0.343]

36 If a presynaptic spike is active after the falling edge of a postsynaptic spike, an active-low up pulse is applied to the gate of transistor P5. [sent-57, score-1.793]

37 Thus, the current flowing through N3 is mirrored to transistor P4, causing an increase in the voltage across Cw that corresponds to a decrease in the weight. [sent-58, score-0.352]

38 In contrast, when a presynaptic spike precedes a postsynaptic spike, an active-high down pulse is generated and the current in N3 is mirrored to N5-N6, resulting in a discharge of Cw. [sent-59, score-1.577]

39 As the current in N2 is constant, the current integrated by Cw displays an exponential decay if Vpeak is such that N3 is in sub-threshold mode. [sent-60, score-0.05]

40 Hence, the rate of decay of the learning curve is fixed by the ratio of the N2 discharge current to Cdec. [sent-61, score-0.136]
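
The mechanism just described admits a simple behavioral model: Cdec is charged to Vpeak and then discharged at a constant current, so the gate voltage of N3 falls linearly and, with N3 in sub-threshold, its drain current decays exponentially. The Python sketch below captures that idealized behavior; all numeric values (capacitances, currents, the sub-threshold slope factor) are assumptions chosen to give millisecond-scale decay, not measurements from the chip.

import math

# Assumed, illustrative component values (not chip measurements).
V_PEAK = 0.694   # initial voltage on Cdec after the activation pulse, volts
I_LEAK = 7e-12   # constant discharge current set by Vdecay, amps
C_DEC = 1e-12    # decay capacitor, farads
KAPPA, U_T = 0.7, 0.025  # sub-threshold slope factor and thermal voltage
I_0 = 1e-18      # effective sub-threshold pre-exponential current of N3, amps

def n3_current(t):
    """Drain current of N3 at time t after the activation pulse: the gate
    voltage falls linearly, so the sub-threshold current decays exponentially."""
    v_gate = max(V_PEAK - (I_LEAK / C_DEC) * t, 0.0)
    return I_0 * math.exp(KAPPA * v_gate / U_T)

def delta_vw(t_delay, t_pulse, c_w=1e-12, steps=1000):
    """Magnitude of the voltage change on Cw when the mirrored N3 current
    is integrated for t_pulse seconds, starting t_delay after activation."""
    dt = t_pulse / steps
    charge = sum(n3_current(t_delay + k * dt) * dt for k in range(steps))
    return charge / c_w

# Longer pre/post delays see a smaller N3 current, hence a smaller update.
print(delta_vw(2e-3, 1e-3), delta_vw(5e-3, 1e-3))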

41 The abruptness of the transition zone between potentiation and depression is set by the duration of both the presynaptic and postsynaptic spike. [sent-62, score-1.108]

42 Finally, an imbalance between the areas under the positive and negative sides of the curve can be introduced via Vdep and Vpot. [sent-63, score-0.06]

43 The effect of all these circuit parameters is exemplified by the test results shown in the following section. [sent-64, score-0.248]

44 Figure 4: Learning drivers (signals act, down and post_spike; panels (a) and (b)). [sent-65, score-0.074]

45 (b) Asynchronous controller for up and down signals. The circuit of Figure 4(a), present in both the presynaptic and postsynaptic neurons, generates a short act pulse on the falling edge of the output spike. [sent-67, score-1.502]

46 The act pulses are ORed at each synapse to produce the activation pulse applied to the weight change circuit of Figure 2. [sent-68, score-0.827]

47 The other two driving signals, up and down, are produced by a small asynchronous controller using standard and asymmetric C-elements [11], shown in Figure 4(b). [sent-69, score-0.211]

48 The internal signal q indicates whether the last falling edge to occur corresponds to a presynaptic (q = 1) or a postsynaptic spike (q = 0). [sent-70, score-1.079]

49 This ensures that an up signal, which decreases the weight, is generated only when a presynaptic spike is active after the falling edge of a postsynaptic spike. [sent-71, score-1.387]

50 Similarly, down is activated only when the postsynaptic spike is active following the falling edge of a presynaptic spike. [sent-72, score-1.49]
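
The controller's behavior can be summarized in a few lines of code. The sketch below is a hypothetical behavioral model in Python driven by sampled spike levels; it abstracts away the C-element implementation of Figure 4(b) and simplifies the signal polarities (both outputs are returned as active-high booleans).

class LearningDriver:
    """Behavioral model of the asynchronous up/down controller: q records
    whether the last falling edge was presynaptic (1) or postsynaptic (0)."""

    def __init__(self):
        self.q = None  # no falling edge observed yet
        self.prev_pre = self.prev_post = False

    def step(self, pre, post):
        """Advance one time step given the current spike levels (booleans);
        returns the (up, down) pulses for the weight change circuit."""
        if self.prev_pre and not pre:    # falling edge of the presynaptic spike
            self.q = 1
        if self.prev_post and not post:  # falling edge of the postsynaptic spike
            self.q = 0
        up = pre and self.q == 0     # pre active after a postsynaptic falling edge
        down = post and self.q == 1  # post active after a presynaptic falling edge
        self.prev_pre, self.prev_post = pre, post
        return up, down

Feeding this model a presynaptic pulse followed by a postsynaptic pulse yields down pulses, which discharge Cw and, since ΔW is defined as -ΔVw, increase the weight, as sentence 38 describes.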

51 Using the current flowing through N3 (Figure 2) to both increase and decrease the weight allows us to match the curve at the potentiation and depression regions at the expense of having to introduce the driving circuits of Figure 4. [sent-73, score-0.679]

52 3 Results from the temporally-asymmetric Hebbian chip. The circuit in Figure 2 has been fabricated in a 0.6µm CMOS process. [sent-74, score-0.323]

53 The driving signals (down, up and activation) are currently generated off-chip. [sent-77, score-0.139]

54 The circuit can be operated on the µs timescale; however, here we only present test results with time constants similar to those suggested by experimental data and studied using software models in [3]-[5]. [sent-78, score-0.273]

55 [Figure 5 trace labels: pre, post; Vpeak = 694mV, Tsp = 1ms, Tact = 50µs] [sent-82, score-0.216]

56 [Figure 5 trace labels: Vpot = 0V, Vdd - Vdep = 0V, Vdecay = 515mV, Vpeak = 694mV; pre/post delays of 2ms and 5ms] [sent-83, score-0.639]

57 (a) The voltage across Cw is initially set to 0V and increased by a sequence of consecutive pairs of pre and postsynaptic spikes. [sent-96, score-0.737]

58 The delays between presynaptic and postsynaptic firing times were set to 2ms, 5ms and 7.5ms. [sent-97, score-0.868]

59 (b) The order of pre and postsynaptic spikes is reversed to decrease Vw. [sent-98, score-0.724]

60 In both plots the duration of the spikes, Tsp, and the activation pulse, Tact, is set to 1ms and 50µs respectively. [sent-99, score-0.075]

61 The learning window plots shown in Figures 6-8 were constructed with test data from a sequence of consecutive presynaptic and postsynaptic spikes with different delays. [sent-100, score-0.988]

62 Before every new pair of presynaptic and postsynaptic spikes, the voltage in Cw was reset to Vw = 2V. [sent-101, score-0.926]

63 The weight change curves are similar for other initial "reset" weight voltages owing to the linearity of the learning circuit for different Vw values as shown in Figure 5. [sent-102, score-0.495]
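
That measurement protocol is easy to mirror in software: reset Vw, apply one pre/post pair at a given delay, record the change, and repeat over delays. The loop below is a hypothetical reconstruction of such a sweep using the illustrative exponential rule from the first sketch; it reproduces the shape of a learning-window plot, not the chip's numbers.

import math

A_PLUS, A_MINUS, TAU = 0.8, 1.0, 20e-3  # illustrative rule parameters
V_RESET = 2.0                           # reset voltage on Cw, as in the text

def dw(dt):
    """Toy learning window: dt is the postsynaptic minus presynaptic time."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

window = []
for d_ms in range(-10, 11):
    v_w = V_RESET              # Cw is reset before every spike pair
    v_w -= dw(d_ms * 1e-3)     # Delta_W is defined as -Delta_Vw
    window.append((d_ms, V_RESET - v_w))

for d_ms, w in window[::5]:
    print(f"delay {d_ms:+3d} ms -> Delta_W = {w:+.3f}")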

64 A power supply voltage of Vdd = 5V is used in all test results shown. [sent-103, score-0.126]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('postsynaptic', 0.464), ('presynaptic', 0.319), ('spike', 0.278), ('ov', 0.263), ('circuit', 0.224), ('vdep', 0.175), ('pulse', 0.155), ('vpeak', 0.146), ('pre', 0.142), ('vdd', 0.129), ('tpost', 0.127), ('falling', 0.116), ('hebbian', 0.115), ('potentiation', 0.108), ('depression', 0.108), ('synapse', 0.108), ('cw', 0.106), ('voltage', 0.102), ('vw', 0.102), ('circuits', 0.102), ('tpre', 0.102), ('weight', 0.096), ('post', 0.093), ('vlsi', 0.092), ('spikes', 0.088), ('fires', 0.088), ('tact', 0.088), ('vdecay', 0.088), ('analog', 0.083), ('pot', 0.076), ('flowing', 0.076), ('act', 0.074), ('driving', 0.074), ('modification', 0.074), ('signals', 0.065), ('pulses', 0.065), ('rules', 0.064), ('transistor', 0.061), ('curve', 0.06), ('abruptness', 0.058), ('bofill', 0.058), ('een', 0.058), ('mirrored', 0.058), ('thompson', 0.058), ('vpot', 0.058), ('timing', 0.057), ('change', 0.056), ('neuron', 0.054), ('decay', 0.053), ('edge', 0.052), ('ight', 0.051), ('murray', 0.051), ('activation', 0.049), ('tsp', 0.046), ('spiking', 0.045), ('firing', 0.044), ('edinburgh', 0.043), ('tunable', 0.043), ('cmos', 0.043), ('synaptic', 0.041), ('ec', 0.041), ('fabricated', 0.041), ('delays', 0.041), ('reset', 0.041), ('differences', 0.039), ('neurons', 0.039), ('asynchronous', 0.039), ('capacitor', 0.039), ('arrives', 0.037), ('uk', 0.036), ('synapses', 0.035), ('gate', 0.035), ('active', 0.035), ('efficiency', 0.034), ('hardware', 0.033), ('controller', 0.033), ('cd', 0.032), ('dependent', 0.031), ('po', 0.031), ('artificial', 0.03), ('decrease', 0.03), ('consecutive', 0.029), ('signal', 0.027), ('precise', 0.027), ('duration', 0.026), ('embrane', 0.025), ('charged', 0.025), ('mere', 0.025), ('milliseconds', 0.025), ('operated', 0.025), ('transmit', 0.025), ('zone', 0.025), ('ms', 0.025), ('current', 0.025), ('modify', 0.025), ('biological', 0.025), ('test', 0.024), ('learning', 0.023), ('leak', 0.023), ('dep', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 49 nips-2001-Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

Author: A. Bofill, D. P. Thompson, Alan F. Murray

Abstract: Experimental data has shown that synaptic strength modification in some types of biological neurons depends upon precise spike timing differences between presynaptic and postsynaptic spikes. Several temporally-asymmetric Hebbian learning rules motivated by this data have been proposed. We argue that such learning rules are suitable for analog VLSI implementation. We describe an easily tunable circuit to modify the weight of a silicon spiking neuron according to those learning rules. Test results from the fabrication of the circuit using a 0.6µm CMOS process are given.

2 0.29947674 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

Author: Jesper Tegnér, Ádám Kepecs

Abstract: Hebbian learning rules are generally formulated as static rules. Under changing conditions (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the additive STDP more robust. Finally, we give an example of how metaplasticity of the adaptive rule can be used to guide STDP into different types of learning regimes.

3 0.28205851 112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

Author: Aaron P. Shon, David Hsu, Chris Diorio

Abstract: We have designed and fabricated a VLSI synapse that can learn a conditional probability or correlation between spike-based inputs and feedback signals. The synapse is low power, compact, provides nonvolatile weight storage, and can perform simultaneous multiplication and adaptation. We can calibrate arrays of synapses to ensure uniform adaptation characteristics. Finally, adaptation in our synapse does not necessarily depend on the signals used for computation. Consequently, our synapse can implement learning rules that correlate past and present synaptic activity. We provide analysis and experimental chip results demonstrating the operation in learning and calibration mode, and show how to use our synapse to implement various learning rules in silicon.

4 0.18198276 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas

Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.

5 0.1667463 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

Author: B. D. Wright, Kamal Sen, William Bialek, A. J. Doupe

Abstract: In nature, animals encounter high dimensional sensory stimuli that have complex statistical and dynamical structure. Attempts to study the neural coding of these natural signals face challenges both in the selection of the signal ensemble and in the analysis of the resulting neural responses. For zebra finches, naturalistic stimuli can be defined as sounds that they encounter in a colony of conspecific birds. We assembled an ensemble of these sounds by recording groups of 10-40 zebra finches, and then analyzed the response of single neurons in the songbird central auditory area (field L) to continuous playback of long segments from this ensemble. Following methods developed in the fly visual system, we measured the information that spike trains provide about the acoustic stimulus without any assumptions about which features of the stimulus are relevant. Preliminary results indicate that large amounts of information are carried by spike timing, with roughly half of the information accessible only at time resolutions better than 10 ms; additional information is still being revealed as time resolution is improved to 2 ms. Information can be decomposed into that carried by the locking of individual spikes to the stimulus (or modulations of spike rate) vs. that carried by timing in spike patterns. Initial results show that in field L, temporal patterns give at least % extra information. Thus, single central auditory neurons can provide an informative representation of naturalistic sounds, in which spike timing may play a significant role.   

6 0.15528487 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

7 0.13808733 34 nips-2001-Analog Soft-Pattern-Matching Classifier using Floating-Gate MOS Technology

8 0.11134611 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

9 0.097221956 176 nips-2001-Stochastic Mixed-Signal VLSI Architecture for High-Dimensional Kernel Machines

10 0.089696981 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

11 0.082373865 2 nips-2001-3 state neurons for contextual processing

12 0.076708727 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

13 0.063251846 37 nips-2001-Associative memory in realistic neuronal networks

14 0.063148782 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

15 0.06041703 96 nips-2001-Information-Geometric Decomposition in Spike Analysis

16 0.055900581 33 nips-2001-An Efficient Clustering Algorithm Using Stochastic Association Model and Its Implementation Using Nanostructures

17 0.048657559 48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance

18 0.041700698 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

19 0.03916505 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

20 0.038160771 23 nips-2001-A theory of neural integration in the head-direction system


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.11), (1, -0.248), (2, -0.137), (3, 0.071), (4, 0.148), (5, 0.049), (6, 0.165), (7, -0.036), (8, -0.096), (9, -0.086), (10, 0.071), (11, -0.313), (12, 0.161), (13, 0.033), (14, -0.135), (15, 0.132), (16, -0.0), (17, -0.015), (18, 0.138), (19, 0.246), (20, -0.007), (21, -0.09), (22, -0.07), (23, -0.058), (24, 0.014), (25, -0.104), (26, -0.041), (27, 0.011), (28, -0.02), (29, -0.046), (30, -0.006), (31, 0.134), (32, 0.115), (33, -0.0), (34, 0.008), (35, 0.008), (36, -0.007), (37, -0.109), (38, -0.019), (39, -0.006), (40, 0.002), (41, 0.059), (42, -0.111), (43, 0.093), (44, -0.074), (45, -0.022), (46, -0.034), (47, 0.075), (48, 0.063), (49, 0.151)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98492968 49 nips-2001-Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

Author: A. Bofill, D. P. Thompson, Alan F. Murray

Abstract: Experimental data has shown that synaptic strength modification in some types of biological neurons depends upon precise spike timing differences between presynaptic and postsynaptic spikes. Several temporally-asymmetric Hebbian learning rules motivated by this data have been proposed. We argue that such learning rules are suitable for analog VLSI implementation. We describe an easily tunable circuit to modify the weight of a silicon spiking neuron according to those learning rules. Test results from the fabrication of the circuit using a 0.6µm CMOS process are given.

2 0.8836078 112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

Author: Aaron P. Shon, David Hsu, Chris Diorio

Abstract: We have designed and fabricated a VLSI synapse that can learn a conditional probability or correlation between spike-based inputs and feedback signals. The synapse is low power, compact, provides nonvolatile weight storage, and can perform simultaneous multiplication and adaptation. We can calibrate arrays of synapses to ensure uniform adaptation characteristics. Finally, adaptation in our synapse does not necessarily depend on the signals used for computation. Consequently, our synapse can implement learning rules that correlate past and present synaptic activity. We provide analysis and experimental chip results demonstrating the operation in learning and calibration mode, and show how to use our synapse to implement various learning rules in silicon. 1 I n tro d u cti o n Computation with conditional probabilities and correlations underlies many models of neurally inspired information processing. For example, in the sequence-learning neural network models proposed by Levy [1], synapses store the log conditional probability that a presynaptic spike occurred given that the postsynaptic neuron spiked sometime later. Boltzmann machine synapses learn the difference between the correlations of pairs of neurons in the sleep and wake phase [2]. In most neural models, computation and adaptation occurs at the synaptic level. Hence, a silicon synapse that can learn conditional probabilities or correlations between pre- and post-synaptic signals can be a key part of many silicon neural-learning architectures. We have designed and implemented a silicon synapse, in a 0.35µm CMOS process, that learns a synaptic weight that corresponds to the conditional probability or correlation between binary input and feedback signals. This circuit utilizes floating-gate transistors to provide both nonvolatile storage and weight adaptation mechanisms [3]. In addition, the circuit is compact, low power, and provides simultaneous adaptation and computation. Our circuit improves upon previous implementations of floating-gate based learning synapses [3,4,5] in several ways. First, our synapse appears to be the first spike-based floating-gate synapse that implements a general learning principle, rather than a particular learning rule [4,5]. We demon- strate that our synapse can learn either the conditional probability or the correlation between input and feedback signals. Consequently, we can implement a wide range of synaptic learning networks with our circuit. Second, unlike the general correlational learning synapse proposed by Hasler et. al. [3], our synapse can implement learning rules that correlate pre- and postsynaptic activity that occur at different times. Learning algorithms that employ time-separated correlations include both temporal difference learning [6] and recently postulated temporally asymmetric Hebbian learning [7]. Hasler’s correlational floating-gate synapse can only perform updates based on the present input and feedback signals, and is therefore unsuitable for learning rules that correlate signals that occur at different times. Because signals that control adaptation and computation in our synapse are separate, our circuit can implement these time-dependent learning rules. Finally, we can calibrate our synapses to remove mismatch between the adaptation mechanisms of individual synapses. Mismatch between the same adaptation mechanisms on different floating-gate transistors limits the accuracy of learning rules based on these devices. 
This problem has been noted in previous circuits that use floating-gate adaptation [4,8]. In our circuit, different synapses can learn widely divergent weights from the same inputs because of component mismatch. We provide a calibration mechanism that enables identical adaptation across multiple synapses despite device mismatch. To our knowledge, this circuit is the first instance of a floating-gate learning circuit that includes this feature. This paper is organized as follows. First, we provide a brief introduction to floating-gate transistors. Next, we provide a description and analysis of our synapse, demonstrating that it can learn the conditional probability or correlation between a pair of binary signals. We then describe the calibration circuitry and show its effectiveness in compensating for adaptation mismatches. Finally, we discuss how this synapse can be used for silicon implementations of various learning networks. 2 Floating-gate transistors Because our circuit relies on floating-gate transistors to achieve adaptation, we begin by briefly discussing these devices. A floating-gate transistor (e.g. transistor M3 of Fig.1(a)) comprises a MOSFET whose gate is isolated on all sides by SiO2. A control gate capacitively couples signals to the floating gate. Charge stored on the floating gate implements a nonvolatile analog weight; the transistor’s output current varies with both the floating-gate voltage and the control-gate voltage. We use Fowler-Nordheim tunneling [9] to increase the floating-gate charge, and impact-ionized hot-electron injection (IHEI) [10] to decrease the floating-gate charge. We tunnel by placing a high voltage on a tunneling implant, denoted by the arrow in Fig.1(a). We inject by imposing more than about 3V across the drain and source of transistor M3. The circuit allows simultaneous adaptation and computation, because neither tunneling nor IHEI interfere with circuit operation. Over a wide range of tunneling voltages Vtun, we can approximate the magnitude of the tunneling current Itun as [4]: I tun = I tun 0 exp (Vtun − V fg ) / Vχ (1) where Vtun is the tunneling-implant voltage, Vfg is the floating-gate voltage, and Itun0 and Vχ are fit constants. Over a wide range of transistor drain and source voltages, we can approximate the magnitude of the injection current Iinj as [4]: 1−U t / Vγ I inj = I inj 0 I s exp ( (Vs − Vd ) / Vγ ) (2) where Vs and Vd are the drain and source voltages, Iinj0 is a pre-exponential current, Vγ is a constant that depends on the VLSI process, and Ut is the thermal voltage kT/q. 3 T h e s i l i co n s y n a p s e We show our silicon synapse in Fig.1. The synapse stores an analog weight W, multiplies W by a binary input Xin, and adapts W to either a conditional probability P(Xcor|Y) or a correlation P(XcorY). Xin is analogous to a presynaptic input, while Y is analogous to a postsynaptic signal or error feedback. Xcor is a presynaptic adaptation signal, and typically has some relationship with Xin. We can implement different learning rules by altering the relationship between Xcor and Xin. For some examples, see section 4. We now describe the circuit in more detail. The drain current of floating-gate transistor M4 represents the weight value W. Because the control gate of M4 is fixed, W depends solely on the charge on floating-gate capacitor C1. We can switch the drain current on or off using transistor M7; this switching action corresponds to a multiplication of the weight value W by a binary input signal, Xin. 
We choose values for the drain voltage of the M4 to prevent injection. A second floating-gate transistor M3, whose gate is also connected to C1, controls adaptation by injection and tunneling. Simultaneously high input signals Xcor and Y cause injection, increasing the weight. A high Vtun causes tunneling, decreasing the weight. We either choose to correlate a high Vtun with signal Y or provide a fixed high Vtun throughout the adaptation process. The choice determines whether the circuit learns a conditional probability or a correlation, respectively. Because the drain current sourced by M4 provides is the weight W, we can express W in terms of M4’s floating-gate voltage, Vfg. Vfg includes the effects of both the fixed controlgate voltage and the variable floating-gate charge. The expression differs depending on whether the readout transistor is operating in the subthreshold or above-threshold regime. We provide both expressions below: I 0 exp( − κ 2V fg /(1 + κ )U t ) W= κ V fg (1 + κ ) 2 β V0 − below threshold 2 (3) above threshold Here V0 is a constant that depends on the threshold voltage and on Vdd, Ut is the thermal voltage kT/q, κ is the floating-gate-to-channel coupling coefficient, and I 0 is a fixed bias current. Eq. 3 shows that W depends solely on Vfg, (all the other factors are constants). These equations differ slightly from standard equations for the source current through a transistor due to source degeneration caused by M 4. This degeneration smoothes the nonlinear relationship between Vfg and Is; its addition to the circuit is optional. 3.1 Weight adaptation Because W depends on Vfg, we can control W by tunneling or injecting transistor M3. In this section, we show that these mechanisms enable our circuit to learn the correlation or conditional probability between inputs Xcor (which we will refer to as X) and Y. Our analysis assumes that these statistics are fixed over some period during which adaptation occurs. The change in floating-gate voltage, and hence the weight, discussed below should therefore be interpreted in terms of the expected weight change due to the statistics of the inputs. We discuss learning of conditional probabilities; a slight change in the tunneling signal, described previously, allows us to learn correlations instead. We first derive the injection equation for the floating-gate voltage in terms of the joint probability P(X,Y) by considering the relationship between the input signals and Is, Vs, Vb Vtun M1 W eq (nA) 80 M2 60 40 C1 Xcor M4 M3 W M5 Xin Y o chip data − fit: P(X|Y)0.78 20 M6 0 M7 synaptic output 0.2 0.4 0.6 Pr(X|Y) 1 0.8 (b) 3.5 Fig. 1. (a) Synapse schematic. (b) Plot of equilibrium weight in the subthreshold regime versus the conditional probability P(X|Y), showing both experimental chip data and a fit from Eq.7 (c). Plot of equilibrium weight versus conditional probability in the above-threshold regime, again showing chip data and a fit from Eq.7. W eq (µA) (a). 3 2.5 2 0 o chip data − fit 0.2 0.4 0.6 Pr(X|Y) 0.8 1 (c) and Vd of M3. We assume that transistor M1 is in saturation, constraining Is at M3 to be constant. Presentation of a joint binary event (X,Y) closes nFET switches M5 and M6, pulling the drain voltage Vd of M3 to 0V and causing injection. Therefore the probability that Vd is low enough to cause injection is the probability of the joint event Pr(X,Y). By Eq.2 , the amount of the injection is also dependent on M3’s source voltage Vs. 
Because M3 is constrained to a fixed channel current, a drop in the floating-gate voltage, ∆Vfg, causes a drop in Vs of magnitude κ∆Vfg. Substituting these expressions into Eq.2 results in a floating-gate voltage update of: (dV fg / dt )inj = − I inj 0 Pr( X , Y ) exp(κ Vfg / Vγ ) (4) where Iinj0 also includes the constant source current. Eq.4 shows that the floating-gate voltage update due to injection is a function of the probability of the joint event (X,Y). Next we analyze the effects of tunneling on the floating-gate voltage. The origin of the tunneling signal determines whether the synapse is learning a conditional probability or a correlation. If the circuit is learning a conditional probability, occurrence of the conditioning event Y gates a corresponding high-voltage (~9V) signal onto the tunneling implant. Consequently, we can express the change in floating-gate voltage due to tunneling in terms of the probability of Y, and the floating-gate voltage. (dV fg / dt )tun = I tun 0 Pr(Y ) exp(−V fg / Vχ ) (5) Eq.5 shows that the floating-gate voltage update due to tunneling is a function of the probability of the event Y. 3.2 Weight equilibrium To demonstrate that our circuit learns P(X|Y), we show that the equilibrium weight of the synapse is solely a function of P(X|Y). The equilibrium weight of the synapse is the weight value where the expected weight change over time equals zero. This weight value corresponds to the floating-gate voltage where injection and tunneling currents are equal. To find this voltage, we equate Eq’s. 4 and 5 and solve: eq V fg = I inj 0 −1 log Pr( X | Y ) + log I tun 0 (κ / Vy + 1/ Vx ) (6) To derive the equilibrium weight, we substitute Eq.6 into Eq.3 and solve: I0 Weq = I inj 0 I tun 0 β V0 + η log where α = α Pr( X | Y ) I inj 0 I tun 0 below threshold 2 + log ( Pr( X | Y ) ) above threshold (7) κ2 κ2 and η = . (1 + κ )U t (κ / Vγ + 1/ Vχ ) (1 + κ )(κ / Vγ + 1/ Vχ ) Consequently, the equilibrium weight is a function of the conditional probability below threshold and a function of the log-squared conditional probability above threshold. Note that the equilibrium weight is stable because of negative feedback in the tunneling and injection processes. Therefore, the weight will always converge to the equilibrium value shown in Eq.7. Figs. 1(b) and (c) show the equilibrium weight versus the conditional P(X|Y) for both sub- and above-threshold circuits, along with fits to Eq.7. Note that both the sub- and above-threshold relationship between P(X|Y) and the equilibrium weight enables us to compute the probability of a vector of synaptic inputs X given a post-synaptic response Y. In both cases, we can apply the outputs currents of an array of synapses through diodes, and then add the resulting voltages via a capacitive voltage divider, resulting in a voltage that is a linear function of log P(X|Y). 3.3 Calibration circuitry Mismatch between injection and tunneling in different floating-gate transistors can greatly reduce the ability of our synapses to learn meaningful values. Experimental data from floating-gate transistors fabricated in a 0.35µm process show that injection varies by as much as 2:1 across a chip, and tunneling by up to 1.2:1. The effect of this mismatch on our synapses causes the weight equilibrium of different synapses to differ by a multiplicative gain. Fig.2 (b) shows the equilibrium weights of an array of six synapses exposed to identical input signals. 
3.3 Calibration circuitry

Mismatch between injection and tunneling in different floating-gate transistors can greatly reduce the ability of our synapses to learn meaningful values. Experimental data from floating-gate transistors fabricated in a 0.35 µm process show that injection varies by as much as 2:1 across a chip, and tunneling by up to 1.2:1. This mismatch causes the weight equilibria of different synapses to differ by a multiplicative gain. Fig. 2(b) shows the equilibrium weights of an array of six synapses exposed to identical input signals. The variation of the synaptic weights is of the same order of magnitude as the weights themselves, making large arrays of synapses all but useless for implementing many learning algorithms.

We alleviate this problem by calibrating our synapses to equalize the pre-exponential tunneling and injection constants. Because the dependence of the equilibrium weight on these constants is determined by the ratio Iinj0/Itun0, our calibration process changes Iinj0 to equalize the ratio of injection to tunneling across all synapses. We choose to calibrate injection because we can easily change Iinj0 by altering the drain current through M1. Our calibration procedure is a self-convergent memory write [11] that causes the equilibrium weight of every synapse to equal the current Ical. Calibration requires many operating cycles; during each cycle, we first increase the equilibrium weight of the synapse, and second, we let the synapse adapt to the new equilibrium weight.

Fig. 2. (a) Schematic of the calibrated synapse, with the signals used during the calibration procedure. (b) Equilibrium weights for the array of synapses shown in Fig. 1(a). (c) Equilibrium weights for the same array of synapses after calibration.

We create the calibrated synapse by modifying our original synapse according to Fig. 2(a). We convert M1 into a floating-gate transistor whose floating-gate charge sets M3's channel current, thereby providing control of the Iinj0 of Eq. 7. Transistor M8 modifies M1's gate charge by means of injection when both M9's gate and Vcal are low. M9's gate is only low when the equilibrium weight W is less than Ical. During calibration, injection and tunneling on M3 are continuously active. We apply a pulse train to Vcal; during each pulse period, Vcal is predominantly high. When Vcal is high, the synapse adapts toward its equilibrium weight. When Vcal pulses low, M8 injects, increasing the synapse's equilibrium weight W. We repeat this process until the equilibrium weight W matches Ical, causing M9's gate voltage to rise, disabling Vcal and with it injection. To ensure that a precalibrated synapse has an equilibrium weight below Ical, we use tunneling to erase all bias transistors prior to calibration. Fig. 2(c) shows the equilibrium weights of the six synapses after calibration. The data show that calibration can reduce the effect of mismatched adaptation on a synapse's learned weight to a small fraction of the weight itself.
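The calibration cycle just described reduces to a simple loop. The sketch below (continuing the constants defined above) models each Vcal pulse as a multiplicative bump to Iinj0 via M8, with a comparison against Ical standing in for M9; the bump factor, the use of the below-threshold branch of Eq. 7, and the example currents are all assumptions.

```python
ALPHA = KAPPA**2 / ((1 + KAPPA) * U_T * (KAPPA / V_GAMMA + 1.0 / V_CHI))

def equilibrium_weight(i_inj0, p_x_given_y):
    """Below-threshold equilibrium weight from Eq. 7."""
    return I_0 * (i_inj0 / I_TUN0 * p_x_given_y) ** ALPHA

def calibrate(i_inj0, i_cal, p_x_given_y=1.0, bump=1.05, max_cycles=1000):
    """Raise Iinj0 until the synapse's equilibrium weight reaches Ical."""
    for _ in range(max_cycles):
        if equilibrium_weight(i_inj0, p_x_given_y) >= i_cal:
            break                # M9's gate rises: Vcal (and injection) disabled
        i_inj0 *= bump           # Vcal pulses low: M8 injects onto M1's gate
    return i_inj0

# Two mismatched synapses end up with closely matched Iinj0 (and equilibria):
print(calibrate(0.5e-3, 2e-9), calibrate(1.0e-3, 2e-9))
```

Because the below-threshold equilibrium grows as a power of Iinj0, each bump moves the equilibrium weight monotonically toward the target, mirroring the self-convergent write.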
Because M1 is a floating-gate transistor, its parasitic gate-drain capacitance causes a mild dependence between M1's drain voltage and source current. Consequently, M3's floating-gate voltage now affects its source current (through M1's drain voltage), and we can model M3 as a source-degenerated pFET [3]. The new expression for the injection current in M3 is:

$$
\left(\frac{dV_{fg}}{dt}\right)_{inj} = -I_{inj0}\,\Pr(X,Y)\,\exp\!\left(V_{fg}\left(\frac{\kappa}{V_\gamma} - \frac{\kappa k_1}{U_t}\right)\right)
\tag{8}
$$

where k1 is close to zero. The new expression for injection slightly changes the α and η terms of the weight equilibrium in Eq. 7, although the qualitative relationship between the weight equilibrium and the conditional probability remains the same.

4 Implementing silicon synaptic learning rules

In this section we discuss how to implement a variety of learning rules from the computational-neurobiology and neural-network literature with our synapse circuit.

We can use our circuit to implement a Hebbian learning rule. Simultaneously activating both M5 and M6 is analogous to homosynaptic LTP based on synchronized pre- and postsynaptic signals, and activating tunneling with the postsynaptic Y is analogous to heterosynaptic LTD. In our synapse, we tie Xin and Xcor together and correlate Vtun with Y.

Our synapse is also capable of emulating a Boltzmann weight-update rule [2]. This weight-update rule derives from the difference between correlations among neurons when the network receives external input and when the network runs freely (denoted the clamped and unclamped phases, respectively). With weight decay, a Boltzmann synapse learns the difference between the correlations in the clamped and unclamped phases. We can create a Boltzmann synapse from a pair of our circuits, in which the effective weight is the difference between the weights of the two synapses. To implement a weight update, we update one silicon synapse based on pre- and postsynaptic signals in the clamped phase, and update the other synapse in the unclamped phase. We do this by sending Xin to Xcor of one synapse in the clamped phase, and sending Xin to Xcor of the other synapse in the unclamped phase. Vtun remains constant throughout adaptation.

Finally, we consider implementing a temporally asymmetric Hebbian learning rule [7] using our synapse. In temporally asymmetric Hebbian learning, a synapse exhibits LTP or LTD if the presynaptic input occurs before or after the postsynaptic response, respectively. We implement an asymmetric learning synapse using two of our circuits, where the synaptic weight is the difference between the weights of the two circuits. We show the circuit in Fig. 3.

Fig. 3. A method for achieving spike-time-dependent plasticity in silicon: a presynaptic and a postsynaptic neuron coupled through paired W+ and W− synapses, each with an injection path gated by an activation window.

Each neuron sends two signals: a neuronal output, and an adaptation time window that is active for some time afterwards. Therefore, the combined synapse receives two presynaptic signals and two postsynaptic signals. The relative timing of a postsynaptic response, Y, with the presynaptic input, X, determines whether the synapse undergoes LTP or LTD. If Y occurs before X, Y's time window correlates with X, causing injection on the negative synapse and decreasing the weight. If Y occurs after X, Y correlates with X's time window, causing injection on the positive synapse and increasing the weight. Hence, our circuit can use the relative timing between presynaptic and postsynaptic activity to implement learning.
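A behavioral sketch of this paired-synapse rule follows. The window width TAU, the fixed step size, and the function names are illustrative; in the circuit the updates follow the floating-gate dynamics of Section 3 rather than fixed increments.

```python
TAU = 20e-3   # assumed activation-window duration (s)

def stdp_update(w_pos, w_neg, t_pre, t_post, step=0.01):
    """One pre/post spike pair updates the W+ / W- synapse pair of Fig. 3."""
    if 0.0 < t_post - t_pre <= TAU:
        w_pos += step   # X precedes Y: Y falls in X's window -> inject on W+ (LTP)
    elif 0.0 < t_pre - t_post <= TAU:
        w_neg += step   # Y precedes X: X falls in Y's window -> inject on W- (LTD)
    return w_pos, w_neg  # the effective synaptic weight is w_pos - w_neg
```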
5 Conclusion

We have described a silicon synapse that implements a wide range of spike-based learning rules and that does not suffer from device mismatch. We have also described how we can implement various silicon learning networks using this synapse. In addition, although we have only analyzed the learning properties of the synapse for binary signals, we can instead use pulse-coded analog signals. One possible avenue for future work is to analyze the implications of different pulse-coded schemes on the circuit's adaptive behavior.

Acknowledgements

This work was supported by the National Science Foundation and by the Office of Naval Research. Aaron Shon was also supported by an NDSEG fellowship. We thank Anhai Doan and the anonymous reviewers for helpful comments.

References

[1] W. B. Levy, "A computational approach to hippocampal function," in R. D. Hawkins and G. H. Bower (eds.), Computational Models of Learning in Simple Neural Systems, The Psychology of Learning and Motivation vol. 23, pp. 243-305, San Diego, CA: Academic Press, 1989.
[2] D. H. Ackley, G. Hinton, and T. Sejnowski, "A learning algorithm for Boltzmann machines," Cognitive Science vol. 9, pp. 147-169, 1985.
[3] P. Hasler, B. A. Minch, J. Dugger, and C. Diorio, "Adaptive circuits and synapses using pFET floating-gate devices," in G. Cauwenberghs and M. Bayoumi (eds.), Learning in Silicon, pp. 33-65, Kluwer Academic, 1999.
[4] P. Hafliger, A Spike-Based Learning Rule and Its Implementation in Analog Hardware, Ph.D. thesis, ETH Zurich, 1999.
[5] C. Diorio, P. Hasler, B. A. Minch, and C. Mead, "A floating-gate MOS learning array with locally computed weight updates," IEEE Transactions on Electron Devices vol. 44(12), pp. 2281-2289, 1997.
[6] R. Sutton, "Learning to predict by the methods of temporal differences," Machine Learning vol. 3, pp. 9-44, 1988.
[7] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann, "Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs," Science vol. 275, pp. 213-215, 1997.
[8] A. Pesavento, T. Horiuchi, C. Diorio, and C. Koch, "Adaptation of current signals with floating-gate circuits," in Proceedings of the 7th International Conference on Microelectronics for Neural, Fuzzy, and Bio-Inspired Systems (MicroNeuro99), pp. 128-134, 1999.
[9] M. Lenzlinger and E. H. Snow, "Fowler-Nordheim tunneling into thermally grown SiO2," Journal of Applied Physics vol. 40(1), pp. 278-283, 1969.
[10] E. Takeda, C. Yang, and A. Miura-Hamada, Hot Carrier Effects in MOS Devices, San Diego, CA: Academic Press, 1995.
[11] C. Diorio, "A p-channel MOS synapse transistor with self-convergent memory writes," IEEE Journal of Solid-State Circuits vol. 36(5), pp. 816-822, 2001.

3 0.68282682 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

Author: Jesper Tegnér, Ádám Kepecs

Abstract: Hebbian learning rules are generally formulated as static rules. Under changing conditions (e.g., neuromodulation, input statistics), most rules are sensitive to their parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the additive STDP more robust. Finally, we give an example of how metaplasticity of the adaptive rule can be used to guide STDP into different types of learning regimes. 1

4 0.47080043 34 nips-2001-Analog Soft-Pattern-Matching Classifier using Floating-Gate MOS Technology

Author: Toshihiko Yamasaki, Tadashi Shibata

Abstract: A flexible pattern-matching analog classifier is presented in conjunction with a robust image representation algorithm called Principal Axes Projection (PAP). In the circuit, the functional form of matching is configurable in terms of the peak position, the peak height and the sharpness of the similarity evaluation. The test chip was fabricated in a 0.6-µm CMOS technology and successfully applied to hand-written pattern recognition and medical radiograph analysis using PAP as a feature extraction pre-processing step for robust image coding. The separation and classification of overlapping patterns is also experimentally demonstrated.

1 Introduction

Pattern classification using template matching techniques is a powerful tool in implementing human-like intelligent systems. However, the processing is computationally very expensive, consuming a lot of CPU time when implemented as software running on general-purpose computers. Therefore, software approaches are not practical for real-time applications. For systems working in mobile environments, in particular, they are not realistic because the memory and computational resources are severely limited. The development of analog VLSI chips having a fully parallel template matching architecture [1,2] would be a promising solution in such applications because they offer an opportunity for low-power operation as well as very compact implementation. In order to build a real human-like intelligent system, however, not only the pattern representation algorithm but also the matching hardware itself needs to be made flexible and robust in carrying out the pattern matching task. First of all, two-dimensional patterns need to be represented by feature vectors having substantially reduced dimensions, while at the same time preserving the human perception of similarity among patterns in the vector space mapping. For this purpose, an image representation algorithm called Principal Axes Projection (PAP) has been developed [3] and its robust nature in pattern recognition has been demonstrated in applications to medical radiograph analysis [3] and hand-written digit recognition [4]. However, the demonstration so far was only carried out by computer simulation. Regarding the matching hardware, high-flexibility analog template matching circuits have been developed for the PAP vector representation. The circuits are flexible in the sense that the matching criteria (the weight given to elements, the strictness of matching) are configurable. In Ref. [5], the fundamental characteristics of the building-block circuits were presented, and their application to simple hand-written digits was presented in Ref. [6]. The purpose of this paper is to demonstrate the robust nature of the hardware matching system by experiments. The classification of simple hand-written patterns and the cephalometric landmark identification in gray-scale medical radiographs have been carried out and successful results are presented. In addition, multiple overlapping patterns can be separated without utilizing a priori knowledge, which is one of the most difficult problems at present in artificial intelligence.

2 Image representation by PAP

PAP is a feature extraction technique using edge information. The input image (64x64 pixels) is first subjected to pixel-by-pixel spatial filtering operations to detect edges in four directions: horizontal (HR); vertical (VR); +45 degrees (+45); and -45 degrees (-45).
Each detected edge is represented by a binary flag and four edge maps are generated. The two-dimensional bit array in an edge map is reduced to a one-dimensional array of numerals by projection. The horizontal edge flags are accumulated in the horizontal direction and projected onto the vertical axis. The vertical, +45-degree and -45-degree edge flags are similarly projected onto the horizontal, -45-degree and +45-degree axes, respectively. Therefore the method is called "Principal Axes Projection (PAP)" [3,4]. Then each projection data set is connected in series in the order HR, +45, VR, -45 to form a feature vector. Neighboring four elements are averaged and merged into one element and a 64-dimensional vector is finally obtained. This vector representation very well preserves the human perception of similarity in the vector space. In the experiments below, we have further reduced the feature vector to 16 dimensions by merging each set of four neighboring elements into one, without any significant degradation in performance.

3 Circuit configurations

Figure 1: Schematic of vector element matching circuit: (a) pyramid (gain reduction) type; (b) plateau (feedback) type. The capacitor area ratio is indicated in the figure.

The basic functional form of the similarity evaluation is generated by the shortcut current flowing in a CMOS inverter as in Refs. [7,8,9]. However, those circuits were utilized to form radial basis functions and only the peak position was programmable. In our circuits, not only the peak position but also the peak height and the sharpness of the peak response shape are made configurable to realize flexible matching operations [5]. Two types of the element matching circuit are shown in Fig. 1. They evaluate the similarity between two vector elements. The result of the evaluation is given as an output current (IOUT) from the pMOS current mirror. The peak position is temporarily memorized by auto-zeroing of the CMOS inverter. The common-gate transistor with VGG stabilizes the voltage supply to the inverter. By controlling the gate bias VGG, the peak height can be changed. This corresponds to multiplying a weight factor to the element. The sharpness of the functional form is taken as the strictness of the similarity evaluation. In the pyramid-type circuit (Fig. 1(a)), the sharpness is controlled by the gain reduction at the input. In the plateau type (Fig. 1(b)), the output voltage of the inverter is fed back to the input nodes and the sharpness changes in accordance with the amount of the feedback.
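As a rough software illustration of the PAP pipeline summarized in Section 2 of this excerpt: the exact edge filters, thresholds, and binning are not given here, so numpy gradients, a 0.1 relative threshold, and average pooling into four bins per direction (yielding the 16-dimensional vector mentioned above) stand in for them.

```python
import numpy as np

def pool1d(v, n):
    """Average-pool a 1-D array into n bins."""
    edges = np.linspace(0, len(v), n + 1).astype(int)
    return np.array([v[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

def pap_features(img, bins_per_dir=4):
    """64x64 grayscale image -> PAP-style feature vector (4 directions x bins)."""
    gy, gx = np.gradient(img.astype(float))        # row and column gradients
    d1, d2 = (gx + gy) / 2.0, (gx - gy) / 2.0      # +45 / -45 degree components
    thr = 0.1 * max(np.abs(img).max(), 1e-12)      # assumed edge threshold
    hr, vr = np.abs(gy) > thr, np.abs(gx) > thr    # horizontal/vertical edge maps
    p45, m45 = np.abs(d1) > thr, np.abs(d2) > thr  # diagonal edge maps
    proj = [
        hr.sum(axis=1),                                              # HR -> vertical axis
        np.array([p45.diagonal(k).sum() for k in range(-63, 64)]),   # +45 -> -45 axis
        vr.sum(axis=0),                                              # VR -> horizontal axis
        np.array([m45[:, ::-1].diagonal(k).sum() for k in range(-63, 64)]),  # -45 -> +45 axis
    ]
    return np.concatenate([pool1d(p.astype(float), bins_per_dir) for p in proj])

vec = pap_features(np.random.rand(64, 64))
print(vec.shape)   # (16,)
```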

5 0.45350856 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas

Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.

6 0.45267534 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

7 0.34792495 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

8 0.33667037 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

9 0.30291677 176 nips-2001-Stochastic Mixed-Signal VLSI Architecture for High-Dimensional Kernel Machines

10 0.2664336 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

11 0.25622782 2 nips-2001-3 state neurons for contextual processing

12 0.22896756 37 nips-2001-Associative memory in realistic neuronal networks

13 0.21056323 33 nips-2001-An Efficient Clustering Algorithm Using Stochastic Association Model and Its Implementation Using Nanostructures

14 0.20422363 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

15 0.20107165 48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance

16 0.18253826 177 nips-2001-Switch Packet Arbitration via Queue-Learning

17 0.16327257 151 nips-2001-Probabilistic principles in unsupervised learning of visual structure: human data and a model

18 0.15334341 91 nips-2001-Improvisation and Learning

19 0.14207977 57 nips-2001-Correlation Codes in Neuronal Populations

20 0.12996268 64 nips-2001-EM-DD: An Improved Multiple-Instance Learning Technique


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.566), (27, 0.077), (30, 0.058), (38, 0.026), (48, 0.03), (72, 0.019), (74, 0.011), (79, 0.029), (91, 0.074)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94958282 49 nips-2001-Citcuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

Author: A. Bofill, D. P. Thompson, Alan F. Murray

Abstract: Experimental data has shown that synaptic strength modification in some types of biological neurons depends upon precise spike timing differences between presynaptic and postsynaptic spikes. Several temporally-asymmetric Hebbian learning rules motivated by this data have been proposed. We argue that such learning rules are suitable to analog VLSI implementation. We describe an easily tunable circuit to modify the weight of a silicon spiking neuron according to those learning rules. Test results from the fabrication of the circuit using a 0.6 µm CMOS process are given. 1

2 0.80832785 139 nips-2001-Online Learning with Kernels

Author: Jyrki Kivinen, Alex J. Smola, Robert C. Williamson

Abstract: We consider online learning in a Reproducing Kernel Hilbert Space. Our method is computationally efficient and leads to simple algorithms. In particular we derive update equations for classification, regression, and novelty detection. The inclusion of the ν-trick allows us to give a robust parameterization. Moreover, unlike in batch learning, where the ν-trick only applies to the ε-insensitive loss function, we are able to derive general trimmed-mean types of estimators such as for Huber's robust loss.

3 0.80364728 170 nips-2001-Spectral Kernel Methods for Clustering

Author: Nello Cristianini, John Shawe-Taylor, Jaz S. Kandola

Abstract: In this paper we introduce new algorithms for unsupervised learning based on the use of a kernel matrix. All the information required by such algorithms is contained in the eigenvectors of the matrix or of closely related matrices. We use two different but related cost functions, the Alignment and the 'cut cost'. The first one is discussed in a companion paper [3], the second one is based on graph theoretic concepts. Both functions measure the level of clustering of a labeled dataset, or the correlation between data clusters and labels. We state the problem of unsupervised learning as assigning labels so as to optimize these cost functions. We show how the optimal solution can be approximated by slightly relaxing the corresponding optimization problem, and how this corresponds to using eigenvector information. The resulting simple algorithms are tested on real world data with positive results. 1

4 0.68887472 38 nips-2001-Asymptotic Universality for Learning Curves of Support Vector Machines

Author: Manfred Opper, Robert Urbanczik

Abstract: Using methods of Statistical Physics, we investigate the role of model complexity in learning with support vector machines (SVMs). We show the advantages of using SVMs with kernels of infinite complexity on noisy target rules, which, in contrast to common theoretical beliefs, are found to achieve optimal generalization error although the training error does not converge to the generalization error. Moreover, we find a universal asymptotics of the learning curves which only depend on the target rule but not on the SVM kernel. 1

5 0.67751682 134 nips-2001-On Kernel-Target Alignment

Author: Nello Cristianini, John Shawe-Taylor, André Elisseeff, Jaz S. Kandola

Abstract: We introduce the notion of kernel-alignment, a measure of similarity between two kernel functions or between a kernel and a target function. This quantity captures the degree of agreement between a kernel and a given learning task, and has very natural interpretations in machine learning, leading also to simple algorithms for model selection and learning. We analyse its theoretical properties, proving that it is sharply concentrated around its expected value, and we discuss its relation with other standard measures of performance. Finally we describe some of the algorithms that can be obtained within this framework, giving experimental results showing that adapting the kernel to improve alignment on the labelled data significantly increases the alignment on the test set, giving improved classification accuracy. Hence, the approach provides a principled method of performing transduction. Keywords: Kernels, alignment, eigenvectors, eigenvalues, transduction 1
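For concreteness, one common formulation of such an alignment score is a normalized Frobenius inner product between Gram matrices, with K2 = yy^T for kernel-target alignment; a minimal sketch with illustrative labels:

```python
import numpy as np

def alignment(K1, K2):
    """Empirical alignment <K1,K2>_F / (||K1||_F * ||K2||_F)."""
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

# Kernel-target alignment: compare a Gram matrix K with yy^T for labels y in {-1,+1}.
y = np.array([1.0, 1.0, -1.0, -1.0])
K = np.outer(y, y)                    # a kernel perfectly aligned with the target
print(alignment(K, np.outer(y, y)))   # 1.0
```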

6 0.50929779 112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

7 0.48762214 137 nips-2001-On the Convergence of Leveraging

8 0.45893151 92 nips-2001-Incorporating Invariances in Non-Linear Support Vector Machines

9 0.45065176 176 nips-2001-Stochastic Mixed-Signal VLSI Architecture for High-Dimensional Kernel Machines

10 0.44972408 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

11 0.44282472 138 nips-2001-On the Generalization Ability of On-Line Learning Algorithms

12 0.43794173 58 nips-2001-Covariance Kernels from Bayesian Generative Models

13 0.43526694 63 nips-2001-Dynamic Time-Alignment Kernel in Support Vector Machine

14 0.43242979 105 nips-2001-Kernel Machines and Boolean Functions

15 0.42980221 8 nips-2001-A General Greedy Approximation Algorithm with Applications

16 0.42644984 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

17 0.42400712 22 nips-2001-A kernel method for multi-labelled classification

18 0.41763103 15 nips-2001-A New Discriminative Kernel From Probabilistic Models

19 0.41412163 74 nips-2001-Face Recognition Using Kernel Methods

20 0.39769891 56 nips-2001-Convolution Kernels for Natural Language