nips nips2000 nips2000-11 knowledge-graph by maker-knowledge-mining

11 nips-2000-A Silicon Primitive for Competitive Learning


Source: pdf

Author: David Hsu, Miguel Figueroa, Chris Diorio

Abstract: Competitive learning is a technique for training classification and clustering networks. We have designed and fabricated an 11-transistor primitive, that we term an automaximizing bump circuit, that implements competitive learning dynamics. The circuit performs a similarity computation, affords nonvolatile storage, and implements simultaneous local adaptation and computation. We show that our primitive is suitable for implementing competitive learning in VLSI, and demonstrate its effectiveness in a standard clustering task.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 A silicon primitive for competitive learning David Hsu Miguel Figueroa Chris Diorio Computer Science and Engineering The University of Washington 114 Sieg Hall, Box 352350 Seattle, WA 98195-2350 USA hsud, miguel, diorio@cs. [sent-1, score-0.358]

2 We have designed and fabricated an 11-transistor primitive, that we term an automaximizing bump circuit, that implements competitive learning dynamics. [sent-4, score-0.859]

3 The circuit performs a similarity computation, affords nonvolatile storage, and implements simultaneous local adaptation and computation. [sent-5, score-0.73]

4 We show that our primitive is suitable for implementing competitive learning in VLSI, and demonstrate its effectiveness in a standard clustering task. [sent-6, score-0.352]

5 Upon presentation of a new input to the network, the neuron representing the closest cluster adapts its weight vector, decreasing the difference between the weight vector and present input. [sent-9, score-0.255]
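
A minimal Python sketch of this hard competitive update, for reference; the function name, the use of Euclidean distance, and the learning rate eta are illustrative choices, not specified by the paper:

```python
import numpy as np

def competitive_update(weights, x, eta=0.05):
    """Hard competitive learning step: only the neuron whose weight
    vector is closest to the input adapts, moving toward the input.
    weights: (n_neurons, n_dims); x: (n_dims,); eta: assumed rate."""
    winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    weights[winner] += eta * (x - weights[winner])
    return winner
```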

6 Details on this adaptation vary for different competitive learning rules, but the general functionality of the synapse is preserved across various competitive learning networks. [sent-10, score-0.846]

7 These functions are weight storage, similarity computation, and competitive learning dynamics. [sent-11, score-0.371]

8 Many VLSI implementations of competitive learning have been reported in the literature [2]. [sent-12, score-0.257]

9 Digital storage is expensive in terms of die area and power consumption; capacitive storage typically requires a refresh scheme to prevent weight decay. [sent-14, score-0.211]

10 These devices use floating-gate technology to provide nonvolatile analog storage and local adaptation in silicon. [sent-18, score-0.409]

11 The adaptation mechanisms do not perturb the operation of the device, thus enabling simultaneous adaptation and computation. [sent-19, score-0.435]

12 Unfortunately, the adaptation mechanisms provide dynamics that are difficult to translate into existing neural-network learning rules. [sent-20, score-0.269]

13 [5] proposed a silicon competitive learning synapse that used floating-gate technology in the early 1990s. [sent-23, score-0.855]

14 However, that approach suffers from asymmetric adaptation due to separate mechanisms for increasing and decreasing weight values. [sent-24, score-0.317]

15 In addition, they neither characterized the adaptation dynamics of their device, nor demonstrated competitive learning with their device. [sent-25, score-0.43]

16 We present a new silicon primitive, the automaximizing bump circuit, that uses synapse transistors to implement competitive learning in silicon. [sent-26, score-1.172]

17 This 11-transistor circuit computes a similarity measure, provides nonvolatile storage, implements local adaptation, and performs simultaneous adaptation and computation. [sent-27, score-0.75]

18 In addition, the circuit naturally exhibits competitive learning dynamics. [sent-28, score-0.629]

19 In this paper, we derive the properties of the automaximizing bump circuit directly from the physics of synapse transistors, and corroborate our analysis with data measured from a chip fabricated in a 0. [sent-29, score-1.08]

20 In addition, experiments on a competitive learning circuit, and software simulations of the learning rule, show that this device provides a suitable primitive for competitive learning. [sent-31, score-0.626]

21 2 Synapse transistors The automaximizing bump circuit's behavior depends on the storage and adaptation properties of synapse transistors. [sent-32, score-1.101]

22 A synapse transistor comprises a floating-gate MOSFET, with a control gate capacitively coupled to the floating gate, and an associated tunneling implant. [sent-34, score-1.023]

23 The transistor uses floating-gate charge to implement a nonvolatile analog memory, and outputs a source current that varies with both the stored value and the control-gate voltage. [sent-35, score-0.55]

24 The synapse uses two adaptation mechanisms: Fowler-Nordheim tunneling [6] increases the stored charge; impact-ionized hot-electron injection (IHEI) [7] decreases the charge. [sent-36, score-0.828]
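
In software terms, the floating-gate charge simply integrates the difference of the two currents. A minimal sketch, using the sign convention of the text (function name and time step are illustrative):

```python
def floating_gate_charge_step(q, i_tun, i_inj, dt=1e-3):
    """One Euler step of floating-gate charge dynamics: tunneling
    removes electrons and so increases the stored charge Q, while
    IHEI adds electrons and decreases it, per the text above."""
    return q + dt * (i_tun - i_inj)
```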

25 Because tunneling and IHEI can both be active during normal transistor operation, the synapse enables simultaneous adaptation and computation. [sent-37, score-0.833]

26 A voltage difference between the floating gate and the tunneling implant causes electrons to tunnel from the floating gate, through gate oxide, to the tunneling implant. [sent-38, score-1.636]

27 We can approximate this current (with respect to fixed tunneling and floating-gate voltages, V_tun0 and V_g0) as [4]: I_tun = I_tun0 * exp((ΔV_tun - ΔV_g)/V_x) (1), where I_tun0 and V_x are constants that depend on V_tun0 and V_g0, and ΔV_tun and ΔV_g are deviations of the tunneling and floating-gate voltages from these fixed levels. [sent-39, score-1.252]
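
Eq. 1 is direct to evaluate; a sketch with placeholder constants (i_tun0 and v_x are device-dependent fits, not values reported in this summary):

```python
import numpy as np

def tunneling_current(dv_tun, dv_g, i_tun0=1e-15, v_x=1.0):
    """Eq. 1: I_tun = I_tun0 * exp((dV_tun - dV_g) / V_x), where the
    arguments are deviations from the fixed operating voltages."""
    return i_tun0 * np.exp((dv_tun - dv_g) / v_x)
```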

28 IHEI adds electrons to the floating gate, decreasing its stored charge. [sent-40, score-0.384]

29 3 Automaximizing bump circuit The automaximizing bump circuit (Fig. [sent-42, score-1.663]

30 1) is an adaptive version of the classic bump-antibump circuit [8]. [sent-43, score-0.368]

31 It uses synapse transistors to implement the three essential functions of a competitive learning synapse: storage of a weight value μ, computation of a similarity measure between the input and μ, and the ability to move μ closer to the input. [sent-44, score-0.78]

32 M1-M5 form the classic bump-antibump circuit; we added M6-M11 and the floating gates. [sent-47, score-0.26]

33 (b) Data showing that the circuit computes a similarity between the input, V_in, and the stored value, μ, for three different stored weights. [sent-48, score-0.604]

34 We augment the bump-antibump circuit by adding floating gates and tunneling junctions to M1-M5, turning them into synapse transistors; M1 and M3 share the same floating gate and tunneling junction, as do M2 and M4. [sent-53, score-2.282]

35 For convenience, we will refer to our new circuit merely as a bump circuit. [sent-55, score-0.778]

36 The charge stored on the bump circuit's floating gates, Q1 and Q2, shifts Imid's peak away from ΔV = 0 by an amount determined by their difference. [sent-56, score-0.874]

37 We interpret this difference as the weight, μ, stored by the circuit, and interpret Imid as a similarity measure between the circuit's input and stored weight. [sent-57, score-0.269]

38 The circuit is automaximizing because tunneling and IHEI naturally tune the peak of Imid to coincide with the present input. [sent-59, score-0.836]

39 This high-level behavior coincides with the dynamics of competitive learning; both act to decrease the difference between a stored weight and the applied input. [sent-60, score-0.404]

40 Therefore, no explicit computation of the direction or magnitude of weight updates is necessary: the circuit naturally performs these computations for us. [sent-61, score-0.455]

41 Consequently, we only need to indicate when the circuit should adapt, not how it does adapt. [sent-62, score-0.347]

42 1 Weight storage The bump circuit's weight value derives directly from the charge on its floating gates. [sent-66, score-0.657]

43 A synapse transistor's floating-gate charge looks, for all practical purposes, like a voltage source, V_s, applied to the control gate. [sent-67, score-0.362]

44 This voltage source has a value V_s = Q/C_in, where C_in is the control-gate-to-floating-gate coupling capacitance and Q is the floating-gate charge. [sent-68, score-0.551]

45 We define the bump circuit's weight, μ, in Eq. 4; by the voltage-source model above, it is set by the difference in stored charge, μ = (Q1 - Q2)/C_in. This weight corresponds to the value of V_in that equalizes the two floating-gate voltages (and maximizes Imid). [sent-70, score-0.649]

46 1 shows the bump circuit's Imid output for three weight values, as a function of the differential input. [sent-72, score-0.57]
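
A qualitative software model of the similarity computation, assuming a sech^2-shaped Imid like the original bump-antibump circuit [8] and the charge-difference weight discussed above; the shape and all constants are assumptions, since the paper reports measured curves:

```python
import numpy as np

def imid(v_in, q1, q2, c_in=1e-15, i_b=1e-8, kappa=0.7):
    """Qualitative similarity output: Imid peaks where the differential
    input matches the stored weight mu, which derives from the
    floating-gate charge difference (assumed form of Eq. 4)."""
    mu = (q1 - q2) / c_in          # weight from stored charge
    dv = v_in - mu                 # distance of input from weight
    return i_b / np.cosh(kappa * dv / 2.0) ** 2   # sech^2-shaped bump
```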

47 Because floating-gate charge is nonvolatile, the weight is also nonvolatile. [sent-74, score-0.549]

48 The differential encoding of the input makes the bump circuit's adaptation symmetric with respect to (V_in - μ). [sent-75, score-0.662]

49 Because the floating-gate voltages are independent of the sign of (V_in - μ), the bump circuit's learning rule is symmetric with respect to (V_in - μ). [sent-79, score-1.118]

50 2 Adaptation We now explore the bump circuit's adaptation dynamics. [sent-81, score-0.604]

51 In our subsequent derivations, we consider only positive ΔV_fg, because adaptation is symmetric (albeit with a change of sign). [sent-85, score-0.22]

52 Tunneling causes adaptation by decreasing the difference between the floating-gate voltages V_fg1 and V_fg2. [sent-87, score-0.386]

53 Electron tunneling increases the voltage of both floating gates, but, because tunneling increases exponentially with smaller floating-gate voltages (see Eq. [sent-88, score-1.218]

54 Assuming that M1's floating-gate voltage is lower than M2's, the change in ΔV_fg due to electron tunneling is: dΔV_fg/dt = -(I_tun1 - I_tun2)/C_fg (8). We substitute Eq. [sent-90, score-0.905]

55 8 and solve for the tunneling learning rule: dΔV_fg/dt = -I_t0 * exp((ΔV_tun - ΔV_0)/V_x) * sinh(ΔV_fg/(2V_x) + φ) (9). [sent-92, score-0.376]

56 where I_t0 = I_tun0/C_fg is a model constant, ΔV_0 = (ΔV_fg1 + ΔV_fg2)/2, and φ models the tunneling mismatch between synapse transistors. [sent-99, score-0.494]
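
A sketch of the tunneling rule as reconstructed above; the placement of the mismatch term phi and all constants are assumptions:

```python
import numpy as np

def tunneling_rule(dv_fg, dv_tun, dv_0, i_t0=1e-3, v_x=1.0, phi=0.0):
    """Eq. 9 (as reconstructed): rate of change of the floating-gate
    voltage difference due to tunneling alone. The sinh term follows
    from substituting Eq. 1 into Eq. 8 for the two floating gates;
    phi absorbs transistor mismatch."""
    return -i_t0 * np.exp((dv_tun - dv_0) / v_x) \
           * np.sinh(dv_fg / (2.0 * v_x) + phi)
```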

57 (a) Measured adaptation rates, due to tunneling and IHEI, along with fits from Eqs. [sent-124, score-0.508]

58 (b) Composite adaptation rate, along with a fit from (12). [sent-126, score-0.173]

59 We slowed the IHEI adaptation rate (by using a higher V_inj), compared with the data from part (a), to better match tunneling and IHEI. [sent-127, score-0.55]

60 The circuit also uses IHEI to decrease ΔV_fg. [sent-130, score-0.409]

61 We bias the bump circuit so that only transistors M1 and M2 exhibit IHEI. [sent-131, score-0.91]

62 Consequently, we decrease ΔV_fg by controlling the drain voltages at M1 and M2. [sent-134, score-0.276]

63 Coupled current mirrors (M6-M7 and M8-M9) at the drains of M1 and M2 simultaneously raise the drain voltage of the transistor that is sourcing a larger current, and lower the drain voltage of the transistor that is sourcing a smaller current. [sent-135, score-0.704]

64 The transistor with the smaller source current will experience a larger V_sd, and thus exponentially more IHEI, causing its source current to rapidly increase. [sent-136, score-0.272]

65 Diodes (M10 and M11) further increase the drain voltage of the transistor with the larger current, further reducing its IHEI. [sent-137, score-0.306]

66 The net effect is that IHEI acts to equalize the currents, and, likewise, the floating-gate voltages. [sent-138, score-0.572]
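
A toy relaxation model of this equalization loop, purely qualitative; the real dynamics are Eqs. 10-11, and 'rate' is an invented constant:

```python
def ihei_equalize_step(v_fg1, v_fg2, dt=1e-3, rate=5.0):
    """Toy model of the M6-M11 feedback: the transistor sourcing the
    smaller current (the one with the higher floating-gate voltage)
    receives more injection, which lowers its floating-gate voltage,
    so the two gate voltages converge."""
    dv = v_fg1 - v_fg2
    if dv > 0:
        v_fg1 -= dt * rate * dv    # inject the higher gate (gate 1)
    else:
        v_fg2 += dt * rate * dv    # dv < 0: lower gate 2 instead
    return v_fg1, v_fg2
```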

67 Recently, Hasler proposed a similar method for controlling IHEI in a floating-gate differential pair [4]. [sent-139, score-0.448]

68 Assuming I1 > I2, the change in ΔV_fg due to IHEI is given by Eq. 10. We expand the learning rule by substituting Eq. [sent-140, score-0.166]

69 To compute values for the drain voltages of M1 and M2, we assume that all of I1 flows through M11 and all of I2 flows through M7. [sent-143, score-0.259]

70 Like tunneling, the IHEI rule depends on three factors: a controllable learning rate, V_inj; the difference between V_in and μ. [sent-145, score-0.161]

71 2 shows measurements of dΔV_fg/dt versus ΔV_fg due to tunneling and IHEI, along with fits to Eqs. [sent-148, score-0.335]

72 IHEI and tunneling facilitate adaptation by adding and removing charge from the floating gates, respectively. [sent-150, score-0.836]

73 In isolation, either of these mechanisms will eventually drive the bump circuit out of its operating range. [sent-151, score-0.833]

74 There is an added benefit to combining tunneling and IHEI: part (a) of Fig. 2 shows that tunneling acts more strongly for larger values of ΔV_fg, while IHEI shows the opposite behavior. [sent-153, score-0.69]

75 The mechanisms complement each other, providing adaptation over more than a 1 V range in ΔV_fg. [sent-154, score-0.228]

76 11 to derive the bump learning rule: dΔV_fg/dt = -I_t0 * exp((ΔV_tun - ΔV_0)/V_x) * sinh(ΔV_fg/(2V_x) + φ) + [the IHEI term of Eq. 11] (12). [sent-157, score-0.494]

77 When ΔV_fg is small, adaptation is primarily driven by IHEI, while tunneling dominates for larger values of ΔV_fg. The bump learning rule is unlike any learning rule that we have found in the literature (see the code sketch after the list of its properties below). [sent-160, score-1.153]

78 First, it naturally moves the bump circuit's weight towards the present input. [sent-162, score-0.519]

79 Second, the weight update is symmetric with respect to the difference between the stored value and the present input. [sent-163, score-0.216]

80 Third, we can vary the weight-update rate over many orders of magnitude by adjusting V_tun and V_inj. Finally, because the bump circuit uses synapse transistors to perform adaptation, the circuit can adapt during normal operation. [sent-164, score-1.516]
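
Putting the pieces together, a qualitative simulation of the composite rule; the IHEI term here is a tanh stand-in with the right behavior (dominant at small ΔV_fg), since Eq. 11 is not fully recoverable from this summary, and all constants are illustrative:

```python
import numpy as np

def bump_rule_step(dv_fg, dt=1e-4, dv_tun=0.0, dv_0=0.0,
                   i_t0=1.0, v_x=1.0, i_h0=1.0, v_h=0.2):
    """One Euler step of a composite tunneling + IHEI rule. Both terms
    push dv_fg toward zero; tunneling (sinh) dominates at large dv_fg,
    the IHEI stand-in (tanh) at small dv_fg, echoing Fig. 2."""
    tun = -i_t0 * np.exp((dv_tun - dv_0) / v_x) * np.sinh(dv_fg / (2 * v_x))
    ihei = -i_h0 * np.tanh(dv_fg / v_h)
    return dv_fg + dt * (tun + ihei)

dv = 1.0
for _ in range(20000):            # dv decays toward 0, i.e. the stored
    dv = bump_rule_step(dv)       # weight moves toward the input
print(dv)
```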

81 4 Competitive learning with bump circuits We summarize the results of simulations of the bump learning rule and also results from a competitive learning circuit fabricated in the TSMC 0. [sent-165, score-1.73]

82 We first compared the performance of a software neural network on a standard clustering task, using the bump learning rule (fitted to data from Fig. [sent-169, score-0.623]

83 2), and a basic competitive learning rule (learning rate ρ = 0. [sent-170, score-0.342]

84 On an input presentation, the network updated the weight vector of the closest neuron using either the bump learning rule or Eq. [sent-174, score-0.635]

85 3 shows that the bump circuit's rule compares favorably with the hard competitive learning rule. [sent-179, score-0.774]
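
A software version of the comparison under the hard rule might look like this; centers, spread, learning rate, and sample count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixture of two 1-D Gaussians, loosely mirroring the paper's task.
centers = np.array([0.3, 0.7])
data = rng.normal(centers[rng.integers(0, 2, size=5000)], 0.05)

weights = rng.uniform(0.0, 1.0, size=2)        # two one-input neurons
eta = 0.02
for x in data:
    winner = int(np.argmin(np.abs(weights - x)))    # closest neuron wins
    weights[winner] += eta * (x - weights[winner])  # hard update
print(weights)   # should land near the true centers, 0.3 and 0.7
```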

86 3) comprised two neurons with a one-dimensional input (a neuron was a single bump circuit), and a feedback network to control adaptation. [sent-181, score-0.55]

87 The feedback network comprised a winner-take-all (WTA) [10] that detected which bump was closest to the present input, and additional circuitry [9] that generated V_tun and V_inj from the WTA output. [sent-182, score-0.586]

88 We tested this circuit on a clustering task, to learn the centers of a mixture of two Gaussians. [sent-183, score-0.383]

89 3, we compare the performance of our circuit with a simulated neural network using Eq. [sent-185, score-0.376]

90 The VLSI circuit performed comparably to the neural network, demonstrating that our bump circuit, in conjunction with simple feedback mechanisms, can implement competitive learning in VLSI. [sent-187, score-1.087]

91 We can generalize the circuitry to multiple dimensions (multiple bump circuits per neuron) and multiple neurons; each neuron only requires one V_tun and one V_inj signal. [sent-188, score-0.604]
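
A sketch of that generalization in software; summing per-dimension Imid currents within a neuron is an assumption about aggregation, which this summary does not specify:

```python
import numpy as np

def wta_winner(v_in, q1, q2, c_in=1e-15, i_b=1e-8, kappa=0.7):
    """Winner-take-all over multi-dimensional neurons. q1, q2 have
    shape (n_neurons, n_dims); v_in has shape (n_dims,). Each neuron
    sums the Imid currents of its per-dimension bump circuits; the
    winner would then receive the shared Vtun/Vinj signals."""
    mu = (q1 - q2) / c_in                              # stored weights
    scores = np.sum(i_b / np.cosh(kappa * (v_in - mu) / 2.0) ** 2, axis=1)
    return int(np.argmax(scores))
```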

92 [Fig. 3 legend residue: + hard competitive learning rule; o bump learning rule.] [sent-189, score-0.861]

93 (a) Comparison of a neural network using the bump learning rule versus a standard competitive learning rule. [sent-194, score-0.824]

94 (c) Performance of a competitive learning circuit versus a neural network for learning a mixture of two Gaussians. [sent-197, score-0.674]

95 [Fig. 3 plot residue: traces labeled "circuit output", "target values", and "neural network output".] [sent-200, score-0.376]

96 Schneider, "Analog VLSI circuits for competitive learning networks", in Analog Integrated Circuits and Signal Processing, 15, pp. [sent-228, score-0.337]

97 Diorio, "A p-channel MOS synapse transistor with self-convergent memory writes", IEEE Transactions on Electron Devices, vol. [sent-231, score-0.291]

98 Snow, "Fowler-Nordheim tunneling into thermally grown SiO2", Journal of Applied Physics, vol. [sent-245, score-0.335]

99 Delbruck, "Bump circuits for computing similarity and dissimilarity of analog voltages", CNS Memo 26, California Institute of Technology, 1993. [sent-253, score-0.198]

100 Diorio, "A silicon primitive for competitive learning," UW CSE Technical Report no. [sent-257, score-0.317]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('bump', 0.431), ('circuit', 0.347), ('tunneling', 0.335), ('ihei', 0.322), ('floating', 0.239), ('competitive', 0.216), ('adaptation', 0.173), ('synapse', 0.159), ('gate', 0.158), ('voltages', 0.155), ('transistors', 0.132), ('transistor', 0.132), ('vfg', 0.119), ('voltage', 0.114), ('automaximizing', 0.107), ('yin', 0.107), ('stored', 0.093), ('vrg', 0.092), ('charge', 0.089), ('circuits', 0.08), ('vin', 0.079), ('imid', 0.077), ('nonvolatile', 0.077), ('storage', 0.074), ('vinj', 0.066), ('rule', 0.066), ('weight', 0.063), ('drain', 0.06), ('primitive', 0.059), ('vlsi', 0.059), ('mechanisms', 0.055), ('diorio', 0.053), ('similarity', 0.051), ('froid', 0.046), ('tuno', 0.046), ('vfgl', 0.046), ('vlun', 0.046), ('analog', 0.045), ('silicon', 0.042), ('learning', 0.041), ('source', 0.04), ('devices', 0.04), ('electron', 0.04), ('mos', 0.04), ('rg', 0.04), ('vtun', 0.04), ('gates', 0.039), ('closest', 0.037), ('clustering', 0.036), ('fabricated', 0.036), ('simultaneous', 0.034), ('neuron', 0.034), ('circuitry', 0.033), ('device', 0.033), ('adapt', 0.033), ('difference', 0.032), ('ml', 0.031), ('figueroa', 0.031), ('miguel', 0.031), ('sourcing', 0.031), ('vsi', 0.031), ('vsl', 0.031), ('current', 0.03), ('feedback', 0.03), ('differential', 0.03), ('network', 0.029), ('implements', 0.028), ('symmetric', 0.028), ('inj', 0.026), ('comprised', 0.026), ('currents', 0.026), ('derivations', 0.026), ('electrons', 0.026), ('hasler', 0.026), ('injection', 0.026), ('vx', 0.026), ('decreasing', 0.026), ('naturally', 0.025), ('pp', 0.024), ('composite', 0.024), ('allen', 0.024), ('part', 0.023), ('peak', 0.022), ('uses', 0.022), ('controllable', 0.022), ('dissimilarity', 0.022), ('flows', 0.022), ('implement', 0.022), ('dt', 0.022), ('classic', 0.021), ('controlling', 0.021), ('computes', 0.02), ('consequently', 0.02), ('increases', 0.02), ('acts', 0.02), ('software', 0.02), ('performs', 0.02), ('change', 0.019), ('rate', 0.019), ('vi', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999934 11 nips-2000-A Silicon Primitive for Competitive Learning

Author: David Hsu, Miguel Figueroa, Chris Diorio

Abstract: Competitive learning is a technique for training classification and clustering networks. We have designed and fabricated an 11-transistor primitive, that we term an automaximizing bump circuit, that implements competitive learning dynamics. The circuit performs a similarity computation, affords nonvolatile storage, and implements simultaneous local adaptation and computation. We show that our primitive is suitable for implementing competitive learning in VLSI, and demonstrate its effectiveness in a standard clustering task.

2 0.310774 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron

Author: Shih-Chii Liu, Bradley A. Minch

Abstract: In this work, we explore homeostasis in a silicon integrate-and-fire neuron. The neuron adapts its firing rate over long time periods on the order of seconds or minutes so that it returns to its spontaneous firing rate after a lasting perturbation. Homeostasis is implemented via two schemes. One scheme looks at the presynaptic activity and adapts the synaptic weight depending on the presynaptic spiking rate. The second scheme adapts the synaptic

3 0.17058668 56 nips-2000-Foundations for a Circuit Complexity Theory of Sensory Processing

Author: Robert A. Legenstein, Wolfgang Maass

Abstract: We introduce total wire length as salient complexity measure for an analysis of the circuit complexity of sensory processing in biological neural systems and neuromorphic engineering. This new complexity measure is applied to a set of basic computational problems that apparently need to be solved by circuits for translation- and scale-invariant sensory processing. We exhibit new circuit design strategies for these new benchmark functions that can be implemented within realistic complexity bounds, in particular with linear or almost linear total wire length.

4 0.11217149 57 nips-2000-Four-legged Walking Gait Control Using a Neuromorphic Chip Interfaced to a Support Vector Learning Algorithm

Author: Susanne Still, Bernhard Schölkopf, Klaus Hepp, Rodney J. Douglas

Abstract: To control the walking gaits of a four-legged robot we present a novel neuromorphic VLSI chip that coordinates the relative phasing of the robot's legs similar to how spinal Central Pattern Generators are believed to control vertebrate locomotion [3]. The chip controls the leg movements by driving motors with time varying voltages which are the outputs of a small network of coupled oscillators. The characteristics of the chip's output voltages depend on a set of input parameters. The relationship between input parameters and output voltages can be computed analytically for an idealized system. In practice, however, this ideal relationship is only approximately true due to transistor mismatch and offsets. Fine tuning of the chip's input parameters is done automatically by the robotic system, using an unsupervised Support Vector (SV) learning algorithm introduced recently [7]. The learning requires only that the description of the desired output is given. The machine learns from (unlabeled) examples how to set the parameters to the chip in order to obtain a desired motor behavior.

5 0.085207663 55 nips-2000-Finding the Key to a Synapse

Author: Thomas Natschläger, Wolfgang Maass

Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.

6 0.077515215 118 nips-2000-Smart Vision Chip Fabricated Using Three Dimensional Integration Technology

7 0.065550976 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code

8 0.059094034 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account

9 0.046589814 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

10 0.045779336 34 nips-2000-Competition and Arbors in Ocular Dominance

11 0.044541672 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks

12 0.038798764 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics

13 0.036440481 146 nips-2000-What Can a Single Neuron Compute?

14 0.035269562 100 nips-2000-Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks

15 0.030619757 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli

16 0.030499037 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization

17 0.028990529 33 nips-2000-Combining ICA and Top-Down Attention for Robust Speech Recognition

18 0.028481789 28 nips-2000-Balancing Multiple Sources of Reward in Reinforcement Learning

19 0.026486658 41 nips-2000-Discovering Hidden Variables: A Structure-Based Approach

20 0.024887634 63 nips-2000-Hierarchical Memory-Based Reinforcement Learning


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.109), (1, -0.113), (2, -0.165), (3, -0.044), (4, 0.023), (5, -0.021), (6, -0.01), (7, 0.067), (8, -0.035), (9, 0.133), (10, -0.069), (11, 0.035), (12, -0.067), (13, -0.578), (14, -0.154), (15, 0.09), (16, 0.031), (17, -0.084), (18, -0.015), (19, -0.009), (20, 0.152), (21, 0.092), (22, 0.094), (23, 0.033), (24, 0.123), (25, 0.094), (26, 0.036), (27, 0.026), (28, 0.042), (29, -0.02), (30, -0.046), (31, 0.059), (32, -0.085), (33, 0.089), (34, -0.051), (35, 0.051), (36, -0.017), (37, -0.01), (38, 0.015), (39, -0.002), (40, -0.03), (41, 0.027), (42, 0.085), (43, -0.009), (44, -0.185), (45, 0.012), (46, -0.022), (47, 0.065), (48, -0.009), (49, -0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97861516 11 nips-2000-A Silicon Primitive for Competitive Learning

Author: David Hsu, Miguel Figueroa, Chris Diorio

Abstract: Competitive learning is a technique for training classification and clustering networks. We have designed and fabricated an 11transistor primitive, that we term an automaximizing bump circuit, that implements competitive learning dynamics. The circuit performs a similarity computation, affords nonvolatile storage, and implements simultaneous local adaptation and computation. We show that our primitive is suitable for implementing competitive learning in VLSI, and demonstrate its effectiveness in a standard clustering task.

2 0.84999698 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron

Author: Shih-Chii Liu, Bradley A. Minch

Abstract: In this work, we explore homeostasis in a silicon integrate-and-fire neuron. The neuron adapts its firing rate over long time periods on the order of seconds or minutes so that it returns to its spontaneous firing rate after a lasting perturbation. Homeostasis is implemented via two schemes. One scheme looks at the presynaptic activity and adapts the synaptic weight depending on the presynaptic spiking rate. The second scheme adapts the synaptic

3 0.51774508 56 nips-2000-Foundations for a Circuit Complexity Theory of Sensory Processing

Author: Robert A. Legenstein, Wolfgang Maass

Abstract: We introduce total wire length as salient complexity measure for an analysis of the circuit complexity of sensory processing in biological neural systems and neuromorphic engineering. This new complexity measure is applied to a set of basic computational problems that apparently need to be solved by circuits for translation- and scale-invariant sensory processing. We exhibit new circuit design strategies for these new benchmark functions that can be implemented within realistic complexity bounds, in particular with linear or almost linear total wire length.

4 0.4871096 57 nips-2000-Four-legged Walking Gait Control Using a Neuromorphic Chip Interfaced to a Support Vector Learning Algorithm

Author: Susanne Still, Bernhard Schölkopf, Klaus Hepp, Rodney J. Douglas

Abstract: To control the walking gaits of a four-legged robot we present a novel neuromorphic VLSI chip that coordinates the relative phasing of the robot's legs similar to how spinal Central Pattern Generators are believed to control vertebrate locomotion [3]. The chip controls the leg movements by driving motors with time varying voltages which are the outputs of a small network of coupled oscillators. The characteristics of the chip's output voltages depend on a set of input parameters. The relationship between input parameters and output voltages can be computed analytically for an idealized system. In practice, however, this ideal relationship is only approximately true due to transistor mismatch and offsets. Fine tuning of the chip's input parameters is done automatically by the robotic system, using an unsupervised Support Vector (SV) learning algorithm introduced recently [7]. The learning requires only that the description of the desired output is given. The machine learns from (unlabeled) examples how to set the parameters to the chip in order to obtain a desired motor behavior.

5 0.24776635 118 nips-2000-Smart Vision Chip Fabricated Using Three Dimensional Integration Technology

Author: Hiroyuki Kurino, M. Nakagawa, Kang Wook Lee, Tomonori Nakamura, Yuusuke Yamada, Ki Tae Park, Mitsumasa Koyanagi

Abstract: The smart VISIOn chip has a large potential for application in general purpose high speed image processing systems . In order to fabricate smart vision chips including photo detector compactly, we have proposed the application of three dimensional LSI technology for smart vision chips. Three dimensional technology has great potential to realize new neuromorphic systems inspired by not only the biological function but also the biological structure. In this paper, we describe our three dimensional LSI technology for neuromorphic circuits and the design of smart vision chips .

6 0.21514401 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization

7 0.20682171 34 nips-2000-Competition and Arbors in Ocular Dominance

8 0.19994922 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics

9 0.19080497 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account

10 0.1865519 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code

11 0.13724177 125 nips-2000-Stability and Noise in Biochemical Switches

12 0.11479546 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

13 0.11231335 55 nips-2000-Finding the Key to a Synapse

14 0.10649973 112 nips-2000-Reinforcement Learning with Function Approximation Converges to a Region

15 0.10487619 22 nips-2000-Algorithms for Non-negative Matrix Factorization

16 0.10473022 146 nips-2000-What Can a Single Neuron Compute?

17 0.10337206 96 nips-2000-One Microphone Source Separation

18 0.10231655 28 nips-2000-Balancing Multiple Sources of Reward in Reinforcement Learning

19 0.10012179 79 nips-2000-Learning Segmentation by Random Walks

20 0.0988039 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.025), (17, 0.087), (32, 0.014), (33, 0.03), (54, 0.013), (59, 0.402), (62, 0.044), (65, 0.014), (67, 0.047), (75, 0.017), (76, 0.02), (79, 0.013), (81, 0.013), (90, 0.029), (93, 0.098), (97, 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.87514472 11 nips-2000-A Silicon Primitive for Competitive Learning

Author: David Hsu, Miguel Figueroa, Chris Diorio

Abstract: Competitive learning is a technique for training classification and clustering networks. We have designed and fabricated an 11transistor primitive, that we term an automaximizing bump circuit, that implements competitive learning dynamics. The circuit performs a similarity computation, affords nonvolatile storage, and implements simultaneous local adaptation and computation. We show that our primitive is suitable for implementing competitive learning in VLSI, and demonstrate its effectiveness in a standard clustering task.

2 0.74579436 87 nips-2000-Modelling Spatial Recall, Mental Imagery and Neglect

Author: Suzanna Becker, Neil Burgess

Abstract: We present a computational model of the neural mechanisms in the parietal and temporal lobes that support spatial navigation, recall of scenes and imagery of the products of recall. Long term representations are stored in the hippocampus, and are associated with local spatial and object-related features in the parahippocampal region. Viewer-centered representations are dynamically generated from long term memory in the parietal part of the model. The model thereby simulates recall and imagery of locations and objects in complex environments. After parietal damage, the model exhibits hemispatial neglect in mental imagery that rotates with the imagined perspective of the observer, as in the famous Milan Square experiment [1]. Our model makes novel predictions for the neural representations in the parahippocampal and parietal regions and for behavior in healthy volunteers and neuropsychological patients.

3 0.38793108 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron

Author: Shih-Chii Liu, Bradley A. Minch

Abstract: In this work, we explore homeostasis in a silicon integrate-and-fire neuron. The neuron adapts its firing rate over long time periods on the order of seconds or minutes so that it returns to its spontaneous firing rate after a lasting perturbation. Homeostasis is implemented via two schemes. One scheme looks at the presynaptic activity and adapts the synaptic weight depending on the presynaptic spiking rate. The second scheme adapts the synaptic

4 0.35304233 99 nips-2000-Periodic Component Analysis: An Eigenvalue Method for Representing Periodic Structure in Speech

Author: Lawrence K. Saul, Jont B. Allen

Abstract: An eigenvalue method is developed for analyzing periodic structure in speech. Signals are analyzed by a matrix diagonalization reminiscent of methods for principal component analysis (PCA) and independent component analysis (ICA). Our method-called periodic component analysis (1l

5 0.27365479 146 nips-2000-What Can a Single Neuron Compute?

Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek

Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the HodgkinHuxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrateand-fire or perceptron model. 1

6 0.27313069 79 nips-2000-Learning Segmentation by Random Walks

7 0.27211815 74 nips-2000-Kernel Expansions with Unlabeled Examples

8 0.26720443 122 nips-2000-Sparse Representation for Gaussian Process Models

9 0.26584184 94 nips-2000-On Reversing Jensen's Inequality

10 0.26541936 4 nips-2000-A Linear Programming Approach to Novelty Detection

11 0.26539886 7 nips-2000-A New Approximate Maximal Margin Classification Algorithm

12 0.26521724 133 nips-2000-The Kernel Gibbs Sampler

13 0.26408112 111 nips-2000-Regularized Winnow Methods

14 0.2636916 60 nips-2000-Gaussianization

15 0.26298773 22 nips-2000-Algorithms for Non-negative Matrix Factorization

16 0.26163405 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks

17 0.26161787 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition

18 0.26115936 37 nips-2000-Convergence of Large Margin Separable Linear Classification

19 0.2606746 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning

20 0.26060787 21 nips-2000-Algorithmic Stability and Generalization Performance