nips nips2003 nips2003-129 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Hsin Chen, Patrice Fleury, Alan F. Murray
Abstract: This paper presents VLSI circuits with continuous-valued probabilistic behaviour realized by injecting noise into each computing unit (neuron). Interconnecting the noisy neurons forms a Continuous Restricted Boltzmann Machine (CRBM), which has shown promising performance in modelling and classifying noisy biomedical data. The Minimising-Contrastive-Divergence learning algorithm for CRBM is also implemented in mixed-mode VLSI, to adapt the noisy neurons’ parameters on-chip. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract: This paper presents VLSI circuits with continuous-valued probabilistic behaviour realized by injecting noise into each computing unit (neuron). [sent-6, score-0.155]
2 Interconnecting the noisy neurons forms a Continuous Restricted Boltzmann Machine (CRBM), which has shown promising performance in modelling and classifying noisy biomedical data. [sent-7, score-0.345]
3 The Minimising-Contrastive-Divergence learning algorithm for CRBM is also implemented in mixed-mode VLSI, to adapt the noisy neurons’ parameters on-chip. [sent-8, score-0.128]
4 1 Introduction. As interest in interfacing electronic circuits to biological cells grows, intelligent embedded systems able to classify noisy and drifting biomedical signals become important for extracting useful information at the bio-electrical interface. [sent-9, score-0.294]
5 To date, probabilistic computation has been unable to deal with the continuous-valued nature of biomedical data, while remaining amenable to hardware implementation. [sent-11, score-0.062]
6 The Continuous Restricted Boltzmann Machine (CRBM) has been shown to be promising in the modelling of noisy and drifting biomedical data [1][2], with a simple Minimising-Contrastive-Divergence (MCD) learning algorithm [1][3]. [sent-12, score-0.223]
7 The CRBM consists of continuous-valued stochastic neurons that adapt their “internal noise” to code the variation of continuous-valued data, dramatically enriching the CRBM’s representational power. [sent-13, score-0.097]
8 Following a brief introduction to the CRBM, the VLSI implementations of the noisy neuron and of the MCD learning rule are presented. [sent-14, score-0.241]
9 2 Continuous Restricted Boltzmann Machine. Let $s_i$ represent the state of neuron i, and $w_{ij}$ the connection between neuron i and neuron j. [sent-15, score-0.618]
10 Parameter $a_j$ is the “noise-control factor”, controlling the neuron’s output nonlinearity such that a neuron j can learn to become near-deterministic (small $a_j$), continuous-stochastic (moderate $a_j$), or binary-stochastic (large $a_j$) [4][1]. [sent-17, score-0.981]
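To make the role of $a_j$ concrete, a minimal behavioural sketch of one such noisy neuron is given below in Python/NumPy; the sigmoid asymptotes, the noise scale sigma, and all default values are illustrative assumptions, not the circuit's parameters.

```python
import numpy as np

def noisy_neuron(s, w, a, sigma=0.2, lo=-1.0, hi=1.0, rng=None):
    """Continuous stochastic unit: s_j = phi(a_j * (sum_i w_ij*s_i + noise)).

    Small a_j -> near-deterministic (near-linear region of the sigmoid),
    moderate a_j -> continuous-stochastic, large a_j -> binary-stochastic.
    sigma, lo and hi are illustrative, not values from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = float(np.dot(w, s)) + sigma * rng.standard_normal()
    return lo + (hi - lo) / (1.0 + np.exp(-a * x))  # sigmoid, asymptotes [lo, hi]
```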
11 A CRBM consists of one visible and one hidden layer of noisy neurons with interlayer connections defined by a weight matrix {W}. [sent-18, score-0.231]
12 ηw and ηa denote the learning rates for parameters {wij } and {aj }, respectively. [sent-20, score-0.035]
13 $\Delta w_{ij} = \eta_w \cdot \mathrm{sign}(\langle s_i s_j \rangle_4 - \langle \hat{s}_i \hat{s}_j \rangle_4)$ (5); $\Delta a_j = \eta_a \cdot \mathrm{sign}(\langle s_j^2 \rangle_4 - \langle \hat{s}_j^2 \rangle_4)$ (6). Note that the denominator $1/a_j^2$ in Eq. [sent-23, score-0.732]
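Read as pseudocode, Eqs. (5)-(6) amount to the sketch below, where $\langle\cdot\rangle_4$ is an average over four sampled data/reconstruction pairs; the array shapes and learning rates are illustrative assumptions.

```python
import numpy as np

def mcd_updates(S, S_hat, eta_w, eta_a):
    """Sign-based MCD updates of Eqs. (5)-(6).

    S, S_hat : (4, n) arrays holding four sampled states and their
               one-step Gibbs reconstructions.
    Returns fixed-stepsize updates dW and da, mirroring the on-chip
    sign/stepsize scheme.
    """
    corr = np.einsum('ki,kj->ij', S, S) / 4.0               # <s_i s_j>_4
    corr_hat = np.einsum('ki,kj->ij', S_hat, S_hat) / 4.0   # <s^_i s^_j>_4
    dW = eta_w * np.sign(corr - corr_hat)                   # Eq. (5)
    da = eta_a * np.sign((S**2).mean(0) - (S_hat**2).mean(0))  # Eq. (6)
    return dW, da
```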
14 To validate the simplification above, a CRBM with 2 visible neurons and 4 hidden neurons was trained to model the two-dimensional data distribution defined by 20 training data (Fig. [sent-25, score-0.235]
15 5, $\eta_a = 15$ for visible neurons, and $\eta_a = 1$ for hidden neurons¹. [sent-27, score-0.138]
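A toy end-to-end version of such a validation experiment (2 visible units, 4 hidden units, 20 training points) might look as follows; the placeholder data cloud, noise scale sigma, and learning rates are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = lambda x: 2.0 / (1.0 + np.exp(-x)) - 1.0      # sigmoid, asymptotes [-1, 1]

n_vis, n_hid, sigma = 2, 4, 0.2
W = 0.1 * rng.standard_normal((n_vis, n_hid))
a_v, a_h = np.ones(n_vis), np.ones(n_hid)           # noise-control factors
data = 0.2 * rng.standard_normal((20, n_vis)) + np.array([0.5, -0.5])
eta_w, eta_av, eta_ah = 0.01, 0.01, 0.001           # illustrative rates

for step in range(2000):
    v = data[rng.integers(len(data), size=4)]       # four training points
    h = phi(a_h * (v @ W + sigma * rng.standard_normal((4, n_hid))))
    v_hat = phi(a_v * (h @ W.T + sigma * rng.standard_normal((4, n_vis))))
    h_hat = phi(a_h * (v_hat @ W + sigma * rng.standard_normal((4, n_hid))))
    W += eta_w * np.sign(v.T @ h / 4 - v_hat.T @ h_hat / 4)         # Eq. (5)
    a_h += eta_ah * np.sign((h**2).mean(0) - (h_hat**2).mean(0))    # Eq. (6)
    a_v += eta_av * np.sign((v**2).mean(0) - (v_hat**2).mean(0))
```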
16 The circuits were fabricated in a 0.6µm 2P3M CMOS process, which allows a power supply voltage of five volts. [sent-32, score-0.109]
17 Therefore, the states of neurons $\{s_i\}$ and the corresponding weights $\{w_{ij}\}$ are designed to be represented by voltages in [1. [sent-33, score-0.206]
18 As both $s_i$ and $w_{ij}$ are real numbers, a four-quadrant multiplier is required to calculate $w_{ij} s_i$. [sent-37, score-0.665]
19 3.1 Four-quadrant multiplier. While the Chible four-quadrant multiplier [6] has a simple architecture with a wide input range, the reference zero of one of its inputs is process-dependent. [sent-38, score-0.262]
20 Though only relative values of weights matter for the neurons, the process-dependent reference becomes nontrivial if the same four-quadrant multiplier is used to implement the MCD learning rule. [sent-39, score-0.19]
21 A modified multiplier is therefore used, as shown in Fig. 2, to allow external control of the reference zeros of both inputs. [sent-41, score-0.048]
22 Each computing cell contains two differential pairs biased by two complementary branches, Mn1-Mn2 and Mp1-Mp2. [sent-42, score-0.072]
23 $(I_{o1} - I_{o2})$ is thus proportional to $(V_w - V_{th,n1} - nV_{th,n2})(V_{si} - V_{sr})$ when $V_w > (V_{th,n1} + nV_{th,n2})$², and $(I_{o3} - I_{o4})$ is proportional to $(V_{dd} - V_w - V_{th,p1} - nV_{th,p2})(V_{sr} - V_{si})$ when $V_w < (V_{dd} - V_{th,p1} - nV_{th,p2})$ [6]. [sent-43, score-0.084]
24 Subject to careful design of the complementary biasing transistors [6], $(V_{th,n1} + nV_{th,n2}) \approx (V_{dd} - V_{th,p1} - nV_{th,p2}) \approx V_{dd}/2$. [sent-44, score-0.042]
25 Combining the two differential currents then gives $I_o = (I_{o1} + I_{o3}) - (I_{o2} + I_{o4}) = I(V_w) \cdot (V_{si} - V_{sr})$ (7). With $w_i$ input to one computing cell and $w_r$ to the other cell, as shown in Fig. [sent-45, score-0.18]
26 2b, M1-M6 generate an output current $I_{out} \propto (w_i - w_r)(s_i - s_r)$. [sent-46, score-0.086]
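As a behavioural abstraction, the two-cell multiplier reduces to an ideal product; in the sketch below the transconductance k stands in for the slope of $I(V_w)$ and is an assumed constant, not a measured value.

```python
def synapse_current(w_i, s_i, w_r, s_r, k=1e-6):
    """Ideal behavioural model of the modified four-quadrant multiplier:
    I_out = k*(w_i - w_r)*(s_i - s_r), with the reference zeros w_r and
    s_r set externally (the improvement over the Chible multiplier, whose
    reference zero is process-dependent)."""
    return k * (w_i - w_r) * (s_i - s_r)

# A neuron's total input current is then the sum over its synapses:
# i_sum = sum(synapse_current(w, s, w_r, s_r) for w, s in zip(weights, states))
```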
27 The measured DC characteristic from a fabricated chip is shown in Fig. [sent-47, score-0.119]
28 The four-quadrant multipliers output a total current proportional to $\sum_i w_{ij} s_i$, while the differential pair Mna and Mnb (Footnote 2: n is the slope factor of a MOS transistor, and $V_{th,x}$ refers to the absolute value of transistor Mx’s threshold voltage). [sent-51, score-0.351]
29 The current-to-voltage converter, composed of an operational amplifier and a voltage-controlled active resistor [7], then sums all currents, outputting a voltage $V_x = V_{sr} - i_{sum} \cdot R(V_{aj})$ to the sigmoid function. [sent-54, score-0.252]
30 The resistor $R_L$ finally converts $i_o$ into an output voltage $v_o = i_o R_L + V_{sr}$. [sent-56, score-0.44]
31 Eq. (8) implies that $V_{aj}$ controls the feedback resistance of the I-V converter, and consequently adapts the nonlinearity of the sigmoid function (which appears as $a_j$ in Eq. [sent-58, score-0.312]
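Putting the stages together, a crude DC-level model of the signal chain is sketched below; R(V_aj) and the sigmoid stage are simple placeholders for the circuits of [7] and the actual sigmoid block, and every component value is an assumption.

```python
import numpy as np

def neuron_dc_output(weights, states, v_aj, v_sr=2.5, k=1e-6, r_max=1e6, rl=1e5):
    """Behavioural DC sketch of the noisy-neuron chain: multipliers ->
    I-V converter with feedback resistor R(V_aj) -> sigmoid stage -> R_L."""
    i_sum = sum(k * (w - v_sr) * (s - v_sr) for w, s in zip(weights, states))
    r = r_max * max(v_aj, 0.0) / 5.0            # placeholder R(V_aj)
    v_x = v_sr - i_sum * r                      # I-V converter output
    i_o = 1e-5 * np.tanh((v_sr - v_x) / 0.5)    # placeholder sigmoid current
    return i_o * rl + v_sr                      # v_o = i_o*R_L + V_sr
```

Raising $V_{aj}$ increases the placeholder feedback resistance and hence the gain around $V_{sr}$, steepening the effective sigmoid, which is the qualitative behaviour described above.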
32 With various Vaj , the measured DC characteristic (chip result) of the sigmoid function is shown in Fig. [sent-60, score-0.098]
33 Fig. 5 shows the measured output of a noisy neuron (upper trace) with $\{s_i\}$ sweeping between 1. [sent-98, score-0.266]
34 8V, and $v_{ni}$ generated by an LFSR (Linear Feedback Shift Register) [9] with an amplitude of 0. [sent-101, score-0.075]
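For completeness, a software stand-in for an LFSR noise source is sketched below; the 16-bit width and tap set are a common maximal-length choice, not necessarily those of [9], and the multi-channel decorrelation scheme of [9] is not reproduced.

```python
def lfsr_noise(seed=0xACE1, taps=(16, 14, 13, 11)):
    """Fibonacci LFSR sketch yielding a pseudo-random, roughly zero-mean
    sequence, in the spirit of the on-chip noise source."""
    state = seed & 0xFFFF
    while True:
        fb = 0
        for t in taps:                    # XOR the tap bits
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & 0xFFFF
        yield state / 0xFFFF - 0.5        # map to roughly [-0.5, 0.5]

gen = lfsr_noise()
samples = [next(gen) for _ in range(4)]
```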
35 The $\{s_i\}$ and $\{w_i\}$ above forced the neuron’s output to sweep a sigmoid-shaped curve, as in Fig. [sent-103, score-0.03]
36 4b, while the input noise disturbed the curve to achieve continuous-valued probabilistic output. [sent-104, score-0.049]
37 A neuron state $V_{sj}$ was sampled periodically and held, with negligible clock feedthrough, whenever the switch opened (went low). [sent-105, score-0.141]
38 4 Minimising-Contrastive-Divergence learning on chip. The MCD learning for the Product of Experts [3] has been successfully implemented and reported in [10]. [sent-106, score-0.112]
39 The MCD learning for CRBM is therefore implemented simply by replacing the following two circuits. [sent-107, score-0.035]
40 Firstly, the four-quadrant multiplier of Sec. 3.1 is substituted for the two-quadrant multiplier in [10] to enhance learning flexibility; secondly, a pulse-coded learning circuit, rather than the analogue weight-changing circuit in [10], is employed to allow not only accurate learning steps but also refresh of dynamically-held parameters. [sent-110, score-0.439]
41 Fig. 6 shows the block diagram of the VLSI implementation of the MCD learning rules for the noisy neurons, along with the digital control signals. [sent-113, score-0.199]
42 In learning mode (LER/REF = 1), the initial states $s_i$ and $s_j$ are first sampled by clock signals CKsi and CKsj, resulting in a current $I_+$ at the output of the four-quadrant multiplier. [sent-114, score-0.533]
43 After CK+ samples and holds $I_+$, the one-step-reconstructed states $\hat{s}_i$ and $\hat{s}_j$ are sampled by CKsip and CKsjp to produce another current $I_-$. [sent-115, score-0.435]
44 CKq then samples and holds the output of the current subtracter, $I_{sub}$, which represents the difference between the initial data and the one-step-Gibbs-sampled data. [sent-116, score-0.096]
45 Repeating the above clocking sequence for four cycles, four $I_{sub}$ are accumulated and averaged to derive $I_{ave}$, representing $\langle s_i s_j \rangle_4 - \langle \hat{s}_i \hat{s}_j \rangle_4$ in Eq. (5). [sent-117, score-0.814]
46 Finally, $I_{ave}$ is compared to a reference current to determine the learning direction DIR, and the learning circuit, triggered by CKup, updates the parameter once. [sent-118, score-0.118]
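In software, one such four-cycle learning decision reduces to the comparison below; the current values and the reference are placeholders.

```python
import numpy as np

def learning_direction(current_pairs, i_ref=0.0):
    """Sketch of the four-cycle sequence: per cycle the subtracter forms
    I_sub = I_plus - I_minus; the four I_sub are accumulated and averaged
    into I_ave, and a comparison against a reference current sets DIR."""
    i_sub = [ip - im for ip, im in current_pairs]
    i_ave = float(np.mean(i_sub))
    return +1 if i_ave > i_ref else -1

print(learning_direction([(1.2e-6, 1.0e-6)] * 4))   # -> +1 (learn "up")
```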
47 The dash-lined box represents the voltage-limiting circuit used only for parameter {aj}, whose voltage range must be limited to ensure normal operation of the voltage-controlled active resistor in Fig. [sent-119, score-0.373]
48 In refresh mode (LER/REF = 0), the signal REFR rather than DIR determines the updating direction, maintaining the weight at a reference value. [sent-121, score-0.119]
49 [Figure 6: (a) the MCD learning rules of Eqs. (5)(6); (b) the digital control signals.] The subtracter, accumulator, and current comparator in Fig. [sent-123, score-0.034]
50 The following subsections therefore focus on the pulse-coded learning circuit and the measurement results of on-chip MCD learning. [sent-125, score-0.224]
51 4.2 The pulse-coded learning circuit. The pulse-coded learning circuit consists of a pulse generator (Fig. [sent-127, score-0.619]
52 The stepsize of the learning cell is adjustable through VP and VN in Fig. [sent-130, score-0.137]
53 However, transistor nonlinearities and process variations do not allow different, accurate learning rates to be set for different parameters on the same chip ({aj} and {wij} in our case). [sent-132, score-0.119]
54 We therefore apply a width-variable pulse to the enabling input (EN) of the learning cell, controlling the learning step precisely by monitoring the pulse width off-chip. [sent-133, score-0.312]
55 As the input capacitance of each learning cell is less than 0. [sent-134, score-0.107]
56 1pF, one pulse generator can control all the learning cells with the same learning rate. [sent-135, score-0.241]
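The resulting step size follows from charging the weight capacitor for the pulse duration; the sketch below only shows the arithmetic, with assumed values (1 µA programming current, 1 pF capacitor) rather than chip parameters.

```python
def weight_step(i_prog=1e-6, t_pulse=10e-9, c_w=1e-12, inc=True):
    """Pulse-coded update: a current source (set via VP/VN) charges or
    discharges the weight capacitor C_w for the enabling-pulse duration,
    so |dV| = I * t / C_w.  All values here are illustrative."""
    dv = i_prog * t_pulse / c_w
    return dv if inc else -dv

print(weight_step())   # 0.01 V, i.e. a 10mV step for a 10ns pulse
```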
57 Sec. 2 implies that only three pulse generators are required, for $\eta_w$, $\eta_{av}$, and $\eta_{ah}$. [sent-137, score-0.121]
58 The pulse generator is therefore a simple way to achieve accurate control. [sent-138, score-0.171]
59 The pulse generator is largely a D-type flip-flop whose output $V_{pulse}$ is initially reset low via the reset signal. [sent-139, score-0.237]
60 Eventually, Vpulse is reset to zero as soon as Vd is discharged. [sent-141, score-0.036]
61 During the positive pulse, the learning cell charges or discharges the voltage stored on Cw [12], according to the directional input INC/DEC. [sent-142, score-0.216]
62 Varying Vmu controls the pulse width accurately from 10ns (Vη = 2. [sent-143, score-0.121]
63 9V), amounting to a learning stepsize from 1mV to 500mV with VN = 0. [sent-145, score-0.065]
64 Eq. (6) indicates that {aj} can be adapted with the same learning circuit simply by substituting $s_j$ and $\hat{s}_j$ for $s_i$ and $\hat{s}_i$ in Fig. [sent-150, score-1.038]
65 6, the voltage $V_{aj}$ should be confined to [1,3]V, to ensure normal operation of the voltage-controlled active resistor in Fig. [sent-151, score-0.184]
66 The voltage-limiting circuit of Fig. 8 is thus designed to limit the range of $V_{aj}$, with the bounds defined by Vmax and Vmin through two voltage comparators. [sent-154, score-0.119]
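The limiter's decision logic can be summarised as below; the [1,3]V bounds come from the text, while the function itself is only a sketch of the two-comparator behaviour.

```python
def limited_direction(v_aj, dir_mcd, v_max=3.0, v_min=1.0):
    """Voltage-limiting logic for parameter a_j: two comparators against
    V_max and V_min override the MCD direction DIR whenever V_aj leaves
    [V_min, V_max], pushing it back into the resistor's legal range."""
    if v_aj > v_max:
        return -1        # force a decrease, back toward V_max
    if v_aj < v_min:
        return +1        # force an increase, back toward V_min
    return dir_mcd       # inside the range: follow MCD's DIR
```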
67 4.4 On-chip learning. Two MCD learning circuits, one for {wij} and the other for {aj}, have been fabricated successfully. [sent-160, score-0.117]
68 Fig. 9 shows the measured on-chip learning of both parameters with (a) different learning rates and (b) different learning directions. [sent-162, score-0.135]
69 With the reference zero being defined at 2. [Figure 9: Measurement of parameter aj and wij learning with (a) different learning rates and (b) different learning directions.] [sent-168, score-0.432]
70 As controlled by different pulse widths (PULSE1 and PULSE2), the two parameters were updated with different stepsizes (10mV and 34mV) but in the same direction. [sent-177, score-0.121]
71 The trace of parameter aj shows digital noise attributable to sub-optimal layout; this has been improved in a subsequent design. [sent-178, score-0.315]
72 Therefore, the learning circuit forces aj to decrease toward Vmax, while wij continues learning up and down, as Fig. [sent-182, score-0.573]
73 5 Conclusion. Fabricated CMOS circuits have been presented, and the implementation of the noisy neural computation that underlies the CRBM has been demonstrated. [sent-184, score-0.199]
74 The promising measured results show that the CRBM is, as has been inferred in the past[1], amenable to mixed-mode VLSI. [sent-185, score-0.03]
75 A full CRBM system with two visible and four hidden neurons has thus been implemented to examine this concept. [sent-187, score-0.138]
76 The neurons in the proof-of-concept CRBM system are hard-wired to each other and to the multi-channel uncorrelated noise sources implemented by the LFSR [9]. [sent-188, score-0.146]
77 A scalable design will thus be an essential next step before practical biomedical applications. [sent-189, score-0.062]
78 Furthermore, the CRBM system may open the possibility of utilising intrinsic VLSI noise for computation in the deep-sub-micron era. [sent-190, score-0.049]
79 Murray, “A continuous restricted Boltzmann machine with an implementable training algorithm,” IEE Proc. [sent-193, score-0.05]
80 Hinton, “Training products of experts by minimizing contrastive divergence,” Neural Computation, vol. [sent-206, score-0.055]
81 Chible, “Analog circuit for synapse neural networks VLSI implementation,” The 7th IEEE Int. [sent-222, score-0.314]
82 Tsividis, “Floating voltage-controlled resistors in CMOS technology,” Electronics Letters, vol. [sent-229, score-0.044]
83 Vittoz, “MOS transistors operated in the lateral bipolar mode and their application in CMOS technology,” IEEE Journal of Solid-State Circuits, vol. [sent-233, score-0.077]
84 Chu, “A VLSI-efficient technique for generating multiple uncorrelated noise sources and its application to stochastic neural networks,” IEEE Trans. [sent-244, score-0.049]
85 Murray, “Mixed-signal VLSI implementation of the product of experts’ minimizing contrastive divergence learning scheme,” in IEEE Proc. [sent-251, score-0.246]
86 Cauwenberghs, “An analog VLSI recurrent neural network,” IEEE Trans. [sent-264, score-0.125]
wordName wordTfidf (topN-words)
[('vaj', 0.395), ('crbm', 0.338), ('vsi', 0.282), ('sj', 0.244), ('mcd', 0.207), ('aj', 0.198), ('circuit', 0.189), ('vsr', 0.188), ('si', 0.163), ('vlsi', 0.125), ('vw', 0.125), ('pulse', 0.121), ('vmax', 0.119), ('wij', 0.116), ('dir', 0.114), ('neuron', 0.113), ('voltage', 0.109), ('multiplier', 0.107), ('circuits', 0.106), ('ck', 0.106), ('neurons', 0.097), ('io', 0.094), ('vmin', 0.094), ('noisy', 0.093), ('murray', 0.082), ('di', 0.076), ('isum', 0.075), ('resistor', 0.075), ('vni', 0.075), ('vpulse', 0.075), ('cell', 0.072), ('erential', 0.069), ('sigmoid', 0.068), ('biomedical', 0.062), ('chible', 0.056), ('ckup', 0.056), ('iave', 0.056), ('isub', 0.056), ('vcomp', 0.056), ('vmu', 0.056), ('vsj', 0.056), ('wr', 0.056), ('cw', 0.055), ('contrastive', 0.055), ('erent', 0.052), ('wi', 0.052), ('generator', 0.05), ('boltzmann', 0.05), ('noise', 0.049), ('iout', 0.049), ('vd', 0.049), ('reference', 0.048), ('fabricated', 0.047), ('nonlinearity', 0.046), ('cmos', 0.044), ('chip', 0.042), ('electronics', 0.042), ('transistor', 0.042), ('dd', 0.042), ('vp', 0.042), ('visible', 0.041), ('amps', 0.038), ('ckq', 0.038), ('cksi', 0.038), ('cksip', 0.038), ('cksj', 0.038), ('cksjp', 0.038), ('converter', 0.038), ('fleury', 0.038), ('lerref', 0.038), ('lfsr', 0.038), ('mnb', 0.038), ('refr', 0.038), ('refresh', 0.038), ('refreshed', 0.038), ('subtracter', 0.038), ('vnr', 0.038), ('vo', 0.038), ('volts', 0.038), ('vsigma', 0.038), ('chen', 0.037), ('diagram', 0.037), ('reset', 0.036), ('learning', 0.035), ('rl', 0.034), ('dc', 0.034), ('digital', 0.034), ('trace', 0.034), ('mode', 0.033), ('vn', 0.033), ('mna', 0.033), ('vittoz', 0.033), ('drifting', 0.033), ('vx', 0.033), ('divergence', 0.031), ('output', 0.03), ('measured', 0.03), ('mos', 0.03), ('stepsize', 0.03), ('sampled', 0.028)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999946 129 nips-2003-Minimising Contrastive Divergence in Noisy, Mixed-mode VLSI Neurons
Author: Hsin Chen, Patrice Fleury, Alan F. Murray
Abstract: This paper presents VLSI circuits with continuous-valued probabilistic behaviour realized by injecting noise into each computing unit (neuron). Interconnecting the noisy neurons forms a Continuous Restricted Boltzmann Machine (CRBM), which has shown promising performance in modelling and classifying noisy biomedical data. The Minimising-Contrastive-Divergence learning algorithm for CRBM is also implemented in mixed-mode VLSI, to adapt the noisy neurons’ parameters on-chip. 1
2 0.19186626 18 nips-2003-A Summating, Exponentially-Decaying CMOS Synapse for Spiking Neural Systems
Author: Rock Z. Shi, Timothy K. Horiuchi
Abstract: Synapses are a critical element of biologically-realistic, spike-based neural computation, serving the role of communication, computation, and modification. Many different circuit implementations of synapse function exist with different computational goals in mind. In this paper we describe a new CMOS synapse design that separately controls quiescent leak current, synaptic gain, and time-constant of decay. This circuit implements part of a commonly-used kinetic model of synaptic conductance. We show a theoretical analysis and experimental data for prototypes fabricated in a commercially-available 1.5µm CMOS process. 1
3 0.17824405 193 nips-2003-Variational Linear Response
Author: Manfred Opper, Ole Winther
Abstract: A general linear response method for deriving improved estimates of correlations in the variational Bayes framework is presented. Three applications are given and it is discussed how to use linear response as a general principle for improving mean field approximations.
4 0.17584389 93 nips-2003-Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons
Author: Thomas Natschläger, Wolfgang Maass
Abstract: We employ an efficient method using Bayesian and linear classifiers for analyzing the dynamics of information in high-dimensional states of generic cortical microcircuit models. It is shown that such recurrent circuits of spiking neurons have an inherent capability to carry out rapid computations on complex spike patterns, merging information contained in the order of spike arrival with previously acquired context information. 1
5 0.1538434 183 nips-2003-Synchrony Detection by Analogue VLSI Neurons with Bimodal STDP Synapses
Author: Adria Bofill-i-petit, Alan F. Murray
Abstract: We present test results from spike-timing correlation learning experiments carried out with silicon neurons with STDP (Spike Timing Dependent Plasticity) synapses. The weight change scheme of the STDP synapses can be set to either weight-independent or weight-dependent mode. We present results that characterise the learning window implemented for both modes of operation. When presented with spike trains with different types of synchronisation the neurons develop bimodal weight distributions. We also show that a 2-layered network of silicon spiking neurons with STDP synapses can perform hierarchical synchrony detection. 1
6 0.12892383 10 nips-2003-A Low-Power Analog VLSI Visual Collision Detector
7 0.11225987 61 nips-2003-Entrainment of Silicon Central Pattern Generators for Legged Locomotory Control
8 0.090781249 11 nips-2003-A Mixed-Signal VLSI for Real-Time Generation of Edge-Based Image Vectors
9 0.089722022 94 nips-2003-Information Maximization in Noisy Channels : A Variational Approach
10 0.086438656 16 nips-2003-A Recurrent Model of Orientation Maps with Simple and Complex Cells
11 0.083697341 45 nips-2003-Circuit Optimization Predicts Dynamic Networks for Chemosensory Orientation in Nematode C. elegans
12 0.070245616 4 nips-2003-A Biologically Plausible Algorithm for Reinforcement-shaped Representational Learning
13 0.063924558 104 nips-2003-Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks
14 0.055053972 13 nips-2003-A Neuromorphic Multi-chip Model of a Disparity Selective Complex Cell
15 0.054039154 43 nips-2003-Bounded Invariance and the Formation of Place Fields
16 0.050453816 125 nips-2003-Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Model
17 0.048258379 127 nips-2003-Mechanism of Neural Interference by Transcranial Magnetic Stimulation: Network or Single Neuron?
18 0.045812506 151 nips-2003-PAC-Bayesian Generic Chaining
19 0.043905333 185 nips-2003-The Doubly Balanced Network of Spiking Neurons: A Memory Model with High Capacity
20 0.043647546 110 nips-2003-Learning a World Model and Planning with a Self-Organizing, Dynamic Neural System
topicId topicWeight
[(0, -0.141), (1, 0.059), (2, 0.262), (3, 0.112), (4, 0.153), (5, -0.013), (6, -0.085), (7, -0.03), (8, -0.052), (9, -0.104), (10, -0.095), (11, -0.167), (12, -0.103), (13, -0.045), (14, -0.007), (15, -0.049), (16, -0.089), (17, -0.03), (18, -0.013), (19, -0.072), (20, 0.099), (21, -0.071), (22, 0.161), (23, 0.044), (24, 0.067), (25, 0.012), (26, -0.033), (27, 0.017), (28, -0.007), (29, 0.089), (30, -0.049), (31, 0.087), (32, 0.004), (33, 0.096), (34, 0.044), (35, 0.001), (36, -0.056), (37, -0.041), (38, -0.051), (39, -0.048), (40, 0.003), (41, 0.103), (42, -0.179), (43, 0.064), (44, -0.06), (45, 0.064), (46, 0.088), (47, 0.016), (48, 0.035), (49, -0.092)]
simIndex simValue paperId paperTitle
same-paper 1 0.96959847 129 nips-2003-Minimising Contrastive Divergence in Noisy, Mixed-mode VLSI Neurons
Author: Hsin Chen, Patrice Fleury, Alan F. Murray
Abstract: This paper presents VLSI circuits with continuous-valued probabilistic behaviour realized by injecting noise into each computing unit(neuron). Interconnecting the noisy neurons forms a Continuous Restricted Boltzmann Machine (CRBM), which has shown promising performance in modelling and classifying noisy biomedical data. The Minimising-Contrastive-Divergence learning algorithm for CRBM is also implemented in mixed-mode VLSI, to adapt the noisy neurons’ parameters on-chip. 1
2 0.68833542 93 nips-2003-Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons
Author: Thomas Natschläger, Wolfgang Maass
Abstract: We employ an efficient method using Bayesian and linear classifiers for analyzing the dynamics of information in high-dimensional states of generic cortical microcircuit models. It is shown that such recurrent circuits of spiking neurons have an inherent capability to carry out rapid computations on complex spike patterns, merging information contained in the order of spike arrival with previously acquired context information. 1
3 0.65734708 18 nips-2003-A Summating, Exponentially-Decaying CMOS Synapse for Spiking Neural Systems
Author: Rock Z. Shi, Timothy K. Horiuchi
Abstract: Synapses are a critical element of biologically-realistic, spike-based neural computation, serving the role of communication, computation, and modification. Many different circuit implementations of synapse function exist with different computational goals in mind. In this paper we describe a new CMOS synapse design that separately controls quiescent leak current, synaptic gain, and time-constant of decay. This circuit implements part of a commonly-used kinetic model of synaptic conductance. We show a theoretical analysis and experimental data for prototypes fabricated in a commercially-available 1.5µm CMOS process. 1
4 0.62119871 10 nips-2003-A Low-Power Analog VLSI Visual Collision Detector
Author: Reid R. Harrison
Abstract: We have designed and tested a single-chip analog VLSI sensor that detects imminent collisions by measuring radially expansive optic flow. The design of the chip is based on a model proposed to explain leg-extension behavior in flies during landing approaches. A new elementary motion detector (EMD) circuit was developed to measure optic flow. This EMD circuit models the bandpass nature of large monopolar cells (LMCs) immediately postsynaptic to photoreceptors in the fly visual system. A 16 × 16 array of 2-D motion detectors was fabricated on a 2.24 mm × 2.24 mm die in a standard 0.5-µm CMOS process. The chip consumes 140 µW of power from a 5 V supply. With the addition of wide-angle optics, the sensor is able to detect collisions around 500 ms before impact in complex, real-world scenes. 1
5 0.51721328 193 nips-2003-Variational Linear Response
Author: Manfred Opper, Ole Winther
Abstract: A general linear response method for deriving improved estimates of correlations in the variational Bayes framework is presented. Three applications are given and it is discussed how to use linear response as a general principle for improving mean field approximations.
6 0.50361645 11 nips-2003-A Mixed-Signal VLSI for Real-Time Generation of Edge-Based Image Vectors
7 0.43225011 183 nips-2003-Synchrony Detection by Analogue VLSI Neurons with Bimodal STDP Synapses
8 0.42517638 61 nips-2003-Entrainment of Silicon Central Pattern Generators for Legged Locomotory Control
9 0.389557 94 nips-2003-Information Maximization in Noisy Channels : A Variational Approach
10 0.36088848 16 nips-2003-A Recurrent Model of Orientation Maps with Simple and Complex Cells
11 0.34693265 45 nips-2003-Circuit Optimization Predicts Dynamic Networks for Chemosensory Orientation in Nematode C. elegans
12 0.34685433 4 nips-2003-A Biologically Plausible Algorithm for Reinforcement-shaped Representational Learning
13 0.29638594 155 nips-2003-Perspectives on Sparse Bayesian Learning
14 0.23194186 14 nips-2003-A Nonlinear Predictive State Representation
15 0.21574657 165 nips-2003-Reasoning about Time and Knowledge in Neural Symbolic Learning Systems
16 0.21456507 104 nips-2003-Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks
17 0.21450686 13 nips-2003-A Neuromorphic Multi-chip Model of a Disparity Selective Complex Cell
18 0.21391279 175 nips-2003-Sensory Modality Segregation
19 0.21296842 97 nips-2003-Iterative Scaled Trust-Region Learning in Krylov Subspaces via Pearlmutter's Implicit Sparse Hessian
20 0.20076926 102 nips-2003-Large Scale Online Learning
topicId topicWeight
[(0, 0.03), (11, 0.037), (30, 0.014), (35, 0.044), (53, 0.084), (59, 0.07), (63, 0.035), (69, 0.026), (71, 0.027), (74, 0.371), (76, 0.034), (85, 0.085), (91, 0.056)]
simIndex simValue paperId paperTitle
same-paper 1 0.8168692 129 nips-2003-Minimising Contrastive Divergence in Noisy, Mixed-mode VLSI Neurons
Author: Hsin Chen, Patrice Fleury, Alan F. Murray
Abstract: This paper presents VLSI circuits with continuous-valued probabilistic behaviour realized by injecting noise into each computing unit(neuron). Interconnecting the noisy neurons forms a Continuous Restricted Boltzmann Machine (CRBM), which has shown promising performance in modelling and classifying noisy biomedical data. The Minimising-Contrastive-Divergence learning algorithm for CRBM is also implemented in mixed-mode VLSI, to adapt the noisy neurons’ parameters on-chip. 1
2 0.71868539 14 nips-2003-A Nonlinear Predictive State Representation
Author: Matthew R. Rudary, Satinder P. Singh
Abstract: Predictive state representations (PSRs) use predictions of a set of tests to represent the state of controlled dynamical systems. One reason why this representation is exciting as an alternative to partially observable Markov decision processes (POMDPs) is that PSR models of dynamical systems may be much more compact than POMDP models. Empirical work on PSRs to date has focused on linear PSRs, which have not allowed for compression relative to POMDPs. We introduce a new notion of tests which allows us to define a new type of PSR that is nonlinear in general and allows for exponential compression in some deterministic dynamical systems. These new tests, called e-tests, are related to the tests used by Rivest and Schapire [1] in their work with the diversity representation, but our PSR avoids some of the pitfalls of their representation—in particular, its potential to be exponentially larger than the equivalent POMDP. 1
3 0.71178687 84 nips-2003-How to Combine Expert (and Novice) Advice when Actions Impact the Environment?
Author: Daniela Pucci de Farias, Nimrod Megiddo
Abstract: The so-called “experts algorithms” constitute a methodology for choosing actions repeatedly, when the rewards depend both on the choice of action and on the unknown current state of the environment. An experts algorithm has access to a set of strategies (“experts”), each of which may recommend which action to choose. The algorithm learns how to combine the recommendations of individual experts so that, in the long run, for any fixed sequence of states of the environment, it does as well as the best expert would have done relative to the same sequence. This methodology may not be suitable for situations where the evolution of states of the environment depends on past chosen actions, as is usually the case, for example, in a repeated non-zero-sum game. A new experts algorithm is presented and analyzed in the context of repeated games. It is shown that asymptotically, under certain conditions, it performs as well as the best available expert. This algorithm is quite different from previously proposed experts algorithms. It represents a shift from the paradigms of regret minimization and myopic optimization to consideration of the long-term effect of a player’s actions on the opponent’s actions or the environment. The importance of this shift is demonstrated by the fact that this algorithm is capable of inducing cooperation in the repeated Prisoner’s Dilemma game, whereas previous experts algorithms converge to the suboptimal non-cooperative play. 1
4 0.53421897 81 nips-2003-Geometric Analysis of Constrained Curves
Author: Anuj Srivastava, Washington Mio, Xiuwen Liu, Eric Klassen
Abstract: We present a geometric approach to statistical shape analysis of closed curves in images. The basic idea is to specify a space of closed curves satisfying given constraints, and exploit the differential geometry of this space to solve optimization and inference problems. We demonstrate this approach by: (i) defining and computing statistics of observed shapes, (ii) defining and learning a parametric probability model on shape space, and (iii) designing a binary hypothesis test on this space. 1
5 0.38441017 169 nips-2003-Sample Propagation
Author: Mark A. Paskin
Abstract: Rao–Blackwellization is an approximation technique for probabilistic inference that flexibly combines exact inference with sampling. It is useful in models where conditioning on some of the variables leaves a simpler inference problem that can be solved tractably. This paper presents Sample Propagation, an efficient implementation of Rao–Blackwellized approximate inference for a large class of models. Sample Propagation tightly integrates sampling with message passing in a junction tree, and is named for its simple, appealing structure: it walks the clusters of a junction tree, sampling some of the current cluster’s variables and then passing a message to one of its neighbors. We discuss the application of Sample Propagation to conditional Gaussian inference problems such as switching linear dynamical systems. 1
6 0.37733215 18 nips-2003-A Summating, Exponentially-Decaying CMOS Synapse for Spiking Neural Systems
7 0.36599305 101 nips-2003-Large Margin Classifiers: Convex Loss, Low Noise, and Convergence Rates
8 0.3617774 185 nips-2003-The Doubly Balanced Network of Spiking Neurons: A Memory Model with High Capacity
9 0.35799026 146 nips-2003-Online Learning of Non-stationary Sequences
10 0.35541984 93 nips-2003-Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons
11 0.35445255 177 nips-2003-Simplicial Mixtures of Markov Chains: Distributed Modelling of Dynamic User Profiles
12 0.35438597 124 nips-2003-Max-Margin Markov Networks
13 0.35394648 113 nips-2003-Learning with Local and Global Consistency
14 0.35200617 20 nips-2003-All learning is Local: Multi-agent Learning in Global Reward Games
15 0.35190868 3 nips-2003-AUC Optimization vs. Error Rate Minimization
16 0.35128817 50 nips-2003-Denoising and Untangling Graphs Using Degree Priors
17 0.35075489 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images
18 0.35007039 125 nips-2003-Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Model
19 0.34972367 126 nips-2003-Measure Based Regularization
20 0.34915772 78 nips-2003-Gaussian Processes in Reinforcement Learning