nips nips2004 nips2004-58 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Felix Schürmann, Karlheinz Meier, Johannes Schemmel
Abstract: Computation without stable states is a computing paradigm different from Turing’s and has been demonstrated for various types of simulated neural networks. This publication transfers this paradigm to a hardware-implemented neural network. Results of a software implementation are reproduced, showing that the performance peaks when the network exhibits dynamics at the edge of chaos. The liquid computing approach seems well suited for operating analog computing devices such as the VLSI neural network used here.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract: Computation without stable states is a computing paradigm different from Turing’s and has been demonstrated for various types of simulated neural networks. [sent-7, score-0.134]
2 This publication transfers this paradigm to a hardware-implemented neural network. [sent-8, score-0.322]
3 Results of a software implementation are reproduced, showing that the performance peaks when the network exhibits dynamics at the edge of chaos. [sent-9, score-0.406]
4 The liquid computing approach seems well suited for operating analog computing devices such as the VLSI neural network used here. [sent-10, score-0.879]
5 Topology seems to be a key element, especially since algorithms do not necessarily perform better when the size of the network is simply increased. [sent-12, score-0.137]
6 Hardware-implemented neural networks, on the other hand, offer scalability in complexity and gains in speed, but naturally do not compete with software solutions in flexibility. [sent-13, score-0.117]
7 Except for specific applications or highly iterative algorithms [1], the capabilities of hardware neural networks as generic problem solvers are difficult to assess in a straightforward fashion. [sent-14, score-0.306]
8 They both used randomly connected neural networks as non-linear dynamical systems with the inputs causing perturbations to the transient response of the network. [sent-17, score-0.167]
9 In order to customize such a system for a problem, a readout is trained, which requires only the network response of a single time step as input. [sent-18, score-0.25]
10 In this terminology, the non-linear dynamical system is called a liquid, and together with the readouts it constitutes a liquid state machine (LSM). [sent-23, score-1.305]
11 Adopting the liquid computing strategy for mixed-mode hardware-implemented networks using very large scale integration (VLSI) offers two promising prospects: First, such a system profits immediately from scaling, i.e. [sent-25, score-0.914]
12 more neurons increase the complexity of the network dynamics without increasing the training complexity. [sent-27, score-0.322]
13 Second, it is expected that the liquid approach can cope with an imperfect substrate such as is commonly present in analog hardware. [sent-28, score-0.781]
14 Configuring highly integrated analog hardware as a liquid therefore seems a promising approach to analog computing. [sent-29, score-1.121]
15 This conclusion is not unexpected since the liquid computing paradigm was inspired by a complex and ‘analog’ system in the first place: the biological nervous system [2]. [sent-30, score-0.665]
16 This publication presents initial results on configuring a general-purpose mixed-mode neural network ASIC (application-specific integrated circuit) as a liquid. [sent-31, score-0.291]
17 The custom-made ANN ASIC [5] used here provides 256 McCulloch-Pitts neurons with about 33k analog synapses and allows a wide variety of topologies, especially highly recurrent ones. [sent-32, score-0.414]
18 In order to operate the ASIC as a liquid, a generation procedure proposed by Bertschinger et al. [sent-33, score-0.688]
19 [6] is adopted, which generates the network topology and weights. [sent-34, score-0.166]
20 These authors also showed that the performance of such input-driven networks—that is, how suitable the network dynamics are to act as a liquid—depends on whether the response of the liquid to the inputs is ordered or chaotic. [sent-35, score-0.888]
21 More precisely, according to a specific measure, the performance peaks when the liquid is in between order and chaos. [sent-36, score-0.672]
22 Here, physically different liquids are evaluated; the experimental results obtained are in accordance with the previously published software simulations [6]. [sent-39, score-0.337]
23 Its design goals were to implement small synapses while remaining quickly reconfigurable and capable of operating at high speed; it therefore combines analog computation with digital signaling. [sent-42, score-0.204]
24 It comprises 33k analog synapses with capacitive weight storage (nominal 10-bit plus sign) and 256 McCulloch-Pitts neurons. [sent-43, score-0.204]
25 A full weight refresh can be performed within 200µs and in the current setup one network cycle, i.e. [sent-46, score-0.193]
26 The analog operation of the chip is limited to the synaptic weights ωij and the input stage of the output neurons. [sent-51, score-0.266]
27 Since both input ($I_j$) and output ($O_i$) signals of the network are binary, the weight multiplication reduces to a summation, and the activation function $g(x)$ of the output neurons equals the Heaviside function $\Theta(x)$: $O_i = g\big(\sum_j \omega_{ij} I_j\big)$, $g(x) = \Theta(x)$, $I, O \in \{0, 1\}$. (1) [sent-52, score-0.402]
28 The neural network chip is organized in four identical blocks; each represents a fully connected one-layer perceptron with McCulloch-Pitts neurons. [sent-53, score-0.196]
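To make Eq. (1) concrete, here is a minimal Python sketch of the McCulloch-Pitts update; the weight matrix and input vector are illustrative random placeholders (on the chip the weights live in the analog synapse array), not values read from the ASIC.

```python
# Minimal sketch of Eq. (1): O_i = Theta(sum_j w_ij * I_j) with binary I, O.
# Weights and inputs below are illustrative assumptions, not chip data.
import numpy as np

def mcculloch_pitts_step(W, I):
    # Heaviside threshold at zero (Theta(0) taken as 0 here)
    return (W @ I > 0).astype(np.uint8)

rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, size=(64, 128))  # hypothetical 64 neurons x 128 inputs
I = rng.integers(0, 2, size=128)          # binary input vector
O = mcculloch_pitts_step(W, I)
print(O[:8])
```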
29 One block basically consists of 128×64 analog synapses that connect each of the 128 inputs to each of the 64 output neurons. [sent-54, score-0.379]
30 The network operates in a discrete-time update scheme, i.e. [sent-55, score-0.168]
31 By feeding its outputs back to the inputs, a block can be configured as a recurrent network (cf. [sent-60, score-0.32]
32 Additionally, outputs of the other network blocks can be fed back to the block’s input. [sent-64, score-0.177]
33 In this case the output of a neuron at time t depends not only on the current input but also on the previous network cycle and the activity of the other blocks. [sent-65, score-0.33]
34 Denoting the time needed for one network cycle by ∆t, the output function of one network block becomes: $O_i^a(t + \Delta t) = \Theta\big(\sum_j \omega_{ij}\, I_j^a(t) + \sum_{x \in \{a,b,c,d\}} \sum_k \omega_{ik}^x\, O_k^x(t)\big)$. (2) [sent-66, score-0.442]
35 Here, ∆t denotes the time needed for one network cycle. [sent-67, score-0.168]
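A hedged sketch of Eq. (2) follows: each of the four blocks a–d thresholds its external input plus feedback from the previous outputs of all blocks. The block sizes and random weights are assumptions for illustration, not the chip's configuration.

```python
# Sketch of the block update in Eq. (2). Shapes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_out, n_ext = 64, 64              # assumed outputs / external inputs per block
blocks = "abcd"
W_in = {b: rng.normal(size=(n_out, n_ext)) for b in blocks}       # w_ij
W_fb = {b: {x: rng.normal(size=(n_out, n_out)) for x in blocks}   # w_ik^x
        for b in blocks}

def network_cycle(O_prev, I_ext):
    """One network cycle of duration dt: apply Eq. (2) to every block."""
    O_next = {}
    for b in blocks:
        act = W_in[b] @ I_ext[b]              # external input term
        for x in blocks:                      # feedback from blocks a, b, c, d
            act += W_fb[b][x] @ O_prev[x]
        O_next[b] = (act > 0).astype(np.uint8)
    return O_next

O = {b: np.zeros(n_out, dtype=np.uint8) for b in blocks}
I = {b: rng.integers(0, 2, size=n_ext) for b in blocks}
O = network_cycle(O, I)
```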
36 The first term in the argument of the activation function is the external input to the network block, $I_j^a$. [sent-68, score-0.27]
37 The second term models the feedback path from the output of block a, $O_k^a$, as well as from the other three blocks b, c, d, back to its input. [sent-69, score-0.142]
38 For two network blocks this is illustrated in Fig. [sent-70, score-0.177]
39 In principle, this model allows an arbitrarily large network that operates synchronously at a common network frequency f_net = 1/∆t, since the external input can be the output of other identical network chips. [sent-72, score-0.542]
40 Figure 2: Intra- and inter-block routing schematic of the ANN ASIC used. [sent-73, score-0.112]
41 Since one output neuron has 128 inputs, it cannot be connected to all 256 neurons simultaneously. [sent-75, score-0.212]
42 Furthermore, it can only make arbitrary connections to neurons of the same block, whereas the inter-block feedback fixes certain output neurons to certain inputs. [sent-76, score-0.3]
43 The ANN ASIC is connected to a standard PC via a custom-made PCI-based interface card that uses programmable logic to control the neural network chip. [sent-79, score-0.166]
44 The response of the neural network ASIC at a certain time step is called the liquid state x(t). [sent-83, score-0.867]
45 The classifier result, and thus the response of the liquid state machine at time t, is given by: $v(t) = \Theta\big(\sum_i w_i x_i(t)\big)$. [sent-86, score-0.701]
46 Using the same liquid state x(t), multiple readouts can be trained to predict different target functions simultaneously (cf. [sent-88, score-0.714]
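The text only states that a linear classifier is calculated; as one plausible reading, the sketch below trains the readout weights w by least squares on recorded liquid states with ±1-coded targets and then thresholds, as in the formula above. The training method and all names are assumptions.

```python
# Hedged sketch of a linear readout v(t) = Theta(sum_i w_i x_i(t)).
# Least-squares training on +/-1 targets is an assumption; the paper does
# not specify how the classifier is calculated.
import numpy as np

def train_readout(X, y):
    """X: (T, N) recorded liquid states; y: (T,) binary targets."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])       # constant bias input
    w, *_ = np.linalg.lstsq(Xb, 2.0 * y - 1.0, rcond=None)
    return w

def readout(X, w):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (Xb @ w > 0).astype(np.uint8)                # threshold as in v(t)

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(4000, 256)).astype(float)  # placeholder states
y = rng.integers(0, 2, size=4000)
v = readout(X, train_readout(X, y))
```

Since training touches only w, several readouts can share the same recorded states X, one weight vector per target function.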
47 Figure 3: The liquid state machine setup (schematic: the input stream u(t) drives the hardware neural net acting as the liquid; software linear classifiers with a bias input read out the liquid state x(t) to produce the predictions v(t)). [sent-92, score-1.69]
48 The setup follows that of Bertschinger et al. [6], with the central difference that the liquid here is implemented in hardware. [sent-94, score-0.624]
49 The specific hardware design imposes McCulloch-Pitts-type neurons that are either on or off (O ∈ {0, 1}) rather than symmetric (O ∈ {−1, 1}). [sent-95, score-0.334]
50 Each neuron of the network then receives k inputs from other neurons, one constant bias of weight u, and two mutually exclusive input neurons with weights 0. [sent-102, score-0.412]
51 The latter modification was introduced to account for the fact that the inner neurons assume only the values {0, 1}. [sent-105, score-0.125]
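A sketch of this generation procedure, following the description above and Bertschinger et al. [6]: each of the N neurons receives k recurrent connections with zero-mean Gaussian weights of variance σ², a constant bias u, and the two mutually exclusive input lines. The ±0.5 input weights are an assumption, since the exact value is truncated in the text.

```python
# Hedged sketch of the liquid generation procedure (after Bertschinger et al.).
# The +/-0.5 input weights are assumed; the exact value is cut off in the text.
import numpy as np

def generate_liquid(N, k, sigma2, u, rng):
    W = np.zeros((N, N))
    for i in range(N):
        pre = rng.choice(N, size=k, replace=False)   # k random presynaptic cells
        W[i, pre] = rng.normal(0.0, np.sqrt(sigma2), size=k)
    return W, np.full(N, u)                          # recurrent weights, bias

def liquid_step(x, u_t, W, bias, w_in=0.5):
    # Two mutually exclusive input neurons: one fires for u(t)=1, one for
    # u(t)=0, contributing +w_in or -w_in respectively (assumed values).
    drive = w_in if u_t == 1 else -w_in
    return (W @ x + bias + drive > 0).astype(np.uint8)

rng = np.random.default_rng(3)
W, bias = generate_liquid(N=256, k=6, sigma2=0.35, u=0.0, rng=rng)
x = np.zeros(256, dtype=np.uint8)
for u_t in rng.integers(0, 2, size=100):   # drive with a random bit string
    x = liquid_step(x, int(u_t), W, bias)
```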
52 The performance of the liquid state machine is evaluated according to the mutual information between the target values y(t) and the predicted values v(t). [sent-107, score-0.728]
53 In order to assess the capability to account for inputs of preceding time steps, it is sensible to define another measure, the memory capacity MC (cf. [sent-110, score-0.236]
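A minimal sketch of the two measures follows, assuming the usual plug-in estimate of mutual information for binary sequences and reading MC as the sum of MI over all time shifts τ (one natural interpretation of the definition in [6]).

```python
# Sketch of the performance measures: MI between binary target and prediction,
# and MC as the sum of MI over delays tau (assumed reading of the definition).
import numpy as np

def mutual_information(y, v):
    """Plug-in MI estimate in bits for two equal-length binary sequences."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((y == a) & (v == b))
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (np.mean(y == a) * np.mean(v == b)))
    return mi

def memory_capacity(mi_per_delay):
    return float(np.sum(mi_per_delay))   # MC = sum over time shifts tau

y = np.random.default_rng(4).integers(0, 2, size=8000)
print(mutual_information(y, y))          # perfect prediction of a fair bit: ~1 bit
```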
54 It is therefore a good test of the non-trivial contribution of the liquid to have a liquid state machine with a linear readout solve a linearly non-separable problem. [sent-117, score-1.316]
55 The benchmark problem used in the following is 3-bit parity in time, i.e. [sent-118, score-0.106]
56 The linear classifiers are trained to predict the linearly non-separable yτ (t) simply from the liquid state x(t). [sent-121, score-0.643]
57 For this to work, the liquid state at time t must contain information about the previous time steps. [sent-122, score-0.705]
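For concreteness, one natural reading of the 3-bit delayed parity target is y_τ(t) = PARITY(u(t−τ), u(t−τ−1), u(t−τ−2)); the sketch below generates such targets from a random input string. The exact indexing convention is an assumption.

```python
# Sketch of the 3-bit delayed parity target: XOR of three consecutive input
# bits, shifted by tau. The indexing convention is an assumption.
import numpy as np

def parity3_target(u, tau):
    T = len(u)
    y = np.zeros(T, dtype=np.uint8)
    for t in range(tau + 2, T):
        y[t] = u[t - tau] ^ u[t - tau - 1] ^ u[t - tau - 2]
    return y

u = np.random.default_rng(5).integers(0, 2, size=4000).astype(np.uint8)
targets = {tau: parity3_target(u, tau) for tau in range(10)}  # one per delay
```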
58 Bertschinger et al. showed theoretically and in simulation that, depending on the parameters k, σ², and u, an input-driven neural network exhibits ordered or chaotic dynamics. [sent-124, score-0.239]
59 This causes input information either to disappear quickly (the simplest case would be an identity map from input to output) or to stay in the network forever. [sent-125, score-0.217]
60 Although the transition of the network dynamics from order to chaos happens gradually with the variation of the generation parameters (k, σ², u), the performance as a liquid shows a distinct peak when the network exhibits dynamics in between order and chaos. [sent-126, score-1.286]
61 These critical dynamics suggest the term “computation at the edge of chaos”, which originates with Langton [8]. [sent-127, score-0.13]
62 The following results are obtained using the ANN ASIC as the liquid on a random binary input string (u(t)) of length 4000 for which the linear classifier is calculated. [sent-128, score-0.631]
63 The mutual information and memory capacity shown are the measured performance on a random binary test string of length 8000. [sent-129, score-0.156]
64 Fig. 4 shows the mutual information MI versus the time shift τ for the 3-bit delayed parity problem, with the network parameters fixed to N = 256, k = 6, σ² = 0. [sent-135, score-0.404]
65 Plotted are the mean values of 50 liquids evaluated in hardware. [sent-137, score-0.329]
66 Figure 4: The mutual information between prediction and target for the 3-bit delayed parity problem versus the delay for k=6, σ²=0. [sent-143, score-0.267]
67 The given limits are the standard deviation in the mean. [sent-148, score-0.241]
68 From the error limits it can be inferred that the parity problem is solved in all runs for τ = 0, and in some for τ = 1. [sent-149, score-0.138]
69 For larger time shifts the performance decreases until the liquid retains no information about the input. [sent-150, score-0.662]
70 Figure 5: Shown are two parameter sweeps (axes: number of inputs k versus σ²; values: MC [bit]; ordered and chaotic regions marked) for the 3-bit delayed parity in dependence of the generation parameters k and σ² with fixed N = 256, u = 0. [sent-161, score-0.274]
72 Left: 50 liquids per parameter set evaluated in hardware. [sent-162, score-0.334]
73 Right: 35 liquids per parameter set using a software simulation of the ASIC, but with symmetric neurons. [sent-163, score-0.398]
74 In order to assess how different generation parameters influence the quality of the liquid, parameter sweeps are performed. [sent-166, score-0.157]
75 For each parameter set several liquids are generated and readouts trained. [sent-167, score-0.346]
76 Fig. 5 shows a parameter sweep of k and σ² for the memory capacity MC, for N = 256 and u = 0. [sent-170, score-0.163]
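The sweep itself is straightforward to sketch: for every (k, σ²) cell, several liquids are generated, readouts trained, and the resulting memory capacities averaged. The helper passed in below is a hypothetical stand-in for the generation/training pipeline sketched above.

```python
# Sketch of the parameter sweep; run_one is a hypothetical stand-in for
# "generate a liquid, train the readouts, return its memory capacity MC".
import numpy as np

def sweep(ks, sigma2s, n_liquids, run_one):
    mc = np.zeros((len(ks), len(sigma2s)))
    for i, k in enumerate(ks):
        for j, s2 in enumerate(sigma2s):
            mc[i, j] = np.mean([run_one(k, s2) for _ in range(n_liquids)])
    return mc      # peak expected along the order/chaos transition band

demo = sweep([2, 4, 6], [0.2, 0.5], n_liquids=3,
             run_one=lambda k, s2: k * s2)   # dummy stand-in for illustration
```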
77 On the left side, results obtained with the hardware are shown. [sent-171, score-0.209]
78 The largest three mean MCs are marked in order with a white circle, white asterisk, and white plus. [sent-173, score-0.128]
79 It can be seen that the memory capacity peaks distinctly along a hyperbola-like band. [sent-174, score-0.138]
80 The area below the transition band corresponds to ordered dynamics; above it, the network exhibits chaotic behavior. [sent-175, score-0.265]
81 The shape of the transition indicates a constant network activity for critical dynamics. [sent-176, score-0.219]
82 The standard deviation in the mean of 50 liquids per parameter set is below 2%, i.e. [sent-177, score-0.303]
83 In the hardware setup, only a limited parameter range of σ² and u is accessible, due to synapses restricted to the range [−1, 1] with limited resolution. [sent-181, score-0.383]
84 The accessible region (σ² ∈ [0, 1] and u ∈ [0, 1]) nonetheless exhibits a transition similar to that described by Bertschinger et al. [sent-182, score-0.17]
85 The smaller overall performance in memory capacity compared to their liquids, on the other hand, is simply due to the asymmetric neurons and not to other hardware restrictions, as can be seen from the right side of Fig. [sent-184, score-0.436]
86 There the same parameter sweep is shown, but this time the liquid is implemented in a software simulation of the ASIC with symmetric neurons. [sent-186, score-0.811]
87 While all connectivity constraints of the hardware are incorporated in the simulation, the only other change in the setup is the adjustment of the input signal to u ± 1. [sent-187, score-0.305]
88 Figure 6: Mean mutual information of 50 simultaneously trained linear classifiers on randomly drawn 5-bit Boolean functions using the hardware liquid (10 liquids per parameter set evaluated). [sent-215, score-1.157]
89 Finally, the hardware-based liquid state machine was tested on 50 randomly drawn Boolean functions of the last 5 inputs (5 bit in time) (cf. [sent-217, score-0.794]
90 In this setup, 50 linear classifiers read out the same liquid simultaneously to calculate their independent predictions at each time step. [sent-220, score-0.622]
91 From the right plot it can be seen that the standard deviation for the single measurement along the critical line is fairly small; this shows that critical dynamics yield a generic liquid independent of the readout. [sent-222, score-0.735]
92 In the present publication these ideas are transferred back to an analog computing device: a mixed-mode VLSI neural network. [sent-225, score-0.314]
93 The results of Bertschinger et al. were reproduced, showing that the readout with linear classifiers is especially successful when the network exhibits critical dynamics. [sent-227, score-0.378]
94 Beyond the point of solving rather academic problems like 3-bit parity, the liquid computing approach may be well suited to make use of the massive resources found in analog computing devices, especially since the liquid is generic, i.e. [sent-228, score-1.414]
95 The experiments with the general-purpose ANN ASIC make it possible to explore the necessary connectivity and accuracy of future hardware implementations. [sent-231, score-0.209]
96 Even though it has not been shown in this publication, initial experiments suggest that the liquids used are robust against faults introduced after the readout has been trained. [sent-233, score-0.327]
97 Such a liquid state machine can make use of the hardware implementation and will be able to operate in real time on continuous data streams. [sent-235, score-0.852]
98 A mixed-mode analog neural network using current-steering synapses. [sent-266, score-0.312]
99 Real-time computation at the edge of chaos in recurrent neural networks. [sent-271, score-0.224]
100 Information dynamics and emergent computation in recurrent circuits of spiking neurons. [sent-276, score-0.118]
wordName wordTfidf (topN-words)
[('liquid', 0.591), ('asic', 0.378), ('liquids', 0.245), ('hardware', 0.209), ('bertschinger', 0.177), ('ann', 0.171), ('analog', 0.146), ('network', 0.137), ('mc', 0.133), ('neurons', 0.125), ('chaos', 0.109), ('parity', 0.106), ('readout', 0.082), ('bit', 0.078), ('vlsi', 0.074), ('inputs', 0.073), ('di', 0.072), ('readouts', 0.071), ('erent', 0.069), ('meier', 0.067), ('schemmel', 0.067), ('maass', 0.062), ('dynamics', 0.06), ('recurrent', 0.058), ('synapses', 0.058), ('setup', 0.056), ('software', 0.055), ('exhibits', 0.055), ('mutual', 0.054), ('natschl', 0.053), ('gured', 0.053), ('memory', 0.053), ('state', 0.052), ('generation', 0.052), ('block', 0.052), ('publication', 0.051), ('output', 0.05), ('capacity', 0.049), ('et', 0.045), ('hohmann', 0.045), ('inbetween', 0.045), ('mcs', 0.045), ('mixedmode', 0.045), ('shading', 0.045), ('sweeps', 0.045), ('substrate', 0.044), ('computing', 0.043), ('mi', 0.042), ('critical', 0.042), ('external', 0.041), ('delayed', 0.041), ('blocks', 0.04), ('transition', 0.04), ('input', 0.04), ('simulation', 0.04), ('guring', 0.039), ('heidelberg', 0.039), ('turing', 0.039), ('networks', 0.038), ('boolean', 0.038), ('ers', 0.038), ('neuron', 0.037), ('published', 0.037), ('classi', 0.036), ('peaks', 0.036), ('reproduced', 0.035), ('lsm', 0.035), ('shift', 0.035), ('cycle', 0.035), ('marked', 0.035), ('er', 0.033), ('asterisk', 0.033), ('recon', 0.033), ('oi', 0.033), ('ger', 0.033), ('chaotic', 0.033), ('classifier', 0.033), ('implemented', 0.033), ('ij', 0.032), ('limits', 0.032), ('time', 0.031), ('evaluated', 0.031), ('white', 0.031), ('sweep', 0.031), ('routing', 0.031), ('stable', 0.031), ('paradigm', 0.031), ('parameter', 0.03), ('assess', 0.03), ('accessible', 0.03), ('chip', 0.03), ('integrated', 0.029), ('topology', 0.029), ('neural', 0.029), ('edge', 0.028), ('per', 0.028), ('devices', 0.027), ('terminology', 0.027), ('especially', 0.027), ('response', 0.027)]
simIndex simValue paperId paperTitle
same-paper 1 1.0 58 nips-2004-Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid
Author: Felix Schürmann, Karlheinz Meier, Johannes Schemmel
Abstract: Computation without stable states is a computing paradigm different from Turing’s and has been demonstrated for various types of simulated neural networks. This publication transfers this paradigm to a hardware-implemented neural network. Results of a software implementation are reproduced, showing that the performance peaks when the network exhibits dynamics at the edge of chaos. The liquid computing approach seems well suited for operating analog computing devices such as the VLSI neural network used here.
2 0.23918366 26 nips-2004-At the Edge of Chaos: Real-time Computations and Self-Organized Criticality in Recurrent Neural Networks
Author: Nils Bertschinger, Thomas Natschläger, Robert A. Legenstein
Abstract: In this paper we analyze the relationship between the computational capabilities of randomly connected networks of threshold gates in the time-series domain and their dynamical properties. In particular we propose a complexity measure which we find to assume its highest values near the edge of chaos, i.e. the transition from ordered to chaotic dynamics. Furthermore we show that the proposed complexity measure predicts the computational capabilities very well: only near the edge of chaos are such networks able to perform complex computations on time series. Additionally a simple synaptic scaling rule for self-organized criticality is presented and analyzed.
3 0.12240497 157 nips-2004-Saliency-Driven Image Acuity Modulation on a Reconfigurable Array of Spiking Silicon Neurons
Author: R. J. Vogelstein, Udayan Mallik, Eugenio Culurciello, Gert Cauwenberghs, Ralph Etienne-Cummings
Abstract: We have constructed a system that uses an array of 9,600 spiking silicon neurons, a fast microcontroller, and digital memory, to implement a reconfigurable network of integrate-and-fire neurons. The system is designed for rapid prototyping of spiking neural networks that require high-throughput communication with external address-event hardware. Arbitrary network topologies can be implemented by selectively routing address-events to specific internal or external targets according to a memory-based projective field mapping. The utility and versatility of the system is demonstrated by configuring it as a three-stage network that accepts input from an address-event imager, detects salient regions of the image, and performs spatial acuity modulation around a high-resolution fovea that is centered on the location of highest salience. 1
4 0.11443482 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons
Author: Rajesh P. Rao
Abstract: There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons is used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4. 1
5 0.10110552 135 nips-2004-On-Chip Compensation of Device-Mismatch Effects in Analog VLSI Neural Networks
Author: Miguel Figueroa, Seth Bridges, Chris Diorio
Abstract: Device mismatch in VLSI degrades the accuracy of analog arithmetic circuits and lowers the learning performance of large-scale neural networks implemented in this technology. We show compact, low-power on-chip calibration techniques that compensate for device mismatch. Our techniques enable large-scale analog VLSI neural networks with learning performance on the order of 10 bits. We demonstrate our techniques on a 64-synapse linear perceptron learning with the Least-Mean-Squares (LMS) algorithm, and fabricated in a 0.35µm CMOS process. 1
6 0.090537541 176 nips-2004-Sub-Microwatt Analog VLSI Support Vector Machine for Pattern Classification and Sequence Estimation
7 0.086732842 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits
8 0.077739514 198 nips-2004-Unsupervised Variational Bayesian Learning of Nonlinear Models
9 0.076657265 28 nips-2004-Bayesian inference in spiking neurons
10 0.076530285 140 nips-2004-Optimal Information Decoding from Neuronal Populations with Specific Stimulus Selectivity
11 0.075690143 151 nips-2004-Rate- and Phase-coded Autoassociative Memory
12 0.065152936 194 nips-2004-Theory of localized synfire chain: characteristic propagation speed of stable spike pattern
13 0.062863901 180 nips-2004-Synchronization of neural networks by mutual learning and its application to cryptography
14 0.056322381 183 nips-2004-Temporal-Difference Networks
15 0.055099782 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons
16 0.053907827 153 nips-2004-Reducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity
17 0.050131213 12 nips-2004-A Temporal Kernel-Based Model for Tracking Hand Movements from Neural Activities
18 0.049425092 124 nips-2004-Multiple Alignment of Continuous Time Series
19 0.045483667 184 nips-2004-The Cerebellum Chip: an Analog VLSI Implementation of a Cerebellar Model of Classical Conditioning
20 0.04484899 112 nips-2004-Maximising Sensitivity in a Spiking Network
topicId topicWeight
[(0, -0.152), (1, -0.152), (2, -0.012), (3, 0.017), (4, 0.004), (5, 0.056), (6, 0.058), (7, 0.025), (8, -0.012), (9, -0.128), (10, 0.002), (11, -0.176), (12, -0.098), (13, 0.005), (14, -0.007), (15, -0.085), (16, -0.217), (17, -0.158), (18, -0.017), (19, -0.257), (20, -0.044), (21, -0.026), (22, 0.028), (23, 0.008), (24, -0.065), (25, -0.032), (26, -0.018), (27, 0.084), (28, -0.035), (29, -0.095), (30, -0.022), (31, -0.022), (32, 0.087), (33, -0.015), (34, -0.048), (35, 0.019), (36, -0.013), (37, -0.027), (38, 0.045), (39, -0.103), (40, -0.019), (41, -0.045), (42, -0.006), (43, 0.022), (44, 0.033), (45, -0.105), (46, -0.205), (47, 0.032), (48, -0.083), (49, -0.103)]
simIndex simValue paperId paperTitle
same-paper 1 0.96274799 58 nips-2004-Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid
Author: Felix Schürmann, Karlheinz Meier, Johannes Schemmel
Abstract: Computation without stable states is a computing paradigm different from Turing’s and has been demonstrated for various types of simulated neural networks. This publication transfers this paradigm to a hardware-implemented neural network. Results of a software implementation are reproduced, showing that the performance peaks when the network exhibits dynamics at the edge of chaos. The liquid computing approach seems well suited for operating analog computing devices such as the VLSI neural network used here.
2 0.76638561 26 nips-2004-At the Edge of Chaos: Real-time Computations and Self-Organized Criticality in Recurrent Neural Networks
Author: Nils Bertschinger, Thomas Natschläger, Robert A. Legenstein
Abstract: In this paper we analyze the relationship between the computational capabilities of randomly connected networks of threshold gates in the time-series domain and their dynamical properties. In particular we propose a complexity measure which we find to assume its highest values near the edge of chaos, i.e. the transition from ordered to chaotic dynamics. Furthermore we show that the proposed complexity measure predicts the computational capabilities very well: only near the edge of chaos are such networks able to perform complex computations on time series. Additionally a simple synaptic scaling rule for self-organized criticality is presented and analyzed.
3 0.64627975 180 nips-2004-Synchronization of neural networks by mutual learning and its application to cryptography
Author: Einat Klein, Rachel Mislovaty, Ido Kanter, Andreas Ruttor, Wolfgang Kinzel
Abstract: Two neural networks that are trained on their mutual output synchronize to an identical time-dependent weight vector. This novel phenomenon can be used to create a secure cryptographic secret key over a public channel. Several models for this cryptographic system have been suggested, and have been tested for their security under different sophisticated attack strategies. The most promising models are networks that involve chaos synchronization. The synchronization process of mutual learning is described analytically using statistical physics methods.
4 0.63308072 157 nips-2004-Saliency-Driven Image Acuity Modulation on a Reconfigurable Array of Spiking Silicon Neurons
Author: R. J. Vogelstein, Udayan Mallik, Eugenio Culurciello, Gert Cauwenberghs, Ralph Etienne-Cummings
Abstract: We have constructed a system that uses an array of 9,600 spiking silicon neurons, a fast microcontroller, and digital memory, to implement a reconfigurable network of integrate-and-fire neurons. The system is designed for rapid prototyping of spiking neural networks that require high-throughput communication with external address-event hardware. Arbitrary network topologies can be implemented by selectively routing address-events to specific internal or external targets according to a memory-based projective field mapping. The utility and versatility of the system is demonstrated by configuring it as a three-stage network that accepts input from an address-event imager, detects salient regions of the image, and performs spatial acuity modulation around a high-resolution fovea that is centered on the location of highest salience. 1
5 0.56875926 128 nips-2004-Neural Network Computation by In Vitro Transcriptional Circuits
Author: Jongmin Kim, John Hopfield, Erik Winfree
Abstract: The structural similarity of neural networks and genetic regulatory networks to digital circuits, and hence to each other, was noted from the very beginning of their study [1, 2]. In this work, we propose a simple biochemical system whose architecture mimics that of genetic regulation and whose components allow for in vitro implementation of arbitrary circuits. We use only two enzymes in addition to DNA and RNA molecules: RNA polymerase (RNAP) and ribonuclease (RNase). We develop a rate equation for in vitro transcriptional networks, and derive a correspondence with general neural network rate equations [3]. As proof-of-principle demonstrations, an associative memory task and a feedforward network computation are shown by simulation. A difference between the neural network and biochemical models is also highlighted: global coupling of rate equations through enzyme saturation can lead to global feedback regulation, thus allowing a simple network without explicit mutual inhibition to perform the winner-take-all computation. Thus, the full complexity of the cell is not necessary for biochemical computation: a wide range of functional behaviors can be achieved with a small set of biochemical components. 1
6 0.53063869 135 nips-2004-On-Chip Compensation of Device-Mismatch Effects in Analog VLSI Neural Networks
7 0.49407288 109 nips-2004-Mass Meta-analysis in Talairach Space
8 0.48287332 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits
9 0.46885896 176 nips-2004-Sub-Microwatt Analog VLSI Support Vector Machine for Pattern Classification and Sequence Estimation
10 0.42203254 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons
11 0.39707842 57 nips-2004-Economic Properties of Social Networks
13 0.34470949 193 nips-2004-Theories of Access Consciousness
14 0.33686882 14 nips-2004-A Topographic Support Vector Machine: Classification Using Local Label Configurations
15 0.32961941 28 nips-2004-Bayesian inference in spiking neurons
16 0.32305446 198 nips-2004-Unsupervised Variational Bayesian Learning of Nonlinear Models
17 0.31809986 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons
18 0.303386 194 nips-2004-Theory of localized synfire chain: characteristic propagation speed of stable spike pattern
19 0.29550895 95 nips-2004-Large-Scale Prediction of Disulphide Bond Connectivity
20 0.29413536 151 nips-2004-Rate- and Phase-coded Autoassociative Memory
topicId topicWeight
[(1, 0.014), (13, 0.12), (15, 0.132), (20, 0.016), (24, 0.039), (26, 0.045), (31, 0.024), (33, 0.135), (35, 0.054), (39, 0.017), (50, 0.043), (52, 0.018), (76, 0.012), (95, 0.24)]
simIndex simValue paperId paperTitle
1 0.91024411 171 nips-2004-Solitaire: Man Versus Machine
Author: Xiang Yan, Persi Diaconis, Paat Rusmevichientong, Benjamin V. Roy
Abstract: In this paper, we use the rollout method for policy improvement to analyze a version of Klondike solitaire. This version, sometimes called thoughtful solitaire, has all cards revealed to the player, but then follows the usual Klondike rules. A strategy that we establish, using iterated rollouts, wins about twice as many games on average as an expert human player does. 1
same-paper 2 0.83101386 58 nips-2004-Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid
Author: Felix Schürmann, Karlheinz Meier, Johannes Schemmel
Abstract: Computation without stable states is a computing paradigm different from Turing’s and has been demonstrated for various types of simulated neural networks. This publication transfers this paradigm to a hardware-implemented neural network. Results of a software implementation are reproduced, showing that the performance peaks when the network exhibits dynamics at the edge of chaos. The liquid computing approach seems well suited for operating analog computing devices such as the VLSI neural network used here.
3 0.75857866 61 nips-2004-Efficient Out-of-Sample Extension of Dominant-Set Clusters
Author: Massimiliano Pavan, Marcello Pelillo
Abstract: Dominant sets are a new graph-theoretic concept that has proven to be relevant in pairwise data clustering problems, such as image segmentation. They generalize the notion of a maximal clique to edge-weighted graphs and have intriguing, non-trivial connections to continuous quadratic optimization and spectral-based grouping. We address the problem of grouping out-of-sample examples after the clustering process has taken place. This may serve either to drastically reduce the computational burden associated with the processing of very large data sets, or to efficiently deal with dynamic situations whereby data sets need to be updated continually. We show that the very notion of a dominant set offers a simple and efficient way of doing this. Numerical experiments on various grouping problems show the effectiveness of the approach.
4 0.68639922 26 nips-2004-At the Edge of Chaos: Real-time Computations and Self-Organized Criticality in Recurrent Neural Networks
Author: Nils Bertschinger, Thomas Natschläger, Robert A. Legenstein
Abstract: In this paper we analyze the relationship between the computational capabilities of randomly connected networks of threshold gates in the time-series domain and their dynamical properties. In particular we propose a complexity measure which we find to assume its highest values near the edge of chaos, i.e. the transition from ordered to chaotic dynamics. Furthermore we show that the proposed complexity measure predicts the computational capabilities very well: only near the edge of chaos are such networks able to perform complex computations on time series. Additionally a simple synaptic scaling rule for self-organized criticality is presented and analyzed.
5 0.68059522 131 nips-2004-Non-Local Manifold Tangent Learning
Author: Yoshua Bengio, Martin Monperrus
Abstract: We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation suggests to explore non-local manifold learning algorithms which attempt to discover shared structure in the tangent planes at different positions. A criterion for such an algorithm is proposed and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where a local non-parametric method fails. 1
6 0.67872471 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons
7 0.6770767 28 nips-2004-Bayesian inference in spiking neurons
8 0.67515296 189 nips-2004-The Power of Selective Memory: Self-Bounded Learning of Prediction Suffix Trees
9 0.67228764 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits
10 0.66848159 151 nips-2004-Rate- and Phase-coded Autoassociative Memory
11 0.66727489 178 nips-2004-Support Vector Classification with Input Data Uncertainty
12 0.66613489 60 nips-2004-Efficient Kernel Machines Using the Improved Fast Gauss Transform
13 0.66398662 4 nips-2004-A Generalized Bradley-Terry Model: From Group Competition to Individual Skill
14 0.66357887 168 nips-2004-Semigroup Kernels on Finite Sets
15 0.66236913 142 nips-2004-Outlier Detection with One-class Kernel Fisher Discriminants
16 0.66122466 25 nips-2004-Assignment of Multiplicative Mixtures in Natural Images
17 0.66082436 22 nips-2004-An Investigation of Practical Approximate Nearest Neighbor Algorithms
18 0.66046035 163 nips-2004-Semi-parametric Exponential Family PCA
19 0.65998137 102 nips-2004-Learning first-order Markov models for control
20 0.65920359 70 nips-2004-Following Curved Regularized Optimization Solution Paths