nips nips2006 nips2006-18 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Chiara Bartolozzi, Giacomo Indiveri
Abstract: Selective attention is the strategy used by biological sensory systems to solve the problem of limited parallel processing capacity: salient subregions of the input stimuli are serially processed, while non–salient regions are suppressed. We present a mixed-mode analog/digital Very Large Scale Integration (VLSI) implementation of a building block for a multi–chip neuromorphic hardware model of selective attention. We describe the chip’s architecture and its behavior when it is part of a multi–chip system with a spiking retina as input, and show how it can be used to implement flexible models of bottom-up attention in real time.
Reference: text
sentIndex sentText sentNum sentScore
1 A selective attention multi–chip system with dynamic synapses and spiking neurons Chiara Bartolozzi Institute of Neuroinformatics UNI-ETH Zurich Winterthurerstr. [sent-1, score-0.512]
2 Abstract Selective attention is the strategy used by biological sensory systems to solve the problem of limited parallel processing capacity: salient subregions of the input stimuli are serially processed, while non–salient regions are suppressed. [sent-9, score-0.356]
3 We present a mixed-mode analog/digital Very Large Scale Integration (VLSI) implementation of a building block for a multi–chip neuromorphic hardware model of selective attention. [sent-10, score-0.312]
4 We describe the chip’s architecture and its behavior when it is part of a multi–chip system with a spiking retina as input, and show how it can be used to implement flexible models of bottom-up attention in real time. [sent-11, score-0.436]
5 Biological systems solve this issue by sequentially allocating computational resources to small regions of the input stimuli, which are then analyzed in parallel, a strategy known as Selective Attention that takes advantage of both sequential and parallel processing. [sent-15, score-0.158]
6 The psychophysical study of selective attention has distinguished two complementary strategies for selecting salient regions of the input stimuli: one depends on the physical (bottom-up) characteristics of the input, the other on its semantic (top-down) and task-related properties. [sent-17, score-0.413]
7 Much of the applied research has focused on modeling the bottom-up aspect of selective attention. [sent-18, score-0.186]
8 As a consequence, several software [1, 2, 3] and hardware models [4, 5, 6, 7] based on the concept of saliency map, winner-takes-all (WTA) competition, and inhibition of return (IOR) [8] have been proposed. [sent-19, score-0.098]
9 We focus on the hardware (HW) implementation of such selective attention systems on compact, low-power, analog VLSI chips. [sent-20, score-0.278]
10 Specifically, we present the Selective Attention Chip (SAC), a new chip with 32 × 32 cells that can sequentially select the most active regions of the input stimuli. [sent-22, score-0.469]
11 It is a transceiver chip employing a spike-based representation, the Address-Event Representation (AER) [10]. [sent-23, score-0.295]
12 Its input signals topographically encode the local conspicuousness of the input over the entire visual scene. [sent-24, score-0.246]
13 Its output signals can be used in real time to drive motors of active vision systems or to select subregions of images captured from wide field-of-view cameras. [sent-25, score-0.189]
14 The AER communication protocol and the 2D structure of the network make it particularly suitable for processing signals from silicon spiking retinas. [sent-26, score-0.283]
15 The basic circuits of the chip we present have already been proposed in [11]. [sent-27, score-0.375]
16 The chip we present here comprises improvements in the basic circuits, and additional dynamic components that will be described in Section 3. [sent-28, score-0.295]
17 The chip’s improvements over previous implementations arise from the design of new AER interfacing circuits, both for the input decoding stage and for the output arbitration, and of a new synaptic circuit: the Diff-Pair Integrator (DPI) described in [12]. [sent-29, score-0.286]
18 Besides having easily and independently tunable gain and time constant, it produces mean currents proportional to the input frequencies, which are more suitable as input to the current-mode WTA cell employed as the core computational unit of the SAC. [sent-31, score-0.307]
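As a behavioral illustration of this point (not the analog circuit itself, which is described in [12]), the following Python sketch models the DPI as a first-order linear integrator whose steady-state output current is proportional to the input spike frequency; all parameter names and values here are hypothetical.

```python
import numpy as np

def dpi_current(spike_times, t_end, dt=1e-4, tau=0.05, gain=1e-9):
    """Behavioral model of a Diff-Pair Integrator (DPI) synapse.

    Each input spike increments the output current by gain/tau; between
    spikes the current decays with time constant tau.  For a regular
    spike train of frequency f the mean output current settles near
    gain * f, i.e. proportional to the input frequency, with gain and
    tau independently tunable.
    """
    t = np.arange(0.0, t_end, dt)
    i_syn = np.zeros_like(t)
    spike_idx = set((np.asarray(spike_times) / dt).astype(int))
    for k in range(1, len(t)):
        i_syn[k] = i_syn[k - 1] * (1.0 - dt / tau)  # exponential decay
        if k in spike_idx:
            i_syn[k] += gain / tau                  # fixed jump per spike
    return t, i_syn

# a regular 100 Hz train yields a mean current of about gain * 100
t, i = dpi_current(np.arange(0.0, 1.0, 0.01), t_end=1.0)
```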
19 In the next sections we describe the chip’s architecture and present experimental results from a two-chip system comprising the SAC and a silicon “transient” retina that produces spikes in response to temporal changes in scene contrast. [sent-33, score-0.902]
20 The chip comprises an array of 32 × 32 pixels, each measuring 90 × 45 µm²; the whole chip, with its AER digital interface and pads, occupies an area of 10 mm². [sent-36, score-0.617]
21 The basic functionality of the SAC is to scan the input in order of decreasing activity. [sent-37, score-0.18]
22 The chip input and output signals are asynchronous digital pulses (spikes) that use the Address Event Representation (AER) [16]. [sent-38, score-0.553]
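For readers unfamiliar with AER, an address-event stream can be thought of as a time-ordered list of pixel addresses; the sketch below (hypothetical types, not the SCX bus protocol itself) shows this representation and how a mean-firing-rate map like those of Fig. 3 can be recovered from it.

```python
from collections import namedtuple
import numpy as np

# One address-event: on the physical AER bus only the (x, y) address is
# transmitted, and timing is implicit in when the event occurs; a logger
# such as the PCI-AER board attaches the timestamp t when recording.
AddressEvent = namedtuple("AddressEvent", ["x", "y", "t"])

def mean_rate_map(events, shape=(32, 32), t_window=1.0):
    """Accumulate a list of AddressEvents into a firing-rate image (Hz)."""
    rates = np.zeros(shape)
    for ev in events:
        rates[ev.y, ev.x] += 1.0
    return rates / t_window
```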
23 The input spikes to each pixel are translated into a current (see Iex of Fig. [sent-39, score-0.304]
24 1) by a circuit that models the dynamics of a biological excitatory synapse [12]. [sent-40, score-0.338]
25 A current-mode hysteretic Winner–Take–All (WTA) competitive cell compares the input currents of all pixels; the winning cell sources a constant current to the corresponding output leaky Integrate-and-Fire (I&F) neuron [15]. [sent-41, score-0.884]
26 The spiking neuron in the array thus signals which pixel is winning the competition for saliency, i.e., the pixel that receives the highest input frequency. [sent-42, score-0.904]
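A minimal software caricature of this selection stage follows (the chip implements it with analog current-mode circuits; the hysteresis value here is purely illustrative).

```python
import numpy as np

def hysteretic_wta(i_net, prev_winner=None, i_hyst=0.1e-9):
    """Return the index of the winning cell.

    i_net       : net input currents, one per pixel (Iex - Iior)
    prev_winner : previously selected cell, if any
    i_hyst      : extra current credited to the current winner; this
                  self-excitation makes the selection hysteretic, so a
                  winner is not lost to small input fluctuations
    """
    i_eff = np.array(i_net, dtype=float)
    if prev_winner is not None:
        i_eff[prev_winner] += i_hyst
    return int(np.argmax(i_eff))
```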
27 The output spikes of the I&F neuron are also sent to a feedback inhibitory synapse (see Fig. [sent-43, score-0.671]
28 1), which subtracts a current (Iior) from the input node of the WTA cell; the net input current to the winning pixel is then decreased, and a new pixel can eventually be selected. [sent-44, score-0.545]
29 This self-inhibition mechanism is known as Inhibition of Return (IOR) and allows the network to sequentially select the most salient regions of input images, reproducing the attentional scan path. [sent-45, score-0.205]
30 Figure 1: Block diagram of a basic cell of the 32 × 32 selective attention architecture (blocks: AER input → excitatory synapse (Iex) → hysteretic WTA cell, coupled to its nearest neighbors → I&F neuron → AER output; a feedback inhibitory synapse (Iior) implements the IOR). [sent-46, score-0.339]
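Combining the WTA selection with the feedback inhibition yields the scan-path behavior; the discrete-time sketch below (with illustrative constants, not fitted to the chip) shows the IOR current building up at the winning pixel until another pixel takes over.

```python
import numpy as np

def attentional_scan(i_ex, n_steps=500, dt=1e-3, tau_ior=0.2,
                     w_ior=2e-8, i_hyst=0.05e-9):
    """Simulate WTA selection with Inhibition of Return.

    i_ex : constant excitatory currents, one per pixel.
    Returns the sequence of winning pixel indices (the scan path).
    """
    i_ex = np.asarray(i_ex, dtype=float)
    i_ior = np.zeros_like(i_ex)
    winner, path = None, []
    for _ in range(n_steps):
        i_eff = i_ex - i_ior                 # net input current per pixel
        if winner is not None:
            i_eff[winner] += i_hyst          # hysteretic self-excitation
        winner = int(np.argmax(i_eff))
        path.append(winner)
        i_ior -= i_ior * dt / tau_ior        # inhibition decays slowly
        i_ior[winner] += w_ior * dt          # winner accumulates IOR
    return path

# the most salient input wins first; IOR then lets weaker inputs win
print(attentional_scan(np.array([3e-9, 2e-9, 1e-9]))[::25])
```

With a slow tau_ior the inhibition outlasts the deselection for a long time, which is the behavior attributed further below to the tunable inhibitory synapse.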
31 This basic functionality of the SAC is augmented by the introduction of dynamic properties such as Short-Term Depression (STD) in the input synapses and spike-frequency adaptation in the output neurons. [sent-47, score-0.466]
32 STD is a property, observed in physiological recordings [17], of synapses that decrease their efficacy when they receive consecutive stimulations. [sent-48, score-0.122]
33 In our synapse the effect of a single spike on the integrated current depends on a voltage, the synaptic weight. [sent-49, score-0.392]
34 The initial weight of the synapse is set by an external voltage reference; then, as the synapse receives spikes, the effective synaptic weight decreases. [sent-50, score-0.671]
35 STD thus acts as a local gain control that increases sensitivity to changes in the input and makes the synapse insensitive to constant stimulation. [sent-51, score-0.298]
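A behavioral sketch of this mechanism, assuming a simple multiplicative depression and exponential recovery (the factor and time constant below are hypothetical; on the chip the effective weight is a voltage whose dynamics are set by bias parameters):

```python
def std_weight_trace(spikes, w0=1.0, depress=0.85, tau_rec=0.5, dt=1e-3):
    """Short-term depression of an effective synaptic weight.

    spikes : sequence of 0/1 values, one per time bin of width dt.
    Each presynaptic spike multiplies the weight by `depress`; between
    spikes the weight recovers toward its resting value w0 with time
    constant tau_rec.  Under constant-frequency stimulation the weight
    settles at a depressed steady state, so the synapse responds
    strongly to input changes but weakly to sustained input.
    """
    w, trace = w0, []
    for s in spikes:
        if s:
            w *= depress                # depression on each spike
        w += (w0 - w) * dt / tau_rec    # slow recovery toward w0
        trace.append(w)
    return trace
```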
36 Spike-frequency adaptation is another property of neurons: when stimulated with a constant input, they decrease their output firing rate over time. [sent-52, score-0.39]
37 The spiking frequency of the silicon I&F neuron is a monotonic function of its input current; the neuron’s adaptation mechanism decreases its firing rate over time [15]. [sent-53, score-0.654]
38 We exploit this property to decrease the output bandwidth of the SAC. [sent-54, score-0.097]
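The effect can be captured by a standard adaptive integrate-and-fire model; the sketch below (illustrative constants, not extracted from the circuit of [15]) shows the adaptation current i_a growing with every output spike and lowering the firing rate under constant input.

```python
def adaptive_if(i_in, t_end=0.5, dt=1e-4, tau_m=0.01, tau_a=0.5,
                c_m=1e-12, v_th=1.0, d_adapt=5e-12):
    """Leaky I&F neuron with spike-frequency adaptation.

    i_in : constant input current (A); returns the output spike times.
    Each output spike adds d_adapt to the adaptation current i_a, which
    decays with tau_a and is subtracted from the input, so inter-spike
    intervals lengthen over time and the output rate drops.
    """
    v, i_a, spikes = 0.0, 0.0, []
    for k in range(int(t_end / dt)):
        v += ((i_in - i_a) / c_m - v / tau_m) * dt  # membrane integration
        i_a -= i_a * dt / tau_a                     # adaptation decay
        if v >= v_th:
            spikes.append(k * dt)
            v = 0.0                                 # reset after the spike
            i_a += d_adapt                          # adaptation increment
    return spikes

# e.g. adaptive_if(5e-10): spikes come fast at first, then slow down
```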
39 The SAC has been designed with tunable parameters that allow one to modify the strength of the synaptic contributions, the dynamics of synaptic short-term depression and of neuronal adaptation, as well as the spatial extent of the competition and the dynamics of the IOR. [sent-55, score-0.511]
40 All these parameters enrich the dynamics of the network, which can be exploited to model complex selective attention scan paths. [sent-56, score-0.346]
41 3 Multi–Chip Selective Attention System The SAC uses the asynchronous AER SCX (Silicon Cortex) protocol, which allows multiple AER chips to communicate using spikes, just like the cortex, and can be used in multi–chip systems with multiple senders and multiple receivers [18, 19]. [sent-57, score-0.143]
42 The communication protocol used and the SAC’s two-dimensional architecture make it particularly suitable for processing visual inputs coming from artificial spiking retinas. [sent-59, score-0.133]
43 We built a two-chip system, connecting a silicon retina [21] to the SAC input. [sent-60, score-0.725]
44 The retina is an asynchronous AER imager that responds to contrast variations; it has 64 × 64 pixels that respond to on and off transients. [sent-61, score-0.457]
45 A dedicated PCI-AER board [18] connects the retina to the SAC via a look-up table that maps the activity of the 64 × 64 pixels of the retina to the 32 × 32 pixels of the SAC. [sent-62, score-0.959]
46 In this setup the mapping is linear, grouping 4 retina pixels onto 1 SAC pixel; more complex mappings, such as foveal mapping, will be tested in the future. [sent-63, score-0.412]
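The linear 4 : 1 mapping amounts to a simple address transformation; the sketch below shows the kind of look-up table the board would be loaded with (the board's actual table format is not specified here).

```python
def retina_to_sac_lut(retina_size=64, sac_size=32):
    """Linear 4:1 mapping: every 2 x 2 block of retina pixels is routed
    to one SAC pixel.  Returns a dict from retina (x, y) addresses to
    SAC (x, y) addresses, usable as a look-up table."""
    scale = retina_size // sac_size   # 2 per dimension, 4 pixels in total
    return {(x, y): (x // scale, y // scale)
            for x in range(retina_size) for y in range(retina_size)}

lut = retina_to_sac_lut()
assert lut[(0, 1)] == (0, 0) and lut[(63, 63)] == (31, 31)
```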
47 The board also allows the activity of both chips to be monitored on a Linux desktop. [sent-64, score-0.198]
48 4 Experimental Data We performed preliminary experiments with the two-chip setup described in the previous section. [sent-65, score-0.098]
49 We stimulated the retina with two black squares flashing at 6 Hz on a white background, displayed on an LCD screen using the Matlab Psychtoolbox [22], as shown in Fig. [sent-66, score-0.465]
50 In Fig. 3 we show the response of the two chips to this stimulus: each dot represents the mean firing rate of the corresponding pixel in each chip. [sent-69, score-0.348]
51 The pixels of the retina that view the black squares are active and show a high mean firing rate; some other pixels in the array show spontaneous activity. [sent-70, score-0.687]
52 To show the mapping between the retina and the SAC we performed a control experiment: we turned off the competition and the IOR, and also disabled STD and the neuronal adaptation; in this way, all pixels that receive input activity are active. [sent-71, score-0.743]
53 All the pixels that receive input from the retinal pixels stimulated by the black squares are active; moreover, the spontaneous activity (noise) of the other pixels is “cleaned up”, thanks to the filtering property of the input synapses. [sent-72, score-1.018]
54 In all the figures the top and bottom boxes show raster plots of the retina and of the SAC, respectively: each dot corresponds to a spike emitted by a pixel (or neuron) (y axis) at a certain time (x axis). [sent-74, score-0.541]
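For concreteness, raster plots of this kind can be produced directly from logged address-events; the following is a generic matplotlib sketch, not the authors' analysis code.

```python
import matplotlib.pyplot as plt

def plot_raster(ax, events, title):
    """events: list of (address, time) pairs; one dot per spike."""
    addrs = [a for a, _ in events]
    times = [t for _, t in events]
    ax.plot(times, addrs, ".", markersize=2)
    ax.set_xlabel("time (s)")
    ax.set_ylabel("pixel / neuron address")
    ax.set_title(title)

fig, (ax_top, ax_bottom) = plt.subplots(2, 1, sharex=True)
plot_raster(ax_top, [], "retina")   # fill with logged retina events
plot_raster(ax_bottom, [], "SAC")   # fill with logged SAC events
```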
55 The middle trace shows the voltage Vnet, which is proportional to the total input current (Iex − Iior of Fig. [sent-75, score-0.212]
56 1) to the WTA cell that receives input from one of the most active pixels of the retina. [sent-76, score-0.362]
57 The retina sends many spikes every time the black squares appear on or disappear from the screen; with these settings, the WTA input node receives only the excitatory current from the input synapse, as shown by the increase of the voltage Vnet in correspondence with the retinal spikes. [sent-79, score-0.88]
58 Since in our control experiment there is no competition, all the stimulated pixels are active, as shown in the SAC raster plot. [sent-80, score-0.287]
59 Figure 2: Multi-chip system. The retina (top-right box) is stimulated with an LCD screen; its output is sent to the SAC (bottom-right box) via the PCI-AER board (bottom-left box). [sent-82, score-0.493]
60 The activity of the two chips is monitored via the PCI-AER board on a Linux desktop. [sent-83, score-0.229]
61 Figure 3 (two panels, (a) and (b); axes: Neuron X vs. Neuron Y): Response of the two chips to an image. [sent-84, score-0.098]
62 (a) The silicon retina is stimulated, via an LCD screen, with two flashing (6 Hz) black squares on a white background (see Fig. [sent-85, score-0.516]
63 We show the mean firing rate of each pixel of the retina. [sent-87, score-0.16]
64 The pixels corresponding to the black squares in the image have a higher firing rate than the others; some pixels of the retina fire spontaneously at lower frequencies. [sent-88, score-0.615]
65 (b) The activity of the retina is the input of the SAC: the 64 × 64 pixels of the retina are mapped with a ratio 4 : 1 to the 32 × 32 pixels of the SAC. [sent-89, score-0.977]
66 We show the mean firing rate of the SAC pixels in response to the retinal stimulation, when the Winner-Takes-All competition is disabled. [sent-90, score-0.296]
67 In this case the SAC output reflects the input, with some suppression of the noisy pixels due to the filtering properties of the input synaptic circuits. [sent-91, score-0.373]
68 In Fig. 4(b) we show the effect of introducing spike-frequency adaptation: in this case the output frequency of each neuron decreases, reducing the output bandwidth and the AER-bus traffic. [sent-92, score-0.545]
69 In Fig. 5 we show the effect of competition and Inhibition of Return. [sent-94, score-0.145]
70 When we turn on the WTA competition, only one pixel is selected at any time; therefore only one neuron fires, as shown in the raster plot of Fig. [sent-95, score-0.568]
71 5(a); on the node Vnet we can observe that when the corresponding neuron is winning there is an extra input current: Vnet does not reset to its resting value even when the synapse is not active. [sent-96, score-0.827]
72 This positive current implements a form of self-excitation that gives hysteretic properties to the network dynamics and stabilizes the WTA network. [sent-97, score-0.144]
73 When we also turn on the IOR (Fig. 5(b)), as soon as the neuron starts to fire, [sent-99, score-0.244]
74 Figure 4: Time response to black squares flashing on a white background; we use the same stimulation and setup described in Fig. 3. [sent-137, score-0.12]
75 The top figure shows the raster plot of the retina output; one dot corresponds to a spike produced by one pixel at a specific time. [sent-138, score-0.541]
76 The retina produces events every time the squares appear on or disappear from the screen. [sent-139, score-0.369]
77 The middle plot shows the voltage Vnet of the input node of the WTA cell corresponding to the synapse that receives input from one of the most active pixels of the retina. [sent-140, score-0.867]
78 In the middle plot Vnet reflects the effect of the sole input current from the synapse, which integrates the spikes received from the corresponding pixel of the retina. [sent-143, score-0.427]
79 In this case, since the lateral inhibitory connections are switched off, there is no competition and all the output I&F neurons corresponding to the stimulated input synapses are active. [sent-144, score-0.732]
80 (b) We add spike-frequency adaptation to the previous experiment settings; the output firing rate of the neurons is decreased, reducing the bandwidth of the SAC output. [sent-145, score-0.303]
81 the inhibitory current decreases the total input current to the corresponding WTA cell: the voltage Vnet reflects this mechanism, as it is reset to its resting value even before the input from the retina ceases. [sent-146, score-0.939]
82 The WTA cell is then deselected and the output neuron stops firing, while another neuron is selected and starts firing, as shown in the SAC raster plot. [sent-147, score-0.702]
83 The inhibitory synapse’s time constant is tunable; when it is slow, the effect of inhibition lasts for hundreds of milliseconds after the I&F neuron has stopped firing. In this way that pixel is prevented from being reselected immediately, and scan paths covering many different pixels can be obtained. [sent-148, score-0.525]
84 5 Conclusions In this paper we presented a neuromorphic device implementing a Winner–Take–All network comprising dynamic synapses and adaptive neurons. [sent-149, score-0.242]
85 This device is designed to be a part of a multi–chip system for Selective Attention: via an AER communication system it can be interfaced to silicon spiking retinas and to software implementations of associative memories. [sent-150, score-0.341]
86 We built a multi–chip system with the SAC and a silicon transient retina. [sent-151, score-0.202]
87 Preliminary experiments confirmed the basic functionality of the SAC and the robustness of the system; the analysis will be extended with a systematic study of STD, IOR, adaptation, and lateral excitatory coupling among nearby cells. [sent-153, score-0.155]
88 Figure 5: Response of the system with WTA competition and Inhibition of Return. [sent-207, score-0.184]
89 (a) We turn on the WTA competition and the hysteretic self-excitation. [sent-210, score-0.106]
90 Now there is only one active neuron in the whole chip; when it no longer wins the competition for saliency, the hysteretic current fades away and another neuron begins spiking. [sent-212, score-0.862]
91 (b) We turn on the inhibitory synapse that implements the self-inhibition (IOR). [sent-213, score-0.287]
92 We can observe the effect of the inhibitory current subtracted from the input node (see text) on Vnet, which, with the same input as before, settles back to its resting level much faster. [sent-214, score-0.41]
93 The raster plot shows how this mechanism allows the current winner to be deselected and other inputs to be selected. [sent-215, score-0.173]
94 Expectation-based selective attention for the visual monitoring and control of a robot vehicle. [sent-219, score-0.278]
95 Object-based selection within an analog VLSI visual attention system. [sent-239, score-0.131]
96 Modeling selective attention using a neuromorphic analog VLSI device. [sent-243, score-0.443]
97 Shifts in selective visual-attention – towards the underlying neural circuitry. [sent-247, score-0.186]
98 A multi-chip pulse-based neuromorphic infrastructure and its application to a model of orientation selectivity. [sent-264, score-0.126]
99 Selective attention implemented with dynamic synapses and integrate-andfire neurons. [sent-270, score-0.181]
100 A neuromorphic VLSI device for implementing 2-D selective attention systems. [sent-337, score-0.431]
wordName wordTfidf (topN-words)
[('sac', 0.383), ('retina', 0.295), ('chip', 0.295), ('neuron', 0.244), ('vnet', 0.229), ('wta', 0.223), ('synapse', 0.198), ('selective', 0.186), ('aer', 0.156), ('competition', 0.145), ('silicon', 0.135), ('neuromorphic', 0.126), ('correspondent', 0.123), ('ior', 0.123), ('pixels', 0.117), ('ring', 0.109), ('hysteretic', 0.106), ('input', 0.1), ('chips', 0.098), ('pixel', 0.093), ('attention', 0.092), ('inhibitory', 0.089), ('synapses', 0.089), ('synaptic', 0.089), ('depression', 0.088), ('raster', 0.086), ('stimulated', 0.084), ('circuits', 0.08), ('multi', 0.078), ('std', 0.078), ('voltage', 0.074), ('vlsi', 0.074), ('spikes', 0.073), ('spiking', 0.071), ('adaptation', 0.069), ('output', 0.067), ('spike', 0.067), ('cell', 0.061), ('inhibition', 0.058), ('activity', 0.053), ('ashing', 0.053), ('bartolozzi', 0.053), ('iex', 0.053), ('iior', 0.053), ('resting', 0.049), ('winner', 0.049), ('excitatory', 0.047), ('board', 0.047), ('signals', 0.046), ('horiuchi', 0.046), ('lcd', 0.046), ('tunable', 0.046), ('winning', 0.046), ('squares', 0.046), ('active', 0.045), ('screen', 0.045), ('asynchronous', 0.045), ('scan', 0.041), ('sec', 0.041), ('black', 0.04), ('saliency', 0.04), ('system', 0.039), ('receives', 0.039), ('functionality', 0.039), ('analog', 0.039), ('current', 0.038), ('stimuli', 0.036), ('frequency', 0.035), ('salient', 0.035), ('neurons', 0.035), ('chiara', 0.035), ('dpi', 0.035), ('fire', 0.035), ('pciaer', 0.035), ('dedicated', 0.035), ('response', 0.034), ('node', 0.034), ('douglas', 0.033), ('switzerland', 0.033), ('circuit', 0.033), ('reset', 0.033), ('biological', 0.033), ('receive', 0.033), ('protocol', 0.031), ('architecture', 0.031), ('dante', 0.031), ('giacomo', 0.031), ('lichtsteiner', 0.031), ('monitored', 0.031), ('subregions', 0.031), ('bandwidth', 0.03), ('implementations', 0.03), ('parallel', 0.029), ('sequentially', 0.029), ('disappear', 0.028), ('indiveri', 0.028), ('transient', 0.028), ('array', 0.027), ('device', 0.027), ('dynamics', 0.027)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000005 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons
Author: Chiara Bartolozzi, Giacomo Indiveri
Abstract: Selective attention is the strategy used by biological sensory systems to solve the problem of limited parallel processing capacity: salient subregions of the input stimuli are serially processed, while non–salient regions are suppressed. We present a mixed-mode analog/digital Very Large Scale Integration (VLSI) implementation of a building block for a multi–chip neuromorphic hardware model of selective attention. We describe the chip’s architecture and its behavior when it is part of a multi–chip system with a spiking retina as input, and show how it can be used to implement flexible models of bottom-up attention in real time.
2 0.39555666 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons
Author: Elisabetta Chicca, Giacomo Indiveri, Rodney J. Douglas
Abstract: Cooperative competitive networks are believed to play a central role in cortical processing and have been shown to exhibit a wide set of useful computational properties. We propose a VLSI implementation of a spiking cooperative competitive network and show how it can perform context dependent computation both in the mean firing rate domain and in spike timing correlation space. In the mean rate case the network amplifies the activity of neurons belonging to the selected stimulus and suppresses the activity of neurons receiving weaker stimuli. In the event correlation case, the recurrent network amplifies with a higher gain the correlation between neurons which receive highly correlated inputs while leaving the mean firing rate unaltered. We describe the network architecture and present experimental data demonstrating its context dependent computation capabilities. 1
3 0.24321906 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network
Author: Yingxue Wang, Rodney J. Douglas, Shih-Chii Liu
Abstract: The neurons of the neocortex communicate by asynchronous events called action potentials (or ’spikes’). However, for simplicity of simulation, most models of processing by cortical neural networks have assumed that the activations of their neurons can be approximated by event rates rather than taking account of individual spikes. The obstacle to exploring the more detailed spike processing of these networks has been reduced considerably in recent years by the development of hybrid analog-digital Very-Large Scale Integrated (hVLSI) neural networks composed of spiking neurons that are able to operate in real-time. In this paper we describe such a hVLSI neural network that performs an interesting task of selective attentional processing that was previously described for a simulated ’pointer-map’ rate model by Hahnloser and colleagues. We found that most of the computational features of their rate model can be reproduced in the spiking implementation; but, that spike-based processing requires a modification of the original network architecture in order to memorize a previously attended target. 1
4 0.22360083 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons
Author: Thomas Voegtlin
Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule. 1
5 0.19221246 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
Author: Stefan Klampfl, Wolfgang Maass, Robert A. Legenstein
Abstract: The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles. 1
6 0.15722302 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall
7 0.13990939 154 nips-2006-Optimal Change-Detection and Spiking Neurons
8 0.13859759 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron
9 0.097021967 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency
10 0.094185844 16 nips-2006-A Theory of Retinal Population Coding
11 0.07301221 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation
12 0.063672163 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments
13 0.060724303 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields
14 0.058106992 86 nips-2006-Graph-Based Visual Saliency
15 0.055809166 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity
16 0.053009793 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex
17 0.050175328 141 nips-2006-Multiple timescales and uncertainty in motor adaptation
18 0.049624957 17 nips-2006-A recipe for optimizing a time-histogram
19 0.04646096 76 nips-2006-Emergence of conjunctive visual features by quadratic independent component analysis
20 0.046029802 129 nips-2006-Map-Reduce for Machine Learning on Multicore
topicId topicWeight
[(0, -0.147), (1, -0.435), (2, 0.047), (3, 0.083), (4, 0.044), (5, 0.037), (6, -0.045), (7, 0.073), (8, -0.023), (9, -0.015), (10, 0.032), (11, -0.007), (12, -0.034), (13, -0.035), (14, -0.002), (15, 0.027), (16, 0.045), (17, 0.029), (18, -0.017), (19, 0.089), (20, 0.021), (21, -0.049), (22, -0.098), (23, 0.05), (24, -0.072), (25, -0.033), (26, -0.089), (27, 0.083), (28, 0.001), (29, -0.021), (30, 0.041), (31, -0.018), (32, -0.058), (33, 0.031), (34, 0.061), (35, 0.001), (36, -0.011), (37, 0.007), (38, 0.058), (39, -0.026), (40, 0.038), (41, -0.028), (42, 0.001), (43, -0.053), (44, -0.032), (45, 0.061), (46, 0.032), (47, -0.022), (48, -0.033), (49, -0.004)]
simIndex simValue paperId paperTitle
same-paper 1 0.97875762 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons
Author: Chiara Bartolozzi, Giacomo Indiveri
Abstract: Selective attention is the strategy used by biological sensory systems to solve the problem of limited parallel processing capacity: salient subregions of the input stimuli are serially processed, while non–salient regions are suppressed. We present a mixed-mode analog/digital Very Large Scale Integration (VLSI) implementation of a building block for a multi–chip neuromorphic hardware model of selective attention. We describe the chip’s architecture and its behavior when it is part of a multi–chip system with a spiking retina as input, and show how it can be used to implement flexible models of bottom-up attention in real time.
2 0.91659325 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network
Author: Yingxue Wang, Rodney J. Douglas, Shih-Chii Liu
Abstract: The neurons of the neocortex communicate by asynchronous events called action potentials (or ’spikes’). However, for simplicity of simulation, most models of processing by cortical neural networks have assumed that the activations of their neurons can be approximated by event rates rather than taking account of individual spikes. The obstacle to exploring the more detailed spike processing of these networks has been reduced considerably in recent years by the development of hybrid analog-digital Very-Large Scale Integrated (hVLSI) neural networks composed of spiking neurons that are able to operate in real-time. In this paper we describe such a hVLSI neural network that performs an interesting task of selective attentional processing that was previously described for a simulated ’pointer-map’ rate model by Hahnloser and colleagues. We found that most of the computational features of their rate model can be reproduced in the spiking implementation; but, that spike-based processing requires a modification of the original network architecture in order to memorize a previously attended target. 1
Author: Elisabetta Chicca, Giacomo Indiveri, Rodney J. Douglas
Abstract: Cooperative competitive networks are believed to play a central role in cortical processing and have been shown to exhibit a wide set of useful computational properties. We propose a VLSI implementation of a spiking cooperative competitive network and show how it can perform context dependent computation both in the mean firing rate domain and in spike timing correlation space. In the mean rate case the network amplifies the activity of neurons belonging to the selected stimulus and suppresses the activity of neurons receiving weaker stimuli. In the event correlation case, the recurrent network amplifies with a higher gain the correlation between neurons which receive highly correlated inputs while leaving the mean firing rate unaltered. We describe the network architecture and present experimental data demonstrating its context dependent computation capabilities. 1
4 0.79205287 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons
Author: Thomas Voegtlin
Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule. 1
5 0.77263325 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall
Author: Máté Lengyel, Peter Dayan
Abstract: Many neural areas, notably, the hippocampus, show structured, dynamical, population behavior such as coordinated oscillations. It has long been observed that such oscillations provide a substrate for representing analog information in the firing phases of neurons relative to the underlying population rhythm. However, it has become increasingly clear that it is essential for neural populations to represent uncertainty about the information they capture, and the substantial recent work on neural codes for uncertainty has omitted any analysis of oscillatory systems. Here, we observe that, since neurons in an oscillatory network need not only fire once in each cycle (or even at all), uncertainty about the analog quantities each neuron represents by its firing phase might naturally be reported through the degree of concentration of the spikes that it fires. We apply this theory to memory in a model of oscillatory associative recall in hippocampal area CA3. Although it is not well treated in the literature, representing and manipulating uncertainty is fundamental to competent memory; our theory enables us to view CA3 as an effective uncertainty-aware, retrieval system. 1
6 0.68391842 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
7 0.46802872 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation
8 0.45137724 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron
9 0.41962406 154 nips-2006-Optimal Change-Detection and Spiking Neurons
10 0.41082543 16 nips-2006-A Theory of Retinal Population Coding
11 0.39916366 107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis
12 0.35845333 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields
13 0.32763267 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex
14 0.3135812 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency
15 0.29080629 86 nips-2006-Graph-Based Visual Saliency
16 0.27550173 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity
17 0.2408231 13 nips-2006-A Scalable Machine Learning Approach to Go
18 0.2303032 96 nips-2006-In-Network PCA and Anomaly Detection
19 0.22125657 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model
20 0.22040962 29 nips-2006-An Information Theoretic Framework for Eukaryotic Gradient Sensing
topicId topicWeight
[(1, 0.056), (3, 0.012), (7, 0.053), (9, 0.049), (20, 0.014), (22, 0.03), (44, 0.048), (47, 0.016), (57, 0.066), (65, 0.017), (69, 0.02), (71, 0.104), (84, 0.343), (93, 0.065)]
simIndex simValue paperId paperTitle
same-paper 1 0.84429741 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons
Author: Chiara Bartolozzi, Giacomo Indiveri
Abstract: Selective attention is the strategy used by biological sensory systems to solve the problem of limited parallel processing capacity: salient subregions of the input stimuli are serially processed, while non–salient regions are suppressed. We present a mixed-mode analog/digital Very Large Scale Integration (VLSI) implementation of a building block for a multi–chip neuromorphic hardware model of selective attention. We describe the chip’s architecture and its behavior when it is part of a multi–chip system with a spiking retina as input, and show how it can be used to implement flexible models of bottom-up attention in real time.
2 0.59404218 2 nips-2006-A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation
Author: Yee W. Teh, David Newman, Max Welling
Abstract: Latent Dirichlet allocation (LDA) is a Bayesian network that has recently gained much popularity in applications ranging from document modeling to computer vision. Due to the large scale nature of these applications, current inference procedures like variational Bayes and Gibbs sampling have been found lacking. In this paper we propose the collapsed variational Bayesian inference algorithm for LDA, and show that it is computationally efficient, easy to implement and significantly more accurate than standard variational Bayesian inference for LDA.
3 0.50198686 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons
Author: Elisabetta Chicca, Giacomo Indiveri, Rodney J. Douglas
Abstract: Cooperative competitive networks are believed to play a central role in cortical processing and have been shown to exhibit a wide set of useful computational properties. We propose a VLSI implementation of a spiking cooperative competitive network and show how it can perform context dependent computation both in the mean firing rate domain and in spike timing correlation space. In the mean rate case the network amplifies the activity of neurons belonging to the selected stimulus and suppresses the activity of neurons receiving weaker stimuli. In the event correlation case, the recurrent network amplifies with a higher gain the correlation between neurons which receive highly correlated inputs while leaving the mean firing rate unaltered. We describe the network architecture and present experimental data demonstrating its context dependent computation capabilities. 1
4 0.48584449 117 nips-2006-Learning on Graph with Laplacian Regularization
Author: Rie K. Ando, Tong Zhang
Abstract: We consider a general form of transductive learning on graphs with Laplacian regularization, and derive margin-based generalization bounds using appropriate geometric properties of the graph. We use this analysis to obtain a better understanding of the role of normalization of the graph Laplacian matrix as well as the effect of dimension reduction. The results suggest a limitation of the standard degree-based normalization. We propose a remedy from our analysis and demonstrate empirically that the remedy leads to improved classification performance.
5 0.46158195 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network
Author: Yingxue Wang, Rodney J. Douglas, Shih-Chii Liu
Abstract: The neurons of the neocortex communicate by asynchronous events called action potentials (or ’spikes’). However, for simplicity of simulation, most models of processing by cortical neural networks have assumed that the activations of their neurons can be approximated by event rates rather than taking account of individual spikes. The obstacle to exploring the more detailed spike processing of these networks has been reduced considerably in recent years by the development of hybrid analog-digital Very-Large Scale Integrated (hVLSI) neural networks composed of spiking neurons that are able to operate in real-time. In this paper we describe such a hVLSI neural network that performs an interesting task of selective attentional processing that was previously described for a simulated ’pointer-map’ rate model by Hahnloser and colleagues. We found that most of the computational features of their rate model can be reproduced in the spiking implementation; but, that spike-based processing requires a modification of the original network architecture in order to memorize a previously attended target. 1
6 0.42131901 135 nips-2006-Modelling transcriptional regulation using Gaussian Processes
7 0.40554914 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation
8 0.40398693 191 nips-2006-The Robustness-Performance Tradeoff in Markov Decision Processes
9 0.39404947 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons
10 0.36746523 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
11 0.36577824 154 nips-2006-Optimal Change-Detection and Spiking Neurons
12 0.36469686 98 nips-2006-Inferring Network Structure from Co-Occurrences
13 0.36024374 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron
15 0.34118667 167 nips-2006-Recursive ICA
16 0.33912823 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments
17 0.33893663 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity
18 0.33803195 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency
19 0.33608031 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex
20 0.33446646 76 nips-2006-Emergence of conjunctive visual features by quadratic independent component analysis