nips nips2006 nips2006-36 knowledge-graph by maker-knowledge-mining

36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network


Source: pdf

Author: Yingxue Wang, Rodney J. Douglas, Shih-Chii Liu

Abstract: The neurons of the neocortex communicate by asynchronous events called action potentials (or ’spikes’). However, for simplicity of simulation, most models of processing by cortical neural networks have assumed that the activations of their neurons can be approximated by event rates rather than taking account of individual spikes. The obstacle to exploring the more detailed spike processing of these networks has been reduced considerably in recent years by the development of hybrid analog-digital Very-Large Scale Integrated (hVLSI) neural networks composed of spiking neurons that are able to operate in real-time. In this paper we describe such a hVLSI neural network that performs an interesting task of selective attentional processing that was previously described for a simulated ’pointer-map’ rate model by Hahnloser and colleagues. We found that most of the computational features of their rate model can be reproduced in the spiking implementation; but, that spike-based processing requires a modification of the original network architecture in order to memorize a previously attended target. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract The neurons of the neocortex communicate by asynchronous events called action potentials (or ’spikes’). [sent-4, score-0.506]

2 However, for simplicity of simulation, most models of processing by cortical neural networks have assumed that the activations of their neurons can be approximated by event rates rather than taking account of individual spikes. [sent-5, score-0.582]

3 The obstacle to exploring the more detailed spike processing of these networks has been reduced considerably in recent years by the development of hybrid analog-digital Very-Large Scale Integrated (hVLSI) neural networks composed of spiking neurons that are able to operate in real-time. [sent-6, score-0.841]

4 In this paper we describe such a hVLSI neural network that performs an interesting task of selective attentional processing that was previously described for a simulated ’pointer-map’ rate model by Hahnloser and colleagues. [sent-7, score-0.488]

5 We found that most of the computational features of their rate model can be reproduced in the spiking implementation; but, that spike-based processing requires a modification of the original network architecture in order to memorize a previously attended target. [sent-8, score-0.569]

6 In this paper we use an hVLSI network to implement a spiking version of the ’pointer-map’ architecture previously described for rate networks by Hahnloser and colleagues [2]. [sent-11, score-0.451]

7 In this architecture, a small number of pointer neurons are incorporated in the feedback of a recurrently connected network. [sent-12, score-1.036]

8 The pointers steer the feedback onto the map, and so focus processing on the attended map neurons. [sent-13, score-0.376]

9 Directing attention, foveating eyes, and reaching limbs all appeal to a pointer like interaction with the world, and such pointing is known to modulate the responses of neurons in a number of cortical and subcortical areas. [sent-15, score-1.059]

10 The operation of the pointer-map depends on the steady focusing of feedback on the map neurons during the period of attention. [sent-16, score-0.723]

11 It is easy to see how this steady control can be achieved when the neurons have continuous rate outputs; but it is not obvious whether this behavior can also be achieved with intermittently spiking neural outputs. [sent-17, score-0.726]

12 Our objective was thus to evaluate whether networks of spiking neurons would be able to combine the benefits of both event-based processing and the attentional properties of pointer-map architecture. [sent-18, score-1.012]

13 2 Pointer-Map Architecture A pointer-map network consists of two reciprocally connected populations of excitatory neurons. [sent-19, score-0.277]

14 Firstly, there is a large population of map neurons that, for example, provide a place encoding of some variable such as the orientation of a visual bar stimulus. [sent-20, score-0.66]

15 A second, small population of pointer neurons exercises attentional control on the map. [sent-21, score-1.349]

16 In addition to the reciprocal connections between the two populations, the map neurons receive feedforward (e. [sent-22, score-0.787]

17 sensory) input; and the pointer neurons receive top-down attentional inputs that instruct the pointers to modulate the location and intensity of the processing on the map (see Fig. [sent-24, score-1.763]

18 The important functional difference between conventional recurrent networks (equivalently, ’recurrent maps’) and the pointer-map is that the pointer neurons are inserted in the feedback loop, and so are able to modulate the effect of the feedback by their top-down inputs. [sent-26, score-1.26]

19 The usual recurrent excitatory connections between neurons are replaced in the pointer-map by recurrent connections between the map neurons and the pointer neurons that have sine and cosine weight profiles. [sent-27, score-2.721]

20 Consequently, the activities of the pointer neurons generate a vectorial pattern of recurrent excitation whose direction points to a particular location on the map (Fig. [sent-28, score-1.362]
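The vectorial pattern of recurrent excitation described above can be sketched numerically. This is a hypothetical illustration, not the chip's actual parameterization: the evenly spaced preferred angles `theta` and the pointer activities `P1`, `P2` are assumed values.

```python
import numpy as np

n_map = 9
# Assumed: preferred angles of the 9 map neurons, evenly spaced on a half-circle
theta = np.linspace(0.0, np.pi, n_map)

# Pointer-to-map weight profiles: pointer P1 projects with a cosine profile,
# pointer P2 with a sine profile, so the two pointers span a 2-D vector space
w_p1 = np.cos(theta)   # weights from pointer neuron P1 to each map neuron
w_p2 = np.sin(theta)   # weights from pointer neuron P2 to each map neuron

# Feedback a map neuron at angle theta_j receives from pointer activities
# (P1, P2); this equals |P| * cos(theta_j - gamma), peaked at the pointer angle
P1, P2 = 0.8, 0.3
feedback = P1 * w_p1 + P2 * w_p2
```

The maximum of `feedback` falls on the map neuron whose preferred angle is closest to the pointer angle γ = atan2(P2, P1), which is how the pointer activities "point" at a map location.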

21 Global inhibition provides competition between the map neurons, so that overall the pointer-map behaves as an attentionally selective soft winner-take-all network. [sent-30, score-0.254]

22 The map layer receives feedforward sensory inputs and inputs from two pointer neurons. [sent-33, score-1.094]

23 The pointer neurons receive top-down attentional inputs and also inputs from the map layer. [sent-34, score-1.767]

24 The recurrent connections between the map neurons and pointer neurons are set according to sine and cosine profiles. [sent-35, score-1.889]

25 (b) The interaction between pointer neurons and map neurons. [sent-36, score-1.145]

26 Clear circles indicate silent neurons and the sizes of the gray circles are proportional to the activation of the active neurons. [sent-38, score-0.473]

27 The vector formed by the activities of the two pointer neurons on this angular plot points in the direction (the pointer angle γ) of the map neurons where the pointer-to-map input is the largest. [sent-39, score-2.268]

28 The map-to-pointer input is proportional to the population vector of activities of the map neurons. [sent-40, score-0.318]

29 3 Spiking Network Chip Architecture We implemented the pointer-map architecture on a multi-neuron transceiver chip fabricated in a 4-metal, 2-poly 0. [sent-41, score-0.221]

30 Each neuron has 8 input synapses (excitatory and inhibitory). [sent-45, score-0.333]

31 In this protocol, the action potentials that travel along point-to-point axonal connections are replaced by digital addresses on a bus. (Figure 2: Architecture of multi-neuron chip.) [sent-48, score-0.21]

32 The chip has 15 integrate-and-fire excitatory neurons and one global inhibitory neuron. [sent-49, score-0.909]

33 Input spikes and output spikes are communicated using an asynchronous handshaking protocol called Address Event Representation. [sent-51, score-0.253]

34 When an input spike is to be sent to the chip, the handshaking signals, Req and Ack, are used to ensure that only valid addresses on a common digital bus are latched and decoded by X- and Y-decoders. [sent-52, score-0.275]

35 The arbiter block arbitrates between all outgoing neuron spikes; and the neuron spike is sent off as the address of the neuron on a common digital bus through two handshaking signals (Reqout and Ackout). [sent-53, score-0.859]
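A software caricature of the Address Event Representation idea: spikes become addresses on a shared bus, and decoding recovers which neuron and synapse were meant. The field widths below are assumptions for illustration only, not the chip's actual address format.

```python
# Hypothetical AER address layout: high bits = neuron index, low bits = synapse
# index. The real chip's address format may differ; this only sketches the idea.
SYN_BITS = 3  # 8 synapses per neuron, as stated for this chip

def encode_event(neuron, synapse):
    # Sender side: after the Req/Ack handshake, the spike appears on the shared
    # digital bus as a single packed address.
    return (neuron << SYN_BITS) | synapse

def decode_event(address):
    # Receiver side: the latched address is split back into (neuron, synapse),
    # playing the role of the X- and Y-decoders.
    return address >> SYN_BITS, address & ((1 << SYN_BITS) - 1)

# Three spikes time-multiplexed onto the bus, then decoded in arrival order
events = [encode_event(n, s) for n, s in [(4, 1), (12, 7), (0, 0)]]
decoded = [decode_event(a) for a in events]
```

Because events are serialized one address at a time, the arbiter's job in hardware is precisely to decide the order in which simultaneous spikes get the bus.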

36 The synaptic weights of 2 out of the 8 synapses can be specified uniquely through an on-chip Digital-to-Analog converter that sets the synaptic weight of each synapse before that particular synapse is stimulated. [sent-54, score-0.386]

37 The synaptic weight is specified as part of the digital address that normally codes the synaptic address. [sent-55, score-0.263]

38 These addresses code the labels of source neurons and/or target synapses. [sent-56, score-0.473]

39 An on-chip Digital-to-Analog Converter (DAC) transforms the digital weights into the analog signals that set the individual efficacy of the excitatory synapses and inhibitory synapses for each neuron (Fig. [sent-58, score-0.765]

40 (a) (b) Figure 3: Resulting spatial distribution of activity in map neurons in response to attentional input to pointer neurons. [sent-60, score-1.575]

41 The frequencies of the attentional inputs to P1, P2 are (a) [200Hz, 0Hz] (b) [0Hz,200Hz]. [sent-61, score-0.483]

42 The y-axis shows the firing rate (Hz) of the map neurons (1–9) listed on the x-axis. [sent-62, score-0.667]

43 The polar plot on the side of each figure shows the pointer angle γ described by the pointer neuron activities. [sent-63, score-1.231]

44 4 Experiments Our pointer-map was composed of a total of 12 neurons: 2 served as pointer neurons; 9 as map neurons; and 1 as the global inhibitory neuron. [sent-64, score-0.869]

45 The synaptic weights of these neurons have a coefficient of variation in synaptic efficacy of about 0. [sent-65, score-0.667]

46 Through the on-chip DAC, we were able to reduce this variance for the excitatory synapses by a factor of 10. [sent-67, score-0.263]

47 We did not compensate for the variance in the inhibitory synapses because it was technically more challenging to do that. [sent-68, score-0.266]

48 The synaptic weights from each pointer neuron to every map neuron j = 1, 2, . [sent-69, score-1.155]

49 (a) (b) Figure 4: Network architecture used on the chip for attentional modulation. [sent-79, score-0.544]

50 (b) New network architecture with no requirement for strong excitatory recurrent connections. [sent-81, score-0.46]

51 4.1 Attentional Input Control We tested the attentional control of the pointer neurons for this network (Fig. [sent-84, score-1.386]

52 The location and activity on the map layer can be steered via the inputs to the pointer neurons, as seen in the three examples of Fig. [sent-87, score-0.959]

53 4.2 Attentional Target Selection One computational feature of the rate pointer-map is its multistability: If two or more sensory stimuli are presented to the network, strong attentional inputs to the pointers can select one of these stimuli even if that stimulus is not the strongest one. [sent-91, score-1.14]

54 The preferred stimulus depends on the initial activities of the map and pointer neurons. [sent-92, score-0.803]

55 Moreover, attentional inputs can steer the attention to a different location on the map, even after a stronger stimulus is selected initially. [sent-93, score-0.579]

56 In our experiments, only two map neurons received feedforward sensory inputs which consist of two regular spike trains of different frequencies. [sent-96, score-0.962]

57 6(a), the map neuron with the stronger feedforward input was selected. [sent-98, score-0.429]

58 Attention could be steered to a different part of the map array by providing the necessary attentional inputs. [sent-99, score-0.511]

59 Moreover, the map neuron receiving the weaker stimulus could then suppress the activity of another map neuron. [sent-100, score-0.633]

60 Furthermore, the original rate model can produce attentional memorization effects, that is, the location of the map layer activity is retained even after the inputs to the pointer neurons are withdrawn. [sent-101, score-1.861]

61 The activities of the pointer neurons are given by P1 = [p1 + α (cos θ1 M1 + cos θ2 M2)]+ and P2 = [p2 + α (sin θ1 M1 + sin θ2 M2)]+, where p1 and p2 are the activities induced by inputs to the two pointer neurons, and [·]+ denotes rectification. [sent-105, score-1.89]

62 [Eq. (5), fragment: 1 − cos (θ1 − θ2)] There are several factors that make it difficult for us to reproduce the attentional memorization experiments. [sent-109, score-0.506]

63 Firstly, since we are only using a small number of neurons, each input spike has to create more than one output spike from a neuron in order to satisfy the above condition. [sent-110, score-0.385]

64 On the one hand, this is very hard to implement: because the neurons have a refractory period, any input currents arriving during this time will not influence the neuron. [sent-111, score-0.513]

65 On the other hand, even for α = 1 (one input spike causes one output spike), it can easily lead to instability in the network, because the timing of the arrival of the inhibitory and excitatory inputs becomes a critical factor for system stability. [sent-113, score-0.632]

66 Secondly, the network has to operate in a hard winner-take-all mode because of the variance in the inhibitory synaptic efficacies. [sent-114, score-0.313]

67 This means that the neuron is reset to its resting potential whenever it receives an inhibitory spike, thus removing all memory. [sent-115, score-0.338]

68 4(b)), we were able to avoid using the strong excitatory connections as required in the original network. [sent-118, score-0.256]

69 Instead, each neuron inhibits all other neurons in the map population but itself. [sent-120, score-0.895]

70 The steady-state rate activities M1 and M2 are now given by M1 = [m1 − β M2 + α (cos θ1 P1 + sin θ1 P2)]+ (6) and M2 = [m2 − β M1 + α (cos θ2 P1 + sin θ2 P2)]+ (7), where [·]+ denotes rectification. The equations for the steady-state pointer neuron activities P1 and P2 remain as before. [sent-121, score-1.122]
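A minimal rate-level sketch of these steady-state equations, iterated to a fixed point. The rectification [·]+ and the sine/cosine couplings follow the text; α, β, the two preferred angles, and all input values are illustrative assumptions, not the chip's parameters.

```python
import numpy as np

def relu(x):
    # Rectification [.]+ used throughout the rate equations
    return max(x, 0.0)

def settle(m1, m2, p1, p2, th1, th2, alpha=0.2, beta=0.6, iters=500):
    """Iterate the steady-state equations of the modified architecture,
    where each map neuron inhibits the other map neuron but not itself."""
    M1 = M2 = P1 = P2 = 0.0
    for _ in range(iters):
        M1 = relu(m1 - beta * M2 + alpha * (np.cos(th1) * P1 + np.sin(th1) * P2))
        M2 = relu(m2 - beta * M1 + alpha * (np.cos(th2) * P1 + np.sin(th2) * P2))
        P1 = relu(p1 + alpha * (np.cos(th1) * M1 + np.cos(th2) * M2))
        P2 = relu(p2 + alpha * (np.sin(th1) * M1 + np.sin(th2) * M2))
    return M1, M2

# Without attention, the stronger sensory input (m2) dominates ...
no_attn = settle(m1=1.0, m2=1.15, p1=0.0, p2=0.0, th1=0.0, th2=np.pi / 2)
# ... but attentional input aimed at theta1 lets the weaker stimulus win
attended = settle(m1=1.0, m2=1.15, p1=2.0, p2=0.0, th1=0.0, th2=np.pi / 2)
```

With these assumed values the attended map neuron ends up with the higher steady-state rate despite its weaker sensory input, mirroring the attentional selection experiments described in the text.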

71 The intuitive explanation for the decrease of α is that, in the original architecture, the global inhibition inhibits all the map neurons including the winner. [sent-123, score-0.768]

72 Therefore, in order to memorize the attended stimulus, the excitatory connections need to be strengthened to compensate for this self-inhibition. [sent-124, score-0.428]

73 2 and we were now able to demonstrate attentional memorization. [sent-127, score-0.323]

74 That is, the attended neuron with the weaker sensory input stimulus survived even after the attentional inputs were withdrawn. [sent-128, score-0.916]

75 The same qualitative results were obtained even if all the remaining map neurons had a low background firing rate, which mimics the effect of weak sensory inputs to different locations. [sent-129, score-0.884]

76 (a) (b) (c) Figure 5: Results of experiments showing responses of map neurons for 3 settings of input strengths to the pointer neurons. [sent-130, score-1.206]

77 Each map neuron has a background firing rate of 30Hz measured in the absence of activated recurrent connections and global inhibition. [sent-131, score-0.641]

78 The attentional inputs to pointer neurons P1 and P2 are (a) [700Hz,50Hz], (b) [700Hz,700Hz], (c) [50Hz,700Hz]. [sent-132, score-1.452]

79 The y-axis shows the firing rate (Hz) of the map neurons (1–9) listed on the x-axis. [sent-133, score-0.667]

80 5 Conclusion In this paper, we have described a hardware ’pointer-map’ neural network composed of spiking neurons that performs an interesting task of selective attentional processing previously described in a simulated ’pointer-map’ rate model by Hahnloser and colleagues. [sent-134, score-1.142]

81 Neural network behaviors in computer simulations that use rate equations would likely be observed also in spiking networks if many input spikes can be integrated before the post-synaptic neuron’s threshold is reached. [sent-135, score-0.421]

82 However, extensive integration is not possible for practical electronic networks, in which there are relatively small numbers of neurons and synapses. [sent-136, score-0.473]

83 We found that most of the computational features of their simulated rate model could be reproduced in our hardware spiking implementation, despite imprecision in the synaptic weights and the inevitable fabrication-related variability in the performance of individual neurons. [sent-137, score-0.321]

84 One significant difference between our spiking implementation and the rate model is the mechanism required to memorize a previously attended target. [sent-138, score-0.329]

85 In our spike-based implementation, it was necessary to modify the original pointer-map architecture so that the inhibition no longer depends on a single global inhibitory neuron. [sent-139, score-0.368]

86 Instead, each excitatory neuron inhibits all other neurons in the map population but itself. [sent-140, score-1.058]

87 Unfortunately, this approximate equivalence between excitatory and inhibitory neurons is inconsistent with the anatomical observation that only about 15% of cortical neurons are inhibitory. [sent-141, score-1.134]

88 However, the original architecture could probably work if we had larger populations of map neurons, more synapses, and/or NMDA-like synapses with longer time constants. [sent-142, score-0.419]

89 This is a scenario that we will explore in the future along with better characterization of the switching time dynamics of the attentional memorization experiments. [sent-143, score-0.399]

90 (a) (b) Figure 6: Results of attentional memorization experiments using the two different architectures in Fig. [sent-144, score-0.399]

91 The sensory inputs to two map neurons M3 and M7 were set to [200Hz,230Hz]. [sent-147, score-0.843]

92 In phase 1, the bottom-up connections and inhibitory connections were inactivated. [sent-149, score-0.373]

93 In phase 2, the inhibitory connections were activated; thus map neuron M3, which received the weaker input, was suppressed. [sent-150, score-0.681]

94 Map neuron M3 was now active because of the steering activity from the pointer neurons. [sent-152, score-0.779]

95 In phase 4, the pointer neurons P1 and P2 were stimulated by attentional inputs of frequencies [700Hz,0Hz], which amplified the activity of M3; but the map activity returned to the activity shown in phase 3 once the attentional inputs were withdrawn in phase 5. [sent-153, score-2.45]

96 The sensory inputs to M3 and M7 were of frequencies [200Hz,230Hz] for the red curve and [40Hz,50Hz] for the blue curve. [sent-155, score-0.24]

97 However in phase 5, we could see that map neuron M3 retained its activity even after the attentional inputs were withdrawn (attentional inputs to P1 and P2 were [700Hz,0Hz] for the red curve and [300Hz,0Hz] for the blue curve). [sent-157, score-1.111]

98 Liu, “Programmable synaptic weights for an aVLSI network of spiking neurons,” in Proceedings of the 2006 IEEE International Symposium on Circuits and Systems, pp. [sent-167, score-0.327]

99 Hepp, “Feedback interactions between neuronal pointers and maps for attentional processing,” Nature Neuroscience, vol. [sent-175, score-0.369]

100 Douglas, “A competitive network of spiking VLSI neurons,” in World Congress on Neuroinformatics, F. [sent-213, score-0.23]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('pointer', 0.519), ('neurons', 0.473), ('attentional', 0.323), ('neuron', 0.193), ('excitatory', 0.163), ('spiking', 0.159), ('map', 0.153), ('inhibitory', 0.145), ('inputs', 0.137), ('architecture', 0.123), ('recurrent', 0.103), ('synapses', 0.1), ('chip', 0.098), ('synaptic', 0.097), ('connections', 0.093), ('activities', 0.091), ('cos', 0.084), ('sensory', 0.08), ('attended', 0.076), ('hahnloser', 0.076), ('memorization', 0.076), ('spike', 0.076), ('network', 0.071), ('hvlsi', 0.07), ('inhibition', 0.07), ('sin', 0.067), ('activity', 0.067), ('douglas', 0.056), ('spikes', 0.055), ('steady', 0.053), ('handshaking', 0.053), ('memorize', 0.053), ('vlsi', 0.049), ('pointers', 0.046), ('converter', 0.046), ('digital', 0.044), ('feedback', 0.044), ('feedforward', 0.043), ('populations', 0.043), ('phase', 0.042), ('bus', 0.042), ('inhibits', 0.042), ('modulate', 0.042), ('sine', 0.042), ('rate', 0.041), ('stimulus', 0.04), ('input', 0.04), ('aer', 0.039), ('zurich', 0.037), ('pro', 0.035), ('networks', 0.035), ('dac', 0.035), ('steer', 0.035), ('steered', 0.035), ('withdrawn', 0.035), ('liu', 0.034), ('population', 0.034), ('ring', 0.034), ('cosine', 0.033), ('asynchronous', 0.033), ('selective', 0.031), ('protocol', 0.031), ('global', 0.03), ('activated', 0.028), ('chips', 0.028), ('neuromorphic', 0.028), ('oster', 0.028), ('weaker', 0.027), ('event', 0.027), ('niebur', 0.026), ('eth', 0.026), ('neuroinformatics', 0.026), ('communicated', 0.026), ('layer', 0.025), ('cortical', 0.025), ('receive', 0.025), ('address', 0.025), ('retained', 0.024), ('silicon', 0.024), ('reproduced', 0.024), ('synapse', 0.023), ('reproduce', 0.023), ('location', 0.023), ('frequencies', 0.023), ('processing', 0.022), ('colleagues', 0.022), ('self', 0.022), ('cacy', 0.022), ('infrastructure', 0.022), ('composed', 0.022), ('attention', 0.021), ('strengths', 0.021), ('firstly', 0.021), ('compensate', 0.021), ('behaviors', 0.02), ('sent', 0.02), ('signals', 0.02), ('hybrid', 0.019), ('hz', 0.019), ('connection', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network


2 0.5003559 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons

Author: Elisabetta Chicca, Giacomo Indiveri, Rodney J. Douglas

Abstract: Cooperative competitive networks are believed to play a central role in cortical processing and have been shown to exhibit a wide set of useful computational properties. We propose a VLSI implementation of a spiking cooperative competitive network and show how it can perform context dependent computation both in the mean firing rate domain and in spike timing correlation space. In the mean rate case the network amplifies the activity of neurons belonging to the selected stimulus and suppresses the activity of neurons receiving weaker stimuli. In the event correlation case, the recurrent network amplifies with a higher gain the correlation between neurons which receive highly correlated inputs while leaving the mean firing rate unaltered. We describe the network architecture and present experimental data demonstrating its context dependent computation capabilities. 1

3 0.38192213 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons

Author: Thomas Voegtlin

Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule. 1

4 0.30316681 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall

Author: Máté Lengyel, Peter Dayan

Abstract: Many neural areas, notably, the hippocampus, show structured, dynamical, population behavior such as coordinated oscillations. It has long been observed that such oscillations provide a substrate for representing analog information in the firing phases of neurons relative to the underlying population rhythm. However, it has become increasingly clear that it is essential for neural populations to represent uncertainty about the information they capture, and the substantial recent work on neural codes for uncertainty has omitted any analysis of oscillatory systems. Here, we observe that, since neurons in an oscillatory network need not only fire once in each cycle (or even at all), uncertainty about the analog quantities each neuron represents by its firing phase might naturally be reported through the degree of concentration of the spikes that it fires. We apply this theory to memory in a model of oscillatory associative recall in hippocampal area CA3. Although it is not well treated in the literature, representing and manipulating uncertainty is fundamental to competent memory; our theory enables us to view CA3 as an effective uncertainty-aware, retrieval system. 1

5 0.24321906 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons

Author: Chiara Bartolozzi, Giacomo Indiveri

Abstract: Selective attention is the strategy used by biological sensory systems to solve the problem of limited parallel processing capacity: salient subregions of the input stimuli are serially processed, while non–salient regions are suppressed. We present an mixed mode analog/digital Very Large Scale Integration implementation of a building block for a multi–chip neuromorphic hardware model of selective attention. We describe the chip’s architecture and its behavior, when its is part of a multi–chip system with a spiking retina as input, and show how it can be used to implement in real-time flexible models of bottom-up attention. 1

6 0.22719526 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons

7 0.2160465 154 nips-2006-Optimal Change-Detection and Spiking Neurons

8 0.20049725 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation

9 0.13356124 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron

10 0.098376662 16 nips-2006-A Theory of Retinal Population Coding

11 0.076473415 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex

12 0.075736605 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields

13 0.063453861 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

14 0.058857042 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments

15 0.058829941 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity

16 0.05611084 17 nips-2006-A recipe for optimizing a time-histogram

17 0.055291653 107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis

18 0.05208493 167 nips-2006-Recursive ICA

19 0.051022116 86 nips-2006-Graph-Based Visual Saliency

20 0.047051426 113 nips-2006-Learning Structural Equation Models for fMRI


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.157), (1, -0.587), (2, 0.038), (3, 0.143), (4, 0.073), (5, 0.081), (6, -0.022), (7, 0.092), (8, -0.014), (9, -0.029), (10, 0.025), (11, -0.019), (12, -0.091), (13, -0.03), (14, -0.021), (15, 0.034), (16, 0.029), (17, 0.058), (18, 0.007), (19, 0.11), (20, -0.028), (21, -0.009), (22, -0.039), (23, 0.068), (24, -0.047), (25, -0.062), (26, -0.07), (27, 0.073), (28, 0.034), (29, -0.041), (30, 0.072), (31, 0.013), (32, -0.086), (33, 0.036), (34, 0.047), (35, -0.124), (36, -0.019), (37, 0.015), (38, -0.05), (39, 0.006), (40, 0.018), (41, 0.001), (42, -0.033), (43, -0.045), (44, -0.037), (45, 0.032), (46, 0.016), (47, 0.03), (48, 0.018), (49, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99115396 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network


2 0.96172976 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons


3 0.89850408 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons


4 0.84168708 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall


5 0.80831242 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons

Author: Thomas Voegtlin

Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule. 1

6 0.66709721 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons

7 0.62457025 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation

8 0.44750032 154 nips-2006-Optimal Change-Detection and Spiking Neurons

9 0.39373431 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields

10 0.38384694 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron

11 0.38373166 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex

12 0.31980449 107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis

13 0.28678802 16 nips-2006-A Theory of Retinal Population Coding

14 0.21154891 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity

15 0.1808323 13 nips-2006-A Scalable Machine Learning Approach to Go

16 0.17719454 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model

17 0.1745064 113 nips-2006-Learning Structural Equation Models for fMRI

18 0.14731939 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

19 0.14372776 88 nips-2006-Greedy Layer-Wise Training of Deep Networks

20 0.14145614 86 nips-2006-Graph-Based Visual Saliency


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.041), (3, 0.013), (7, 0.061), (9, 0.048), (20, 0.011), (22, 0.031), (44, 0.043), (57, 0.029), (65, 0.016), (69, 0.027), (71, 0.469), (84, 0.031), (93, 0.066)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.93524683 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation

Author: Jason M. Samonds, Brian R. Potetz, Tai S. Lee

Abstract: Although there has been substantial progress in understanding the neurophysiological mechanisms of stereopsis, how neurons interact in a network during stereo computation remains unclear. Computational models on stereopsis suggest local competition and long-range cooperation are important for resolving ambiguity during stereo matching. To test these predictions, we simultaneously recorded from multiple neurons in V1 of awake, behaving macaques while presenting surfaces of different depths rendered in dynamic random dot stereograms. We found that the interaction between pairs of neurons was a function of similarity in receptive fields, as well as of the input stimulus. Neurons coding the same depth experienced common inhibition early in their responses for stimuli presented at their nonpreferred disparities. They experienced mutual facilitation later in their responses for stimulation at their preferred disparity. These findings are consistent with a local competition mechanism that first removes gross mismatches, and a global cooperative mechanism that further refines depth estimates. 1 Introduction The human visual system is able to extract three-dimensional (3D) structures in random noise stereograms even when such images evoke no perceptible patterns when viewed monocularly [1]. Bela Julesz proposed that this is accomplished by a stereopsis mechanism that detects correlated shifts in 2D noise patterns between the two eyes. He also suggested that this mechanism likely involves cooperative neural processing early in the visual system. Marr and Poggio formalized the computational constraints for solving stereo matching (Fig. 1a) and devised an algorithm that can discover the underlying 3D structures in a variety of random dot stereogram patterns [2]. 
Their algorithm was based on two rules: (1) each element or feature is unique (i.e., can be assigned only one disparity) and (2) surfaces of objects are cohesive (i.e., depth changes gradually across space). To describe their algorithm in neurophysiological terms, we can consider neurons in primary visual cortex as simple element or feature detectors. The first rule is implemented by introducing competitive interactions (mutual inhibition) among neurons of different disparity tuning at each location (Fig. 1b, blue solid horizontal or vertical lines), allowing only one disparity to be detected at each location. The second rule is implemented by introducing cooperative interactions (mutual facilitation) among neurons tuned to the same depth (image disparity) across different spatial locations (Fig. 1b, along the red dashed diagonal lines). In other words, a disparity estimate at one location is more likely to be correct if neighboring locations have similar disparity estimates. A dynamic system under such constraints can relax to a stable global disparity map. Here, we present neurophysiological evidence of interactions between disparity-tuned neurons in the primary visual cortex that is consistent with this general approach. We sampled from a variety of spatially distributed disparity-tuned neurons (see electrodes Fig. 1b) while displaying DRDS stimuli defined at various disparities (see stimulus Fig. 1b). We then measured the dynamics of interactions by assessing the temporal evolution of correlation in neural responses. Figure 1: (a) Left and right images of random dot stereogram (right image has been shifted to the right). (b) 1D graphical depiction of competition (blue solid lines) and cooperation (red dashed lines) among disparity-tuned neurons with respect to space as defined by Marr and Poggio’s stereo algorithm [2]. 
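The two rules can be turned into a toy 1D relaxation; the sketch below is an illustrative reconstruction of the Marr-Poggio scheme described above, not the authors' code, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D scene: true disparity is 1 everywhere except a patch at disparity 3.
positions, disparities = 40, 5
true_d = np.ones(positions, dtype=int)
true_d[15:25] = 3

# Noisy initial match scores: support for the true disparity plus clutter.
support = rng.random((positions, disparities)) * 0.5
support[np.arange(positions), true_d] += 1.0

# Marr-Poggio style relaxation: excitation from same-disparity neighbours
# (cohesiveness) and inhibition from other disparities at the same
# position (uniqueness), iterated to a stable map.
state = support.copy()
for _ in range(20):
    neigh = np.roll(state, 1, axis=0) + np.roll(state, -1, axis=0)
    inhib = state.sum(axis=1, keepdims=True) - state
    state = np.clip(support + 0.3 * neigh - 0.4 * inhib, 0.0, None)

recovered = state.argmax(axis=1)
print((recovered == true_d).mean())  # fraction of positions solved
```

The loop is the "local competition, long-range cooperation" structure the recordings are meant to probe: the inhibition term sharpens each column to one disparity, while the neighbour term pulls estimates toward spatial coherence.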
2 Methods 2.1 Recording and stimulation Recordings were made in V1 of two awake, behaving macaques. We simultaneously recorded from 4-8 electrodes providing data from up to 10 neurons in a single recording session (some electrodes recorded from as many as 3 neurons). We collected data from 112 neurons that provided 224 pairs for cross-correlation analysis. For stimuli, we used 12 Hz dynamic random dot stereograms (DRDS; 25% density black and white pixels on a mean luminance background) presented in a 3.5-degree aperture. Liquid crystal shutter goggles were used to present random dot patterns to each eye separately. Eleven horizontal disparities between the two eyes, ranging from ±0.9 degrees, were tested. Seventy-four neurons (66%) had significant disparity tuning and 99 pairs (44%) were comprised of neurons that both had significant disparity tuning (1-way ANOVA, p<0.05). Figure 2: (a) Example recording session from five electrodes in V1. (b) Receptive field (white box; arrow represents direction preference) and random dot stereogram locations for same recording session (small red square is the fixation spot). 2.2 Data analysis Interaction between neurons was described as

same-paper 2 0.9188323 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network

Author: Yingxue Wang, Rodney J. Douglas, Shih-Chii Liu

Abstract: The neurons of the neocortex communicate by asynchronous events called action potentials (or ’spikes’). However, for simplicity of simulation, most models of processing by cortical neural networks have assumed that the activations of their neurons can be approximated by event rates rather than taking account of individual spikes. The obstacle to exploring the more detailed spike processing of these networks has been reduced considerably in recent years by the development of hybrid analog-digital Very-Large Scale Integrated (hVLSI) neural networks composed of spiking neurons that are able to operate in real-time. In this paper we describe such a hVLSI neural network that performs an interesting task of selective attentional processing that was previously described for a simulated ’pointer-map’ rate model by Hahnloser and colleagues. We found that most of the computational features of their rate model can be reproduced in the spiking implementation; but, that spike-based processing requires a modification of the original network architecture in order to memorize a previously attended target. 1

3 0.78630459 135 nips-2006-Modelling transcriptional regulation using Gaussian Processes

Author: Neil D. Lawrence, Guido Sanguinetti, Magnus Rattray

Abstract: Modelling the dynamics of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. While some of them are relatively easy to measure, such as mRNA decay rates and mRNA abundance levels, it is still very hard to measure the active concentration levels of the transcription factor proteins that drive the process and the sensitivity of target genes to these concentrations. In this paper we show how these quantities for a given transcription factor can be inferred from gene expression levels of a set of known target genes. We treat the protein concentration as a latent function with a Gaussian process prior, and include the sensitivities, mRNA decay rates and baseline expression levels as hyperparameters. We apply this procedure to a human leukemia dataset, focusing on the tumour repressor p53 and obtaining results in good accordance with recent biological studies.
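A stripped-down version of the idea is ordinary GP regression through a known linear observation model; the sketch below is a toy (one target gene, known sensitivity and baseline, squared-exponential kernel), not the paper's full model, and all numbers are invented.

```python
import numpy as np

def rbf(t1, t2, ell=1.0, var=1.0):
    """Squared-exponential covariance between time points t1 and t2."""
    d = t1[:, None] - t2[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

# Latent TF activity f(t) is observed only through noisy target-gene
# readouts y = s * f(t) + b + noise. Here sensitivity s and baseline b
# are fixed for simplicity; the paper treats such quantities as
# hyperparameters to be inferred.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 25)
f_true = np.sin(t)
s, b, noise = 2.0, 0.5, 0.1
y = s * f_true + b + noise * rng.standard_normal(t.size)

# GP posterior mean for f given y (linear Gaussian model):
#   cov(y) = s^2 K + noise^2 I,   cov(f, y) = s K
K = rbf(t, t)
Ky = s ** 2 * K + noise ** 2 * np.eye(t.size)
f_mean = s * K @ np.linalg.solve(Ky, y - b)

rmse = np.sqrt(np.mean((f_mean - f_true) ** 2))
print(rmse)  # small: the latent profile is recovered up to noise
```

Because the observation model is linear in f, the posterior is available in closed form; the harder part in the paper is estimating the sensitivities and decay rates jointly with the latent function.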

4 0.68036538 191 nips-2006-The Robustness-Performance Tradeoff in Markov Decision Processes

Author: Huan Xu, Shie Mannor

Abstract: Computation of a satisfactory control policy for a Markov decision process when the parameters of the model are not exactly known is a problem encountered in many practical applications. The traditional robust approach is based on a worstcase analysis and may lead to an overly conservative policy. In this paper we consider the tradeoff between nominal performance and the worst case performance over all possible models. Based on parametric linear programming, we propose a method that computes the whole set of Pareto efficient policies in the performancerobustness plane when only the reward parameters are subject to uncertainty. In the more general case when the transition probabilities are also subject to error, we show that the strategy with the “optimal” tradeoff might be non-Markovian and hence is in general not tractable. 1
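The scalarised tradeoff can be illustrated with a handful of made-up policies: sweeping the weight between nominal and worst-case value traces out the Pareto-efficient set (this is a toy, not the paper's parametric-LP method).

```python
# Toy performance-robustness tradeoff: each policy has a nominal value
# and a worst-case value over an uncertainty set of reward vectors.
# All numbers are invented for illustration.
policies = {
    "aggressive":   {"nominal": 10.0, "worst": 1.0},
    "balanced":     {"nominal": 7.0,  "worst": 4.0},
    "conservative": {"nominal": 5.0,  "worst": 4.5},
    "dominated":    {"nominal": 4.0,  "worst": 3.0},  # never optimal
}

def best_policy(lam):
    """Maximise lam * nominal + (1 - lam) * worst-case value."""
    return max(policies, key=lambda p: lam * policies[p]["nominal"]
                                       + (1 - lam) * policies[p]["worst"])

frontier = {best_policy(lam / 20) for lam in range(21)}
print(sorted(frontier))  # → ['aggressive', 'balanced', 'conservative']
```

The dominated policy is worse than "conservative" on both axes, so no weighting selects it; the other three are each optimal for some range of the tradeoff weight.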

5 0.65221167 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons

Author: Elisabetta Chicca, Giacomo Indiveri, Rodney J. Douglas

Abstract: Cooperative competitive networks are believed to play a central role in cortical processing and have been shown to exhibit a wide set of useful computational properties. We propose a VLSI implementation of a spiking cooperative competitive network and show how it can perform context dependent computation both in the mean firing rate domain and in spike timing correlation space. In the mean rate case the network amplifies the activity of neurons belonging to the selected stimulus and suppresses the activity of neurons receiving weaker stimuli. In the event correlation case, the recurrent network amplifies with a higher gain the correlation between neurons which receive highly correlated inputs while leaving the mean firing rate unaltered. We describe the network architecture and present experimental data demonstrating its context dependent computation capabilities. 1
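A soft winner-take-all rate model captures the mean-rate behavior described here; the sketch below is illustrative (ring topology, invented weights), not a model of the chip.

```python
import numpy as np

# Cooperative-competitive (soft winner-take-all) ring: local recurrent
# excitation plus global inhibition. Parameter values are illustrative.
n = 32
inputs = np.full(n, 1.0)
inputs[8] = 2.0          # the "selected" stimulus is twice as strong

w_exc, w_inh = 0.4, 0.05
rates = np.zeros(n)
for _ in range(200):
    local = w_exc * (np.roll(rates, 1) + np.roll(rates, -1))
    drive = inputs + local - w_inh * rates.sum()
    rates += 0.1 * (np.maximum(drive, 0.0) - rates)

gain_in = inputs[8] / inputs[0]    # 2x contrast in the input
gain_out = rates[8] / rates[0]     # contrast at the output
print(gain_out > gain_in)          # recurrence amplifies the winner
```

The nearest-neighbour excitation boosts the neuron with the strongest input and its neighbours, while the global inhibition term suppresses the rest, so the output contrast between winner and losers exceeds the input contrast.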

6 0.54894769 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons

7 0.50200939 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons

8 0.46063113 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons

9 0.4512822 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron

10 0.44305614 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex

11 0.39423674 154 nips-2006-Optimal Change-Detection and Spiking Neurons

12 0.37394899 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments

13 0.36854702 71 nips-2006-Effects of Stress and Genotype on Meta-parameter Dynamics in Reinforcement Learning

14 0.36062735 29 nips-2006-An Information Theoretic Framework for Eukaryotic Gradient Sensing

15 0.33703169 81 nips-2006-Game Theoretic Algorithms for Protein-DNA binding

16 0.32933369 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall

17 0.32240695 16 nips-2006-A Theory of Retinal Population Coding

18 0.30536205 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields

19 0.30015415 192 nips-2006-Theory and Dynamics of Perceptual Bistability

20 0.29887819 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity