nips nips2004 nips2004-140 knowledge-graph by maker-knowledge-mining

140 nips-2004-Optimal Information Decoding from Neuronal Populations with Specific Stimulus Selectivity


Source: pdf

Author: Marcelo A. Montemurro, Stefano Panzeri

Abstract: A typical neuron in visual cortex receives most inputs from other cortical neurons with a roughly similar stimulus preference. Does this arrangement of inputs allow efficient readout of sensory information by the target cortical neuron? We address this issue by using simple modelling of neuronal population activity and information theoretic tools. We find that efficient synaptic information transmission requires that the tuning curve of the afferent neurons is approximately as wide as the spread of stimulus preferences of the afferent neurons reaching the target neuron. By meta-analysis of neurophysiological data we found that this is the case for cortico-cortical inputs to neurons in visual cortex. We suggest that the organization of V1 cortico-cortical synaptic inputs allows optimal information transmission.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Optimal information decoding from neuronal populations with specific stimulus selectivity Marcelo A. Montemurro, Stefano Panzeri. [sent-1, score-0.87]

2 Abstract: A typical neuron in visual cortex receives most inputs from other cortical neurons with a roughly similar stimulus preference. [sent-8, score-1.488]

3 Does this arrangement of inputs allow efficient readout of sensory information by the target cortical neuron? [sent-9, score-0.41]

4 We address this issue by using simple modelling of neuronal population activity and information theoretic tools. [sent-10, score-0.571]

5 We find that efficient synaptic information transmission requires that the tuning curve of the afferent neurons is approximately as wide as the spread of stimulus preferences of the afferent neurons reaching the target neuron. [sent-11, score-2.664]

6 By meta-analysis of neurophysiological data we found that this is the case for cortico-cortical inputs to neurons in visual cortex. [sent-12, score-0.523]

7 We suggest that the organization of V1 cortico-cortical synaptic inputs allows optimal information transmission. [sent-13, score-0.248]

8 1 Introduction A typical neuron in visual cortex receives most of its inputs from other visual cortical neurons. [sent-14, score-0.895]

9 The majority of cortico-cortical inputs arise from afferent cortical neurons with a stimulus preference similar to that of the target neuron [1, 2, 3]. [sent-15, score-1.37]

10 For example, orientation selective neurons in superficial layers in ferret visual cortex receive more than 50% of their cortico-cortical excitatory inputs from neurons with an orientation preference less than 30° apart. [sent-16, score-1.481]

11 However, this input structure is rather broad in terms of stimulus-specificity: cortico-cortical connections between neurons tuned to dissimilar stimulus orientations also exist [4]. [sent-17, score-0.921]

12 The structure and spread of the stimulus specificity of cortico-cortical connections has received a lot of attention because of its importance for understanding the mechanisms of generation of orientation tuning (see [4] for a review). [sent-18, score-0.999]

13 However, little is known about whether this structure of inputs allows efficient transmission of sensory information across cortico-cortical synapses. [sent-19, score-0.297]

14 It is likely that efficiency of information transmission across cortico-cortical synapses also depends on the width of tuning curves of the afferent cortical neurons to stimuli. [sent-20, score-1.328]

15 In fact, theoretical work on population coding has shown that the width of the tuning curves has an important influence on the quality and the nature of the information encoding in cortical populations [5, 6, 7, 8]. [sent-21, score-0.822]

16 Another factor that may influence the efficiency of cortico-cortical synaptic information transmission is the biophysical capability of the target neuron. [sent-22, score-0.417]

17 To conserve all information during synaptic transmission, the target neuron must conserve the ‘label’ of the spikes arriving from multiple input neurons at different places on its dendritic tree [9]. [sent-23, score-1.009]

18 Because of biophysical limitations, a target neuron that e.g. can only sum inputs at the soma may lose a large part of the information present in the afferent activity. [sent-24, score-0.362] [sent-26, score-0.491]

20 The optimal arrangement of cortico-cortical synapses may also depend on the capability of postsynaptic neurons to process spikes from different neurons separately. [sent-27, score-0.439]

21 We introduce a simple model of neuronal information processing that takes into account both the selective distribution of stimulus preferences typical of cortico-cortical connections and the potential biophysical limitations of cortical neurons. [sent-29, score-1.082]

22 We use this model and information theoretic tools to investigate whether there is an optimal trade-off between the spread of the distribution of stimulus preferences across the afferent neurons and the tuning width of the afferent neurons themselves. [sent-30, score-2.347]

23 We find that efficient synaptic information transmission requires that the tuning curve of the afferent neurons is approximately as wide as the spread of stimulus preferences of the afferent fibres reaching the target neuron. [sent-31, score-2.369]

24 These results suggest that neurons in visual cortex are wired to optimally decode information from a stimulus-specific distribution of synaptic inputs. [sent-33, score-0.789]

25 2 Model of the activity of the afferent neuronal population We consider a simple model for the activity of the afferent neuronal population based on the known tuning properties and spatial and synaptic organisation of sensory areas. [sent-34, score-2.153]

26 1 Stimulus tuning of individual afferent neurons We assume that the population is made of a large number N of neurons (for a real cortical neuron, the number N of afferents is on the order of a few thousand [10]). [sent-36, score-1.543]

27 The response of each neuron rk (k = 1, · · · , N ) is quantified as the number of spikes fired in a salient post-stimulus time window of length τ . [sent-37, score-0.365]

28 Thus, the overall neuronal population response is represented as a spike count vector r = (r1 , · · · , rN ). [sent-38, score-0.644]

29 We assume that the neurons are tuned to a small number D of relevant stimulus parameters [11, 12], such as e.g. orientation or motion direction. [sent-39, score-0.761]

30 The stimulus variable will thus be described as a vector s = (s1 , . . . , sD ). [sent-42, score-0.44]

31 The number of stimulus features that are encoded by the neuron will be left as a free parameter to be varied within the range 1-5 for two reasons. [sent-46, score-0.751]

32 First, although there is evidence that the number of stimulus features encoded by a single neuron is limited [11, 12], more research is still needed to determine exactly how many stimulus parameters are encoded in different areas. [sent-47, score-1.197]

33 Thus, it is interesting to investigate how the optimal arrangement of cortico-cortical synaptic systems depends on the number of stimulus features being encoded. [sent-49, score-0.636]

34 This Gaussian tuning function fits well the tuning of V1 or MT neurons to variables such as stimulus orientation or motion direction [13, 14, 15], and is hence widely used in models of sensory coding [16, 17]. [sent-52, score-0.992]

35 Spike count responses of each neuron on each trial are assumed to follow a Poisson distribution whose mean is given by the above neuronal tuning function (Eq. 1). [sent-54, score-0.763]

36 The Poisson model is widely used because it is the simplest model of neuronal firing that captures the salient property that the variance of the spike counts is proportional to the mean. [sent-56, score-0.642]
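
As a minimal sketch of the encoding model described in sentences 26–36, assuming a one-dimensional stimulus (D = 1) and Gaussian tuning; the peak rate r_max and window length τ below are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_tuning(s, s_prefs, sigma_f, r_max=50.0, tau=0.1):
    # Mean spike count in a post-stimulus window of length tau for neurons
    # with preferred stimuli s_prefs and tuning width sigma_f.
    return r_max * tau * np.exp(-(s - s_prefs) ** 2 / (2.0 * sigma_f ** 2))

def population_response(s, s_prefs, sigma_f):
    # One trial: Poisson spike-count vector r = (r1, ..., rN) for stimulus s,
    # each count drawn around its neuron's tuning-curve mean.
    return rng.poisson(gaussian_tuning(s, s_prefs, sigma_f))
```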

37 This assumption is certainly a simplification but it is sufficient to account for the majority of the information transmitted by real cortical neurons [18, 19, 20] and, as we shall see later, it is mathematically convenient because it makes our model tractable. [sent-58, score-0.481]

38 2 Distribution of stimulus preferences among the afferent population Neurons in sensory cortex receive a large number of inputs from other neurons with a variety of stimulus preferences. [sent-60, score-2.158]

39 However, the majority of their inputs come from neurons with roughly similar stimulus preference [1, 2, 3]. [sent-61, score-0.945]

40 In Eq. (2), the parameter ŝ0 represents the center of the distribution, thus being the most represented preferred stimulus in the population. [sent-63, score-0.48]

41 The parameter σp controls the spread of stimulus preferences of the afferent neuronal population: a small value of σp indicates that a large fraction of the population has similar stimulus preferences, and a large value of σp indicates that all stimuli are represented similarly. [sent-65, score-2.03]

42 A Gaussian distribution of stimulus preferences of the afferent population fits well the empirical data on the distribution of preferred orientations of synaptic inputs of neurons in both deep and superficial layers of ferret primary visual cortex [3]. [sent-66, score-2.203]
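
Continuing the sketch above, the preferred stimuli of the afferents can be drawn from this Gaussian preference distribution. The center ŝ0 = 0 is arbitrary; N = 3000, σp = 20° and σf = 17° echo the ferret V1 figures quoted in the next subsection:

```python
def afferent_preferences(n, s0, sigma_p):
    # Preferred stimuli of the afferent population: Gaussian around s0
    # with spread sigma_p (the distribution of Eq. 2).
    return rng.normal(loc=s0, scale=sigma_p, size=n)

s_prefs = afferent_preferences(n=3000, s0=0.0, sigma_p=20.0)   # degrees
r = population_response(s=5.0, s_prefs=s_prefs, sigma_f=17.0)  # one trial
```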

43 3 Width of tuning and spread of stimulus preferences in visual cortex To estimate the width of tuning σf and the spread of stimulus preferences σp of cortico-cortical afferent populations in visual cortex, we critically reviewed published anatomical and physiological data. [sent-67, score-2.899]

44 We concentrated on excitatory synaptic inputs, which form the majority of inputs to a cortical pyramidal neuron [10]. [sent-68, score-0.642]

45 We computed σp by fitting (by a least-squares method) the published histograms of synaptic connections as a function of the stimulus preference of the input neuron to Gaussian distributions. [sent-69, score-0.982]

46 Similarly, we determined σf by fitting spike count histograms to Gaussian tuning curves. [sent-70, score-0.365]
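
A least-squares Gaussian fit of the kind described in sentences 45–46 might look as follows; the histogram values below are placeholders for illustration, not the published data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

# Placeholder histogram: number of synaptic connections vs. difference in
# orientation preference between input and target neuron (degrees).
bins = np.array([-60.0, -40.0, -20.0, 0.0, 20.0, 40.0, 60.0])
counts = np.array([2.0, 8.0, 30.0, 55.0, 28.0, 9.0, 3.0])

popt, _ = curve_fit(gaussian, bins, counts, p0=[counts.max(), 0.0, 20.0])
print(f"fitted spread sigma_p ~ {abs(popt[2]):.1f} deg")
```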

47 When considering a target neuron in ferret primary visual cortex and using orientation as the stimulus parameter, the spread of stimulus preferences σp of its inputs is ≈ 20° for layer 5/6 neurons [3], and 16° [3] to 23° [21] for layer 2/3 neurons. [sent-71, score-2.494]

48 The orientation tuning width σf of the cortical inputs to the V1 target neuron is that of other V1 neurons that project to it. [sent-72, score-1.263]

49 This σf is 17° for Layer 4 neurons [22], and it is similar for neurons in deep and superficial layers [3]. [sent-73, score-0.643]

50 When considering a target neuron in Layer 4 of cat visual cortex and orientation tuning, the spread of stimulus preference σp is 20° [2] and σf is ≈ 17°. [sent-74, score-1.421]

51 When considering a target neuron in ferret visual cortex and motion direction tuning, the spread of tuning of its inputs σp is ≈ 30° [1]. [sent-75, score-1.255]

52 The motion direction tuning width σf of macaque neurons is ≈ 28°, and this width is similar across species (see [13]). [sent-76, score-0.65]
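
Taking the numbers quoted in sentences 47–52 at face value, σf and σp are comparable in every preparation. The pairings below are one reading of those figures (for layer 2/3 the reported σp spans 16°–23°, and the ferret direction-tuning width is taken from the macaque value said to be similar across species):

```python
# (sigma_f, sigma_p) in degrees, as read from the meta-analysis above.
measurements = {
    "ferret V1, layer 5/6, orientation": (17.0, 20.0),
    "ferret V1, layer 2/3, orientation": (17.0, 16.0),
    "cat V1, layer 4, orientation":      (17.0, 20.0),
    "ferret V1, motion direction":       (28.0, 30.0),
}
for prep, (sf, sp) in measurements.items():
    print(f"{prep}: sigma_f/sigma_p = {sf / sp:.2f}")
```

All ratios fall near 1, the range the analysis below identifies as efficient.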

53 4 Information theoretic quantification of population decoding To characterise how a target neuronal system can decode the information about sensory stimuli contained in the activity of its afferent neuronal population, we use mutual information [23]. [sent-81, score-1.621]

54 The mutual information between a set of stimuli and the neuronal responses quantifies how well any decoder can discriminate among stimuli by observing the neuronal responses. [sent-82, score-0.899]

55 This measure has the advantage of being independent of the decoding mechanism used, and thus puts precise constraints on the information that can be decoded by any biological system operating on the afferent activity. [sent-83, score-0.468]

56 Previous studies on the information content of an afferent neuronal population [7, 8] have assumed that the target neuronal decoding system can extract all the information during synaptic transmission. [sent-84, score-1.412]

57 To do so, the target neuron must conserve the 'label' of the spikes arriving from multiple neurons at different sites on its dendritic tree [9]. [sent-85, score-0.786]

58 Given the potential biophysical difficulty in processing each spike separately, a simple alternative to spike labelling has been proposed: spike pooling [10, 24]. [sent-86, score-0.419]

59 In this scheme, the target neuron simply sums up the afferent activity. [sent-87, score-0.686]

60 P (r|s) is the probability of observing a neuronal population response r conditional on the occurrence of stimulus s, and P (r) = ∫ ds P (s)P (r|s). [sent-92, score-0.983]

61 The information of Eq. (3) is only accessible to a decoder that can keep the label of which neuron fired which spike [9, 24]. [sent-94, score-0.452]

62 The probability P (r|s) is computed according to the Poisson distribution, which is entirely determined by the knowledge of the tuning curves [5]. [sent-95, score-0.284]

63 The Fisher information matrix can be computed by taking into account that, for a population of Poisson neurons, it is just the sum of the Fisher information of the individual neurons, and the latter has a simple expression in terms of the tuning curves [16]. [sent-103, score-0.895]

64 Since the neuronal population size N is large, the sum over the Fisher information of individual neurons can be replaced by an integral over the stimulus preferences of the neurons in the population, weighted by their probability density P (ŝ). [sent-104, score-1.676]
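
For D = 1 and the Gaussian-tuned Poisson population sketched earlier, this computation can be written as a short function (reusing numpy as np from the earlier sketch); with the preferred stimuli sampled from P(ŝ), the sum below is the Monte Carlo counterpart of the integral just described:

```python
def population_fisher_info(s, s_prefs, sigma_f, r_max=50.0, tau=0.1):
    # For independent Poisson neurons with tuning curves f_i(s),
    # J(s) = sum_i f_i'(s)**2 / f_i(s); with Gaussian tuning this reduces to
    # sum_i ((s - s_pref_i) / sigma_f**2)**2 * f_i(s).
    f = r_max * tau * np.exp(-(s - s_prefs) ** 2 / (2.0 * sigma_f ** 2))
    return np.sum(((s - s_prefs) / sigma_f ** 2) ** 2 * f)
```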

65 This approximation has the advantage over Eq. (3) of requiring only an integral over a D-dimensional stimulus rather than a sum over an infinite population. [sent-110, score-0.44]

66 We have studied numerically the dependence of the labeled-line information on the parameters σf and σp as a function of the number of encoded stimulus features D. [sent-111, score-0.51]

67 We found that, unlike the case of a uniform distribution of stimulus preferences [8], there is a finite value of the width of tuning σf that maximizes the information for all D ≥ 2. [sent-115, score-0.921]

68 The range σf /σp ≈ 1 found in visual cortex either contains the maximum or corresponds to near-optimal values of information transmission. [sent-118, score-0.331]

69 For D = 1, information is maximal for very narrow tuning curves. [sent-119, score-0.271]

70 However, even in this case the information values are still efficient in the cortical range σf /σp ≈ 1, in that the low-information tail of the D = 1 curve is avoided in that region. [sent-120, score-0.255]

71 Thus, the range of values of σf and σp found in visual cortex allows efficient synaptic information transmission over a wide range of numbers of stimulus features encoded by the neuron. [sent-121, score-1.176]

72 Figure 1: Mutual labeled-line information ILL(S,R) as a function of the ratio of tuning curve width and stimulus preference spread σf /σp (curves for D = 1 to D = 5). [sent-123, score-1.059]

73 The curves for each stimulus dimensionality D were shifted by a constant factor to separate them for visual inspection (lower curves correspond to higher values of D). [sent-124, score-0.666]

74 The position of the maximal information for each stimulus dimension falls either inside the range of values of σf /σp found in visual cortex, or very close to it (see text). [sent-126, score-0.646]

75 2 The information available to the pooling decoder We now consider the case in which the target neuron cannot separately process spikes from different neurons (for example, a neuron that just sums up post-synaptic potentials of approximately equal weight at the soma). [sent-129, score-1.152]

76 In this case the label of the neuron that fired each spike is lost, and the target neuron can only operate on the pooled neuronal signal. [sent-130, score-0.89]

77 The probability P (ρ|s) can be computed by noting that a sum of Poisson-distributed responses is still a Poisson-distributed response whose tuning curve to stimuli is just the sum of the individual tuning curves. [sent-135, score-0.663]

78 The pooled mutual information is thus a function of a single Poisson-distributed response variable and can be computed easily even for large populations. [sent-136, score-0.264]
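
A sketch of this computation, assuming a uniform stimulus distribution over a finite grid and reusing gaussian_tuning and s_prefs from the earlier sketches; the grid and the truncation of the count axis are assumptions of the sketch, not details from the paper:

```python
import numpy as np
from scipy.stats import poisson

def pooled_mutual_info(s_grid, s_prefs, sigma_f, r_max=50.0, tau=0.1):
    # Pooled rate F(s) = sum_i f_i(s); a sum of independent Poisson counts
    # is itself Poisson, so P(rho|s) is a single Poisson pmf.
    F = np.array([gaussian_tuning(s, s_prefs, sigma_f, r_max, tau).sum()
                  for s in s_grid])
    rho = np.arange(int(F.max() + 10.0 * np.sqrt(F.max())) + 1)
    p_rho_s = poisson.pmf(rho[None, :], F[:, None])   # shape (|S|, |rho|)
    p_s = np.full(len(s_grid), 1.0 / len(s_grid))     # uniform P(s) assumed
    p_rho = p_s @ p_rho_s
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.log2(p_rho_s / np.maximum(p_rho, 1e-300))
        terms = np.where(p_rho_s > 0.0, p_rho_s * log_ratio, 0.0)
    return float(np.sum(p_s[:, None] * terms))

# e.g. pooled_mutual_info(np.linspace(-60.0, 60.0, 61), s_prefs, sigma_f=17.0)
```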

79 The dependence of the pooled information on the parameters σf and σp as a function of the number of encoded stimulus features D is reported in Fig. [sent-137, score-0.608]

80 The difference is that, for the pooled information, there is a finite optimal value for information transmission also when the neurons are tuned to a one-dimensional stimulus feature. [sent-140, score-1.016]

81 For all cases of stimulus dimensionality considered, efficient information transmission through the pooled neuronal decoder can still be reached in the visual cortical range of σf /σp. [sent-141, score-1.839] [sent-147, score-0.665]

82 Figure 2: Pooled mutual information as a function of the ratio of tuning curve width and stimulus preference spread σf /σp (curves for D = 1 to D = 3). As in Fig. 1, the curves for each stimulus dimensionality D were shifted by a constant factor to separate them for visual inspection (lower curves correspond to higher values of D). [sent-144, score-0.666]

84 This finding shows that the range of values of σf and σp found in visual cortex allows efficient synaptic information transmission even if the target neuron does not rely on complex dendritic processing to conserve the label of the neuron that fired the spike. [sent-150, score-1.295]

85 5 Conclusions The stimulus specificity of cortico-cortical connections is important for understanding the mechanisms of generation of orientation tuning (see [4] for a review). [sent-151, score-0.865]

86 Our results suggest that, whatever the exact role of cortico-cortical synapses in generating orientation tuning, their wiring allows efficient transmission of sensory information. [sent-153, score-0.321]

87 Organization of intracortical circuits in relation to direction preference maps in ferret visual cortex. [sent-163, score-0.389]

88 Orientation topography of layer 4 lateral networks revealed by optical imaging in cat visual cortex (area 18). [sent-176, score-0.375]

89 Relations of local inhibitory and excitatory circuits to orientation preference maps in ferret visual cortex. [sent-183, score-0.504]

90 Narrow versus wide tuning curves: what’s best for a population code? [sent-212, score-0.483]

91 Isolation of relevant visual features from random stimuli for cortical complex cells. [sent-247, score-0.344]

92 Direction and orientation selectivity of neurons in visual area MT of the macaque. [sent-253, score-0.575]

93 The analysis of visual-motion - a comparison of neuronal and psychophysical performance. [sent-266, score-0.261]

94 Information tuning of populations of neurons in primary visual cortex. [sent-271, score-0.912]

95 Population coding of stimulus location in rat somatosensory cortex. [sent-304, score-0.529]

96 Excess synchrony in motor cortical neurons provides redundant direction information with that from coarse temporal measures. [sent-315, score-0.482]

97 Relations between local synaptic connections and orientation domains in primary visual cortex. [sent-324, score-0.46]

98 Receptive fields and response properties of neurons in layer 4 of ferret visual cortex. [sent-332, score-0.661]

99 Decoding neuronal population activity in rat somatosensory cortex: role of columnar organization. [sent-350, score-0.566]

100 Mutual information of population codes and distance measures in probability space. [sent-353, score-0.237]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('stimulus', 0.44), ('afferent', 0.35), ('neurons', 0.295), ('neuronal', 0.261), ('tuning', 0.242), ('neuron', 0.235), ('population', 0.208), ('cortex', 0.16), ('visual', 0.142), ('spread', 0.134), ('synaptic', 0.133), ('cortical', 0.13), ('transmission', 0.128), ('preferences', 0.125), ('ferret', 0.122), ('orientation', 0.112), ('spike', 0.098), ('pooled', 0.098), ('decoder', 0.097), ('preference', 0.097), ('inputs', 0.086), ('width', 0.085), ('mutual', 0.085), ('target', 0.078), ('pooling', 0.076), ('stimuli', 0.072), ('manchester', 0.07), ('decoding', 0.063), ('conserve', 0.061), ('fisher', 0.057), ('spikes', 0.056), ('sensory', 0.054), ('dsp', 0.052), ('panzeri', 0.052), ('petersen', 0.052), ('response', 0.052), ('populations', 0.051), ('layer', 0.05), ('biophysical', 0.049), ('connections', 0.048), ('poisson', 0.044), ('activity', 0.043), ('curves', 0.042), ('super', 0.042), ('encoded', 0.041), ('preferred', 0.04), ('dendritic', 0.037), ('range', 0.035), ('coding', 0.035), ('kang', 0.035), ('moffat', 0.035), ('nevado', 0.035), ('roerig', 0.035), ('arrangement', 0.033), ('wide', 0.033), ('quanti', 0.033), ('curve', 0.032), ('excitatory', 0.031), ('decode', 0.03), ('corticocortical', 0.03), ('somatosensory', 0.03), ('theoretic', 0.03), ('information', 0.029), ('layers', 0.029), ('published', 0.029), ('city', 0.029), ('motion', 0.028), ('direction', 0.028), ('characterise', 0.028), ('po', 0.028), ('faculty', 0.028), ('shadlen', 0.028), ('anatomical', 0.028), ('separately', 0.028), ('expression', 0.027), ('synapses', 0.027), ('majority', 0.027), ('tuned', 0.026), ('rmax', 0.026), ('selectivity', 0.026), ('decoded', 0.026), ('soma', 0.026), ('red', 0.026), ('count', 0.025), ('primary', 0.025), ('arriving', 0.024), ('rat', 0.024), ('orientations', 0.024), ('deep', 0.024), ('implications', 0.023), ('cat', 0.023), ('mechanisms', 0.023), ('sums', 0.023), ('individual', 0.023), ('cerebral', 0.022), ('salient', 0.022), ('label', 0.022), ('observing', 0.022), ('stands', 0.021), ('life', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 140 nips-2004-Optimal Information Decoding from Neuronal Populations with Specific Stimulus Selectivity

Author: Marcelo A. Montemurro, Stefano Panzeri

Abstract: A typical neuron in visual cortex receives most inputs from other cortical neurons with a roughly similar stimulus preference. Does this arrangement of inputs allow efficient readout of sensory information by the target cortical neuron? We address this issue by using simple modelling of neuronal population activity and information theoretic tools. We find that efficient synaptic information transmission requires that the tuning curve of the afferent neurons is approximately as wide as the spread of stimulus preferences of the afferent neurons reaching the target neuron. By meta-analysis of neurophysiological data we found that this is the case for cortico-cortical inputs to neurons in visual cortex. We suggest that the organization of V1 cortico-cortical synaptic inputs allows optimal information transmission.

2 0.28826159 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons

Author: Rajesh P. Rao

Abstract: There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons is used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4.

3 0.21929218 28 nips-2004-Bayesian inference in spiking neurons

Author: Sophie Deneve

Abstract: We propose a new interpretation of spiking neurons as Bayesian integrators accumulating evidence over time about events in the external world or the body, and communicating to other neurons their certainties about these events. In this model, spikes signal the occurrence of new information, i.e. what cannot be predicted from the past activity. As a result, firing statistics are close to Poisson, albeit providing a deterministic representation of probabilities. We proceed to develop a theory of Bayesian inference in spiking neural networks, recurrent interactions implementing a variant of belief propagation.

4 0.20224725 84 nips-2004-Inference, Attention, and Decision in a Bayesian Neural Architecture

Author: Angela J. Yu, Peter Dayan

Abstract: We study the synthesis of neural coding, selective attention and perceptual decision making. A hierarchical neural architecture is proposed, which implements Bayesian integration of noisy sensory input and top-down attentional priors, leading to sound perceptual discrimination. The model offers an explicit explanation for the experimentally observed modulation that prior information in one stimulus feature (location) can have on an independent feature (orientation). The network’s intermediate levels of representation instantiate known physiological properties of visual cortical neurons. The model also illustrates a possible reconciliation of cortical and neuromodulatory representations of uncertainty.

5 0.19878861 148 nips-2004-Probabilistic Computation in Spiking Populations

Author: Richard S. Zemel, Rama Natarajan, Peter Dayan, Quentin J. Huys

Abstract: As animals interact with their environments, they must constantly update estimates about their states. Bayesian models combine prior probabilities, a dynamical model and sensory evidence to update estimates optimally. These models are consistent with the results of many diverse psychophysical studies. However, little is known about the neural representation and manipulation of such Bayesian information, particularly in populations of spiking neurons. We consider this issue, suggesting a model based on standard neural architecture and activations. We illustrate the approach on a simple random walk example, and apply it to a sensorimotor integration task that provides a particularly compelling example of dynamic probabilistic computation.

6 0.19785331 194 nips-2004-Theory of localized synfire chain: characteristic propagation speed of stable spike pattern

7 0.18784674 151 nips-2004-Rate- and Phase-coded Autoassociative Memory

8 0.18436097 153 nips-2004-Reducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity

9 0.16436765 157 nips-2004-Saliency-Driven Image Acuity Modulation on a Reconfigurable Array of Spiking Silicon Neurons

10 0.15833148 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons

11 0.14240375 112 nips-2004-Maximising Sensitivity in a Spiking Network

12 0.13863823 173 nips-2004-Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model

13 0.13201421 46 nips-2004-Constraining a Bayesian Model of Human Visual Speed Perception

14 0.11299112 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits

15 0.095008947 170 nips-2004-Similarity and Discrimination in Classical Conditioning: A Latent Variable Account

16 0.086758316 97 nips-2004-Learning Efficient Auditory Codes Using Spikes Predicts Cochlear Filters

17 0.085158221 35 nips-2004-Chemosensory Processing in a Spiking Model of the Olfactory Bulb: Chemotopic Convergence and Center Surround Inhibition

18 0.084534623 12 nips-2004-A Temporal Kernel-Based Model for Tracking Hand Movements from Neural Activities

19 0.084075049 20 nips-2004-An Auditory Paradigm for Brain-Computer Interfaces

20 0.081772812 33 nips-2004-Brain Inspired Reinforcement Learning


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.176), (1, -0.419), (2, -0.116), (3, 0.072), (4, 0.027), (5, 0.084), (6, 0.022), (7, -0.037), (8, 0.039), (9, -0.025), (10, -0.029), (11, 0.035), (12, 0.001), (13, -0.08), (14, 0.113), (15, 0.039), (16, -0.045), (17, -0.014), (18, -0.044), (19, 0.028), (20, 0.094), (21, 0.038), (22, 0.16), (23, -0.209), (24, 0.002), (25, -0.063), (26, 0.09), (27, -0.131), (28, 0.092), (29, 0.1), (30, 0.012), (31, -0.05), (32, 0.035), (33, -0.051), (34, -0.081), (35, 0.074), (36, 0.032), (37, 0.058), (38, 0.059), (39, 0.038), (40, -0.057), (41, 0.068), (42, 0.042), (43, -0.022), (44, 0.042), (45, 0.001), (46, -0.057), (47, 0.03), (48, -0.011), (49, 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99018878 140 nips-2004-Optimal Information Decoding from Neuronal Populations with Specific Stimulus Selectivity

Author: Marcelo A. Montemurro, Stefano Panzeri

Abstract: A typical neuron in visual cortex receives most inputs from other cortical neurons with a roughly similar stimulus preference. Does this arrangement of inputs allow efficient readout of sensory information by the target cortical neuron? We address this issue by using simple modelling of neuronal population activity and information theoretic tools. We find that efficient synaptic information transmission requires that the tuning curve of the afferent neurons is approximately as wide as the spread of stimulus preferences of the afferent neurons reaching the target neuron. By meta-analysis of neurophysiological data we found that this is the case for cortico-cortical inputs to neurons in visual cortex. We suggest that the organization of V1 cortico-cortical synaptic inputs allows optimal information transmission.

2 0.79346317 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons

Author: Rajesh P. Rao

Abstract: There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons is used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4.

3 0.72281307 84 nips-2004-Inference, Attention, and Decision in a Bayesian Neural Architecture

Author: Angela J. Yu, Peter Dayan

Abstract: We study the synthesis of neural coding, selective attention and perceptual decision making. A hierarchical neural architecture is proposed, which implements Bayesian integration of noisy sensory input and top-down attentional priors, leading to sound perceptual discrimination. The model offers an explicit explanation for the experimentally observed modulation that prior information in one stimulus feature (location) can have on an independent feature (orientation). The network’s intermediate levels of representation instantiate known physiological properties of visual cortical neurons. The model also illustrates a possible reconciliation of cortical and neuromodulatory representations of uncertainty.

4 0.65761608 157 nips-2004-Saliency-Driven Image Acuity Modulation on a Reconfigurable Array of Spiking Silicon Neurons

Author: R. J. Vogelstein, Udayan Mallik, Eugenio Culurciello, Gert Cauwenberghs, Ralph Etienne-Cummings

Abstract: We have constructed a system that uses an array of 9,600 spiking silicon neurons, a fast microcontroller, and digital memory, to implement a reconfigurable network of integrate-and-fire neurons. The system is designed for rapid prototyping of spiking neural networks that require high-throughput communication with external address-event hardware. Arbitrary network topologies can be implemented by selectively routing address-events to specific internal or external targets according to a memory-based projective field mapping. The utility and versatility of the system is demonstrated by configuring it as a three-stage network that accepts input from an address-event imager, detects salient regions of the image, and performs spatial acuity modulation around a high-resolution fovea that is centered on the location of highest salience.

5 0.62813908 35 nips-2004-Chemosensory Processing in a Spiking Model of the Olfactory Bulb: Chemotopic Convergence and Center Surround Inhibition

Author: Baranidharan Raman, Ricardo Gutierrez-osuna

Abstract: This paper presents a neuromorphic model of two olfactory signalprocessing primitives: chemotopic convergence of olfactory receptor neurons, and center on-off surround lateral inhibition in the olfactory bulb. A self-organizing model of receptor convergence onto glomeruli is used to generate a spatially organized map, an olfactory image. This map serves as input to a lattice of spiking neurons with lateral connections. The dynamics of this recurrent network transforms the initial olfactory image into a spatio-temporal pattern that evolves and stabilizes into odor- and intensity-coding attractors. The model is validated using experimental data from an array of temperature-modulated gas sensors. Our results are consistent with recent neurobiological findings on the antennal lobe of the honeybee and the locust. 1 In trod u ction An artificial olfactory system comprises of an array of cross-selective chemical sensors followed by a pattern recognition engine. An elegant alternative for the processing of sensor-array signals, normally performed with statistical pattern recognition techniques [1], involves adopting solutions from the biological olfactory system. The use of neuromorphic approaches provides an opportunity for formulating new computational problems in machine olfaction, including mixture segmentation, background suppression, olfactory habituation, and odor-memory associations. A biologically inspired approach to machine olfaction involves (1) identifying key signal processing primitives in the olfactory pathway, (2) adapting these primitives to account for the unique properties of chemical sensor signals, and (3) applying the models to solving specific computational problems. The biological olfactory pathway can be divided into three general stages: (i) olfactory epithelium, where primary reception takes place, (ii) olfactory bulb (OB), where the bulk of signal processing is performed and, (iii) olfactory cortex, where odor associations are stored. A review of literature on olfactory signal processing reveals six key primitives in the olfactory pathway that can be adapted for use in machine olfaction. These primitives are: (a) chemical transduction into a combinatorial code by a large population of olfactory receptor neurons (ORN), (b) chemotopic convergence of ORN axons onto glomeruli (GL), (c) logarithmic compression through lateral inhibition at the GL level by periglomerular interneurons, (d) contrast enhancement through lateral inhibition of mitral (M) projection neurons by granule interneurons, (e) storage and association of odor memories in the piriform cortex, and (f) bulbar modulation through cortical feedback [2, 3]. This article presents a model that captures the first three abovementioned primitives: population coding, chemotopic convergence and contrast enhancement. The model operates as follows. First, a large population of cross-selective pseudosensors is generated from an array of metal-oxide (MOS) gas sensors by means of temperature modulation. Next, a self-organizing model of convergence is used to cluster these pseudo-sensors according to their relative selectivity. This clustering generates an initial spatial odor map at the GL layer. Finally, a lattice of spiking neurons with center on-off surround lateral connections is used to transform the GL map into identity- and intensity-specific attractors. The model is validated using a database of temperature-modulated sensor patterns from three analytes at three concentration levels. 
The model is shown to address the first problem in biologically inspired machine olfaction: intensity and identity coding of a chemical stimulus in a manner consistent with neurobiology [4, 5].

2 Modeling chemotopic convergence

The projection of sensory signals onto the olfactory bulb is organized such that ORNs expressing the same receptor gene converge onto one or a few GLs [3]. This convergence transforms the initial combinatorial code into an organized spatial pattern (i.e., an olfactory image). In addition, massive convergence improves the signal-to-noise ratio by integrating signals from multiple receptor neurons [6].

When incorporating this principle into machine olfaction, a fundamental difference between the artificial and biological counterparts must be overcome: the input dimensionality at the receptor/sensor level. The biological olfactory system employs a large population of ORNs (over 100 million in humans, replicated from 1,000 primary receptor types), whereas its artificial analogue uses a few chemical sensors (commonly one replica of up to 32 different sensor types). To bridge this gap, we employ a sensor excitation technique known as temperature modulation [7]. MOS sensors are conventionally driven in an isothermal fashion by maintaining a constant temperature. However, the selectivity of these devices is a function of the operating temperature. Thus, capturing the sensor response at multiple temperatures generates a wealth of additional information as compared to the isothermal mode of operation. If the temperature is modulated slowly enough (e.g., mHz), the behavior of the sensor at each point in the temperature cycle can be treated as a pseudo-sensor, and thus used to simulate a large population of cross-selective ORNs (refer to Figure 1(a)).

To model chemotopic convergence, these temperature-modulated pseudo-sensors (referred to as ORNs in what follows) must be clustered according to their selectivity [8]. As a first approximation, each ORN can be modeled by an affinity vector [9] consisting of its responses across a set of C analytes:

$$\vec{K}_i = \left[ K_i^1, K_i^2, \ldots, K_i^C \right] \qquad (1)$$

where $K_i^a$ is the response of the $i$th ORN to analyte $a$. The selectivity of this ORN is then defined by the orientation of the affinity vector $\vec{K}_i$. A close look at the OB also shows that neighboring GLs respond to similar odors [10]. Therefore, we model the ORN-GL projection with a Kohonen self-organizing map (SOM) [11]. In our model, the SOM is trained to model the distribution of ORNs in chemical sensitivity space, as defined by the affinity vectors $\vec{K}_i$. Once the training of the SOM is completed, each ORN is assigned to the closest SOM node (a simulated GL) in affinity space, thereby forming a convergence map. The response of each GL can then be computed as

$$G_j^a = \sigma\!\left( \sum_{i=1}^{N} W_{ij} \cdot ORN_i^a \right) \qquad (2)$$

where $ORN_i^a$ is the response of pseudo-sensor $i$ to analyte $a$, $W_{ij} = 1$ if pseudo-sensor $i$ converges onto GL $j$ and zero otherwise, and $\sigma(\cdot)$ is a squashing sigmoidal function that models saturation.

This convergence model works well under the assumption that the different sensory inputs are reasonably uncorrelated. Unfortunately, most gas sensors are extremely collinear. As a result, this convergence model degenerates into a few dominant GLs that capture most of the sensory activity, and a large number of dormant GLs that do not receive any projections.
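As a concrete illustration of eqs. (1)-(2), the sketch below builds a convergence map and the GL responses in Python. This is a minimal approximation under stated assumptions, not the paper's implementation: the synthetic `responses`, the fixed random unit-vector codebook standing in for a trained Kohonen SOM, and the sigmoid gain are all placeholders, and the conscience mechanism introduced next is omitted. The lattice entropy computed at the end is the spread measure used below.

```python
import numpy as np

# Minimal sketch of the chemotopic convergence stage (eqs. 1-2).
# Everything here is illustrative: `responses` stands in for the
# temperature-modulated pseudo-sensor data, and the random unit-vector
# codebook stands in for a trained Kohonen SOM.
rng = np.random.default_rng(0)
N, C, GRID = 3000, 3, 20                 # pseudo-sensors, analytes, 20x20 lattice

responses = rng.lognormal(size=(N, C))   # placeholder pseudo-sensor responses

# Eq. (1): selectivity is the *orientation* of the affinity vector,
# so compare pseudo-sensors and SOM nodes on the unit sphere.
K = responses / np.linalg.norm(responses, axis=1, keepdims=True)
codebook = rng.normal(size=(GRID * GRID, C))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

# Convergence map: each pseudo-sensor projects to its closest node (simulated GL).
assignment = np.argmax(K @ codebook.T, axis=1)

# Eq. (2): GL response = squashed sum of the responses converging onto it.
def sigmoid(x, gain=0.01):               # gain chosen arbitrarily for the sketch
    return 1.0 / (1.0 + np.exp(-gain * x))

G = np.zeros((GRID * GRID, C))
for j in np.unique(assignment):
    G[j] = sigmoid(responses[assignment == j].sum(axis=0))

# Lattice entropy, the spread measure used in the text (max = log2(400) = 8.6439).
P = np.bincount(assignment, minlength=GRID * GRID) / N
H = -np.sum(P[P > 0] * np.log2(P[P > 0]))
print(f"lattice entropy: {H:.4f} bits")
```

Running this with real, highly collinear sensor data would concentrate `assignment` on a few nodes and drive the entropy down, which is precisely the degeneracy that conscience learning addresses.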
To address this issue, we employ a form of competition known as conscience learning [12], which incorporates a habituation mechanism to prevent certain SOM nodes from dominating the competition. In this scheme, the fraction of times that a particular SOM node wins the competition is used as a bias to favor non-winning nodes. This results in a spreading of the ORN projections to neighboring units and, therefore, significantly reduces the number of dormant units. We measure the performance of the convergence mapping with the entropy across the lattice, $H = -\sum_i P_i \log P_i$, where $P_i$ is the fraction of ORNs that project to SOM node $i$ [13]. To compare Kohonen and conscience learning, we built convergence mappings with 3,000 pseudo-sensors and 400 GL units (refer to section 4 for details). The theoretical maximum of the entropy for this network, which corresponds to a uniform distribution, is $\log_2 400 = 8.6439$. When trained with Kohonen's algorithm, the entropy of the SOM is 7.3555. With conscience learning, the entropy increases to 8.2280. Thus, conscience learning is an effective mechanism for spreading the ORN projections across the GL lattice.

3 Modeling the olfactory bulb network

Mitral cells, which synapse with ORNs at the GL level, transform the initial olfactory image into a spatio-temporal code by means of lateral inhibition. Two roles have been suggested for this lateral inhibition: (a) sharpening of the molecular tuning range of individual M cells with respect to that of their corresponding ORNs [10], and (b) global redistribution of activity, such that the bulb-wide representation of an odorant, rather than the individual tuning ranges, becomes specific and concise over time [3]. More recently, center on-off surround inhibitory connections have been found in the OB [14]. These circuits have been suggested to perform pattern normalization, noise reduction, and contrast enhancement of the spatial patterns.

We model each M cell using a leaky integrate-and-fire spiking neuron [15]. The input current $I(t)$ and membrane potential $u(t)$ of a neuron are related by

$$I(t) = \frac{u(t)}{R} + C \frac{du}{dt}, \qquad \tau \frac{du}{dt} = -u(t) + R \cdot I(t) \quad [\tau = RC] \qquad (3)$$

Each M cell receives a current $I_{input}$ from ORNs and a current $I_{lateral}$ from lateral connections with other M cells:

$$I_{input}(j) = \sum_i W_{ij} \cdot ORN_i, \qquad I_{lateral}(j,t) = \sum_k L_{kj} \cdot \alpha(k, t-1) \qquad (4)$$

where $W_{ij}$ indicates the presence/absence of a synapse between $ORN_i$ and $M_j$, as determined by the chemotopic mapping, $L_{kj}$ is the efficacy of the lateral connection between $M_k$ and $M_j$, and $\alpha(k, t-1)$ is the post-synaptic current generated by a spike at $M_k$:

$$\alpha(k, t-1) = -g(k, t-1) \cdot \left[ u(j, t-1)_{+} - E_{syn} \right] \qquad (5)$$

Here $g(k, t-1)$ is the conductance of the synapse between $M_k$ and $M_j$ at time $t-1$, $u(j, t-1)$ is the membrane potential of $M_j$ at time $t-1$, the $+$ subscript indicates that this value becomes zero if negative, and $E_{syn}$ is the reverse synaptic potential. The change in conductance of the post-synaptic membrane is

$$\dot{g}(k,t) = \frac{dg(k,t)}{dt} = -\frac{g(k,t)}{\tau_{syn}} + z(k,t), \qquad \dot{z}(k,t) = \frac{dz(k,t)}{dt} = -\frac{z(k,t)}{\tau_{syn}} + g_{norm} \cdot spk(k,t) \qquad (6)$$
where $z(\cdot)$ and $g(\cdot)$ are low-pass filters of the form $\exp(-t/\tau_{syn})$ and $t \cdot \exp(-t/\tau_{syn})$, respectively, $\tau_{syn}$ is the synaptic time constant, $g_{norm}$ is a normalization constant, and $spk(j,t)$ marks the occurrence of a spike in neuron $j$ at time $t$:

$$spk(j,t) = \begin{cases} 1 & u(j,t) = V_{spike} \\ 0 & u(j,t) \neq V_{spike} \end{cases} \qquad (7)$$

Combining equations (3) and (4), the membrane potential can be expressed as

$$\dot{u}(j,t) = \frac{du(j,t)}{dt} = -\frac{u(j,t)}{RC} + \frac{I_{lateral}(j,t)}{C} + \frac{I_{input}(j)}{C} \qquad (8)$$

$$u(j,t) = \begin{cases} u(j,t-1) + \dot{u}(j,t-1) \cdot dt & u(j,t) < V_{threshold} \\ V_{spike} & u(j,t) \geq V_{threshold} \end{cases}$$

When the membrane potential reaches $V_{threshold}$, a spike is generated, and the membrane potential is reset to $V_{rest}$. Any further inputs to the neuron are ignored during the subsequent refractory period. Following [14], lateral interactions are modeled with a center on-off surround matrix $L_{ij}$. Each M cell makes excitatory synapses to nearby M cells (d
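Since the source text breaks off at the definition of the surround, the following Python sketch of eqs. (3)-(8) necessarily fills in hypothetical values: the excitatory/inhibitory radii (2 and 4 lattice units), the connection weights, reversal potential $E_{syn}$, refractory period, and the constant input drive are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

# Minimal sketch of the mitral-cell network (eqs. 3-8), integrated with Euler
# steps. All parameter values are illustrative, not the paper's.
rng = np.random.default_rng(1)
GRID = 20
M = GRID * GRID                          # 20x20 lattice of M cells
dt, T = 1.0, 200                         # time step and number of steps
R, C = 10.0, 1.0                         # membrane resistance and capacitance
tau_syn, g_norm = 5.0, 1.0
E_syn = 5.0                              # hypothetical reversal potential
V_thresh, V_rest = 1.0, 0.0
REFRAC = 3                               # hypothetical refractory period, in steps

# Center on-off surround matrix L (eq. 4): excitatory to nearby cells,
# inhibitory to a farther annulus; radii 2 and 4 are guesses, since the
# source text truncates exactly at this definition.
ix, iy = np.divmod(np.arange(M), GRID)
d = np.hypot(ix[:, None] - ix[None, :], iy[:, None] - iy[None, :])
L = np.where(d <= 2, 0.05, np.where(d <= 4, -0.05, 0.0))
np.fill_diagonal(L, 0.0)                 # no self-synapse

I_input = rng.uniform(0.0, 0.3, size=M)  # placeholder GL drive (eq. 4, top)
u = np.full(M, V_rest)                   # membrane potentials
g = np.zeros(M)                          # synaptic conductances (eq. 6)
z = np.zeros(M)
refractory = np.zeros(M, dtype=int)
spikes = np.zeros((T, M))

for t in range(T):
    spk = (u >= V_thresh).astype(float)  # eq. (7): threshold crossing = spike
    u[spk > 0] = V_rest                  # reset after the spike
    refractory[spk > 0] = REFRAC
    spikes[t] = spk

    # Eq. (6): cascaded low-pass filters give alpha-shaped conductances.
    z += dt * (-z / tau_syn + g_norm * spk)
    g += dt * (-g / tau_syn + z)

    # Eqs. (4)-(5): alpha couples presynaptic g(k) with postsynaptic u(j), so
    # the lateral current factorizes: I_lat(j) = -(u(j)_+ - E_syn) * sum_k L_kj g(k).
    I_lat = -(np.maximum(u, 0.0) - E_syn) * (L.T @ g)

    # Eq. (8): Euler update of the membrane potential, frozen while refractory.
    du = -u / (R * C) + (I_lat + I_input) / C
    active = refractory <= 0
    u[active] += dt * du[active]
    refractory -= 1
```

The spatio-temporal code would then be read out from `spikes`, e.g., as per-cell firing rates over a sliding window, which is where the odor- and intensity-coding attractors described in the abstract would be observed.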

6 0.61770034 194 nips-2004-Theory of localized synfire chain: characteristic propagation speed of stable spike pattern

7 0.59931743 28 nips-2004-Bayesian inference in spiking neurons

8 0.59616458 46 nips-2004-Constraining a Bayesian Model of Human Visual Speed Perception

9 0.56271333 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons

10 0.50883287 151 nips-2004-Rate- and Phase-coded Autoassociative Memory

11 0.50069153 153 nips-2004-Reducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity

12 0.45845449 170 nips-2004-Similarity and Discrimination in Classical Conditioning: A Latent Variable Account

13 0.42192957 112 nips-2004-Maximising Sensitivity in a Spiking Network

14 0.40827677 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits

15 0.40357459 148 nips-2004-Probabilistic Computation in Spiking Populations

16 0.39192861 193 nips-2004-Theories of Access Consciousness

17 0.34947857 173 nips-2004-Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model

18 0.30663398 97 nips-2004-Learning Efficient Auditory Codes Using Spikes Predicts Cochlear Filters

19 0.29307088 75 nips-2004-Heuristics for Ordering Cue Search in Decision Making

20 0.28605255 33 nips-2004-Brain Inspired Reinforcement Learning


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.073), (15, 0.068), (26, 0.053), (31, 0.01), (33, 0.091), (35, 0.533), (50, 0.016), (82, 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94605523 140 nips-2004-Optimal Information Decoding from Neuronal Populations with Specific Stimulus Selectivity


2 0.73704398 134 nips-2004-Object Classification from a Single Example Utilizing Class Relevance Metrics

Author: Michael Fink

Abstract: We describe a framework for learning an object classifier from a single example. This goal is achieved by emphasizing the relevant dimensions for classification using available examples of related classes. Learning to accurately classify objects from a single training example is often unfeasible due to overfitting effects. However, if the instance representation provides that the distance between each two instances of the same class is smaller than the distance between any two instances from different classes, then a nearest neighbor classifier could achieve perfect performance with a single training example. We therefore suggest a two stage strategy. First, learn a metric over the instances that achieves the distance criterion mentioned above, from available examples of other related classes. Then, using the single examples, define a nearest neighbor classifier where distance is evaluated by the learned class relevance metric. Finding a metric that emphasizes the relevant dimensions for classification might not be possible when restricted to linear projections. We therefore make use of a kernel based metric learning algorithm. Our setting encodes object instances as sets of locality based descriptors and adopts an appropriate image kernel for the class relevance metric learning. The proposed framework for learning from a single example is demonstrated in a synthetic setting and on a character classification task. 1

3 0.59659493 1 nips-2004-A Cost-Shaping LP for Bellman Error Minimization with Performance Guarantees

Author: Daniela D. Farias, Benjamin V. Roy

Abstract: We introduce a new algorithm based on linear programming that approximates the differential value function of an average-cost Markov decision process via a linear combination of pre-selected basis functions. The algorithm carries out a form of cost shaping and minimizes a version of Bellman error. We establish an error bound that scales gracefully with the number of states without imposing the (strong) Lyapunov condition required by its counterpart in [6]. We propose a path-following method that automates selection of important algorithm parameters which represent counterparts to the “state-relevance weights” studied in [6]. 1

4 0.46765593 173 nips-2004-Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model

Author: Taro Toyoizumi, Jean-pascal Pfister, Kazuyuki Aihara, Wulfram Gerstner

Abstract: We derive an optimal learning rule in the sense of mutual information maximization for a spiking neuron model. Under the assumption of small fluctuations of the input, we find a spike-timing dependent plasticity (STDP) function which depends on the time course of excitatory postsynaptic potentials (EPSPs) and the autocorrelation function of the postsynaptic neuron. We show that the STDP function has both positive and negative phases. The positive phase is related to the shape of the EPSP while the negative phase is controlled by neuronal refractoriness. 1

5 0.43410602 194 nips-2004-Theory of localized synfire chain: characteristic propagation speed of stable spike pattern

Author: Kosuke Hamaguchi, Masato Okada, Kazuyuki Aihara

Abstract: Repeated spike patterns have often been taken as evidence for the synfire chain, a phenomenon that a stable spike synchrony propagates through a feedforward network. Inter-spike intervals which represent a repeated spike pattern are influenced by the propagation speed of a spike packet. However, the relation between the propagation speed and network structure is not well understood. While it is apparent that the propagation speed depends on the excitatory synapse strength, it might also be related to spike patterns. We analyze a feedforward network with Mexican-Hat-type connectivity (FMH) using the Fokker-Planck equation. We show that both a uniform and a localized spike packet are stable in the FMH in a certain parameter region. We also demonstrate that the propagation speed depends on the distinct firing patterns in the same network.

6 0.43296078 84 nips-2004-Inference, Attention, and Decision in a Bayesian Neural Architecture

7 0.43125501 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons

8 0.42572188 157 nips-2004-Saliency-Driven Image Acuity Modulation on a Reconfigurable Array of Spiking Silicon Neurons

9 0.4239063 153 nips-2004-Reducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity

10 0.42371517 21 nips-2004-An Information Maximization Model of Eye Movements

11 0.41711873 28 nips-2004-Bayesian inference in spiking neurons

12 0.4129259 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons

13 0.4023279 151 nips-2004-Rate- and Phase-coded Autoassociative Memory

14 0.3975471 172 nips-2004-Sparse Coding of Natural Images Using an Overcomplete Set of Limited Capacity Units

15 0.38236305 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits

16 0.37642354 58 nips-2004-Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid

17 0.37268201 112 nips-2004-Maximising Sensitivity in a Spiking Network

18 0.37239379 170 nips-2004-Similarity and Discrimination in Classical Conditioning: A Latent Variable Account

19 0.35577697 35 nips-2004-Chemosensory Processing in a Spiking Model of the Olfactory Bulb: Chemotopic Convergence and Center Surround Inhibition

20 0.35181952 148 nips-2004-Probabilistic Computation in Spiking Populations