nips nips2006 nips2006-162 knowledge-graph by maker-knowledge-mining

162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron


Source: pdf

Author: Ryota Kobayashi, Shigeru Shinomoto

Abstract: It has been established that a neuron reproduces a highly precise spike response to identical fluctuating input currents. We wish to accurately predict the firing times of a given neuron for any input current. For this purpose we adopt a model that mimics the dynamics of the membrane potential, and then take a cue from its dynamics for predicting the spike occurrence for a novel input current. It is found that the prediction is significantly improved by observing the state space of the membrane potential and its time derivative(s) in advance of a possible spike, in comparison to simply thresholding an instantaneous value of the estimated potential. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Predicting spike times from subthreshold dynamics of a neuron Ryota Kobayashi Department of Physics Kyoto University Kyoto 606-8502, Japan kobayashi@ton. [sent-1, score-1.057]

2 It has been established that a neuron reproduces a highly precise spike response to identical fluctuating input currents. [sent-8, score-0.894]

3 We wish to accurately predict the firing times of a given neuron for any input current. [sent-9, score-0.29]

4 For this purpose we adopt a model that mimics the dynamics of the membrane potential, and then take a cue from its dynamics for predicting the spike occurrence for a novel input current. [sent-10, score-1.36]

5 It is found that the prediction is significantly improved by observing the state space of the membrane potential and its time derivative(s) in advance of a possible spike, in comparison to simply thresholding an instantaneous value of the estimated potential. [sent-11, score-0.89]

6 In the simplification proposed by FitzHugh [2] and Nagumo et al [3], the number of equations is reduced to two: the fast and slow variables which minimally represent the excitable dynamics. [sent-13, score-0.138]
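The two-variable reduction can be made concrete with a short integration sketch; the parameter values below are standard textbook FitzHugh–Nagumo constants, not values taken from this paper:

```python
import numpy as np

# Forward-Euler integration of the FitzHugh-Nagumo model: a fast voltage-like
# variable v and a slow recovery variable w minimally capture excitability.
def fitzhugh_nagumo(I_ext=0.5, T=200.0, dt=0.01, a=0.7, b=0.8, tau=12.5):
    n = int(T / dt)
    v, w = -1.0, 1.0
    vs = np.empty(n)
    for i in range(n):
        v += dt * (v - v**3 / 3.0 - w + I_ext)   # fast variable
        w += dt * (v + a - b * w) / tau          # slow recovery variable
        vs[i] = v
    return vs

vs = fitzhugh_nagumo()
print(vs.min(), vs.max())  # the fast variable sweeps over a full excitable cycle
```

With this constant drive the fixed point is unstable, so the two equations settle onto a limit cycle of repeated spikes.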

7 The leaky integrate-and-fire model [4], originally proposed far in advance of the Hodgkin-Huxley model, consists of only one variable that corresponds to the membrane potential, with a voltage resetting mechanism. [sent-14, score-0.48]

8 Those simplified models have been successful not only in extracting the essence of the dynamics, but also in reducing the computational cost of studying the large-scale dynamics of an assembly of neurons. [sent-15, score-0.133]

9 In contrast to such taste for simplification, there are also a number of studies that pursue realism by developing multi-compartment models and installing newly found ionic channels. [sent-16, score-0.071]

10 User-friendly simulation platforms, such as NEURON [5] and GENESIS [6], enable experimental neurophysiologists to casually reproduce their experimental results or to explore potentially interesting phenomena for a new experiment to be performed. [sent-17, score-0.095]

11 Though those simulators have been successful in reproducing qualitative aspects of neuronal responses to various conditions, quantitative reproduction as well as prediction for novel experiments appears to be difficult to realize [7]. [sent-18, score-0.149]

12 The difficulty is due to the complexity of the model accompanied with a large number of undetermined free parameters. [sent-19, score-0.034]

13 Even if a true model of a particular neuron is included in the family of models, it is practically difficult to explore the true parameters in the high-dimensional space of parameters that dominate the nonlinear dynamics. [sent-20, score-0.405]

14 Kistler et al [8, 9] recently suggested extending the leaky integrate-and-fire model so that the real membrane dynamics of any neuron can be adopted. [sent-21, score-0.868]

15 The so-called “spike response model” has been successful not only in reproducing the data but also in predicting the spike timing for a novel input current [8, 9, 10, 11]. [sent-22, score-0.703]

16 The fairly precise prediction achieved by such a simple model indicates that the spike occurrence is determined principally by the subthreshold dynamics. [sent-24, score-1.088]

17 In other words, the highly nonlinear dynamics of a neuron can be decomposed into two simple, predictable processes: a relatively simple subthreshold dynamics, and the dynamics of an action potential of a nearly fixed shape (Fig. [sent-25, score-0.936]

18 action potential (nearly fixed) V subthreshold (predictable) dV/dt Figure 1: The highly nonlinear dynamics of a neuron is decomposed into two simple, predictable processes. [sent-27, score-0.83]

19 In this paper, we propose a framework for improving the prediction of spike times by paying close attention to the transfer between the two predictable processes mentioned above. [sent-28, score-0.7]

20 It is assumed in the original spike response model that a spike occurs if the membrane potential exceeds a certain threshold [9]. [sent-29, score-1.687]

21 We revised this rule to maximally utilize the information of a higher-dimensional state space, consisting of not only the instantaneous membrane potential, but also its time derivative(s). [sent-30, score-0.451]

22 Such a subthreshold state can provide cues for the occurrence of a spike, but with a certain time difference. [sent-31, score-0.569]

23 For the purpose of exploring the optimal time shift, we propose a method of maximizing the mutual information between the subthreshold state and the occurrence of a spike. [sent-32, score-0.591]

24 By employing the linear filter model [12] and the spike response model [9] for mimicking the subthreshold voltage response of a neuron, we examine how much the present framework may improve the prediction for simulation data of the fast-spiking model [13]. [sent-33, score-1.202]

25 2 Methods The response of a neuron is precisely reproduced when presented with identical fluctuating input currents [14]. [sent-34, score-0.553]

26 This implies that the neuronal membrane potential V (t) is determined by the past input current {I(t)}, or V (t) = F ({I(t)}), (1) where F ({I(t)}) represents a functional of a time-dependent current I(t). [sent-35, score-0.491]

27 A rapid swing in the polarity of the membrane potential is called a “spike.” [sent-36, score-0.401]

28 The occurrence of a spike could be defined practically by measuring the membrane potential V(t) exceeding a certain threshold, V(t) > Vth. (2) [sent-37, score-1.203]

29 The time of each spike could be defined either as the first time the threshold is exceeded, or as the peak of the action potential that follows the crossing. [sent-38, score-0.791]
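The threshold rule of Eq. (2), with the first-crossing convention, can be sketched as a simple upward-crossing detector; the trace, threshold value, and time step below are made-up illustration values:

```python
import numpy as np

def detect_spikes(V, Vth, dt):
    """Detect spike times as upward crossings of the threshold Vth (Eq. 2).

    Returns the first-crossing times; the peak of the following action
    potential could be used instead, as the text notes.
    """
    above = V > Vth
    # an upward crossing: below threshold at i-1, above at i
    crossings = np.where(~above[:-1] & above[1:])[0] + 1
    return crossings * dt

# toy trace: a wobbling baseline with two "action potentials"
dt = 0.1  # [msec]
t = np.arange(0, 100, dt)
V = -65 + 2 * np.sin(0.1 * t)
V[300:310] += 80   # spike near t = 30 msec
V[700:710] += 80   # spike near t = 70 msec
print(detect_spikes(V, Vth=-20, dt=dt))  # two crossing times, ~30 and ~70 msec
```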

30 Kistler et al [8] and Jolivet et al [10, 11] proposed a method of mimicking the membrane dynamics of a target neuron with the simple spike response model in which an input current is linearly integrated. [sent-39, score-1.697]

31 The leaky integrate-and-fire model can be regarded as an example of the spike response model [9]; the differential equation can be rewritten as an integral equation in which the membrane potential is given as the integral of the past input current with an exponentially decaying kernel. [sent-40, score-1.201]
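This equivalence between the differential and integral forms can be checked numerically; the time constant, gain, and input statistics below are toy values for illustration only:

```python
import numpy as np

# Leaky integrator: tau dv/dt = -v + R*I(t), v(0) = 0. Its solution is the
# convolution of I with an exponentially decaying kernel,
#   v(t) = (R/tau) * integral_0^t exp(-s/tau) I(t - s) ds,
# which is the form the spike response model generalizes.
dt, tau, R = 0.1, 10.0, 1.0           # [msec], [msec], gain -- toy values
rng = np.random.default_rng(0)
I = rng.normal(1.0, 0.5, size=5000)   # fluctuating input current

# (a) forward-Euler integration of the differential equation
v_ode = np.zeros(len(I))
for i in range(1, len(I)):
    v_ode[i] = v_ode[i-1] + dt / tau * (-v_ode[i-1] + R * I[i-1])

# (b) direct convolution with the exponential kernel
s = np.arange(0, 10 * tau, dt)
kernel = (R / tau) * np.exp(-s / tau) * dt
v_conv = np.convolve(I, kernel)[:len(I)]

print(np.max(np.abs(v_ode - v_conv)))  # small: the two forms agree up to O(dt)
```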

32 The spike response model is an extension of the leaky integrate-and-fire model, where the integrating kernel is adaptively determined by the data, and the after hyperpolarizing potential is added subsequently to every spike. [sent-41, score-0.85]

33 It is also possible to further include terms that reduce the responsiveness and increase the threshold after an action potential takes place. [sent-42, score-0.215]

34 Even in the learning stage, no model is able to perfectly reproduce the output V (t) of a target neuron for a given input I(t). [sent-43, score-0.423]

35 We will denote the output of the model (in lower case) as v(t) = fk({I(t)}), (3) where k represents a set of model parameters. [sent-44, score-0.068]

36 The model parameters are learned by mimicking sample input-output data. [sent-45, score-0.096]

37 1 State space method As the output of the model v(t) is not identical to the true membrane potential of the target neuron V(t), a spike occurrence cannot be determined accurately by simply applying the same threshold rule, Eq. (2). [sent-48, score-1.649]

38 In this paper, we suggest revising the spike generation rule so that a spike occurrence is best predicted from the model potential v(t). [sent-50, score-1.497]

39 Suppose that we have adjusted the parameters of fk ({I(t)}) so that the output of the model {v(t)} best approximates the membrane potential {V (t)} of a target neuron for a given set of currents {I(t)}. [sent-51, score-0.986]

40 If the sample data set {I(t), V(t)} employed in learning is large enough, the spike occurrence can be predicted by estimating an empirical probability of a spike being generated at the time t, given a time-dependent orbit of an estimated output, {v(t)}, as pspike(t|{v(t)}). (5) [sent-52, score-1.677]

41 In a practical experiment, however, the amount of collectable data is insufficient for estimating the spiking probability with respect to any orbit of v(t). [sent-53, score-0.275]

42 In place of such exhaustive examination, we suggest utilizing the state space information such as the time derivatives of the model potential at a certain time. [sent-54, score-0.345]

43 The spike occurrence at time t could be predicted from the m-dimensional state space information ⃗v ≡ (v, v′, · · · , v^(m−1)), as observed at a time s before t, as pspike(t|⃗v_{t−s}), (6) where ⃗v_{t−s} ≡ (v(t − s), v′(t − s), · · · , v^(m−1)(t − s)). [sent-55, score-1.11]

44 2 Determination of the optimal time shift The time shift s introduced in the spike time prediction, Eq. (6), remains to be determined. [sent-57, score-0.68]

45 We propose optimizing the time shift s by maximizing the mutual information between the state space information ⃗v_{t−s} and the presence or absence of a spike in the time interval (t − δt/2, t + δt/2], which is denoted as zt = 1 or 0. [sent-59, score-1.014]

46 The mutual information [15] is given as MI(zt; ⃗v_{t−s}) = H(⃗v_{t−s}) − H(⃗v_{t−s}|zt), (7) where H(⃗v_{t−s}) = −∫ d⃗v_{t−s} p(⃗v_{t−s}) log p(⃗v_{t−s}), (8) and H(⃗v_{t−s}|zt) = −∫ d⃗v_{t−s} Σ_{zt∈{0,1}} p(⃗v_{t−s}|zt) p(zt) log p(⃗v_{t−s}|zt). (9) [sent-60, score-0.254]

47 Here, p(⃗v_{t−s}|zt) is the probability, given the presence or absence of a spike at a time t ∈ (t − δt/2, t + δt/2], of the state being ⃗v_{t−s} a time s before the spike. [sent-61, score-0.888]
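A minimal histogram-based sketch of this shift optimization is given below; the equal-count quantization, bin count, and the synthetic trace are illustrative choices, not the authors' procedure:

```python
import numpy as np

def mutual_information(states, z, bins=12):
    """Histogram estimate of MI(z; state) = H(state) - H(state|z), as in
    Eqs. (7)-(9), for a binary spike indicator z and a 2-D state that is
    quantized into equal-count bins per dimension."""
    idx = np.zeros(len(z), dtype=int)
    for d in range(states.shape[1]):
        edges = np.quantile(states[:, d], np.linspace(0, 1, bins + 1)[1:-1])
        idx = idx * bins + np.searchsorted(edges, states[:, d])
    p_all = np.bincount(idx, minlength=bins**2) / len(idx)
    mi = 0.0
    for zi in (0, 1):
        pz = np.mean(z == zi)
        if pz == 0:
            continue
        p_cond = np.bincount(idx[z == zi], minlength=bins**2) / np.sum(z == zi)
        nz = p_cond > 0
        mi += pz * np.sum(p_cond[nz] * np.log2(p_cond[nz] / p_all[nz]))
    return mi  # in bits

def best_time_shift(v, z, shifts, dt):
    """Scan candidate shifts s and return the one maximizing MI between the
    state (v, v') at time t - s and the spike indicator z at time t."""
    vp = np.gradient(v, dt)
    mis = []
    for s in shifts:
        k = int(round(s / dt))
        states = np.column_stack([v[:-k], vp[:-k]])
        mis.append(mutual_information(states, z[k:]))
    return shifts[int(np.argmax(mis))]

# synthetic check: the spike indicator depends on v exactly 2 msec in the past
dt = 0.1
rng = np.random.default_rng(2)
v = np.convolve(rng.normal(size=20000), np.exp(-np.arange(100) * dt), "same")
z = (np.roll(v, 20) > np.quantile(v, 0.95)).astype(int)
print(best_time_shift(v, z, shifts=[1.0, 1.5, 2.0, 2.5, 3.0], dt=dt))  # recovers ~2 msec
```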

48 v With the time difference s optimized, we then obtain the empirical probability of the spike occurrence at the time t, given the state space information at the time t − s, using the Bayes theorem, pspike (t|⃗t−s ) ∝ p(zt = 1|⃗t−s ) = v v 3 p(⃗t−s |zt )p(zt ) v . [sent-62, score-1.092]

49 p(⃗t−s ) v (10) Results We evaluated our state space method of predicting spike times by applying it to target data obtained for a fast-spiking neuron model proposed by Erisir et al [13] (see Appendix). [sent-63, score-1.217]
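At the level of counts, the Bayes formula of Eq. (10) is exactly the per-cell spike frequency; a small sketch over a quantized state space (with made-up cell indices) makes this explicit:

```python
import numpy as np

def spike_probability(state_idx, z, n_cells):
    """Empirical p(z=1 | quantized state) via the Bayes formula of Eq. (10):
    p(z=1|x) = p(x|z=1) p(z=1) / p(x), with every term estimated by counting."""
    p_x = np.bincount(state_idx, minlength=n_cells) / len(state_idx)
    p_x_spike = np.bincount(state_idx[z == 1], minlength=n_cells) / max(z.sum(), 1)
    p_z1 = z.mean()
    # guard against empty cells (p_x == 0)
    return np.divide(p_x_spike * p_z1, p_x, out=np.zeros(n_cells), where=p_x > 0)

# sanity check on random data: the Bayes form equals the direct
# conditional frequency of spikes within each cell
rng = np.random.default_rng(3)
idx = rng.integers(0, 9, size=10000)
z = (rng.random(10000) < 0.1 * (idx + 1) / 9).astype(int)
p = spike_probability(idx, z, n_cells=9)
direct = np.array([z[idx == c].mean() for c in range(9)])
print(np.allclose(p, direct))  # True
```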

50 In this virtual experiment, two fluctuating currents characterized by the same mean and standard deviation are injected into the (model) fast-spiking neuron to obtain two sets of input-output data {I(t), V(t)}. [sent-64, score-0.49]

51 A predictive model was trained using one sample data set, and then its predictive performance for the other sample data was evaluated. [sent-65, score-0.126]

52 Each input current is generated by the Ornstein-Uhlenbeck process. [sent-66, score-0.028]
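An exact-update simulation of such an Ornstein–Uhlenbeck current can be sketched as follows; since the text's µ and σ values are only partially legible in these excerpts, the parameter values are placeholders (only τ = 2 msec is taken from the text):

```python
import numpy as np

# Ornstein-Uhlenbeck input current, dI = -(I - mu)/tau dt + noise, simulated
# with the exact one-step update so the stationary mean and s.d. are mu, sigma.
def ou_current(T, dt, mu, sigma, tau, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    I = np.empty(n)
    I[0] = mu
    a = np.exp(-dt / tau)              # exact one-step decay factor
    b = sigma * np.sqrt(1.0 - a * a)   # matches the stationary variance
    for i in range(1, n):
        I[i] = mu + a * (I[i-1] - mu) + b * rng.standard_normal()
    return I

I = ou_current(T=1000.0, dt=0.1, mu=1.0, sigma=0.5, tau=2.0)
print(I.mean(), I.std())  # close to the chosen mu and sigma
```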

53 We have tested two kinds of fluctuating currents characterized with different means and standard deviations: (Currents I) the mean µ = 1. [sent-67, score-0.183]

54 0 [µA] and the time scale of the fluctuation τ = 2 [msec]; (Currents II) the mean µ = 0. [sent-69, score-0.026]

55 0 [µA] and the time scale of the fluctuation τ = 2 [msec]. [sent-71, score-0.026]

56 In this study we adopted the linear filter model and the spike response model as prediction models. [sent-73, score-0.794]

57 B: The mutual information between the estimated potential and the occurrence of a spike. [sent-78, score-0.411]

58 We briefly describe here the results for the linear filter model [12], v(t) = ∫0^∞ K(t′) I(t − t′) dt′ + v0. (11) [sent-79, score-0.034]

59 The model parameters k consist of the shape of the kernel K(t) and the constant v0. [sent-80, score-0.06]

60 In learning the target sample data {I(t), V (t)}, these parameters are adjusted to minimize the integrated square error, Eq. [sent-81, score-0.106]
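This fitting step amounts to ordinary least squares on a discretized version of Eq. (11); in the sketch below, the input, output, and the "true" kernel are synthetic stand-ins for the recorded data:

```python
import numpy as np

# Discretized linear filter model: v(t) = sum_j K[j] * I(t - j*dt) * dt + v0.
# The kernel K and offset v0 are found by least squares on sample data.
rng = np.random.default_rng(1)
dt, m = 0.1, 100                           # time step [msec], kernel length
I = rng.normal(1.0, 0.5, size=4000)
true_K = np.exp(-np.arange(m) * dt / 5.0)  # synthetic "ground truth" kernel
V = np.array([true_K @ I[i - m:i][::-1] * dt for i in range(m, len(I))]) - 65.0

# design matrix: each row holds the m most recent input samples, plus a 1
X = np.array([I[i - m:i][::-1] for i in range(m, len(I))]) * dt
X = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(X, V, rcond=None)
K_hat, v0_hat = coef[:-1], coef[-1]

print(np.allclose(K_hat, true_K, atol=1e-5), round(v0_hat, 3))  # kernel and offset recovered
```

With real recordings the fit is no longer exact, and some regularization of the kernel would typically be needed.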

61 Figure 2A depicts the shape of the kernel K(t) estimated from the target sample data {I(t), V (t)} obtained from the virtual experiment of the fast-spiking neuron model. [sent-83, score-0.451]

62 Based on the voltage estimation v(t) with respect to sample data, we compute the empirical probabilities p(⃗v_{t−s}), p(⃗v_{t−s}|zt) and p(zt) for the two-dimensional state space information ⃗v_{t−s} ≡ (v(t − s), v′(t − s)). [sent-84, score-0.195]

63 In computing empirical probabilities, we quantized the two-dimensional phase space ⃗v ≡ (v, v′), and the time. [sent-85, score-0.042]

64 In the discretized time, we counted the occurrence of a spike, zt = 1, for the bins in which the true membrane potential V(t) exceeds a reasonable threshold Vth. [sent-86, score-0.937]

65 With a sufficiently small time step (we adopted δt = 0.1 [msec]), a single spike is transformed into a succession of spike occurrences zt = 1. [sent-88, score-1.257]

67 The mutual information shown in Fig. 2B has a maximum at s ≈ 2 [msec], which determines the optimal time shift. [sent-91, score-0.026]

68 The spike is predicted if the estimated probability pspike(t|⃗v_{t−s}) of Eq. (10) exceeds a certain threshold. [sent-92, score-0.71]

69 Though it would be more efficient to use the systematic method suggested by Paninski et al [16], we determined the threshold value empirically so that the coincidence factor Γ described in the following is maximized. [sent-94, score-0.335]

70 Figure 3 compares a naive thresholding method and our state space method, in reference to the original spike times. [sent-95, score-0.825]

71 It is observed from this figure that the prediction of the state space method is more accurate and robust than that of the thresholding method. [sent-96, score-0.36]

72 Figure 3: Comparison of the spike time predictions (top: V(t); middle: v(t); bottom: pspike(t); t = 3000–3500 ms). [sent-97, score-0.55]

73 Figure 4 depicts an orbit in the state space of (V, V ′ ) of a target neuron for an instance of the spike generation, and the orbit of the predictive model in the state space of (v, v ′ ) that mimics it. [sent-102, score-1.69]

74 The predictive model can mimic the target orbit in the subthreshold region, but fails to catch the spiking orbit in the suprathreshold region. [sent-103, score-0.804]

75 The spike occurrence is predicted by estimating the conditional probability, Eq. (10), given the state (v, v′) of the predictive model. [sent-104, score-0.794]

77 The contour lines for higher probabilities of spiking resemble an ad hoc “dynamic spike threshold” introduced by Azouz and Gray [17]. [sent-107, score-0.698]

78 Namely, v drops with dv/dt along the contour lines. [sent-108, score-0.065]

79 By contrast, the contour lines for lower probabilities of spiking are inversely curved: v increases with dv/dt along the contour lines. [sent-109, score-0.239]

80 In the present framework, the state space information corresponding to the relatively low probability of spiking is effectively used for predicting spike times. [sent-110, score-0.8]

81 ∆ is chosen as 2 [msec] in accordance with Jolivet et al [10]. [sent-112, score-0.138]
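The coincidence factor Γ is not defined within these excerpts; the sketch below follows the definition of Kistler et al and Jolivet et al as commonly stated (coincidences within ±Δ, corrected for the chance coincidences of a Poisson process at the model's firing rate), so the normalization details should be treated as an assumption rather than a transcription from this paper:

```python
import numpy as np

def coincidence_factor(t_data, t_model, delta, T):
    """Coincidence factor Gamma: fraction of data spikes matched by a model
    spike within +/- delta, corrected for chance coincidences of a Poisson
    process at the model rate, and normalized to 1 for a perfect match."""
    n_data, n_model = len(t_data), len(t_model)
    # data spikes having at least one model spike within delta
    n_coinc = sum(np.any(np.abs(t_model - t) <= delta) for t in t_data)
    rate_model = n_model / T
    expected = 2 * rate_model * delta * n_data    # chance level
    norm = 1.0 - 2 * rate_model * delta
    return (n_coinc - expected) / (0.5 * (n_data + n_model) * norm)

t_data = np.array([10.0, 30.0, 55.0, 80.0])     # toy spike times [msec]
t_model = np.array([10.5, 29.0, 54.0, 81.5])    # all within Delta = 2 msec
print(coincidence_factor(t_data, t_model, delta=2.0, T=100.0))  # 1.0: perfect match
```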

82 Table 1: The coincidence factors evaluated for two methods of prediction based on the linear filter model. [sent-113, score-0.192]

83 Figure 4: A: An orbit in the state space of (V, V′) of a target neuron for an instance of the spike generation (from 3240 to 3270 [msec] of Fig. [sent-118, score-1.217]

84 C: The orbit in the state space of (v, v ′ ) of the predictive model that mimics the target neuron. [sent-121, score-0.54]

85 Contours represent the probability of spike occurrence computed with the Bayes formula, Eq. (10). [sent-122, score-0.75]

86 The dashed lines represent the threshold adopted in the naive thresholding method (Fig. 3, middle). [sent-124, score-0.283]

87 Table 2: The coincidence factors evaluated for two methods of prediction based on the spike response model. [sent-126, score-0.796]

88 The coincidence factors evaluated for a simple thresholding method and the state space method based on the linear filter model are summarized in Table 1, and those based on the spike response model are summarized in Table 2. [sent-131, score-1.048]

89 It is observed that the prediction is significantly improved by our state space method. [sent-132, score-0.233]

90 It should be noted, however, that a model with the same set of parameters does not perform well over a range of inputs generated with different mean and variance: the model parameterized with Currents I does not effectively predict the spikes of the neuron for Currents II, and vice versa. [sent-133, score-0.379]

91 Nevertheless, our state-space method exhibits better prediction than the naive thresholding strategy if the statistics of the different inputs are relatively similar. [sent-134, score-0.244]

92 4 Summary We proposed a method of evaluating the probability of the spike occurrence by observing the state space of the membrane potential and its time derivative(s) in advance of the possible spike time. [sent-135, score-1.888]

93 It is found that the prediction is significantly improved by the state space method compared to the prediction obtained by simply thresholding an instantaneous value of the estimated potential. [sent-136, score-0.509]

94 It would be interesting to apply our method to biological data and to categorize neurons based on their spiking mechanisms. [sent-137, score-0.084]

95 The state space method developed here is a rather general framework that may be applicable to any nonlinear phenomena composed of locally predictable dynamics. [sent-138, score-0.309]

96 The generalization of linear filter analysis developed here has a certain similarity to the Linear-Nonlinear-Poisson (LNP) model [18, 19]. [sent-139, score-0.057]

97 It would be interesting to generalize the present method of analysis to a wider range of phenomena, such as the analysis of coding in the visual system [19, 20]. [sent-140, score-0.038]

98 Appendix: Fast-spiking neuron model The fast-spiking neuron model proposed by Erisir et al [13] was used in this contribution as a (virtual) target experiment. [sent-142, score-0.797]

99 The details of the model were adjusted to those of Jolivet et al [10] to allow a direct comparison of the performances. [sent-143, score-0.211]

100 (2006) Integrate-and-Fire models with adaptation are good enough: predicting spike times under random current injection. [sent-230, score-0.571]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('spike', 0.524), ('membrane', 0.284), ('neuron', 0.262), ('occurrence', 0.226), ('zt', 0.209), ('orbit', 0.191), ('subthreshold', 0.191), ('currents', 0.183), ('thresholding', 0.127), ('jolivet', 0.119), ('kistler', 0.119), ('pspike', 0.119), ('potential', 0.117), ('msec', 0.111), ('coincidence', 0.104), ('state', 0.103), ('al', 0.102), ('ncoinc', 0.095), ('prediction', 0.088), ('predictable', 0.088), ('spiking', 0.084), ('kyoto', 0.083), ('dynamics', 0.08), ('response', 0.08), ('uctuating', 0.076), ('gerstner', 0.076), ('erisir', 0.071), ('ionic', 0.071), ('ndata', 0.071), ('leaky', 0.07), ('threshold', 0.068), ('target', 0.067), ('contour', 0.065), ('shinomoto', 0.062), ('mimicking', 0.062), ('mimics', 0.057), ('paninski', 0.057), ('voltage', 0.05), ('spikes', 0.049), ('azouz', 0.048), ('fitzhugh', 0.048), ('genesis', 0.048), ('ina', 0.048), ('nagumo', 0.048), ('nmodel', 0.048), ('pillow', 0.048), ('lter', 0.047), ('predicting', 0.047), ('predictive', 0.046), ('virtual', 0.045), ('mutual', 0.045), ('predicted', 0.044), ('japan', 0.043), ('ek', 0.043), ('space', 0.042), ('advance', 0.042), ('kobayashi', 0.041), ('vth', 0.041), ('hodgkin', 0.041), ('huxley', 0.041), ('chichilnisky', 0.041), ('adjusted', 0.039), ('physics', 0.039), ('shift', 0.039), ('nonlinear', 0.038), ('phenomena', 0.038), ('instantaneous', 0.038), ('century', 0.038), ('neuronal', 0.037), ('et', 0.036), ('uctuation', 0.035), ('ms', 0.035), ('adopted', 0.034), ('model', 0.034), ('exceeds', 0.033), ('dt', 0.032), ('reproduce', 0.032), ('exp', 0.031), ('simoncelli', 0.03), ('differential', 0.03), ('action', 0.03), ('naive', 0.029), ('practically', 0.029), ('essence', 0.029), ('input', 0.028), ('generation', 0.028), ('depicts', 0.028), ('time', 0.026), ('shape', 0.026), ('derivative', 0.025), ('simulation', 0.025), ('lines', 0.025), ('determined', 0.025), ('successful', 0.024), ('ii', 0.024), ('decomposed', 0.024), ('estimated', 0.023), ('certain', 0.023), ('il', 0.023), ('table', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron

Author: Ryota Kobayashi, Shigeru Shinomoto

Abstract: It has been established that a neuron reproduces a highly precise spike response to identical fluctuating input currents. We wish to accurately predict the firing times of a given neuron for any input current. For this purpose we adopt a model that mimics the dynamics of the membrane potential, and then take a cue from its dynamics for predicting the spike occurrence for a novel input current. It is found that the prediction is significantly improved by observing the state space of the membrane potential and its time derivative(s) in advance of a possible spike, in comparison to simply thresholding an instantaneous value of the estimated potential. 1

2 0.34793508 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons

Author: Thomas Voegtlin

Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule. 1

3 0.29162234 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons

Author: Stefan Klampfl, Wolfgang Maass, Robert A. Legenstein

Abstract: The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles. 1

4 0.25219655 17 nips-2006-A recipe for optimizing a time-histogram

Author: Hideaki Shimazaki, Shigeru Shinomoto

Abstract: The time-histogram method is a handy tool for capturing the instantaneous rate of spike occurrence. In most of the neurophysiological literature, the bin size that critically determines the goodness of the fit of the time-histogram to the underlying rate has been selected by individual researchers in an unsystematic manner. We propose an objective method for selecting the bin size of a time-histogram from the spike data, so that the time-histogram best approximates the unknown underlying rate. The resolution of the histogram increases, or the optimal bin size decreases, with the number of spike sequences sampled. It is notable that the optimal bin size diverges if only a small number of experimental trials are available from a moderately fluctuating rate process. In this case, any attempt to characterize the underlying spike rate will lead to spurious results. Given a paucity of data, our method can also suggest how many more trials are needed until the set of data can be analyzed with the required resolution. 1

5 0.20003022 154 nips-2006-Optimal Change-Detection and Spiking Neurons

Author: Angela J. Yu

Abstract: Survival in a non-stationary, potentially adversarial environment requires animals to detect sensory changes rapidly yet accurately, two oft competing desiderata. Neurons subserving such detections are faced with the corresponding challenge to discern “real” changes in inputs as quickly as possible, while ignoring noisy fluctuations. Mathematically, this is an example of a change-detection problem that is actively researched in the controlled stochastic processes community. In this paper, we utilize sophisticated tools developed in that community to formalize an instantiation of the problem faced by the nervous system, and characterize the Bayes-optimal decision policy under certain assumptions. We will derive from this optimal strategy an information accumulation and decision process that remarkably resembles the dynamics of a leaky integrate-and-fire neuron. This correspondence suggests that neurons are optimized for tracking input changes, and sheds new light on the computational import of intracellular properties such as resting membrane potential, voltage-dependent conductance, and post-spike reset voltage. We also explore the influence that factors such as timing, uncertainty, neuromodulation, and reward should and do have on neuronal dynamics and sensitivity, as the optimal decision strategy depends critically on these factors. 1

6 0.14805473 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons

7 0.13859759 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons

8 0.13356124 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network

9 0.13215336 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall

10 0.10872178 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments

11 0.073428996 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex

12 0.05478818 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation

13 0.054173581 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity

14 0.048760124 29 nips-2006-An Information Theoretic Framework for Eukaryotic Gradient Sensing

15 0.045587972 16 nips-2006-A Theory of Retinal Population Coding

16 0.043169942 12 nips-2006-A Probabilistic Algorithm Integrating Source Localization and Noise Suppression of MEG and EEG data

17 0.039884612 1 nips-2006-A Bayesian Approach to Diffusion Models of Decision-Making and Response Time

18 0.039618168 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields

19 0.039469205 64 nips-2006-Data Integration for Classification Problems Employing Gaussian Process Priors

20 0.037546158 33 nips-2006-Analysis of Representations for Domain Adaptation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.164), (1, -0.396), (2, -0.035), (3, 0.101), (4, 0.034), (5, 0.089), (6, -0.003), (7, 0.017), (8, 0.012), (9, -0.034), (10, -0.08), (11, 0.053), (12, 0.024), (13, 0.049), (14, 0.098), (15, 0.001), (16, -0.103), (17, -0.075), (18, -0.027), (19, -0.197), (20, 0.048), (21, -0.044), (22, 0.039), (23, -0.067), (24, -0.01), (25, 0.094), (26, 0.164), (27, 0.018), (28, -0.073), (29, 0.131), (30, -0.032), (31, 0.072), (32, 0.027), (33, 0.016), (34, -0.013), (35, 0.137), (36, 0.018), (37, 0.019), (38, 0.148), (39, -0.014), (40, 0.079), (41, -0.048), (42, 0.077), (43, 0.124), (44, 0.054), (45, -0.031), (46, -0.058), (47, -0.069), (48, -0.076), (49, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96470553 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron

Author: Ryota Kobayashi, Shigeru Shinomoto

Abstract: It has been established that a neuron reproduces highly precise spike response to identical fluctuating input currents. We wish to accurately predict the firing times of a given neuron for any input current. For this purpose we adopt a model that mimics the dynamics of the membrane potential, and then take a cue from its dynamics for predicting the spike occurrence for a novel input current. It is found that the prediction is significantly improved by observing the state space of the membrane potential and its time derivative(s) in advance of a possible spike, in comparison to simply thresholding an instantaneous value of the estimated potential. 1

2 0.85836637 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons

Author: Stefan Klampfl, Wolfgang Maass, Robert A. Legenstein

Abstract: The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles. 1

3 0.77664536 17 nips-2006-A recipe for optimizing a time-histogram

Author: Hideaki Shimazaki, Shigeru Shinomoto

Abstract: The time-histogram method is a handy tool for capturing the instantaneous rate of spike occurrence. In most of the neurophysiological literature, the bin size that critically determines the goodness of the fit of the time-histogram to the underlying rate has been selected by individual researchers in an unsystematic manner. We propose an objective method for selecting the bin size of a time-histogram from the spike data, so that the time-histogram best approximates the unknown underlying rate. The resolution of the histogram increases, or the optimal bin size decreases, with the number of spike sequences sampled. It is notable that the optimal bin size diverges if only a small number of experimental trials are available from a moderately fluctuating rate process. In this case, any attempt to characterize the underlying spike rate will lead to spurious results. Given a paucity of data, our method can also suggest how many more trials are needed until the set of data can be analyzed with the required resolution. 1

4 0.68069297 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons

Author: Thomas Voegtlin

Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule.

5 0.53065002 154 nips-2006-Optimal Change-Detection and Spiking Neurons

Author: Angela J. Yu

Abstract: Survival in a non-stationary, potentially adversarial environment requires animals to detect sensory changes rapidly yet accurately, two oft competing desiderata. Neurons subserving such detections are faced with the corresponding challenge to discern “real” changes in inputs as quickly as possible, while ignoring noisy fluctuations. Mathematically, this is an example of a change-detection problem that is actively researched in the controlled stochastic processes community. In this paper, we utilize sophisticated tools developed in that community to formalize an instantiation of the problem faced by the nervous system, and characterize the Bayes-optimal decision policy under certain assumptions. We will derive from this optimal strategy an information accumulation and decision process that remarkably resembles the dynamics of a leaky integrate-and-fire neuron. This correspondence suggests that neurons are optimized for tracking input changes, and sheds new light on the computational import of intracellular properties such as resting membrane potential, voltage-dependent conductance, and post-spike reset voltage. We also explore the influence that factors such as timing, uncertainty, neuromodulation, and reward should and do have on neuronal dynamics and sensitivity, as the optimal decision strategy depends critically on these factors.

6 0.50923717 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall

7 0.4871667 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons

8 0.40079707 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network

9 0.37815478 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons

10 0.35568613 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex

11 0.31894404 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments

12 0.26869938 34 nips-2006-Approximate Correspondences in High Dimensions

13 0.26317492 29 nips-2006-An Information Theoretic Framework for Eukaryotic Gradient Sensing

14 0.24731924 121 nips-2006-Learning to be Bayesian without Supervision

15 0.2197793 124 nips-2006-Linearly-solvable Markov decision problems

16 0.21614747 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity

17 0.19062705 192 nips-2006-Theory and Dynamics of Perceptual Bistability

18 0.18534689 178 nips-2006-Sparse Multinomial Logistic Regression via Bayesian L1 Regularisation

19 0.17863435 16 nips-2006-A Theory of Retinal Population Coding

20 0.17821807 71 nips-2006-Effects of Stress and Genotype on Meta-parameter Dynamics in Reinforcement Learning


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.082), (3, 0.064), (7, 0.054), (9, 0.051), (12, 0.01), (20, 0.012), (22, 0.058), (25, 0.015), (44, 0.06), (57, 0.033), (61, 0.252), (65, 0.047), (69, 0.025), (71, 0.106), (82, 0.018), (98, 0.026)]
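The page does not state how the simValue scores in the similar-papers list are computed from these sparse (topicId, topicWeight) vectors; a common choice for comparing such vectors is cosine similarity, sketched here. The `other` vector below is purely hypothetical, invented for illustration; only `paper_162` is transcribed from the vector above.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse topic vectors given as
    {topicId: topicWeight} dicts; missing topics have weight zero."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)

# Topic vector of this paper, transcribed from the LDA weights above
paper_162 = {1: 0.082, 3: 0.064, 7: 0.054, 9: 0.051, 12: 0.01, 20: 0.012,
             22: 0.058, 25: 0.015, 44: 0.06, 57: 0.033, 61: 0.252,
             65: 0.047, 69: 0.025, 71: 0.106, 82: 0.018, 98: 0.026}

# A hypothetical second paper's topic vector (illustrative values only)
other = {3: 0.05, 61: 0.30, 71: 0.09, 90: 0.12}

print(cosine_similarity(paper_162, other))
```

Because the vectors are sparse, only topics present in both papers contribute to the dot product; papers sharing a dominant topic (here topic 61) score high even when their remaining weights differ.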

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.7857765 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron

Author: Ryota Kobayashi, Shigeru Shinomoto

Abstract: It has been established that a neuron reproduces highly precise spike response to identical fluctuating input currents. We wish to accurately predict the firing times of a given neuron for any input current. For this purpose we adopt a model that mimics the dynamics of the membrane potential, and then take a cue from its dynamics for predicting the spike occurrence for a novel input current. It is found that the prediction is significantly improved by observing the state space of the membrane potential and its time derivative(s) in advance of a possible spike, in comparison to simply thresholding an instantaneous value of the estimated potential.

2 0.68610948 188 nips-2006-Temporal and Cross-Subject Probabilistic Models for fMRI Prediction Tasks

Author: Alexis Battle, Gal Chechik, Daphne Koller

Abstract: We present a probabilistic model applied to the fMRI video rating prediction task of the Pittsburgh Brain Activity Interpretation Competition (PBAIC) [2]. Our goal is to predict a time series of subjective, semantic ratings of a movie given functional MRI data acquired during viewing by three subjects. Our method uses conditionally trained Gaussian Markov random fields, which model both the relationships between the subjects’ fMRI voxel measurements and the ratings, as well as the dependencies of the ratings across time steps and between subjects. We also employed non-traditional methods for feature selection and regularization that exploit the spatial structure of voxel activity in the brain. The model displayed good performance in predicting the scored ratings for the three subjects in test data sets, and a variant of this model was the third place entrant to the 2006 PBAIC.

3 0.64659369 42 nips-2006-Bayesian Image Super-resolution, Continued

Author: Lyndsey C. Pickup, David P. Capel, Stephen J. Roberts, Andrew Zisserman

Abstract: This paper develops a multi-frame image super-resolution approach from a Bayesian view-point by marginalizing over the unknown registration parameters relating the set of input low-resolution views. In Tipping and Bishop’s Bayesian image super-resolution approach [16], the marginalization was over the superresolution image, necessitating the use of an unfavorable image prior. By integrating over the registration parameters rather than the high-resolution image, our method allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. In addition to the motion model used by Tipping and Bishop, illumination components are introduced into the generative model, allowing us to handle changes in lighting as well as motion. We show results on real and synthetic datasets to illustrate the efficacy of this approach.

4 0.57434583 135 nips-2006-Modelling transcriptional regulation using Gaussian Processes

Author: Neil D. Lawrence, Guido Sanguinetti, Magnus Rattray

Abstract: Modelling the dynamics of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. While some of them are relatively easy to measure, such as mRNA decay rates and mRNA abundance levels, it is still very hard to measure the active concentration levels of the transcription factor proteins that drive the process and the sensitivity of target genes to these concentrations. In this paper we show how these quantities for a given transcription factor can be inferred from gene expression levels of a set of known target genes. We treat the protein concentration as a latent function with a Gaussian process prior, and include the sensitivities, mRNA decay rates and baseline expression levels as hyperparameters. We apply this procedure to a human leukemia dataset, focusing on the tumour repressor p53 and obtaining results in good accordance with recent biological studies.

5 0.56218135 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons

Author: Thomas Voegtlin

Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule.

6 0.56146032 191 nips-2006-The Robustness-Performance Tradeoff in Markov Decision Processes

7 0.53718561 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation

8 0.53447419 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network

9 0.52412093 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons

10 0.51865315 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments

11 0.50759172 154 nips-2006-Optimal Change-Detection and Spiking Neurons

12 0.49725088 17 nips-2006-A recipe for optimizing a time-histogram

13 0.4956758 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons

14 0.48863319 65 nips-2006-Denoising and Dimension Reduction in Feature Space

15 0.48740247 87 nips-2006-Graph Laplacian Regularization for Large-Scale Semidefinite Programming

16 0.48607507 167 nips-2006-Recursive ICA

17 0.4840149 184 nips-2006-Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

18 0.48259065 127 nips-2006-MLLE: Modified Locally Linear Embedding Using Multiple Weights

19 0.48243099 67 nips-2006-Differential Entropic Clustering of Multivariate Gaussians

20 0.48204389 171 nips-2006-Sample Complexity of Policy Search with Known Dynamics