nips nips2010 nips2010-115 knowledge-graph by maker-knowledge-mining

115 nips-2010-Identifying Dendritic Processing


Source: pdf

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. [sent-6, score-0.097]

2 Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. [sent-7, score-1.417]

3 The input to the circuit is an analog signal that belongs to the space of bandlimited functions. [sent-8, score-0.403]

4 The output is a time sequence associated with the spike train. [sent-9, score-0.175]

5 We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. [sent-10, score-0.351]

6 1 Introduction The nature of encoding and processing of sensory information in the visual, auditory and olfactory systems has been extensively investigated in the systems neuroscience literature. [sent-11, score-0.164]

7 Many phenomenological [1, 2, 3] as well as mechanistic [4, 5, 6] models have been proposed to characterize and clarify the representation of sensory information on the level of single neurons. [sent-12, score-0.089]

8 Here we investigate a class of phenomenological neural circuit models in which the time-domain linear processing takes place in the dendritic tree and the resulting aggregate dendritic current is encoded in the spike domain by a spiking neuron. [sent-13, score-1.295]

9 While the LNP (linear-nonlinear-Poisson) model also includes a linear processing stage, it describes spike generation using an inhomogeneous Poisson process. [sent-15, score-0.209]

10 In contrast, the [Filter]-[Spiking Neuron] model incorporates the temporal dynamics of spike generation and allows one to consider more biologically-plausible spike generators. [sent-16, score-0.37]

11 We perform identification of dendritic processing in the [Filter]-[Spiking Neuron] model assuming that input signals belong to the space of bandlimited functions, a class of functions that closely model natural stimuli in sensory systems. [sent-17, score-0.521]

12 Under this assumption, we show that the identification of dendritic processing in the above neural circuit becomes mathematically tractable. [sent-18, score-0.63]

13 Using simulated data, we demonstrate that under certain conditions it is possible to identify the impulse response of the dendritic processing filter with arbitrary precision. [sent-19, score-0.469]

14 The phenomenological neural circuit model and the identification problem are formally stated in section 2. [sent-22, score-0.344]

15 The Neural Identification Machine and its realization as an algorithm for identifying dendritic processing is extensively discussed in section 3. [sent-23, score-0.376]

16 2 Problem Statement In what follows we assume that the dendritic processing is linear [11] and any nonlinear effects arise as a result of the spike generation mechanism [12]. [sent-27, score-0.516]

17 We use linear BIBO-stable filters (not necessarily causal) to describe the computation performed by the dendritic tree. [sent-28, score-0.307]

18 Furthermore, a spiking neuron model (as opposed to a rate model) is used to model the generation of action potentials or spikes. [sent-29, score-0.394]

19 We investigate a general neural circuit comprised of a filter in cascade with a spiking neuron model (Fig. [sent-30, score-0.709]

20 This circuit is an instance of a Time Encoding Machine (TEM), a nonlinear asynchronous circuit that encodes analog signals in the time domain [13, 14]. [sent-32, score-0.586]

21 Examples of spiking neuron models considered in this paper include the ideal IAF neuron, the leaky IAF neuron and the threshold-and-feedback (TAF) neuron [15]. [sent-33, score-0.964]

22 However, the methodology developed below can be extended to many other spiking neuron models as well. [sent-34, score-0.38]

23 We break down the full identification of this circuit into two problems: (i) identification of linear operations in the dendritic tree and (ii) identification of spike generator parameters. [sent-35, score-0.757]

24 First, we consider problem (i) and assume that parameters of the spike generator can be obtained through biophysical experiments. [sent-36, score-0.2]

25 We consider a specific example of a neural circuit in Fig. [sent-38, score-0.302]

26 [Figure 1 block-diagram labels: (a) u(t) → [Dendritic Processing: Linear Filter h(t)] → v(t) → [Spike Generation: Spiking Neuron] → (tk)k∈Z; (b) u(t) → [h(t)] → v(t) → [Spike Generation: Ideal IAF Neuron with bias b, capacitance C, threshold δ, voltage reset to 0] → (tk)k∈Z.] Figure 1: Problem setup. [sent-40, score-0.081]

27 (a) The dendritic processing is described by a linear filter and spikes are produced by a (nonlinear) spiking neuron model. [sent-41, score-0.733]

28 (b) An example of a neural circuit in (a) is a linear filter in cascade with the ideal IAF neuron. [sent-42, score-0.477]

29 An input signal u is first passed through a filter with an impulse response h. [sent-43, score-0.183]

30 The output of the filter v(t) = (u ∗ h)(t), t ∈ R, is then encoded into a time sequence (tk )k∈Z by the ideal IAF neuron. [sent-44, score-0.172]

31 3 Neuron Identification Machines A Neuron Identification Machine (NIM) is the realization of an algorithm for the identification of the dendritic processing filter in cascade with a spiking neuron model. [sent-45, score-0.753]

32 First, we introduce several definitions needed to formally address the problem of identifying dendritic processing. [sent-46, score-0.337]

33 We derive an algorithm for a perfect identification of the impulse response of the filter and provide conditions for the identification with arbitrary precision. [sent-48, score-0.139]

34 3.1 Preliminaries We model signals u = u(t), t ∈ R, at the input to a neural circuit as elements of the Paley-Wiener space Ξ = {u ∈ L2(R) | supp(Fu) ⊆ [−Ω, Ω]}, i.e., finite-energy signals bandlimited to [−Ω, Ω]. [sent-51, score-0.437]
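For concreteness, a test stimulus in Ξ can be synthesized as a finite sum of shifted kernels g(t) = sin(Ωt)/(πt). The sketch below is our own illustrative construction, not code from the paper; the helper name bandlimited_u and the Nyquist-spaced shifts are assumptions.

```python
import numpy as np

def bandlimited_u(a, Omega):
    """u(t) = sum_m a[m] * g(t - m*T) with g(t) = sin(Omega*t)/(pi*t), T = pi/Omega.

    A finite sum of shifted sinc kernels has Fourier support in
    [-Omega, Omega] by construction, so u is a valid element of Xi.
    """
    T = np.pi / Omega                        # Nyquist-rate spacing of the shifts
    m = np.arange(len(a))

    def u(t):
        x = np.atleast_1d(t)[:, None] - m[None, :] * T
        g = (Omega / np.pi) * np.sinc(Omega * x / np.pi)   # sin(Omega*x)/(pi*x)
        return g @ np.asarray(a, dtype=float)

    return u
```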

35 Furthermore, we assume that the dendritic processing filters h = h(t), t ∈ R, are linear, BIBO-stable and have a finite temporal support, i.e., supp(h) ⊆ [T1, T2]. [sent-54, score-0.358]

36 A signal u ∈ Ξ at the input to a neural circuit together with the resulting output T = (tk )k∈Z of that circuit is called an input/output (I/O) pair and is denoted by (u, T). [sent-58, score-0.662]

37 Two neural circuits are said to be Ξ-I/O-equivalent if their respective I/O pairs are identical for all u ∈ Ξ. [sent-60, score-0.056]

38 Signals {ui}i=1..N are said to be linearly independent if there do not exist real numbers {αi}i=1..N, not all zero, and real numbers {βi}i=1..N such that Σi=1..N αi ui(t + βi) = 0. [sent-65, score-0.066]

39 2 NIM for the [Filter]-[Ideal IAF] Neural Circuit An example of a model circuit in Fig. [sent-67, score-0.274]

40 1(a) is the [Filter]-[Ideal IAF] circuit shown in Fig. [sent-68, score-0.274]

41 In this circuit, an input signal u ∈ Ξ is passed through a filter with an impulse response (kernel) h ∈ H and then encoded by an ideal IAF neuron with a bias b ∈ R+ , a capacitance C ∈ R+ and a threshold δ ∈ R+ . [sent-70, score-0.549]

42 The output of the circuit is a sequence of spike times (tk )k∈Z that is available to an observer. [sent-71, score-0.449]

43 This neural circuit is an instance of a TEM and its operation can be described by a set of equations (formally known as the t-transform [13]): ∫[tk, tk+1] (u ∗ h)(s) ds = qk, k ∈ Z, (1) where qk = Cδ − b(tk+1 − tk). [sent-72, score-0.903]

44 Intuitively, at every spike time tk+1 the ideal IAF neuron is providing a measurement qk of the signal v(t) = (u ∗ h)(t) on the interval t ∈ [tk , tk+1 ]. [sent-73, score-0.622]
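The encoding itself is straightforward to simulate. The following is a minimal sketch (our code, using forward-Euler integration and illustrative default parameter values) of the ideal IAF stage that produces (tk)k∈Z from the dendritic current v(t):

```python
import numpy as np

def iaf_encode(v, dt, b=1.0, C=1.0, delta=0.01):
    """Ideal IAF encoding of a sampled dendritic current v(t) = (u*h)(t).

    The membrane voltage integrates (b + v(t))/C; when it reaches the
    threshold delta, a spike time is recorded and the voltage resets to 0.
    Between consecutive spikes the t-transform (1) then holds:
    the integral of v over [tk, tk+1] equals C*delta - b*(tk+1 - tk).
    """
    spike_times, y, t = [], 0.0, 0.0
    for vk in v:
        y += dt * (b + vk) / C          # forward-Euler membrane update
        t += dt
        if y >= delta:
            spike_times.append(t)       # threshold crossing -> spike
            y = 0.0                     # voltage reset to 0
    return np.asarray(spike_times)
```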

45 The left-hand side of the t-transform in (1) can be written as a bounded linear functional Lk : Ξ → R with Lk(Ph) = ⟨φk, Ph⟩, where φk(t) = (1[tk, tk+1] ∗ ũ)(t) and ũ(t) = u(−t), t ∈ R, denotes the involution of u. [sent-75, score-0.056]

46 Since ∫[tk, tk+1] (u ∗ h)(s) ds = ∫[tk, tk+1] (u ∗ Ph)(s) ds and Ph is bounded, the expression on the right-hand side of the equality is a bounded linear functional Lk : Ξ → R with Lk(Ph) = ∫[tk, tk+1] (u ∗ Ph)(s) ds = ⟨φk, Ph⟩, (2) where φk ∈ Ξ and the last equality follows from the Riesz representation theorem [16]. [sent-77, score-1.263]

47 By the reproducing property of the kernel [17], we have φk(t) = ⟨φk, Kt⟩ = Lk(Kt). [sent-79, score-0.044]

48 Letting ũ(t) = u(−t) denote the involution of u and using (2), we obtain φk(t) = ⟨1[tk, tk+1] ∗ u, Kt⟩ = (1[tk, tk+1] ∗ ũ)(t). [sent-80, score-0.032]

49 To that end, we note that an observer can typically record both the input u = u(t), t ∈ R and the output T = (tk )k∈Z of a neural circuit. [sent-83, score-0.104]

50 Since (qk )k∈Z can be evaluated from (tk )k∈Z using the definition of qk in (1), the problem is reduced to identifying Ph from an I/O pair (u, T). [sent-84, score-0.124]

51 Then given an I/O pair (u, T) of the [Filter]-[Ideal IAF] neural circuit, Ph can be perfectly identified as (Ph)(t) = Σk∈Z ck ψk(t), where ψk(t) = g(t − tk), t ∈ R. [sent-87, score-0.532]

52 Furthermore, c = G+q with G+ denoting the Moore-Penrose pseudoinverse of G, [G]lk = ∫[tl, tl+1] u(s − tk) ds for all k, l ∈ Z, and [q]l = Cδ − b(tl+1 − tl). [sent-88, score-0.559]

53 Proof: By appropriately bounding the input signal u, the spike density (the average number of spikes over arbitrarily long time intervals) of an ideal IAF neuron is given by D = b/(Cδ) [14]. [sent-89, score-0.632]

54 Therefore, for D > Ω/π the set of representation functions (ψk)k∈Z, ψk(t) = g(t − tk), is a frame in Ξ [18] and (Ph)(t) = Σk∈Z ck ψk(t). [sent-90, score-0.481]

55 To find the coefficients ck we note from (2) that ql = ⟨φl, Ph⟩ = Σk∈Z ck ⟨φl, ψk⟩ = Σk∈Z [G]lk ck, (3) where [G]lk = ⟨φl, ψk⟩ = ⟨1[tl, tl+1] ∗ u, g(· − tk)⟩ = ∫[tl, tl+1] u(s − tk) ds. [sent-91, score-1.09]

56 Writing (3) in matrix form, we obtain q = Gc with [q]l = ql and [c]k = ck. [sent-92, score-0.128]

57 Finally, the coefficients ck , k ∈ Z, can be computed as c = G+ q. [sent-93, score-0.068]
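The proof translates directly into a numerical recipe: form q from the spike times, fill G by quadrature, and apply the pseudoinverse. Below is a minimal sketch (our own code; the trapezoidal quadrature and the function name identify_Ph are assumptions) returning the reconstruction (Ph)(t) = Σk ck g(t − tk):

```python
import numpy as np

def identify_Ph(u, tk, b, C, delta, Omega, n_quad=200):
    """Recover Ph from an I/O pair (u, (tk)) of a [Filter]-[Ideal IAF] circuit."""
    K = len(tk) - 1                           # number of inter-spike measurements
    q = C * delta - b * np.diff(tk)           # [q]_l = C*delta - b*(t_{l+1} - t_l)
    G = np.empty((K, K + 1))
    for l in range(K):                        # [G]_lk = int over [t_l, t_{l+1}] of u(s - t_k) ds
        s = np.linspace(tk[l], tk[l + 1], n_quad)
        for k in range(K + 1):
            G[l, k] = np.trapz(u(s - tk[k]), s)
    c = np.linalg.pinv(G) @ q                 # c = G^+ q

    def g(t):                                 # g(t) = sin(Omega*t)/(pi*t)
        return (Omega / np.pi) * np.sinc(Omega * np.asarray(t) / np.pi)

    return lambda t: sum(ck * g(np.asarray(t, dtype=float) - tkk)
                         for ck, tkk in zip(c, tk))
```

Here u must be callable on NumPy arrays (for instance the bandlimited_u sketch above) and tk is the recorded spike-time array; increasing n_quad trades run time for quadrature accuracy.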

58 Thus, perfect identification of the projection of h onto Ξ can be achieved for a finite average spike rate. [sent-96, score-0.214]

59 Ideally, we would like to identify the kernel h ∈ H of the filter in cascade with the ideal IAF neuron. [sent-98, score-0.219]

60 Nevertheless, it is easy to show that (Ph)(t) approximates h(t) arbitrarily closely on t ∈ [T1 , T2 ], provided that the bandwidth Ω of u is sufficiently large. [sent-102, score-0.07]

61 That is, if there is no processing on the (arbitrary) input signal u(t), then ql = ∫[tl, tl+1] (u ∗ h)(s) ds = ∫[tl, tl+1] u(s) ds, l ∈ Z. [sent-106, score-0.144]

62 Furthermore, ∫[tl, tl+1] u(s) ds = ∫[tl, tl+1] (u ∗ h)(s) ds = ∫[tl, tl+1] (u ∗ Ph)(s) ds = ∫[tl, tl+1] (u ∗ g)(s) ds, l ∈ Z. [sent-107, score-0.584]

63 In other words, if h(t) = δ(t), then we identify Pδ(t) = sin(Ωt)/(πt), the projection of δ(t) onto Ξ. [sent-109, score-0.064]

64 2, we obtain W = [τ1 − τ + T, τ2 − τ + T ] ⊃ [T1 , T2 ] and the set of spike times (tk − τ + T )k: tk ∈W . [sent-115, score-0.565]

65 From Corollary 1 we see that if the [Filter]-[Ideal IAF] neural circuit is producing spikes with a spike density above the Nyquist rate, then we can use a set of spike times (tk )k: tk ∈W from a single temporal window W to identify (Ph)(t) to an arbitrary precision on [T1 , T2 ]. [sent-118, score-1.117]

66 Since the spike density is above the Nyquist rate, we could have also used a canonical time decoding machine (TDM) [13] to first perfectly recover the filter output v(t) and then employ one of the widely available LTI system techniques to estimate (Ph)(t). [sent-120, score-0.198]

67 However, the problem becomes much more difficult if the spike density is below the Nyquist rate. [sent-121, score-0.152]

68 (a) Top: example of a causal impulse response h(t) with supp(h) = [T1 , T2 ], T1 = 0. [sent-123, score-0.153]

69 (b) Top: an input signal u(t) with supp(Fu) = [−Ω, Ω]. [sent-127, score-0.063]

70 Middle: only red spikes from a temporal window W = (τ1, τ2) are used to construct ĥ(t). [sent-128, score-0.077]

71 Bottom: Ph is approximated by ĥ(t) on t ∈ [T1, T2] using spike times (tk − τ + T)k: tk∈W. [sent-129, score-0.173]

72 (The Neuron Identification Machine) Let {ui | supp(Fui) = [−Ω, Ω]}i=1..N be a collection of N linearly independent and bounded stimuli at the input to a [Filter]-[Ideal IAF] neural circuit with a dendritic processing filter h ∈ H. [sent-131, score-0.712]

73 Furthermore, let Ti = (tik)k∈Z denote the output of the neural circuit in response to the bounded input signal ui (tik denoting the k-th spike elicited by stimulus ui). [sent-132, score-0.514]

74 h(t) is the input to a population of N [Filter]-[Ideal IAF] neural circuits. [sent-136, score-0.058]

75 The spikes (tik)k∈Z at the output of each neural circuit represent distinct measurements qik = ⟨φik, Ph⟩ of (Ph)(t). [sent-137, score-0.466]

76 Thus we can think of the qik's as projections of Ph onto the functions φik, i = 1, …, N, k ∈ Z. [sent-138, score-0.117]

77 Since the filters are linearly independent [14], it follows that, if {ui}i=1..N are appropriately bounded and Σj=1..N b/(Cδ) > Ω/π, or equivalently if the number of neurons N > ΩCδ/(πb) = Ω/(πD), the set of functions {(ψjk)k∈Z}j=1..N with ψjk(t) = g(t − tjk) is a frame for Ξ [14], [18]. [sent-151, score-0.128]

78 (Ph)(t) = Σj=1..N Σk∈Z cjk ψjk(t). (4) To find the coefficients cjk, we take the inner product of (4) with φ1(t), φ2(t), … [sent-153, score-0.068]

79 Letting [Gij]lk = ⟨φil, ψjk⟩ and qil ≡ ⟨φil, Ph⟩, we obtain qil = Σk∈Z [Gi1]lk c1k + Σk∈Z [Gi2]lk c2k + ··· + Σk∈Z [GiN]lk cNk, (5) for i = 1, …, N. [sent-160, score-0.618]

80 Here q = [q1, q2, …, qN]T with [qi]l = Cδ − b(til+1 − til), [Gij]lk = ∫[til, til+1] ui(s − tjk) ds and c = [c1, c2, …, cN]T. [sent-167, score-0.562]

81 Finally, to find the coefficients cjk, k ∈ Z, we compute c = G+q. [sent-171, score-0.068]
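As an illustration of (5) and the final step c = G+q, the sketch below (our code, under the same trapezoidal-quadrature assumptions as before; the name block_system is ours) stacks the N measurement vectors and the N × N grid of blocks Gij and solves for all coefficients at once:

```python
import numpy as np

def block_system(us, spikes, b, C, delta, n_quad=200):
    """us: list of N stimulus callables u^i; spikes: list of N spike-time arrays t^i."""
    q_blocks, G_rows = [], []
    for ui, ti in zip(us, spikes):
        q_blocks.append(C * delta - b * np.diff(ti))      # [q^i]_l
        row = []
        for tj in spikes:                                 # block columns j = 1..N
            Gij = np.empty((len(ti) - 1, len(tj)))
            for l in range(len(ti) - 1):
                s = np.linspace(ti[l], ti[l + 1], n_quad)
                for k in range(len(tj)):                  # [G^{ij}]_lk by quadrature
                    Gij[l, k] = np.trapz(ui(s - tj[k]), s)
            row.append(Gij)
        G_rows.append(np.hstack(row))
    G = np.vstack(G_rows)
    q = np.concatenate(q_blocks)
    return np.linalg.pinv(G) @ q                          # c = G^+ q
```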

82 Furthermore, let τi = (τi1 + τi2)/2, T = (T1 + T2)/2 and let (tik)k∈Z denote those spikes of the I/O pair (u, T) that belong to Wi. [sent-185, score-0.047]

83 Then Ph can be approximated arbitrarily closely on [T1, T2] by ĥ(t) = Σj=1..N Σk: tjk∈Wj cjk ψjk(t), where ψjk(t) = g(t − (tjk − τj + T)), c = G+q with [Gij]lk = ∫[til, til+1] u(s − (tjk − τj + T)) ds and q = [q1, q2, … [sent-186, score-0.788]

84 …, qN]T, [qi]l = Cδ − b(til+1 − til) for all k, l ∈ Z, provided that the number of non-overlapping windows N is sufficiently large. [sent-189, score-0.142]

85 Proof: The input signal u restricted, respectively, to the collection of intervals Wi = [τi1, τi2], i = 1, …, N, plays the same role here as the test stimuli {ui}i=1..N in Corollary 2. [sent-190, score-0.091]

86 The methodology presented in Theorem 2 can easily be applied to other spiking neuron models. [sent-199, score-0.38]

87 For example, for the leaky IAF neuron, we have [qi]l = Cδ − bRC(1 − exp(−(til+1 − til)/RC)) and [Gij]lk = ∫[til, til+1] ui(s − tjk) exp((s − til+1)/RC) ds. [sent-200, score-0.892]
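For concreteness, these leaky-IAF expressions can be evaluated numerically as in the sketch below (illustrative code; the argument names and the trapezoidal quadrature are our assumptions, not the paper's):

```python
import numpy as np

def leaky_iaf_entries(ui, ti, tj, b, C, delta, RC, n_quad=200):
    """Measurements [q^i]_l and block entries [G^{ij}]_lk for the leaky IAF neuron."""
    q = C * delta - b * RC * (1.0 - np.exp(-np.diff(ti) / RC))
    G = np.empty((len(ti) - 1, len(tj)))
    for l in range(len(ti) - 1):
        s = np.linspace(ti[l], ti[l + 1], n_quad)
        w = np.exp((s - ti[l + 1]) / RC)       # membrane leak weighting
        for k in range(len(tj)):
            G[l, k] = np.trapz(ui(s - tj[k]) * w, s)
    return q, G
```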

88 RC Similarly, for a threshold-and-feedback (TAF) neuron [15] with a bias b ∈ R+ , a threshold δ ∈ R+ , and a causal feedback filter with an impulse response f (t), t ∈ R, we obtain [qi ]l = δ − b + 3. [sent-201, score-0.37]

89 5 Conclusion Previous work in system identification of neural circuits (see [20] and references therein) calls for parameter identification using white noise input stimuli. [sent-203, score-0.086]

90 In our work, we presented the methodology for identifying dendritic processing in simple [Filter][Spiking Neuron] models from a single input stimulus. [sent-208, score-0.41]

91 The discussed spiking neurons include the ideal IAF neuron, the leaky IAF neuron and the threshold-and-feedback (TAF) neuron. [sent-209, score-0.564]

92 However, the methods presented in this paper are applicable to many other spiking neuron models as well. [sent-210, score-0.358]

93 The algorithm of the Neuron Identification Machine is based on the natural assumption that the dendritic processing filter has a finite temporal support. [sent-211, score-0.358]

94 Therefore, its action on the input stimulus can be observed in non-overlapping temporal windows. [sent-212, score-0.06]

95 The filter is recovered with arbitrary precision from an input/output pair of a neural circuit, where the input is a single signal assumed to be bandlimited. [sent-213, score-0.091]

96 Finally, the work presented here will be extended to spiking neurons with random parameters. [sent-216, score-0.175]

97 Computational model of the cAMP-mediated sensory response and calcium-dependent adaptation in vertebrate olfactory receptor neurons. [sent-241, score-0.131]

98 Quantitative characterization procedure for auditory neurons based on the spectra-temporal receptive field. [sent-263, score-0.061]

99 Perfect recovery and sensitivity analysis of time encoded bandlimited signals. [sent-280, score-0.071]

100 Complete functional characterization of sensory neurons by system identification. [sent-312, score-0.081]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('tk', 0.413), ('ph', 0.39), ('iaf', 0.35), ('dendritic', 0.307), ('circuit', 0.274), ('neuron', 0.217), ('lk', 0.166), ('spike', 0.152), ('tl', 0.146), ('filter', 0.146), ('ti', 0.142), ('spiking', 0.141), ('ideal', 0.126), ('ds', 0.122), ('identi', 0.098), ('qk', 0.094), ('lter', 0.089), ('supp', 0.085), ('impulse', 0.084), ('aurel', 0.08), ('tll', 0.08), ('tj', 0.07), ('ck', 0.068), ('ui', 0.066), ('tem', 0.064), ('ql', 0.06), ('lazar', 0.056), ('fu', 0.05), ('cascade', 0.049), ('bandlimited', 0.048), ('lnp', 0.048), ('olfactory', 0.048), ('taf', 0.048), ('sensory', 0.047), ('spikes', 0.047), ('voltage', 0.046), ('leaky', 0.046), ('nyquist', 0.042), ('phenomenological', 0.042), ('corollary', 0.039), ('response', 0.036), ('gij', 0.036), ('generation', 0.036), ('remark', 0.035), ('reset', 0.035), ('neurons', 0.034), ('cn', 0.034), ('causal', 0.033), ('signal', 0.033), ('involution', 0.032), ('nim', 0.032), ('simo', 0.032), ('temporal', 0.03), ('identifying', 0.03), ('input', 0.03), ('qi', 0.029), ('neural', 0.028), ('yevgeniy', 0.028), ('markus', 0.028), ('riesz', 0.028), ('circuits', 0.028), ('kt', 0.028), ('stimuli', 0.028), ('auditory', 0.027), ('arbitrarily', 0.027), ('drosophila', 0.026), ('cation', 0.025), ('qn', 0.025), ('eero', 0.024), ('biophysical', 0.024), ('generator', 0.024), ('bounded', 0.024), ('encoded', 0.023), ('bandwidth', 0.023), ('perfectly', 0.023), ('tn', 0.023), ('output', 0.023), ('onto', 0.023), ('cj', 0.023), ('gc', 0.023), ('observer', 0.023), ('kernel', 0.023), ('methodology', 0.022), ('spatiotemporal', 0.022), ('identify', 0.021), ('reproducing', 0.021), ('furthermore', 0.021), ('approximated', 0.021), ('neuroscience', 0.021), ('processing', 0.021), ('signals', 0.02), ('projection', 0.02), ('closely', 0.02), ('writing', 0.019), ('rc', 0.019), ('perfect', 0.019), ('jonathan', 0.018), ('realization', 0.018), ('columbia', 0.018), ('analog', 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 115 nips-2010-Identifying Dendritic Processing

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. 1

2 0.19587915 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

Author: Nicholas Fisher, Arunava Banerjee

Abstract: From a functional viewpoint, a spiking neuron is a device that transforms input spike trains on its various synapses into an output spike train on its axon. We demonstrate in this paper that the function mapping underlying the device can be tractably learned based on input and output spike train data alone. We begin by posing the problem in a classification based framework. We then derive a novel kernel for an SRM0 model that is based on PSP and AHP like functions. With the kernel we demonstrate how the learning problem can be posed as a Quadratic Program. Experimental results demonstrate the strength of our approach. 1

3 0.17429763 16 nips-2010-A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model

Author: Sebastian Millner, Andreas Grübl, Karlheinz Meier, Johannes Schemmel, Marc-olivier Schwartz

Abstract: We describe an accelerated hardware neuron being capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor level simulations and in silicon on a prototype chip. The neuron is destined to be the hardware neuron of a highly integrated wafer-scale system reaching out for new computational paradigms and opening new experimentation possibilities. As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lays on parameterizability and reproduction of the analytical model. 1

4 0.14265709 96 nips-2010-Fractionally Predictive Spiking Neurons

Author: Jaldert Rombouts, Sander M. Bohte

Abstract: Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of powerlaw kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spiketrains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel. 1

5 0.11253555 227 nips-2010-Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models

Author: Felipe Gerhard, Wulfram Gerstner

Abstract: Generalized Linear Models (GLMs) are an increasingly popular framework for modeling neural spike trains. They have been linked to the theory of stochastic point processes and researchers have used this relation to assess goodness-of-fit using methods from point-process theory, e.g. the time-rescaling theorem. However, high neural firing rates or coarse discretization lead to a breakdown of the assumptions necessary for this connection. Here, we show how goodness-of-fit tests from point-process theory can still be applied to GLMs by constructing equivalent surrogate point processes out of time-series observations. Furthermore, two additional tests based on thinning and complementing point processes are introduced. They augment the instruments available for checking model adequacy of point processes as well as discretized models. 1

6 0.11077109 205 nips-2010-Permutation Complexity Bound on Out-Sample Error

7 0.086386971 161 nips-2010-Linear readout from a neural population with partial correlation data

8 0.085259244 141 nips-2010-Layered image motion with explicit occlusions, temporal consistency, and depth ordering

9 0.083272688 252 nips-2010-SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

10 0.079322755 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

11 0.07867761 18 nips-2010-A novel family of non-parametric cumulative based divergences for point processes

12 0.077272139 268 nips-2010-The Neural Costs of Optimal Control

13 0.076161794 242 nips-2010-Slice sampling covariance hyperparameters of latent Gaussian models

14 0.073445983 157 nips-2010-Learning to localise sounds with spiking neural networks

15 0.067211792 253 nips-2010-Spike timing-dependent plasticity as dynamic filter

16 0.066094577 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

17 0.065543227 32 nips-2010-Approximate Inference by Compilation to Arithmetic Circuits

18 0.061680473 269 nips-2010-Throttling Poisson Processes

19 0.05897684 154 nips-2010-Learning sparse dynamic linear systems using stable spline kernels and exponential hyperpriors

20 0.05086768 8 nips-2010-A Log-Domain Implementation of the Diffusion Network in Very Large Scale Integration


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.108), (1, 0.03), (2, -0.13), (3, 0.174), (4, 0.076), (5, 0.184), (6, -0.043), (7, 0.098), (8, 0.072), (9, -0.047), (10, -0.024), (11, 0.052), (12, -0.011), (13, 0.088), (14, 0.037), (15, 0.057), (16, 0.084), (17, -0.028), (18, -0.038), (19, -0.119), (20, -0.013), (21, 0.025), (22, -0.159), (23, -0.06), (24, -0.053), (25, -0.016), (26, 0.028), (27, -0.117), (28, 0.008), (29, -0.02), (30, 0.011), (31, -0.002), (32, -0.003), (33, -0.072), (34, 0.088), (35, 0.02), (36, 0.145), (37, 0.049), (38, 0.027), (39, 0.027), (40, 0.055), (41, 0.031), (42, -0.004), (43, -0.06), (44, 0.039), (45, 0.015), (46, -0.111), (47, -0.02), (48, 0.034), (49, 0.114)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97247475 115 nips-2010-Identifying Dendritic Processing

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. 1

2 0.84131801 16 nips-2010-A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model

Author: Sebastian Millner, Andreas Grübl, Karlheinz Meier, Johannes Schemmel, Marc-olivier Schwartz

Abstract: We describe an accelerated hardware neuron being capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor level simulations and in silicon on a prototype chip. The neuron is destined to be the hardware neuron of a highly integrated wafer-scale system reaching out for new computational paradigms and opening new experimentation possibilities. As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lays on parameterizability and reproduction of the analytical model. 1

3 0.74659127 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

Author: Nicholas Fisher, Arunava Banerjee

Abstract: From a functional viewpoint, a spiking neuron is a device that transforms input spike trains on its various synapses into an output spike train on its axon. We demonstrate in this paper that the function mapping underlying the device can be tractably learned based on input and output spike train data alone. We begin by posing the problem in a classification based framework. We then derive a novel kernel for an SRM0 model that is based on PSP and AHP like functions. With the kernel we demonstrate how the learning problem can be posed as a Quadratic Program. Experimental results demonstrate the strength of our approach. 1

4 0.72923887 157 nips-2010-Learning to localise sounds with spiking neural networks

Author: Dan Goodman, Romain Brette

Abstract: To localise the source of a sound, we use location-specific properties of the signals received at the two ears caused by the asymmetric filtering of the original sound by our head and pinnae, the head-related transfer functions (HRTFs). These HRTFs change throughout an organism’s lifetime, during development for example, and so the required neural circuitry cannot be entirely hardwired. Since HRTFs are not directly accessible from perceptual experience, they can only be inferred from filtered sounds. We present a spiking neural network model of sound localisation based on extracting location-specific synchrony patterns, and a simple supervised algorithm to learn the mapping between synchrony patterns and locations from a set of example sounds, with no previous knowledge of HRTFs. After learning, our model was able to accurately localise new sounds in both azimuth and elevation, including the difficult task of distinguishing sounds coming from the front and back. Keywords: Auditory Perception & Modeling (Primary); Computational Neural Models, Neuroscience, Supervised Learning (Secondary) 1

5 0.71041024 96 nips-2010-Fractionally Predictive Spiking Neurons

Author: Jaldert Rombouts, Sander M. Bohte

Abstract: Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of powerlaw kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spiketrains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel. 1

6 0.59000021 252 nips-2010-SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

7 0.54480517 253 nips-2010-Spike timing-dependent plasticity as dynamic filter

8 0.53359032 227 nips-2010-Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models

9 0.48296374 18 nips-2010-A novel family of non-parametric cumulative based divergences for point processes

10 0.43306512 244 nips-2010-Sodium entry efficiency during action potentials: A novel single-parameter family of Hodgkin-Huxley models

11 0.40681705 8 nips-2010-A Log-Domain Implementation of the Diffusion Network in Very Large Scale Integration

12 0.33030906 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

13 0.29719102 161 nips-2010-Linear readout from a neural population with partial correlation data

14 0.2863827 65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform

15 0.26147923 268 nips-2010-The Neural Costs of Optimal Control

16 0.26051027 205 nips-2010-Permutation Complexity Bound on Out-Sample Error

17 0.25114232 207 nips-2010-Phoneme Recognition with Large Hierarchical Reservoirs

18 0.23271574 154 nips-2010-Learning sparse dynamic linear systems using stable spline kernels and exponential hyperpriors

19 0.23181918 74 nips-2010-Empirical Bernstein Inequalities for U-Statistics

20 0.21300961 120 nips-2010-Improvements to the Sequence Memoizer


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.028), (17, 0.014), (27, 0.077), (30, 0.041), (35, 0.017), (45, 0.122), (50, 0.063), (52, 0.06), (60, 0.02), (77, 0.075), (79, 0.367), (90, 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77720755 115 nips-2010-Identifying Dendritic Processing

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. 1

2 0.60052484 204 nips-2010-Penalized Principal Component Regression on Graphs for Analysis of Subnetworks

Author: Ali Shojaie, George Michailidis

Abstract: Network models are widely used to capture interactions among component of complex systems, such as social and biological. To understand their behavior, it is often necessary to analyze functionally related components of the system, corresponding to subsystems. Therefore, the analysis of subnetworks may provide additional insight into the behavior of the system, not evident from individual components. We propose a novel approach for incorporating available network information into the analysis of arbitrary subnetworks. The proposed method offers an efficient dimension reduction strategy using Laplacian eigenmaps with Neumann boundary conditions, and provides a flexible inference framework for analysis of subnetworks, based on a group-penalized principal component regression model on graphs. Asymptotic properties of the proposed inference method, as well as the choice of the tuning parameter for control of the false positive rate are discussed in high dimensional settings. The performance of the proposed methodology is illustrated using simulated and real data examples from biology. 1

3 0.54919338 17 nips-2010-A biologically plausible network for the computation of orientation dominance

Author: Kritika Muralidharan, Nuno Vasconcelos

Abstract: The determination of dominant orientation at a given image location is formulated as a decision-theoretic question. This leads to a novel measure for the dominance of a given orientation θ, which is similar to that used by SIFT. It is then shown that the new measure can be computed with a network that implements the sequence of operations of the standard neurophysiological model of V1. The measure can thus be seen as a biologically plausible version of SIFT, and is denoted as bioSIFT. The network units are shown to exhibit trademark properties of V1 neurons, such as cross-orientation suppression, sparseness and independence. The connection between SIFT and biological vision provides a justification for the success of SIFT-like features and reinforces the importance of contrast normalization in computer vision. We illustrate this by replacing the Gabor units of an HMAX network with the new bioSIFT units. This is shown to lead to significant gains for classification tasks, leading to state-of-the-art performance among biologically inspired network models and performance competitive with the best non-biological object recognition systems. 1

4 0.53859872 162 nips-2010-Link Discovery using Graph Feature Tracking

Author: Emile Richard, Nicolas Baskiotis, Theodoros Evgeniou, Nicolas Vayatis

Abstract: We consider the problem of discovering links of an evolving undirected graph given a series of past snapshots of that graph. The graph is observed through the time sequence of its adjacency matrix and only the presence of edges is observed. The absence of an edge on a certain snapshot cannot be distinguished from a missing entry in the adjacency matrix. Additional information can be provided by examining the dynamics of the graph through a set of topological features, such as the degrees of the vertices. We develop a novel methodology by building on both static matrix completion methods and the estimation of the future state of relevant graph features. Our procedure relies on the formulation of an optimization problem which can be approximately solved by a fast alternating linearized algorithm whose properties are examined. We show experiments with both simulated and real data which reveal the interest of our methodology. 1

5 0.44261867 238 nips-2010-Short-term memory in neuronal networks through dynamical compressed sensing

Author: Surya Ganguli, Haim Sompolinsky

Abstract: Recent proposals suggest that large, generic neuronal networks could store memory traces of past input sequences in their instantaneous state. Such a proposal raises important theoretical questions about the duration of these memory traces and their dependence on network size, connectivity and signal statistics. Prior work, in the case of gaussian input sequences and linear neuronal networks, shows that the duration of memory traces in a network cannot exceed the number of neurons (in units of the neuronal time constant), and that no network can out-perform an equivalent feedforward network. However a more ethologically relevant scenario is that of sparse input sequences. In this scenario, we show how linear neural networks can essentially perform compressed sensing (CS) of past inputs, thereby attaining a memory capacity that exceeds the number of neurons. This enhanced capacity is achieved by a class of “orthogonal” recurrent networks and not by feedforward networks or generic recurrent networks. We exploit techniques from the statistical physics of disordered systems to analytically compute the decay of memory traces in such networks as a function of network size, signal sparsity and integration time. Alternately, viewed purely from the perspective of CS, this work introduces a new ensemble of measurement matrices derived from dynamical systems, and provides a theoretical analysis of their asymptotic performance. 1

6 0.43469465 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

7 0.43353382 96 nips-2010-Fractionally Predictive Spiking Neurons

8 0.43310991 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

9 0.43038651 51 nips-2010-Construction of Dependent Dirichlet Processes based on Poisson Processes

10 0.42860168 117 nips-2010-Identifying graph-structured activation patterns in networks

11 0.42821065 109 nips-2010-Group Sparse Coding with a Laplacian Scale Mixture Prior

12 0.42670882 161 nips-2010-Linear readout from a neural population with partial correlation data

13 0.42536202 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

14 0.42506292 68 nips-2010-Effects of Synaptic Weight Diffusion on Learning in Decision Making Networks

15 0.42328122 18 nips-2010-A novel family of non-parametric cumulative based divergences for point processes

16 0.42023289 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication

17 0.41917077 98 nips-2010-Functional form of motion priors in human motion perception

18 0.41847005 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

19 0.41765979 268 nips-2010-The Neural Costs of Optimal Control

20 0.41450074 253 nips-2010-Spike timing-dependent plasticity as dynamic filter