nips nips2005 nips2005-106 knowledge-graph by maker-knowledge-mining

106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression


Source: pdf

Author: Misha Ahrens, Liam Paninski, Quentin J. Huys

Abstract: Our understanding of the input-output function of single cells has been substantially advanced by biophysically accurate multi-compartmental models. The large number of parameters needing hand tuning in these models has, however, somewhat hampered their applicability and interpretability. Here we propose a simple and well-founded method for automatic estimation of many of these key parameters: 1) the spatial distribution of channel densities on the cell’s membrane; 2) the spatiotemporal pattern of synaptic input; 3) the channels’ reversal potentials; 4) the intercompartmental conductances; and 5) the noise level in each compartment. We assume experimental access to: a) the spatiotemporal voltage signal in the dendrite (or some contiguous subpart thereof, e.g. via voltage sensitive imaging techniques), b) an approximate kinetic description of the channels and synapses present in each compartment, and c) the morphology of the part of the neuron under investigation. The key observation is that, given data a)-c), all of the parameters 1)-4) may be simultaneously inferred by a version of constrained linear regression; this regression, in turn, is efficiently solved using standard algorithms, without any “local minima” problems despite the large number of parameters and complex dynamics. The noise level 5) may also be estimated by standard techniques. We demonstrate the method’s accuracy on several model datasets, and describe techniques for quantifying the uncertainty in our estimates. 1

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Large-scale biophysical parameter estimation in single neurons via constrained linear regression Misha B. [sent-1, score-0.175]

2 Abstract Our understanding of the input-output function of single cells has been substantially advanced by biophysically accurate multi-compartmental models. [sent-7, score-0.063]

3 The large number of parameters needing hand tuning in these models has, however, somewhat hampered their applicability and interpretability. [sent-8, score-0.067]

4 We assume experimental access to: a) the spatiotemporal voltage signal in the dendrite (or some contiguous subpart thereof, e. [sent-10, score-0.41]

5 via voltage sensitive imaging techniques), b) an approximate kinetic description of the channels and synapses present in each compartment, and c) the morphology of the part of the neuron under investigation. [sent-12, score-0.815]

6 The noise level 5) may also be estimated by standard techniques. [sent-14, score-0.077]

7 We demonstrate the method’s accuracy on several model datasets, and describe techniques for quantifying the uncertainty in our estimates. [sent-15, score-0.076]

8 1 Introduction The usual tradeoff in parameter estimation for single neuron models is between realism and tractability. [sent-16, score-0.084]

9 Support contributed by the Gatsby Charitable Foundation (LP, MA), a Royal Society International Fellowship (LP), the BIBA consortium and the UCL School of Medicine (QH). [sent-19, score-0.038]

10 , the percentage of correctly-predicted spike times [1]) and the abundance of local minima on the very large-dimensional allowed parameter space [2, 3]. [sent-29, score-0.032]

11 Here we present a method that is both computationally tractable and biophysically detailed. [sent-30, score-0.063]

12 by voltage-sensitive imaging methods; in electrotonically compact cells, single electrode recordings can be used). [sent-34, score-0.085]

13 This implies, somewhat counterintuitively, that optimizing the likelihood of the parameters in this setting is a convex problem, with no non-global local extrema. [sent-36, score-0.067]

14 Moreover, linearly constrained quadratic optimization is an extremely well-studied problem, with many efficient algorithms available. [sent-37, score-0.091]

15 In addition, we discuss methods for incorporating prior knowledge and analyzing uncertainty in our estimates, again basing our techniques on the well-founded probabilistic regression framework. [sent-39, score-0.141]

16 Modeling the cell under investigation in this discretized manner, a typical equation describing the voltage in compartment x is $C_x \, dV_x(t) = \big[ \sum_i a_{i,x} J_{i,x}(t) + I_x(t) \big] dt + \sigma_x \, dN_{x,t}$. (1) [sent-41, score-0.514]

17 Here $\sigma_x N_{x,t}$ is evolution (current) noise and $I_x(t)$ is externally injected current. [sent-42, score-0.126]

18 Dropping the subscript x where possible, the terms $a_i \cdot J_i(t)$ represent currents due to: 1. [sent-43, score-0.16]

19 voltage mismatch in neighbouring compartments, $f_{x,y}(V_y(t) - V_x(t))$, 2. [sent-44, score-0.292]

20 membrane channels, active (voltage-dependent) or passive, $\bar{g}_j g_j(t)(E_j - V(t))$, 3. [sent-46, score-0.361]

21 the spatiotemporal input from synapse s, $u_s(t)$, from which $g_s(t)$ is obtained by $dg_s(t)/dt = -g_s(t)/\tau_s + u_s(t)$, (2) a linear convolution operation (the synaptic kinetic parameter $\tau_s$ is assumed known) which may be written in matrix notation $g_s = Ku$. [sent-49, score-0.814]
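
As a concrete illustration of equation (2), the sketch below builds the convolution matrix K of $g_s = Ku$ under a forward-Euler discretization; the step size, time constant, and discretization convention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def synaptic_filter_matrix(T, dt, tau):
    """Lower-triangular K with g_s = K @ u: a forward-Euler
    discretization of dg_s/dt = -g_s/tau + u_s (equation 2)."""
    decay = 1.0 - dt / tau                       # per-step decay factor
    steps = np.arange(T)
    lags = np.maximum(steps[:, None] - steps[None, :], 0)
    return np.tril(dt * decay ** lags)           # zero above the diagonal

# hypothetical usage: two input spikes through a tau = 3 ms synapse
dt, tau = 0.1, 3.0                               # ms
u = np.zeros(500)
u[[50, 300]] = 1.0 / dt
g = synaptic_filter_matrix(len(u), dt, tau) @ u  # same result as filtering u
```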

22 The open probabilities of channel j, $g_j(t)$, are obtained from the channel kinetics, which are assumed to evolve deterministically, with a known dependence on V, as in the Hodgkin-Huxley model, $g_{Na} = m^3 h$, $\tau_m \, dm(t)/dt = m_\infty(V) - m$, (3) and similarly for h. [sent-52, score-0.902]

23 Again, we emphasize that the kinetic parameters $\tau_m$ and $m_\infty(V)$ are assumed known; only the inhomogeneous concentrations are unknown. [sent-53, score-0.344]

24 (For passive channels gj is taken constant and independent of voltage. [sent-54, score-0.458]

25 ) The parameters 1-3 are relative to membrane capacitance Cx . [sent-55, score-0.136]
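
Because the kinetics in (3) are assumed known, the gating variables are a deterministic filter of the observed voltage. A minimal sketch with forward Euler is below; the textbook Hodgkin-Huxley Na+ activation rates and the file name are stand-ins for whatever kinetic description and data actually apply.

```python
import numpy as np

def integrate_gating(V, dt, x_inf, tau_x, x0=0.0):
    """Evolve tau_x(V) dx/dt = x_inf(V) - x along an *observed*
    voltage trace (equation 3)."""
    x = np.empty_like(V)
    x[0] = x0
    for t in range(len(V) - 1):
        x[t + 1] = x[t] + dt * (x_inf(V[t]) - x[t]) / tau_x(V[t])
    return x

# textbook HH Na+ activation rates (V in mV, t in ms) -- an assumption;
# substitute the kinetics of the channels actually present
alpha_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
beta_m = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
m_inf = lambda V: alpha_m(V) / (alpha_m(V) + beta_m(V))
tau_m = lambda V: 1.0 / (alpha_m(V) + beta_m(V))

V = np.load("voltage_trace.npy")  # hypothetical observed V_x(t)
m = integrate_gating(V, dt=0.01, x_inf=m_inf, tau_x=tau_m)
# with h integrated the same way, the Na+ open probability is m**3 * h
```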

26 When modeling the dynamics of a single neuron according to (1), the voltage V(t) and channel kinetics $g_j(t)$ are typically evolved in parallel, according to the injected current I(t) and synaptic inputs $u_s(t)$. [sent-56, score-1.466]

27 Suppose, on the other hand, that we have observed the voltage Vx (t) in each compartment. [sent-57, score-0.292]

28 Since we have assumed we also know the channel kinetics (equation 3), the synaptic kinetics (equation 2) and the reversal potentials Ej of the channels present in each compartment, we may decouple the equations and determine the open probabilities gj,x (t) for t ∈ [0, T ]. [sent-58, score-1.431]

29 This, in turn, implies that the currents $J_{i,x}(t)$ and voltage differentials $\dot{V}_x(t)$ are all known, and we may interpret equation 1 as a regression equation, linear in the unknown parameters $a_i$, instead of an evolution equation. [sent-59, score-0.584]

30 Thus we can use linear regression methods to simultaneously infer optimal values of the parameters $\{\bar{g}_{j,x}, u_{s,x}(t), f_{x,y}\}$. [sent-61, score-0.132]

31 More precisely, rewrite equation (1) in matrix form, $\dot{V} = Ma + \sigma\eta$, (4) where each column of the matrix M is composed of one of the known currents $\{J_i(t), t \in [0, T]\}$ (with T the length of the experiment) and the column vectors $\dot{V}$, a, and $\eta$ are defined in the obvious way. [sent-62, score-0.051]

32 In addition, since on physical grounds the channel concentrations, synaptic input, and conductances must be non-negative, we require our solution $a_i \geq 0$. [sent-64, score-1.024]

33 The resulting linearly-constrained quadratic optimization problem has no local minima (due to the convexity of the objective function and of the domain $g_i \geq 0$), and allows quadratic programming (QP) tools (e. [sent-65, score-0.128]
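
A minimal sketch of this regression step, assuming the design matrix M (one column per known current $J_i(t)$) and the derivative trace $\dot{V}$ have already been assembled from the data; scipy's non-negative least squares enforces a >= 0, and the returned residual norm yields the ML noise scale discussed below.

```python
import numpy as np
from scipy.optimize import nnls

def fit_parameters(M, Vdot):
    """Constrained linear regression: min ||Vdot - M a||^2 s.t. a >= 0.
    The problem is convex, so there are no non-global local extrema."""
    a_hat, rnorm = nnls(M, Vdot)            # rnorm = ||Vdot - M @ a_hat||
    sigma_hat = rnorm / np.sqrt(len(Vdot))  # ML estimate of the noise scale
    return a_hat, sigma_hat
```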

34 Quadratic programming tactics: As emphasized above, the dimension d of the parameter space to be optimized over in this application is quite large (d ∼ Ncomp (T Nsyn + Nchan ), with N denoting the number of compartments, synapse types, and membrane channel types respectively). [sent-69, score-0.48]

35 Fortunately, the correlational structure of the parameters allows us to perform this optimization more efficiently, by several natural decompositions: in particular, given the spatiotemporal voltage signal Vx (t), parameters which are distant in space (e. [sent-71, score-0.544]

36 , the densities of channels in widely-separated compartments) and time (i. [sent-73, score-0.345]

37 , the synaptic input us,x (t) for t = ti and tj with |ti − tj | large) may be optimized independently. [sent-75, score-0.352]

38 It is linear in the data and can be included with the other parameters ai in the joint estimation. [sent-77, score-0.176]

39 while holding all the other parameters fixed. [sent-79, score-0.107]

40 (The quadratic nature of the original problem guarantees that each of these subset problems will be quadratic, with no local minima. [sent-80, score-0.048]
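
A schematic of that decomposition, assuming the caller supplies the blocking (e.g., index sets for well-separated compartments or time windows); each subproblem is again a small non-negative least-squares fit on a partial residual.

```python
import numpy as np
from scipy.optimize import nnls

def block_coordinate_nnls(M, Vdot, blocks, n_sweeps=10):
    """Optimize one parameter block at a time, holding the rest fixed;
    every subproblem is itself convex, so the sweeps cannot get trapped."""
    d = M.shape[1]
    a = np.zeros(d)
    for _ in range(n_sweeps):
        for idx in blocks:
            rest = np.setdiff1d(np.arange(d), idx)
            resid = Vdot - M[:, rest] @ a[rest]  # what the others leave over
            a[idx], _ = nnls(M[:, idx], resid)
    return a
```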

41 1 The probabilistic framework If we assume the noise $N_{x,t}$ is Gaussian and white, then the mean-square regression solution for a described above coincides exactly with the (constrained) maximum likelihood estimate, $\hat{a}_{ML} = \arg\min_a \|\dot{V} - Ma\|^2 / 2\sigma^2$. [sent-83, score-0.105]

42 (The noise scale σ may also be estimated via maximum likelihood. [sent-84, score-0.077]

43 ) This suggests several straightforward likelihood-based techniques for representing the uncertainty in our estimates. [sent-85, score-0.076]

44 In each case, computing confidence intervals for ai reduces to computing moments of multidimensional Gaussian distributions, truncated to ai ≥ 0. [sent-88, score-0.296]

45 We use importance sampling methods [7] to compute these moments for the channel parameters. [sent-89, score-0.4]

46 Sampling from high-dimensional truncated Gaussians via sample-reject is inefficient (since samples from the non-truncated Gaussian – call this distribution p∗ (a|V) – may violate the constraint ai ≥ 0 with high probability). [sent-90, score-0.148]

47 Therefore we sample instead from a proposal density q(a) with support on $a_i \geq 0$ (specifically, a product of univariate truncated Gaussians with mean $a_i$ and appropriate variance) and evaluate the second moments around $a_{ML}$ by $\frac{1}{Z} \sum_{n=1}^{N} \frac{p^*(a^n|V)}{q(a^n)} (a^n_i - a_{ML,i})^2$, with $Z = \sum_{n=1}^{N} \frac{p^*(a^n|V)}{q(a^n)}$. (5) [sent-91, score-0.296]

48 Hessian Principal Components Analysis: The procedure described above allows us to quantify the uncertainty of individual estimated parameters $a_i$. [sent-92, score-0.254]
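
A sketch of the estimator in (5), taking the Gaussian regression likelihood as the unnormalized target p*(a|V); the proposal widths `scales` are a tuning choice, not a prescription from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

def posterior_second_moments(M, Vdot, a_ml, sigma, scales, n_samples=5000):
    """Self-normalized importance sampling for E[(a_i - a_ML,i)^2] under
    the Gaussian likelihood truncated to a >= 0 (equation 5)."""
    lo = (0.0 - a_ml) / scales                 # standardized lower bounds
    q = truncnorm(lo, np.inf, loc=a_ml, scale=scales)
    A = q.rvs(size=(n_samples, len(a_ml)))     # proposal samples, all >= 0
    log_p = np.array([-np.sum((Vdot - M @ a) ** 2) / (2 * sigma ** 2)
                      for a in A])             # unnormalized log p*(a|V)
    log_w = log_p - q.logpdf(A).sum(axis=1)
    w = np.exp(log_w - log_w.max())            # stabilize, then normalize
    w /= w.sum()
    return w @ (A - a_ml) ** 2                 # one second moment per a_i
```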

49 We are also interested in the uncertainty of our estimates in a joint sense (e. [sent-93, score-0.041]

50 channels whose corresponding currents are highly correlated (and therefore approximately interchangeable). [sent-98, score-0.317]

51 On biophysical grounds we require fx,y = fy,x ; we enforce this (linear) constraint by only including one parameter for each connected pair of compartments (x, y). [sent-105, score-0.176]

52 In this case the true channel kinetics were of standard Hodgkin-Huxley form (Na+ , K+ and leak), with inhomogeneous densities (figure 1). [sent-106, score-0.661]

53 To test the selectivity of the estimation procedure, we fitted Nchan = 8 candidate channels from [8, 9, 10] (five of which were absent in the true model cell). [sent-107, score-0.266]

54 The concentrations of the five channels that were not present when generating the data were set to approximately zero, as desired (data not shown). [sent-109, score-0.427]

55 The lower panels demonstrate the robustness of the methods on highly noisy (large σ) data, in which case the estimated errorbars become significant, but the performance degrades only slightly. [sent-110, score-0.096]

56 [Figure 1 panels: HH gNa; inferred HH gNa; greyscale scale bar 50–200] Figure 1: Top panels: σ = 0. [sent-111, score-0.083]

57 14 compartment model neuron, Na+ channel concentration indicated by grey scale; estimated Na+ channel concentrations in the noiseless case; observed voltage traces (one per compartment); estimated concentrations. [sent-112, score-1.48]

58 Na+ channel concentration legend, values relative to Cm (e. [sent-114, score-0.397]

59 in mS/cm2 if Cm = 1µF/cm2 ); estimated Na+ concentrations in the noisy case; noisy voltage traces; estimated channel concentrations. [sent-116, score-0.888]

60 K+ channel concentrations and intercompartmental conductances fx,y not shown (similar performance). [sent-117, score-0.853]

61 2 Inferring synaptic input in a passive model Next we simulated a single-compartment, leaky neuron (i. [sent-119, score-0.482]

62 , no voltage-sensitive membrane channels) with synaptic input from three synapses, two excitatory (glutamatergic; τ = 3 ms, E = 0 mV) and one inhibitory (GABAA ; τ = 5 ms, E = −75 mV). [sent-121, score-0.538]

63 When we attempted to estimate the synaptic input us (t) via the ML estimator described above (figure 2, left), we observe an overfitting phenomenon: the current noise due to Nt is being “explained” by competing balanced excitatory and inhibitory synaptic inputs. [sent-122, score-0.815]

64 Once again, we may make use of well-known techniques from the regression literature to solve this problem: in this case, we need to regularize our estimated synaptic parameters. [sent-124, score-0.443]

65 As mentioned above, this maximum a posteriori (MAP) estimate corresponds to a product exponential prior on the synaptic input ut ; the multiplier λ may be chosen as the expected synaptic input per unit time. [sent-126, score-0.704]

66 This is visible in figure 2 (right); we see that the small, noise-matching synaptic activity is effectively suppressed, permitting much more accurate detection of the true input spike timing. [sent-128, score-0.352]

67 $\hat{u}_{MAP} = \arg\min_u [\cdots]$ [Figure 2 axes: with regularisation / without regularisation; Exc spikes (mS/cm²), Voltage (mV), Inh spikes (mS/cm²)] [sent-129, score-0.2]

68 Figure 2: Inferring synaptic inputs to a passive membrane. [sent-145, score-0.392]

69 Top traces: excitatory inputs; bottom: inhibitory inputs; middle: the resulting voltage trace. [sent-146, score-0.409]

70 Left panels: synaptic inputs inferred by ML; right: MAP estimates under the exponential (shrinkage) prior. [sent-147, score-0.429]

71 Note the overfitting by the ML estimate (left) and the higher accuracy under the MAP estimate (right); in particular note that the two excitatory synapses of differing magnitudes may easily be distinguished. [sent-148, score-0.094]
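
A sketch of the MAP estimate under the exponential prior described above: the objective adds λ·Σu to the Gaussian likelihood term and a bounded quasi-Newton solver enforces u >= 0. The solver choice and starting point are assumptions; any quadratic programming routine would serve equally well.

```python
import numpy as np
from scipy.optimize import minimize

def map_synaptic_input(M, Vdot, sigma, lam):
    """min_u ||Vdot - M u||^2 / (2 sigma^2) + lam * sum(u),  u >= 0.
    The L1 penalty shrinks small, noise-matching inputs to exactly zero."""
    def f_and_grad(u):
        r = Vdot - M @ u
        f = r @ r / (2 * sigma ** 2) + lam * u.sum()
        g = -M.T @ r / sigma ** 2 + lam
        return f, g
    res = minimize(f_and_grad, np.zeros(M.shape[1]), jac=True,
                   method="L-BFGS-B", bounds=[(0.0, None)] * M.shape[1])
    return res.x
```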

72 3 Inferring synaptic input and channel distribution in an active model The optimization is, as mentioned earlier, jointly convex in both channel densities and synaptic input. [sent-150, score-1.459]

73 We illustrate the simultaneous inference of channel densities and synaptic inputs in a single compartment, writing the model as: $dV = \Big[ \sum_{c=1}^{N_{chan}} \bar{g}_c \, g_c(V,t)(V_c - V(t)) + \sum_{s=1}^{S} g_s(t)(V_s - V(t)) \Big] dt + \sigma \, dN(t)$, (7) with the same channels and synapse types as above. [sent-151, score-1.477]

74 The combination of leak conductance and inhibitory synaptic input leads to very small eigenvalues in A and slow convergence when applying the above decomposition; thus, to speed convergence here we coarsened the time resolution of the synaptic input from 0. [sent-152, score-0.922]

75 The true parameters are in blue, the inferred parameters in red. [sent-157, score-0.217]

76 The top left panel shows the excitatory synaptic input, the middle left panel the voltage trace (the only data) and the bottom left traces the inhibitory synaptic input. [sent-158, score-1.082]

77 The right panel shows the true and inferred channel densities; channels are the same as in 3. [sent-159, score-0.71]

78 4 Eigenvector analysis for a single-compartment model Finally, as discussed above, the eigenvectors (“principal components”) of the log-likelihood Hessian A carry significant information about the dependence and redundancy of the parameters under study here. [sent-162, score-0.067]

79 In the leftmost panels, we see that the direction $a_{most}$ most highly-constrained by the data – the eigenvector corresponding to the largest eigenvalue of A – turns out to have the intuitive form of the balance between Na+ and K+ channels. [sent-164, score-0.118]

80 When we perturb this balance slightly (that is, when we shift the model parameters slightly along this direction in parameter space, $a_{ML} \to a_{ML} + \epsilon \, a_{most}$), the cell’s behavior changes dramatically. [sent-165, score-0.113]

81 Conversely, the least-sensitive direction, $a_{least}$, corresponds roughly to the balance between the concentrations of two Na+ channels with similar kinetics, and moving in this direction in parameter space ($a_{ML} \to a_{ML} + \epsilon \, a_{least}$) has a negligible effect on the model’s dynamical behavior. [sent-166, score-0.545]
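
A sketch of this eigenvector analysis; under the Gaussian noise model the Hessian of the negative log-likelihood is $A = M^T M / \sigma^2$, and M, sigma, and a_ml are assumed to come from the fit above.

```python
import numpy as np

A = M.T @ M / sigma ** 2               # Hessian of the negative log-likelihood
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
a_least = eigvecs[:, 0]                # least-constrained direction
a_most = eigvecs[:, -1]                # most-constrained direction

eps = 0.1                              # illustrative perturbation size
# re-simulating at a_ml + eps * a_most should change the cell's behavior
# dramatically; at a_ml + eps * a_least it should barely change at all
a_perturbed = a_ml + eps * a_most
```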

82 Figure 4: Eigenvectors of A corresponding to largest ($a_{most}$, left) and smallest ($a_{least}$, right) eigenvalues, and voltage traces of the model neuron after equal sized perturbations by both (solid line: perturbed model; dotted line: original model); time axis 0–100 ms. [sent-195, score-0.437]

83 The first four parameters are the concentrations of four Na+ channels (the first two of which are in fact the same Hodgkin-Huxley channel, but with slightly different kinetic parameters); the next four of K+ channels; the next of the leak channel; the last of 1/C. [sent-196, score-0.655]

84 4 Discussion and future work We have developed a probabilistic regression framework for estimation of biophysical single neuron properties and synaptic input. [sent-197, score-0.522]

85 This framework leads directly to efficient, globally-convergent algorithms for determining these parameters, and also to well-founded methods for analyzing the uncertainty of the estimates. [sent-198, score-0.041]

86 We believe this is a key first step towards applying these techniques in detailed, quantitative studies of dendritic input and processing in vitro and in vivo. [sent-199, score-0.142]

87 This is a reasonable assumption when voltage is recorded directly, via patch-clamp methods. [sent-202, score-0.292]

88 However, while voltage-sensitive imaging techniques have seen dramatic improvements over the last few years (and will continue to do so in the near future), currently these methods still suffer from relatively low signal-to-noise ratios and spatiotemporal sampling rates. [sent-203, score-0.206]

89 While the procedure proved to be robust to low-level noise of various forms (data not shown), it will be important to relax the noiseless-observation assumption, most likely by adapting standard techniques from the hidden Markov model signal processing literature [11]. [sent-204, score-0.075]

90 Hidden branches: Current imaging and dye technologies allow for the monitoring of only a fraction of a dendritic tree; therefore our focus will be on estimating the properties of these sub-structures. [sent-205, score-0.114]

91 Furthermore, these dyes diffuse very slowly and may miss small branches of dendrites, thereby effectively creating unobserved current sources. [sent-206, score-0.032]

92 Misspecified channel kinetics and channels with chemical dependence: Channels dependent on unobserved variables (e. [sent-207, score-0.816]

93 The techniques described here may thus be applied unmodified to experimental data for which such channels have been blocked pharmacologically. [sent-210, score-0.301]

94 However, we should note that our methods extend directly to the case where simultaneous access to voltage and calcium signals is possible; more generally, one could develop a semi-realistic model of calcium concentration, and optimize over the parameters of this model as well. [sent-211, score-0.427]

95 figure 1) the effect of misspecifications of voltagedependent channel kinetics and how the most relevant channels may be selected by supplying sufficiently rich “channel libraries”. [sent-214, score-0.816]

96 Such libraries can also contain several “copies” of the same channel, with one or more systematically varying parameters, thus allowing for a limited search in the nonlinear space of channel kinetics. [sent-215, score-0.403]

97 Finally, in our discussion of “equivalence classes” of channels (figure 4), we illustrate how eigenvector analysis of our objective function allows for insights into the joint behaviour of channels. [sent-216, score-0.266]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('channel', 0.361), ('synaptic', 0.306), ('voltage', 0.292), ('channels', 0.266), ('conductances', 0.21), ('kinetics', 0.189), ('mv', 0.161), ('concentrations', 0.161), ('na', 0.15), ('gj', 0.146), ('compartment', 0.134), ('intercompartmental', 0.121), ('spatiotemporal', 0.118), ('gc', 0.115), ('vx', 0.112), ('ai', 0.109), ('gs', 0.105), ('ej', 0.099), ('nchan', 0.096), ('ms', 0.087), ('neuron', 0.084), ('kinetic', 0.084), ('inferred', 0.083), ('densities', 0.079), ('leak', 0.077), ('reversal', 0.077), ('aleast', 0.072), ('amost', 0.072), ('compartments', 0.071), ('membrane', 0.069), ('biophysical', 0.067), ('parameters', 0.067), ('regression', 0.065), ('hh', 0.064), ('biophysically', 0.063), ('spikes', 0.062), ('dendritic', 0.061), ('cx', 0.061), ('traces', 0.061), ('inhibitory', 0.059), ('panels', 0.059), ('mt', 0.059), ('excitatory', 0.058), ('ix', 0.057), ('imaging', 0.053), ('currents', 0.051), ('synapse', 0.05), ('ahrens', 0.048), ('bower', 0.048), ('dvx', 0.048), ('exc', 0.048), ('gna', 0.048), ('inh', 0.048), ('multicompartmental', 0.048), ('vanier', 0.048), ('cell', 0.048), ('quadratic', 0.048), ('injected', 0.048), ('conductance', 0.048), ('passive', 0.046), ('ml', 0.046), ('balance', 0.046), ('um', 0.046), ('input', 0.046), ('inferring', 0.045), ('hessian', 0.044), ('potentials', 0.043), ('constrained', 0.043), ('libraries', 0.042), ('liam', 0.042), ('wood', 0.042), ('uncertainty', 0.041), ('inputs', 0.04), ('noise', 0.04), ('dt', 0.04), ('moments', 0.039), ('truncated', 0.039), ('misspeci', 0.038), ('contributed', 0.038), ('grounds', 0.038), ('thereof', 0.038), ('externally', 0.038), ('regularisation', 0.038), ('tting', 0.037), ('estimated', 0.037), ('vy', 0.036), ('nonlinearly', 0.036), ('dv', 0.036), ('synapses', 0.036), ('concentration', 0.036), ('techniques', 0.035), ('gure', 0.034), ('eigenvalues', 0.034), ('calcium', 0.034), ('evolve', 0.034), ('minima', 0.032), ('qp', 0.032), ('branches', 0.032), ('electrode', 0.032), ('inhomogeneous', 0.032)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression

Author: Misha Ahrens, Liam Paninski, Quentin J. Huys

Abstract: Our understanding of the input-output function of single cells has been substantially advanced by biophysically accurate multi-compartmental models. The large number of parameters needing hand tuning in these models has, however, somewhat hampered their applicability and interpretability. Here we propose a simple and well-founded method for automatic estimation of many of these key parameters: 1) the spatial distribution of channel densities on the cell’s membrane; 2) the spatiotemporal pattern of synaptic input; 3) the channels’ reversal potentials; 4) the intercompartmental conductances; and 5) the noise level in each compartment. We assume experimental access to: a) the spatiotemporal voltage signal in the dendrite (or some contiguous subpart thereof, e.g. via voltage sensitive imaging techniques), b) an approximate kinetic description of the channels and synapses present in each compartment, and c) the morphology of the part of the neuron under investigation. The key observation is that, given data a)-c), all of the parameters 1)-4) may be simultaneously inferred by a version of constrained linear regression; this regression, in turn, is efficiently solved using standard algorithms, without any “local minima” problems despite the large number of parameters and complex dynamics. The noise level 5) may also be estimated by standard techniques. We demonstrate the method’s accuracy on several model datasets, and describe techniques for quantifying the uncertainty in our estimates. 1

2 0.17487282 188 nips-2005-Temporally changing synaptic plasticity

Author: Minija Tamosiunaite, Bernd Porr, Florentin Wörgötter

Abstract: Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. In this study we investigate how these signals could temporally interact at dendrites leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We will analyze a situation where synapse plasticity characteristics change in the course of time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local D-spikes, a slow, unspecific growth process is induced. As soon as the soma begins to spike this process is replaced by fast synaptic changes as the consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. This way a winner-take-all-mechanism emerges in a two-stage process, enhancing the best-correlated inputs. These results suggest that synaptic plasticity is a temporal changing process by which the computational properties of dendrites or complete neurons can be substantially augmented. 1

3 0.14550334 181 nips-2005-Spiking Inputs to a Winner-take-all Network

Author: Matthias Oster, Shih-Chii Liu

Abstract: Recurrent networks that perform a winner-take-all computation have been studied extensively. Although some of these studies include spiking networks, they consider only analog input rates. We present results of this winner-take-all computation on a network of integrate-and-fire neurons which receives spike trains as inputs. We show how we can configure the connectivity in the network so that the winner is selected after a pre-determined number of input spikes. We discuss spiking inputs with both regular frequencies and Poisson-distributed rates. The robustness of the computation was tested by implementing the winner-take-all network on an analog VLSI array of 64 integrate-and-fire neurons which have an innate variance in their operating parameters. 1

4 0.14129144 99 nips-2005-Integrate-and-Fire models with adaptation are good enough

Author: Renaud Jolivet, Alexander Rauch, Hans-rudolf Lüscher, Wulfram Gerstner

Abstract: Integrate-and-Fire-type models are usually criticized because of their simplicity. On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. Here, we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of cortical pyramidal neurons. We find that the resulting effective model is sufficient to predict the spike train of the real pyramidal neuron with high accuracy. In in vivo-like regimes, predicted and recorded traces are almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary for the model to connect between different driving regimes. 1

5 0.11969766 15 nips-2005-A Theoretical Analysis of Robust Coding over Noisy Overcomplete Channels

Author: Eizaburo Doi, Doru C. Balcan, Michael S. Lewicki

Abstract: Biological sensory systems are faced with the problem of encoding a high-fidelity sensory signal with a population of noisy, low-fidelity neurons. This problem can be expressed in information theoretic terms as coding and transmitting a multi-dimensional, analog signal over a set of noisy channels. Previously, we have shown that robust, overcomplete codes can be learned by minimizing the reconstruction error with a constraint on the channel capacity. Here, we present a theoretical analysis that characterizes the optimal linear coder and decoder for one- and twodimensional data. The analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides a number of important insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions to achieve robustness. We also report numerical solutions for robust coding of highdimensional image data and show that these codes are substantially more robust compared against other image codes such as ICA and wavelets. 1

6 0.11031116 8 nips-2005-A Criterion for the Convergence of Learning with Spike Timing Dependent Plasticity

7 0.1093853 61 nips-2005-Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches

8 0.096354611 118 nips-2005-Learning in Silicon: Timing is Everything

9 0.09249521 140 nips-2005-Nonparametric inference of prior probabilities from Bayes-optimal behavior

10 0.087597281 40 nips-2005-CMOL CrossNets: Possible Neuromorphic Nanoelectronic Circuits

11 0.084161304 134 nips-2005-Neural mechanisms of contrast dependent receptive field size in V1

12 0.083295859 29 nips-2005-Analyzing Coupled Brain Sources: Distinguishing True from Spurious Interaction

13 0.075221255 164 nips-2005-Representing Part-Whole Relationships in Recurrent Neural Networks

14 0.073655032 167 nips-2005-Robust design of biological experiments

15 0.067889467 39 nips-2005-Beyond Pair-Based STDP: a Phenomenological Rule for Spike Triplet and Frequency Effects

16 0.066460893 150 nips-2005-Optimizing spatio-temporal filters for improving Brain-Computer Interfacing

17 0.064817674 113 nips-2005-Learning Multiple Related Tasks using Latent Independent Component Analysis

18 0.062432379 135 nips-2005-Neuronal Fiber Delineation in Area of Edema from Diffusion Weighted MRI

19 0.060623378 165 nips-2005-Response Analysis of Neuronal Population with Synaptic Depression

20 0.060617097 132 nips-2005-Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.202), (1, -0.196), (2, -0.067), (3, -0.05), (4, 0.003), (5, -0.052), (6, -0.066), (7, -0.02), (8, 0.029), (9, -0.039), (10, -0.009), (11, -0.018), (12, 0.08), (13, -0.029), (14, -0.006), (15, -0.015), (16, -0.086), (17, 0.051), (18, -0.054), (19, -0.041), (20, -0.018), (21, -0.054), (22, 0.024), (23, -0.006), (24, -0.086), (25, 0.037), (26, 0.082), (27, -0.145), (28, 0.137), (29, 0.02), (30, -0.055), (31, 0.022), (32, -0.224), (33, -0.191), (34, 0.028), (35, 0.022), (36, 0.043), (37, 0.011), (38, 0.02), (39, 0.013), (40, -0.061), (41, 0.147), (42, 0.099), (43, -0.167), (44, 0.111), (45, 0.029), (46, -0.017), (47, 0.161), (48, 0.125), (49, -0.15)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95846951 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression

Author: Misha Ahrens, Liam Paninski, Quentin J. Huys

Abstract: Our understanding of the input-output function of single cells has been substantially advanced by biophysically accurate multi-compartmental models. The large number of parameters needing hand tuning in these models has, however, somewhat hampered their applicability and interpretability. Here we propose a simple and well-founded method for automatic estimation of many of these key parameters: 1) the spatial distribution of channel densities on the cell’s membrane; 2) the spatiotemporal pattern of synaptic input; 3) the channels’ reversal potentials; 4) the intercompartmental conductances; and 5) the noise level in each compartment. We assume experimental access to: a) the spatiotemporal voltage signal in the dendrite (or some contiguous subpart thereof, e.g. via voltage sensitive imaging techniques), b) an approximate kinetic description of the channels and synapses present in each compartment, and c) the morphology of the part of the neuron under investigation. The key observation is that, given data a)-c), all of the parameters 1)-4) may be simultaneously inferred by a version of constrained linear regression; this regression, in turn, is efficiently solved using standard algorithms, without any “local minima” problems despite the large number of parameters and complex dynamics. The noise level 5) may also be estimated by standard techniques. We demonstrate the method’s accuracy on several model datasets, and describe techniques for quantifying the uncertainty in our estimates. 1

2 0.68748885 61 nips-2005-Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches

Author: Anna Levina, Michael Herrmann

Abstract: There is experimental evidence that cortical neurons show avalanche activity with the intensity of firing events being distributed as a power-law. We present a biologically plausible extension of a neural network which exhibits a power-law avalanche distribution for a wide range of connectivity parameters. 1

3 0.57788992 40 nips-2005-CMOL CrossNets: Possible Neuromorphic Nanoelectronic Circuits

Author: Jung Hoon Lee, Xiaolong Ma, Konstantin K. Likharev

Abstract: Hybrid “CMOL” integrated circuits, combining CMOS subsystem with nanowire crossbars and simple two-terminal nanodevices, promise to extend the exponential Moore-Law development of microelectronics into the sub-10-nm range. We are developing neuromorphic network (“CrossNet”) architectures for this future technology, in which neural cell bodies are implemented in CMOS, nanowires are used as axons and dendrites, while nanodevices (bistable latching switches) are used as elementary synapses. We have shown how CrossNets may be trained to perform pattern recovery and classification despite the limitations imposed by the CMOL hardware. Preliminary estimates have shown that CMOL CrossNets may be extremely dense (~10^7 cells per cm^2) and operate approximately a million times faster than biological neural networks, at manageable power consumption. In Conclusion, we discuss in brief possible short-term and long-term applications of the emerging technology.

4 0.54949784 188 nips-2005-Temporally changing synaptic plasticity

Author: Minija Tamosiunaite, Bernd Porr, Florentin Wörgötter

Abstract: Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. In this study we investigate how these signals could temporally interact at dendrites leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We will analyze a situation where synapse plasticity characteristics change in the course of time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local D-spikes, a slow, unspecific growth process is induced. As soon as the soma begins to spike this process is replaced by fast synaptic changes as the consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. This way a winner-take-all-mechanism emerges in a two-stage process, enhancing the best-correlated inputs. These results suggest that synaptic plasticity is a temporal changing process by which the computational properties of dendrites or complete neurons can be substantially augmented. 1

5 0.47625241 15 nips-2005-A Theoretical Analysis of Robust Coding over Noisy Overcomplete Channels

Author: Eizaburo Doi, Doru C. Balcan, Michael S. Lewicki

Abstract: Biological sensory systems are faced with the problem of encoding a high-fidelity sensory signal with a population of noisy, low-fidelity neurons. This problem can be expressed in information theoretic terms as coding and transmitting a multi-dimensional, analog signal over a set of noisy channels. Previously, we have shown that robust, overcomplete codes can be learned by minimizing the reconstruction error with a constraint on the channel capacity. Here, we present a theoretical analysis that characterizes the optimal linear coder and decoder for one- and twodimensional data. The analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides a number of important insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions to achieve robustness. We also report numerical solutions for robust coding of highdimensional image data and show that these codes are substantially more robust compared against other image codes such as ICA and wavelets. 1

6 0.4706555 165 nips-2005-Response Analysis of Neuronal Population with Synaptic Depression

7 0.45728981 81 nips-2005-Gaussian Processes for Multiuser Detection in CDMA receivers

8 0.43867218 140 nips-2005-Nonparametric inference of prior probabilities from Bayes-optimal behavior

9 0.42567831 167 nips-2005-Robust design of biological experiments

10 0.39804536 99 nips-2005-Integrate-and-Fire models with adaptation are good enough

11 0.35380292 128 nips-2005-Modeling Memory Transfer and Saving in Cerebellar Motor Learning

12 0.35010475 118 nips-2005-Learning in Silicon: Timing is Everything

13 0.34976885 164 nips-2005-Representing Part-Whole Relationships in Recurrent Neural Networks

14 0.34949034 181 nips-2005-Spiking Inputs to a Winner-take-all Network

15 0.33695528 68 nips-2005-Factorial Switching Kalman Filters for Condition Monitoring in Neonatal Intensive Care

16 0.33466181 29 nips-2005-Analyzing Coupled Brain Sources: Distinguishing True from Spurious Interaction

17 0.31935927 73 nips-2005-Fast biped walking with a reflexive controller and real-time policy searching

18 0.30733407 132 nips-2005-Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity

19 0.30038932 134 nips-2005-Neural mechanisms of contrast dependent receptive field size in V1

20 0.29749352 139 nips-2005-Non-iterative Estimation with Perturbed Gaussian Markov Processes


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.038), (10, 0.038), (11, 0.024), (27, 0.025), (31, 0.038), (34, 0.077), (55, 0.027), (57, 0.407), (60, 0.016), (69, 0.057), (70, 0.011), (73, 0.041), (88, 0.078), (91, 0.035)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.92975163 188 nips-2005-Temporally changing synaptic plasticity

Author: Minija Tamosiunaite, Bernd Porr, Florentin Wörgötter

Abstract: Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. In this study we investigate how these signals could interact temporally at dendrites, leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We analyze a situation in which synaptic plasticity characteristics change over time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local D-spikes, a slow, unspecific growth process is induced. As soon as the soma begins to spike, this process is replaced by fast synaptic changes as a consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. In this way a winner-take-all mechanism emerges in a two-stage process, enhancing the best-correlated inputs. These results suggest that synaptic plasticity is a temporally changing process by which the computational properties of dendrites or complete neurons can be substantially augmented. 1

same-paper 2 0.86183941 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression

Author: Misha Ahrens, Liam Paninski, Quentin J. Huys

Abstract: Our understanding of the input-output function of single cells has been substantially advanced by biophysically accurate multi-compartmental models. The large number of parameters needing hand tuning in these models has, however, somewhat hampered their applicability and interpretability. Here we propose a simple and well-founded method for automatic estimation of many of these key parameters: 1) the spatial distribution of channel densities on the cell’s membrane; 2) the spatiotemporal pattern of synaptic input; 3) the channels’ reversal potentials; 4) the intercompartmental conductances; and 5) the noise level in each compartment. We assume experimental access to: a) the spatiotemporal voltage signal in the dendrite (or some contiguous subpart thereof, e.g. via voltage sensitive imaging techniques), b) an approximate kinetic description of the channels and synapses present in each compartment, and c) the morphology of the part of the neuron under investigation. The key observation is that, given data a)-c), all of the parameters 1)-4) may be simultaneously inferred by a version of constrained linear regression; this regression, in turn, is efficiently solved using standard algorithms, without any “local minima” problems despite the large number of parameters and complex dynamics. The noise level 5) may also be estimated by standard techniques. We demonstrate the method’s accuracy on several model datasets, and describe techniques for quantifying the uncertainty in our estimates. 1
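
As a minimal sketch of the kind of constrained regression described above — least squares with non-negativity constraints, solved here with scipy's nnls — consider the toy problem below. The design matrix K (standing in for channel-current regressors derived from the observed voltage) and the "true" densities g_true are hypothetical; this is not the authors' implementation:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Synthetic stand-in for the regression the abstract describes:
# columns of K hold (hypothetical) channel-current regressors,
# and y holds the observed voltage increments. The true channel
# densities are non-negative, which motivates the constraint.
T, n_params = 500, 6
K = rng.standard_normal((T, n_params))
g_true = np.array([2.0, 0.0, 1.5, 0.5, 0.0, 3.0])   # channel densities
y = K @ g_true + 0.1 * rng.standard_normal(T)       # noisy dynamics

# Constrained linear regression: least squares subject to g >= 0.
g_hat, residual_norm = nnls(K, y)
print("estimated densities:", np.round(g_hat, 2))
print("residual norm      :", round(residual_norm, 3))
```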

3 0.84841383 11 nips-2005-A Hierarchical Compositional System for Rapid Object Detection

Author: Long Zhu, Alan L. Yuille

Abstract: We describe a hierarchical compositional system for detecting deformable objects in images. Objects are represented by graphical models. The algorithm uses a hierarchical tree whose root corresponds to the full object and whose lower-level elements correspond to simpler features. The algorithm proceeds by passing simple messages up and down the tree. The method works rapidly, in under a second, on 320 × 240 images. We demonstrate the approach on detecting cats, horses, and hands. The method works in the presence of background clutter and occlusions. Our approach is contrasted with more traditional methods such as dynamic programming and belief propagation. 1
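
A toy version of the upward message pass might look as follows, assuming a three-node tree, one-dimensional candidate positions, and a quadratic displacement cost; the tree, the feature scores, and the 0.05 penalty are illustrative stand-ins for the paper's graphical models:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pos = 20                      # candidate image positions (1-D for brevity)

# Hypothetical tree: node 0 is the root (full object); its children
# are simpler parts. scores[i, p] is part i's feature score at p.
children = {0: [1, 2], 1: [], 2: []}
scores = rng.standard_normal((3, n_pos))

def upward(node):
    """Pass messages up the tree: a node's belief at each position is
    its own score plus the best (max-marginalized) child beliefs,
    penalized by a quadratic displacement cost."""
    belief = scores[node].copy()
    for c in children[node]:
        child = upward(c)
        disp = (np.arange(n_pos)[:, None] - np.arange(n_pos)[None, :]) ** 2
        belief += np.max(child[None, :] - 0.05 * disp, axis=1)
    return belief

root_belief = upward(0)
print("best object position:", int(np.argmax(root_belief)))
```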

4 0.53439105 181 nips-2005-Spiking Inputs to a Winner-take-all Network

Author: Matthias Oster, Shih-Chii Liu

Abstract: Recurrent networks that perform a winner-take-all computation have been studied extensively. Although some of these studies include spiking networks, they consider only analog input rates. We present results of this winner-take-all computation on a network of integrate-and-fire neurons which receives spike trains as inputs. We show how we can configure the connectivity in the network so that the winner is selected after a pre-determined number of input spikes. We discuss spiking inputs with both regular frequencies and Poisson-distributed rates. The robustness of the computation was tested by implementing the winner-take-all network on an analog VLSI array of 64 integrate-and-fire neurons which have an innate variance in their operating parameters. 1
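
The spike-count mechanism can be caricatured in a few lines: each competitor counts its Poisson input spikes, and the first to reach a pre-determined count crosses threshold, at which point global inhibition (abstracted here as simply ending the race) silences the rest. The counting abstraction and all parameter values are assumptions standing in for the integrate-and-fire dynamics:

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_to_win = 8, 10                  # competitors; spikes needed to win
rates = rng.uniform(20, 80, n)       # Poisson input rates (Hz)
dt = 1e-3                            # 1 ms time step

counts = np.zeros(n, dtype=int)
for step in range(5000):
    counts += rng.random(n) < rates * dt       # Poisson input spikes
    winners = np.flatnonzero(counts >= n_to_win)
    if winners.size:
        # The first neuron to accumulate n_to_win input spikes crosses
        # threshold; global inhibition then silences the others.
        print("winner:", winners[0], "after", step + 1, "ms")
        break

print("highest-rate (most likely) winner:", int(np.argmax(rates)))
```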

5 0.46472898 61 nips-2005-Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches

Author: Anna Levina, Michael Herrmann

Abstract: There is experimental evidence that cortical neurons show avalanche activity with the intensity of firing events distributed as a power law. We present a biologically plausible extension of a neural network which exhibits a power-law avalanche distribution for a wide range of connectivity parameters. 1
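
A standard caricature of this result is a branching process: at the critical branching ratio sigma = 1, avalanche sizes follow a power law with exponent near -3/2. The toy model below is not the paper's dynamical-synapse network, just a sanity-check sketch of the claimed distribution:

```python
import numpy as np

rng = np.random.default_rng(4)

def avalanche_size(sigma=1.0, max_size=10_000):
    """Size of one avalanche in a branching process: each active
    unit activates Poisson(sigma) units in the next generation."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        active = rng.poisson(sigma * active)
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])

# Heavy tail: at criticality P(size >= s) decays like s**(-1/2),
# i.e. the size density follows a power law with exponent about -3/2.
for lo, hi in [(1, 10), (10, 100), (100, 1000), (1000, 10_000)]:
    frac = np.mean((sizes >= lo) & (sizes < hi))
    print(f"P(size in [{lo}, {hi})) = {frac:.3f}")
```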

6 0.44884929 129 nips-2005-Modeling Neural Population Spiking Activity with Gibbs Distributions

7 0.44353873 99 nips-2005-Integrate-and-Fire models with adaptation are good enough

8 0.43360418 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity

9 0.43141645 157 nips-2005-Principles of real-time computing with feedback applied to cortical microcircuit models

10 0.40945467 8 nips-2005-A Criterion for the Convergence of Learning with Spike Timing Dependent Plasticity

11 0.40776902 183 nips-2005-Stimulus Evoked Independent Factor Analysis of MEG Data with Large Background Activity

12 0.40413186 39 nips-2005-Beyond Pair-Based STDP: a Phenomenological Rule for Spike Triplet and Frequency Effects

13 0.39863351 176 nips-2005-Silicon growth cones map silicon retina

14 0.39541483 132 nips-2005-Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity

15 0.39061651 121 nips-2005-Location-based activity recognition

16 0.38984224 203 nips-2005-Visual Encoding with Jittering Eyes

17 0.38619 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions

18 0.38024428 118 nips-2005-Learning in Silicon: Timing is Everything

19 0.37314522 124 nips-2005-Measuring Shared Information and Coordinated Activity in Neuronal Networks

20 0.37300241 29 nips-2005-Analyzing Coupled Brain Sources: Distinguishing True from Spurious Interaction