nips nips2010 nips2010-96 knowledge-graph by maker-knowledge-mining

96 nips-2010-Fractionally Predictive Spiking Neurons


Source: pdf

Author: Jaldert Rombouts, Sander M. Bohte

Abstract: Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of power-law kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spike-trains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. [sent-9, score-0.843]

2 Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. [sent-10, score-1.063]

3 A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. [sent-11, score-0.451]

4 Empirically, we find that the online approximation of signals with a sum of power-law kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. [sent-12, score-0.942]

5 For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. [sent-13, score-0.595]

6 As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spike-trains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel. [sent-14, score-1.234]

7 1 Introduction. A key issue in computational neuroscience is the interpretation of neural signaling, as expressed by a neuron’s sequence of action potentials. [sent-15, score-0.026]

8 An emerging notion is that neurons may in fact encode information at multiple timescales simultaneously [1, 2, 3, 4]: the precise timing of spikes may convey high-frequency information, while slower measures, like the rate of spiking, may relate low-frequency information. [sent-16, score-0.699]

9 Such multi-timescale encoding comes naturally, at least for sensory neurons, as the statistics of the outside world often exhibit self-similar multi-timescale features [5] and the magnitude of natural signals can extend over several orders of magnitude. [sent-17, score-0.465]

10 Since neurons are limited in the rate and resolution with which they can emit spikes, the mapping of large dynamic-range signals into spike-trains is an integral part of attempts at understanding neural coding. [sent-18, score-0.461]

11 Experiments have extensively demonstrated that neurons adapt their response when facing persistent changes in signal magnitude. [sent-19, score-0.575]

12 Typically, adaptation changes the relation between the magnitude of the signal and the neuron’s discharge rate. [sent-20, score-0.433]

13 Since adaptation thus naturally relates to neural coding, it has been extensively scrutinized [6, 7, 8]. [sent-21, score-0.169]

14 Tying the notions of self-similar multi-scale natural signals and adaptive neural coding together, it has recently been suggested that neuronal adaptation allows neuronal spiking to communicate a fractional derivative of the actual computed signal [10, 4]. [sent-25, score-1.844]

15 Fractional derivatives are a generalization of standard ‘integer’ derivatives (‘first order’, ‘second order’) to derivatives of real-valued order. [sent-26, score-0.261]

16 A key feature of such derivatives is that they are non-local: they convey information over essentially a large part of the signal spectrum [10]. [sent-30, score-0.402]

17 1 Here, we show how neural spikes can encode temporal signals when the spike-train itself is taken as the fractional derivative of the signal. [sent-31, score-1.4]

18 We show that this is the case for a signal approximated by a sum of shifted power-law kernels starting at respective times t_i and decaying proportionally to 1/(t − t_i)^β. [sent-32, score-0.853]

19 Then, the fractional derivative of this approximated signal corresponds to a sum of spikes at times t_i, provided that the order of fractional differentiation α is equal to 1 − β: a spike-train is the α = 0.2 fractional derivative of a signal approximated by a sum of power-law kernels with exponent β = 0.8. [sent-33, score-2.098] [sent-34, score-1.347]
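As a quick consistency check on this α = 1 − β relation (our own sketch, not a derivation quoted from the paper), the Riemann-Liouville fractional integral of a Dirac delta is a power law, so fractionally differentiating the power-law kernel of order 1 − β recovers the delta:

```latex
% Fractional integral of order \gamma applied to a Dirac delta (Riemann-Liouville):
\begin{align*}
  I^{\gamma}\,\delta(t) &= \frac{t^{\gamma-1}}{\Gamma(\gamma)}, \quad t>0
  \qquad\Rightarrow\qquad \gamma = 1-\beta \ \text{yields a kernel} \propto t^{-\beta},\\
  D^{1-\beta}\!\left[\frac{(t-t_i)^{-\beta}}{\Gamma(1-\beta)}\right] &= \delta(t-t_i)
  \qquad\Rightarrow\qquad
  D^{\alpha}\sum_i \frac{(t-t_i)^{-\beta}}{\Gamma(1-\beta)} = \sum_i \delta(t-t_i),
  \quad \alpha = 1-\beta .
\end{align*}
```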

21 Such signal encoding with power-law kernels can be carried out, for example, by simple standard thresholding spiking neurons with a refractory reset that follows a power-law. [sent-36, score-0.997]

22 As fractional derivatives contain information over many time-ranges, they are naturally suited for predicting signals. [sent-37, score-0.61]

23 This links to notions of predictive coding, where neurons communicate deviations from expected signals rather than the signal itself. [sent-38, score-1.036]

24 Predictive coding has been suggested as a key feature of neuronal processing in, e.g., the retina. [sent-39, score-0.182]

25 For self-similar scale-free signals, future signals may be influenced by past signals over very extended time-ranges: so-called long memory. [sent-42, score-0.568]

26 For example, fractional Brownian motion (fBm) can exhibit long memory, depending on its Hurst parameter H. [sent-43, score-0.555]

27 For H > 0.5, fBm models exhibit long-range dependence (long memory), where the autocorrelation function follows a power-law decay [12]. [sent-45, score-0.12]
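A minimal sketch of how such long-memory test signals can be synthesized (not the paper's code; the Cholesky construction below is exact but O(n^3), so it only suits short illustrative traces, and all names are illustrative):

```python
import numpy as np

def fgn_cholesky(n, H, rng=None):
    """Exact fractional Gaussian noise of length n with Hurst exponent H (Cholesky method)."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(n)
    # Autocovariance of fGn: 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H) + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]      # Toeplitz covariance matrix
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))   # small jitter for numerical stability
    return L @ rng.standard_normal(n)

# H > 0.5 gives long-range dependence: the increments' autocorrelation decays as a power law.
fbm_path = np.cumsum(fgn_cholesky(2048, H=0.8))
```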

28 The long-memory nature of signals approximated with sums of power-law kernels naturally extends this signal approximation into the future along the autocorrelation of the signal, at least for self-similar 1/f^γ-like signals. [sent-46, score-1.038]

29 The key “predictive” assumption we make is that a neuron’s spike-train up to time t contains all the information that the past signal contributes to the future signal at times t′ > t. [sent-47, score-0.614]

30 The correspondence between a spike-train as a fractional derivative and a signal approximated as a sum of power-law kernels is only exact when spike-trains are taken as a sum of Dirac-δ functions and the power-law kernels as 1/t^β. [sent-48, score-1.538]

31 As both responses are singular, neurons would only be able to approximate this. [sent-49, score-0.188]

32 We show empirically how sums of (approximated) 1/t^β power-law kernels can accurately approximate long-memory fBm signals via simple difference thresholding, in an online greedy fashion. [sent-50, score-0.5]
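A minimal sketch of such an online greedy difference-thresholding encoder, assuming a truncated non-singular power-law kernel and a fixed threshold; the paper's actual scheme may differ in details such as kernel scaling or how negative deviations are handled:

```python
import numpy as np

def powerlaw_kernel(length, beta, t0=1.0):
    """Truncated, non-singular 1/(t + t0)^beta kernel sampled at unit time steps."""
    t = np.arange(length, dtype=float)
    return (t + t0) ** (-beta)

def encode_greedy(x, kernel, theta):
    """Emit a spike (add one kernel) whenever the signal exceeds the running approximation by theta."""
    n, L = len(x), len(kernel)
    approx = np.zeros(n + L)
    spikes = []
    for t in range(n):
        if x[t] - approx[t] > theta:
            approx[t:t + L] += kernel      # each kernel extends the prediction into the future
            spikes.append(t)
    return np.array(spikes), approx[:n]

def snr_db(x, x_hat):
    """Reconstruction quality in dB: signal variance over residual variance."""
    return 10.0 * np.log10(np.var(x) / np.var(x - x_hat))
```

Swapping powerlaw_kernel for an exponentially decaying kernel of comparable initial decay gives the kind of spike-count-versus-SNR comparison described below.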

33 Encoding signals in this way, we show that the power-law kernels approximate synthesized signals with about half the number of spikes needed to obtain the same signal-to-noise ratio, when compared to the same encoding method using similar but exponentially decaying kernels. [sent-51, score-1.026]

34 We further demonstrate the approximation of sine wave modulated white-noise signals with sums of power-law kernels. [sent-52, score-0.525]
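A hedged sketch of such a test stimulus; the modulation frequency, depth, and duration here are placeholders rather than the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, f_mod = 1e-3, 4.0, 2.0                              # 1 ms steps, 4 s trace, 2 Hz modulation
t = np.arange(0.0, T, dt)
envelope = 1.0 + 0.5 * np.sin(2.0 * np.pi * f_mod * t)     # slowly varying sine-wave envelope
stimulus = envelope * rng.standard_normal(t.size)          # sine-wave-modulated white noise
# Encoding 'stimulus' targets the noise carrier; encoding 'envelope' targets the slow modulation.
```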

35 We find the effect is stronger when encoding the actual sine wave envelope, mimicking the difference between thalamic and cortical neurons reported in [4]. [sent-54, score-0.636]

36 This may suggest that these cortical neurons are more concerned with encoding the sine wave envelope. [sent-55, score-0.507]

37 The power-law approximation also allows for the transparent and straightforward implementation of temporal signal filtering by a post-synaptic, receiving neuron. [sent-56, score-0.514]

38 Since neural decoding by a receiving neuron corresponds to adding a power-law kernel for each received spike, modifying this receiving power-law kernel then corresponds to a temporal filtering operation, effectively exploiting the wide-spectrum nature of power-law kernels. [sent-57, score-0.619]

39 This is particularly relevant, since, as has been amply noted [9, 14], power-law dynamics can be closely approximated by a weighted sum or cascade of exponential kernels. [sent-58, score-0.238]

40 Temporal filtering would then correspond to simply tuning the weights for this sum or cascade. [sent-59, score-0.06]
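A sketch of this weighted-exponential view; the time constants, the exponent β, and the plain least-squares fit are illustrative assumptions:

```python
import numpy as np

t = np.arange(1.0, 1000.0)                                      # time axis, e.g. in ms
taus = np.array([2.0, 5.0, 12.0, 30.0, 75.0, 190.0, 480.0])     # roughly log-spaced decay constants
E = np.exp(-t[:, None] / taus[None, :])                         # basis of exponential kernels
target = t ** -0.2                                              # power-law kernel, beta = 0.2
w, *_ = np.linalg.lstsq(E, target, rcond=None)                  # mixture weights approximating the power law

full_kernel = E @ w                                             # ~ power-law decoding kernel
fast_kernel = E @ (w * (taus < 20.0))                           # fast components only: high-pass-like decode
slow_kernel = E @ (w * (taus >= 20.0))                          # slow components only: low-pass-like decode
```

Decoding then amounts to convolving the received spike-train with the chosen kernel, so re-weighting the exponentials re-tunes the temporal filter without changing the encoder.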

41 We illustrate this notion with an encoding/decoding example for both a high-pass and low-pass filter. [sent-60, score-0.03]

42 2 Power-law Signal Encoding. Neural processing can often be reduced to a Linear-Non-Linear (LNL) filtering operation on incoming signals [15] (figure 1), where inputs are linearly weighted and then passed through a non-linearity to yield the neural activation. [sent-61, score-0.368]

43 As this computation yields analog activations, and neurons communicate through spikes, the additional problem faced by spiking neurons is to decode the incoming signal and then encode the computed LNL filter again into a spike-train. [sent-62, score-0.985]

44 The standard spiking neuron model is that of Linear-Nonlinear-Poisson spiking, where spikes have a stochastic relationship to the computed activation [16]. [sent-63, score-0.695]
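For reference, a minimal Linear-Nonlinear-Poisson sketch; the filter shape, the softplus nonlinearity, and the rate scale are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3
stimulus = rng.standard_normal(5000)
lin_filter = np.exp(-np.arange(50) / 10.0)                 # linear stage: causal exponential filter
drive = np.convolve(stimulus, lin_filter)[:stimulus.size]
rate = 20.0 * np.log1p(np.exp(drive))                      # nonlinear stage: softplus rate (spikes/s)
spikes = rng.poisson(rate * dt)                            # Poisson stage: spike counts per time bin
```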

45 Here, we interpret the spike encoding and decoding in the light of processing and communicating signals with fractional derivatives [10]. [sent-64, score-1.1]

46 A spike-train can be decoded by placing a kernel at each spike time and summing these kernels [17]; keeping track of doublets and triplet spikes allows for even greater fidelity. [sent-66, score-0.567]

47 This approach, however, only worked for signals with a frequency response lacking low frequencies [17]. [sent-67, score-0.347]

48 Low-frequency changes lead to “adaptation”, where the kernel is adapted to fit the signal again [18]. [sent-68, score-0.38]

49 For long-range predictive coding, the absence of low frequencies leaves little to predict, as the effective correlation time of the signals is then typically very short as well [17]. [sent-69, score-0.448]

50 Using the notion of predictive coding in the context of (possible) long-range dependencies, we define the goal of signal encoding as follows: let a signal x_j(t) be the result of the continuous-time computation in neuron j up to time t, and let neuron j have emitted spikes t_j up to time t. [sent-70, score-1.719]

51 These spikes should be emitted such that the signal x_j(t′) for t′ < t is decoded up to some signal-to-noise ratio, and these spikes should be predictive for x_j(t′) for t′ > t in the sense that no additional spikes are needed at times t′ > t to convey the predictive information up to time t. [sent-71, score-1.86]
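Restated compactly (the decoding kernel κ and the SNR criterion below are our notation, not necessarily the paper's):

```latex
% Decoding: each spike t_j^i contributes one (power-law) kernel \kappa; the
% reconstruction must meet a target SNR on the past while needing no further
% spikes to represent the predictable part of the future.
\begin{align*}
  \hat{x}_j(t') &= \sum_{t_j^i \le t} \kappa\!\left(t' - t_j^i\right), \\
  \mathrm{SNR} &= 10 \log_{10}
      \frac{\operatorname{Var}\!\left[x_j(t')\right]}
           {\operatorname{Var}\!\left[x_j(t') - \hat{x}_j(t')\right]}, \qquad t' < t .
\end{align*}
```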

52 Taking kernels as a signal filter of fixed width, as in the general approach in [17], has the important drawback that the signal reconstruction incurs a delay for the duration of the filter: its detection cannot be communicated until the filter is actually matched to the signal. [sent-72, score-0.784]

53 Alternatively, a predictive coding approach could rely only on a very short backward-looking filter, minimizing the delay in the system, and continuously computing a forward predictive signal. [sent-74, score-0.485]

54 At any time in the future then, only deviations of the actual signal from this expectation are communicated. [sent-75, score-0.372]

55 2.1 Spike-trains as fractional derivative. As recent work has highlighted the possibility that neurons encode fractional derivatives, it is noteworthy that the non-local nature of fractional calculus offers a natural framework for predictive coding. [sent-77, score-2.102]

56 The fractional derivative r(t) of a signal x(t) is denoted as D^α x(t), and intuitively expresses: r(t) = d^α x(t) / dt^α, where α is the fractional order, e.g., α = 0.5. [sent-79, score-1.416]
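A minimal numerical sketch of this operator using the standard Grünwald-Letnikov discretization (not code from the paper):

```python
import numpy as np

def gl_fractional_derivative(x, alpha, dt):
    """Grunwald-Letnikov estimate of the order-alpha fractional derivative of a sampled signal x."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k       # (-1)^k * binomial(alpha, k), computed recursively
    r = np.array([np.dot(w[:i + 1], x[i::-1]) for i in range(n)])
    return r / dt ** alpha
```

Applied to a sum of regularized 1/(t − t_i)^β kernels with α = 1 − β, the output should concentrate into sharp peaks near the kernel onsets t_i, which is the spike-train reading of the signal described above.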

57 We assume that neurons carry out predictive coding by emitting spikes such that all predictive information is contained in the current spikes, and no more spikes will be fired if the signal follows this prediction. [sent-84, score-1.669]

58 Approximating spikes by Dirac-δ functions, we take the spike-train up to some time t0 to be the fractional derivative of the past signal, and to be fully predictive for the expected influence of the past signal on the future signal. [sent-85, score-1.507]

59 [Figure "Fractionally Predicting Spikes": a) x(t) and r(t); b) non-singular kernels, x(t), r(t), α-exp(τ = 10 ms); time axis in seconds.] [sent-88, score-0.172]

60 c) Approximated 1/t^β power-law kernel for different values of k from eq. [sent-95, score-0.06]

61 d) The approximated 1/t^β power-law kernel (blue line) can be decomposed as a weighted sum of α-functions with various decay time-constants (dashed lines). [sent-97, score-0.353]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('fractional', 0.49), ('spikes', 0.368), ('signal', 0.27), ('signals', 0.247), ('neurons', 0.188), ('kernels', 0.172), ('neuron', 0.17), ('predictive', 0.169), ('derivative', 0.166), ('spiking', 0.157), ('approximated', 0.148), ('encoding', 0.121), ('fbm', 0.112), ('lnl', 0.112), ('coding', 0.102), ('sine', 0.099), ('communicate', 0.092), ('receiving', 0.089), ('derivatives', 0.087), ('sums', 0.081), ('adaptation', 0.081), ('fractionally', 0.075), ('transparent', 0.075), ('decoding', 0.074), ('ti', 0.073), ('lter', 0.07), ('wave', 0.069), ('ltering', 0.066), ('powerlaw', 0.066), ('cwi', 0.066), ('exhibit', 0.065), ('timed', 0.06), ('sum', 0.06), ('kernel', 0.06), ('decaying', 0.057), ('decay', 0.055), ('decoded', 0.054), ('amsterdam', 0.054), ('encode', 0.052), ('suppression', 0.051), ('spike', 0.051), ('temporal', 0.051), ('changes', 0.05), ('emitted', 0.049), ('refractory', 0.049), ('neuronal', 0.049), ('netherlands', 0.047), ('delay', 0.045), ('convey', 0.045), ('past', 0.044), ('actual', 0.043), ('exponent', 0.041), ('notions', 0.041), ('thresholding', 0.04), ('response', 0.038), ('incoming', 0.038), ('life', 0.038), ('carry', 0.035), ('fourier', 0.035), ('naturally', 0.033), ('bohte', 0.033), ('timescales', 0.033), ('encodings', 0.033), ('rombouts', 0.033), ('spiketrain', 0.033), ('magnitude', 0.032), ('frequencies', 0.032), ('ms', 0.031), ('suggested', 0.031), ('weighted', 0.03), ('mimicking', 0.03), ('thalamic', 0.03), ('exponentials', 0.03), ('signaling', 0.03), ('communicating', 0.03), ('noteworthy', 0.03), ('lacking', 0.03), ('future', 0.03), ('cortical', 0.03), ('notion', 0.03), ('extensively', 0.029), ('deviations', 0.029), ('approximation', 0.029), ('synthesized', 0.028), ('autocorrelation', 0.028), ('conveying', 0.028), ('tying', 0.028), ('singular', 0.028), ('retina', 0.027), ('calculus', 0.027), ('triplet', 0.027), ('communicated', 0.027), ('snr', 0.027), ('exponents', 0.027), ('operation', 0.027), ('dashed', 0.027), ('neural', 0.026), ('stronger', 0.026), ('modulation', 0.026), ('delity', 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 96 nips-2010-Fractionally Predictive Spiking Neurons

Author: Jaldert Rombouts, Sander M. Bohte

Abstract: Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of powerlaw kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spiketrains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel. 1

2 0.17551766 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

Author: Nicholas Fisher, Arunava Banerjee

Abstract: From a functional viewpoint, a spiking neuron is a device that transforms input spike trains on its various synapses into an output spike train on its axon. We demonstrate in this paper that the function mapping underlying the device can be tractably learned based on input and output spike train data alone. We begin by posing the problem in a classification based framework. We then derive a novel kernel for an SRM0 model that is based on PSP and AHP like functions. With the kernel we demonstrate how the learning problem can be posed as a Quadratic Program. Experimental results demonstrate the strength of our approach. 1

3 0.14596979 161 nips-2010-Linear readout from a neural population with partial correlation data

Author: Adrien Wohrer, Ranulfo Romo, Christian K. Machens

Abstract: How much information does a neural population convey about a stimulus? Answers to this question are known to strongly depend on the correlation of response variability in neural populations. These noise correlations, however, are essentially immeasurable as the number of parameters in a noise correlation matrix grows quadratically with population size. Here, we suggest to bypass this problem by imposing a parametric model on a noise correlation matrix. Our basic assumption is that noise correlations arise due to common inputs between neurons. On average, noise correlations will therefore reflect signal correlations, which can be measured in neural populations. We suggest an explicit parametric dependency between signal and noise correlations. We show how this dependency can be used to ”fill the gaps” in noise correlations matrices using an iterative application of the Wishart distribution over positive definitive matrices. We apply our method to data from the primary somatosensory cortex of monkeys performing a two-alternativeforced choice task. We compare the discrimination thresholds read out from the population of recorded neurons with the discrimination threshold of the monkey and show that our method predicts different results than simpler, average schemes of noise correlations. 1

4 0.14265709 115 nips-2010-Identifying Dendritic Processing

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. 1

5 0.135794 16 nips-2010-A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model

Author: Sebastian Millner, Andreas Grübl, Karlheinz Meier, Johannes Schemmel, Marc-olivier Schwartz

Abstract: We describe an accelerated hardware neuron being capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor level simulations and in silicon on a prototype chip. The neuron is destined to be the hardware neuron of a highly integrated wafer-scale system reaching out for new computational paradigms and opening new experimentation possibilities. As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lays on parameterizability and reproduction of the analytical model. 1

6 0.11861367 157 nips-2010-Learning to localise sounds with spiking neural networks

7 0.11645449 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

8 0.11356699 268 nips-2010-The Neural Costs of Optimal Control

9 0.10718781 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

10 0.10503395 252 nips-2010-SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

11 0.10258681 279 nips-2010-Universal Kernels on Non-Standard Input Spaces

12 0.10108102 227 nips-2010-Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models

13 0.098402739 238 nips-2010-Short-term memory in neuronal networks through dynamical compressed sensing

14 0.087655567 253 nips-2010-Spike timing-dependent plasticity as dynamic filter

15 0.087499008 59 nips-2010-Deep Coding Network

16 0.084667698 18 nips-2010-A novel family of non-parametric cumulative based divergences for point processes

17 0.08214736 65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform

18 0.080492489 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication

19 0.07568524 145 nips-2010-Learning Kernels with Radiuses of Minimum Enclosing Balls

20 0.069372326 91 nips-2010-Fast detection of multiple change-points shared by many signals using group LARS


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.138), (1, 0.047), (2, -0.193), (3, 0.189), (4, 0.161), (5, 0.203), (6, 0.031), (7, 0.099), (8, 0.067), (9, -0.042), (10, 0.021), (11, 0.05), (12, 0.024), (13, 0.069), (14, 0.001), (15, -0.009), (16, 0.006), (17, -0.004), (18, -0.077), (19, -0.042), (20, 0.047), (21, 0.007), (22, -0.039), (23, -0.036), (24, -0.043), (25, -0.093), (26, -0.011), (27, -0.042), (28, 0.024), (29, 0.01), (30, 0.02), (31, 0.067), (32, -0.091), (33, 0.016), (34, 0.01), (35, -0.004), (36, 0.056), (37, 0.033), (38, 0.0), (39, 0.045), (40, 0.082), (41, 0.034), (42, 0.01), (43, 0.015), (44, 0.032), (45, 0.028), (46, -0.063), (47, -0.011), (48, -0.068), (49, 0.117)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98292226 96 nips-2010-Fractionally Predictive Spiking Neurons

Author: Jaldert Rombouts, Sander M. Bohte

Abstract: Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of powerlaw kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spiketrains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel. 1

2 0.80941558 115 nips-2010-Identifying Dendritic Processing

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. 1

3 0.77738881 157 nips-2010-Learning to localise sounds with spiking neural networks

Author: Dan Goodman, Romain Brette

Abstract: To localise the source of a sound, we use location-specific properties of the signals received at the two ears caused by the asymmetric filtering of the original sound by our head and pinnae, the head-related transfer functions (HRTFs). These HRTFs change throughout an organism’s lifetime, during development for example, and so the required neural circuitry cannot be entirely hardwired. Since HRTFs are not directly accessible from perceptual experience, they can only be inferred from filtered sounds. We present a spiking neural network model of sound localisation based on extracting location-specific synchrony patterns, and a simple supervised algorithm to learn the mapping between synchrony patterns and locations from a set of example sounds, with no previous knowledge of HRTFs. After learning, our model was able to accurately localise new sounds in both azimuth and elevation, including the difficult task of distinguishing sounds coming from the front and back. Keywords: Auditory Perception & Modeling (Primary); Computational Neural Models, Neuroscience, Supervised Learning (Secondary) 1

4 0.73484564 16 nips-2010-A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model

Author: Sebastian Millner, Andreas Grübl, Karlheinz Meier, Johannes Schemmel, Marc-olivier Schwartz

Abstract: We describe an accelerated hardware neuron being capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor level simulations and in silicon on a prototype chip. The neuron is destined to be the hardware neuron of a highly integrated wafer-scale system reaching out for new computational paradigms and opening new experimentation possibilities. As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lays on parameterizability and reproduction of the analytical model. 1

5 0.73224777 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

Author: Nicholas Fisher, Arunava Banerjee

Abstract: From a functional viewpoint, a spiking neuron is a device that transforms input spike trains on its various synapses into an output spike train on its axon. We demonstrate in this paper that the function mapping underlying the device can be tractably learned based on input and output spike train data alone. We begin by posing the problem in a classification based framework. We then derive a novel kernel for an SRM0 model that is based on PSP and AHP like functions. With the kernel we demonstrate how the learning problem can be posed as a Quadratic Program. Experimental results demonstrate the strength of our approach. 1

6 0.68015033 252 nips-2010-SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

7 0.61205149 253 nips-2010-Spike timing-dependent plasticity as dynamic filter

8 0.54761803 161 nips-2010-Linear readout from a neural population with partial correlation data

9 0.51797116 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

10 0.50957197 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication

11 0.49006051 18 nips-2010-A novel family of non-parametric cumulative based divergences for point processes

12 0.474646 227 nips-2010-Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models

13 0.45802471 268 nips-2010-The Neural Costs of Optimal Control

14 0.4198772 244 nips-2010-Sodium entry efficiency during action potentials: A novel single-parameter family of Hodgkin-Huxley models

15 0.39064783 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

16 0.3893545 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

17 0.38111958 238 nips-2010-Short-term memory in neuronal networks through dynamical compressed sensing

18 0.37678221 19 nips-2010-A rational decision making framework for inhibitory control

19 0.36262259 76 nips-2010-Energy Disaggregation via Discriminative Sparse Coding

20 0.35667351 81 nips-2010-Evaluating neuronal codes for inference using Fisher information


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.024), (17, 0.045), (27, 0.102), (30, 0.027), (35, 0.029), (45, 0.166), (50, 0.103), (52, 0.089), (60, 0.019), (77, 0.089), (90, 0.027), (98, 0.196)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.86482674 96 nips-2010-Fractionally Predictive Spiking Neurons

Author: Jaldert Rombouts, Sander M. Bohte

Abstract: Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of powerlaw kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spiketrains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel. 1

2 0.76190078 69 nips-2010-Efficient Minimization of Decomposable Submodular Functions

Author: Peter Stobbe, Andreas Krause

Abstract: Many combinatorial problems arising in machine learning can be reduced to the problem of minimizing a submodular function. Submodular functions are a natural discrete analog of convex functions, and can be minimized in strongly polynomial time. Unfortunately, state-of-the-art algorithms for general submodular minimization are intractable for larger problems. In this paper, we introduce a novel subclass of submodular minimization problems that we call decomposable. Decomposable submodular functions are those that can be represented as sums of concave functions applied to modular functions. We develop an algorithm, SLG, that can efÄ?Ĺš ciently minimize decomposable submodular functions with tens of thousands of variables. Our algorithm exploits recent results in smoothed convex minimization. We apply SLG to synthetic benchmarks and a joint classiÄ?Ĺš cation-and-segmentation task, and show that it outperforms the state-of-the-art general purpose submodular minimization algorithms by several orders of magnitude. 1

3 0.74072391 238 nips-2010-Short-term memory in neuronal networks through dynamical compressed sensing

Author: Surya Ganguli, Haim Sompolinsky

Abstract: Recent proposals suggest that large, generic neuronal networks could store memory traces of past input sequences in their instantaneous state. Such a proposal raises important theoretical questions about the duration of these memory traces and their dependence on network size, connectivity and signal statistics. Prior work, in the case of gaussian input sequences and linear neuronal networks, shows that the duration of memory traces in a network cannot exceed the number of neurons (in units of the neuronal time constant), and that no network can out-perform an equivalent feedforward network. However a more ethologically relevant scenario is that of sparse input sequences. In this scenario, we show how linear neural networks can essentially perform compressed sensing (CS) of past inputs, thereby attaining a memory capacity that exceeds the number of neurons. This enhanced capacity is achieved by a class of “orthogonal” recurrent networks and not by feedforward networks or generic recurrent networks. We exploit techniques from the statistical physics of disordered systems to analytically compute the decay of memory traces in such networks as a function of network size, signal sparsity and integration time. Alternately, viewed purely from the perspective of CS, this work introduces a new ensemble of measurement matrices derived from dynamical systems, and provides a theoretical analysis of their asymptotic performance. 1

4 0.73269141 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

Author: Ryan Kelly, Matthew Smith, Robert Kass, Tai S. Lee

Abstract: Activity of a neuron, even in the early sensory areas, is not simply a function of its local receptive field or tuning properties, but depends on global context of the stimulus, as well as the neural context. This suggests the activity of the surrounding neurons and global brain states can exert considerable influence on the activity of a neuron. In this paper we implemented an L1 regularized point process model to assess the contribution of multiple factors to the firing rate of many individual units recorded simultaneously from V1 with a 96-electrode “Utah” array. We found that the spikes of surrounding neurons indeed provide strong predictions of a neuron’s response, in addition to the neuron’s receptive field transfer function. We also found that the same spikes could be accounted for with the local field potentials, a surrogate measure of global network states. This work shows that accounting for network fluctuations can improve estimates of single trial firing rate and stimulus-response transfer functions. 1

5 0.72617131 109 nips-2010-Group Sparse Coding with a Laplacian Scale Mixture Prior

Author: Pierre Garrigues, Bruno A. Olshausen

Abstract: We propose a class of sparse coding models that utilizes a Laplacian Scale Mixture (LSM) prior to model dependencies among coefficients. Each coefficient is modeled as a Laplacian distribution with a variable scale parameter, with a Gamma distribution prior over the scale parameter. We show that, due to the conjugacy of the Gamma prior, it is possible to derive efficient inference procedures for both the coefficients and the scale parameter. When the scale parameters of a group of coefficients are combined into a single variable, it is possible to describe the dependencies that occur due to common amplitude fluctuations among coefficients, which have been shown to constitute a large fraction of the redundancy in natural images [1]. We show that, as a consequence of this group sparse coding, the resulting inference of the coefficients follows a divisive normalization rule, and that this may be efficiently implemented in a network architecture similar to that which has been proposed to occur in primary visual cortex. We also demonstrate improvements in image coding and compressive sensing recovery using the LSM model. 1

6 0.72089881 51 nips-2010-Construction of Dependent Dirichlet Processes based on Poisson Processes

7 0.71908015 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

8 0.71690428 17 nips-2010-A biologically plausible network for the computation of orientation dominance

9 0.71564084 161 nips-2010-Linear readout from a neural population with partial correlation data

10 0.71207976 18 nips-2010-A novel family of non-parametric cumulative based divergences for point processes

11 0.70987988 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

12 0.70784736 65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform

13 0.70665962 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication

14 0.70616645 117 nips-2010-Identifying graph-structured activation patterns in networks

15 0.70041424 68 nips-2010-Effects of Synaptic Weight Diffusion on Learning in Decision Making Networks

16 0.69917101 98 nips-2010-Functional form of motion priors in human motion perception

17 0.69875854 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

18 0.69640154 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

19 0.69476175 33 nips-2010-Approximate inference in continuous time Gaussian-Jump processes

20 0.69168776 140 nips-2010-Layer-wise analysis of deep networks with Gaussian kernels