nips nips2013 nips2013-205 knowledge-graph by maker-knowledge-mining

205 nips-2013-Multisensory Encoding, Decoding, and Identification


Source: pdf

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: We investigate a spiking neuron model of multisensory integration. Multiple stimuli from different sensory modalities are encoded by a single neural circuit comprised of a multisensory bank of receptive fields in cascade with a population of biophysical spike generators. We demonstrate that stimuli of different dimensions can be faithfully multiplexed and encoded in the spike domain and derive tractable algorithms for decoding each stimulus from the common pool of spikes. We also show that the identification of multisensory processing in a single neuron is dual to the recovery of stimuli encoded with a population of multisensory neurons, and prove that only a projection of the circuit onto input stimuli can be identified. We provide an example of multisensory integration using natural audio and video and discuss the performance of the proposed decoding and identification algorithms. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We investigate a spiking neuron model of multisensory integration. [sent-6, score-0.648]

2 Multiple stimuli from different sensory modalities are encoded by a single neural circuit comprised of a multisensory bank of receptive fields in cascade with a population of biophysical spike generators. [sent-7, score-1.121]

3 We demonstrate that stimuli of different dimensions can be faithfully multiplexed and encoded in the spike domain and derive tractable algorithms for decoding each stimulus from the common pool of spikes. [sent-8, score-0.392]

4 We also show that the identification of multisensory processing in a single neuron is dual to the recovery of stimuli encoded with a population of multisensory neurons, and prove that only a projection of the circuit onto input stimuli can be identified. [sent-9, score-1.606]

5 We provide an example of multisensory integration using natural audio and video and discuss the performance of the proposed decoding and identification algorithms. [sent-10, score-0.776]

6 Interestingly, recent studies demonstrated that multisensory integration takes place in brain areas that were traditionally considered to be unisensory [2, 6, 7]. [sent-15, score-0.589]

7 This is in contrast to classical brain models in which multisensory integration is relegated to anatomically established sensory convergence regions, after extensive unisensory processing has already taken place [4]. [sent-16, score-0.659]

8 Moreover, multisensory effects were shown to arise not solely due to feedback from higher cortical areas. [sent-17, score-0.476]

9 The computational principles of multisensory integration are still poorly understood. [sent-19, score-0.537]

10 Moreover, although multisensory neuron responses depend on several concurrently received stimuli, existing identification methods typically require separate experimental trials for each of the sensory modalities involved [4, 12, 13, 14]. [sent-21, score-0.743]

11 Doing so creates major challenges, especially when unisensory responses are weak or together do not account for the multisensory response. [sent-22, score-0.507]

12 Here we present a biophysically-grounded spiking neural circuit and a tractable mathematical methodology that together allow one to study the problems of multisensory encoding, decoding, and identification within a unified theoretical framework. [sent-23, score-0.584]

13 The circuit consists of a bank of multisensory receptive fields in cascade with a population of neurons that implement stimulus multiplexing in the spike domain. [sent-42, score-0.814]

14 , during attention) (c) joint processing and storage of multisensory signals/stimuli (e. [sent-50, score-0.476]

15 First we show that, under appropriate conditions, each of the stimuli processed by a multisensory circuit can be decoded loss-free from a common, unlabeled set of spikes. [sent-53, score-0.745]

16 These conditions provide clear lower bounds on the size of the population of multisensory neurons and the total number of spikes generated by the entire circuit. [sent-54, score-0.663]

17 We then discuss the open problem of identifying multisensory processing using concurrently presented sensory stimuli. [sent-55, score-0.567]

18 We show that the identification of multisensory processing in a single neuron is elegantly related to the recovery of stimuli encoded with a population of multisensory neurons. [sent-56, score-1.376]

19 2 Modeling Sensory Stimuli, their Processing and Encoding Our formal model of multisensory encoding, called the multisensory Time Encoding Machine (mTEM), is closely related to traditional TEMs [18]. [sent-59, score-0.952]

20 However, in contrast to traditional TEMs that encode one or more stimuli of the same dimension n, a general mTEM receives M input stimuli u_1, ..., u_M of dimensions n_1, ..., n_M. [sent-62, score-0.316]

21 For each neuron i = 1, ..., N, the results of this processing are aggregated into the dendritic current v^i flowing into the spike initiation zone, where it is encoded into a time sequence (t^i_k)_{k∈Z}, with t^i_k denoting the timing of the k-th spike of neuron i. [sent-74, score-0.496]

22 1b), the mapping of the current v^i into spikes is described by a set of equations known as the t-transform [18]: ∫ from t^i_k to t^i_{k+1} of v^i(s) ds = q^i_k, k ∈ Z, (1), where q^i_k = C^i δ^i − b^i (t^i_{k+1} − t^i_k). [sent-79, score-0.455]

23 Intuitively, at every spike time t^i_{k+1}, the ideal IAF neuron provides a measurement q^i_k of the current v^i(t) on the time interval [t^i_k, t^i_{k+1}). [sent-80, score-0.426]
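This measurement interpretation can be sanity-checked numerically. The sketch below is our illustration, not the authors' code; the current v(t), the parameters b, C, δ, and the function name are all hypothetical. It encodes a bandlimited current with an ideal IAF neuron and verifies that the integral of v between consecutive spikes matches q_k = Cδ − b(t_{k+1} − t_k):

```python
import numpy as np

def iaf_encode(v, dt, b, C, delta):
    """Ideal integrate-and-fire encoder: integrate (b + v)/C and emit a
    spike each time the integral reaches delta, carrying over the residual."""
    spikes, y = [], 0.0
    for k, vk in enumerate(v):
        y += dt * (b + vk) / C
        if y >= delta:
            spikes.append((k + 1) * dt)
            y -= delta
    return np.array(spikes)

# Hypothetical bandlimited current and encoder parameters.
dt = 1e-5
t = np.arange(0.0, 1.0, dt)
v = 0.3 * np.sin(2 * np.pi * 5 * t) + 0.2 * np.cos(2 * np.pi * 3 * t)
b, C, delta = 1.0, 1.0, 0.01
tk = iaf_encode(v, dt, b, C, delta)

# t-transform check: int_{t_k}^{t_{k+1}} v(s) ds = C*delta - b*(t_{k+1} - t_k),
# up to time-discretization error.
q = C * delta - b * np.diff(tk)
lhs = np.array([v[(t >= a) & (t < c)].sum() * dt for a, c in zip(tk[:-1], tk[1:])])
err = float(np.max(np.abs(lhs - q)))
```

Each inter-spike interval thus pins down one linear measurement of the current, which is exactly what the decoding results of Section 3 exploit.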

24 For practical and computational reasons we choose to work with the space of trigonometric polynomials H_{n_m} defined below, where each element of the space is a function in n_m variables (n_m ∈ N, m = 1, 2, ..., M). [sent-84, score-0.430]

25 Each element of H_{n_m} can be written as u_m(x_1, ..., x_{n_m}) = Σ_{l_1=−L_1}^{L_1} ··· Σ_{l_{n_m}=−L_{n_m}}^{L_{n_m}} u^m_{l_1...l_{n_m}} e_{l_1...l_{n_m}}(x_1, ..., x_{n_m}) over the domain D_{n_m} = Π_{n=1}^{n_m} [0, T_n]. [sent-95, score-1.560]

26 The basis functions are e_{l_1...l_{n_m}}(x_1, ..., x_{n_m}) = exp(Σ_{n=1}^{n_m} j l_n Ω_n x_n / L_n) / √(T_1 ··· T_{n_m}), with j denoting the imaginary unit. [sent-113, score-0.270]

27 H_{n_m} is endowed with the inner product ⟨·,·⟩ : H_{n_m} × H_{n_m} → C, where ⟨u_m, w_m⟩ = ∫_{D_{n_m}} u_m(x_1, ..., x_{n_m}) conj(w_m(x_1, ..., x_{n_m})) dx_1 ··· dx_{n_m}. (2) [sent-115, score-0.839]

28 Given the inner product in (2), the set of elements e_{l_1...l_{n_m}} forms an orthonormal basis of H_{n_m}. [sent-125, score-0.430]

29 In what follows, we will primarily be concerned with time-varying stimuli, and the dimension x_{n_m} will denote the temporal dimension t of the stimulus u_m, i.e., x_{n_m} = t. [sent-155, score-0.536]

30 We model audio stimuli u_m = u_m(t) as elements of the RKHS H_1 over the domain D_1 = [0, T_1]. [sent-161, score-0.669]

31 An audio signal u_m ∈ H_1 can be written as u_m(t) = Σ_{l=−L}^{L} u^m_l e_l(t), where the coefficients u^m_l ∈ C and e_l(t) = exp(jlΩt/L)/√T. [sent-163, score-0.357]
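Since the RKHS machinery may look abstract, here is a small numerical sketch (ours; T, L, and the random coefficients are arbitrary choices) showing that the basis e_l(t) = exp(jlΩt/L)/√T is orthonormal on [0, T] and that the coefficients of a real signal are recovered by the inner product ⟨u, e_l⟩:

```python
import numpy as np

# Hypothetical parameters for the space H1 on [0, T]; Omega/L = 2*pi/T so
# every basis function is T-periodic.
T, L = 1.0, 8
Omega = 2 * np.pi * L / T
dt = T / 4000
t = np.arange(0.0, T, dt)

def e(l):
    """Basis function e_l(t) = exp(j*l*Omega*t/L) / sqrt(T)."""
    return np.exp(1j * l * Omega * t / L) / np.sqrt(T)

# Hermitian-symmetric coefficients u_{-l} = conj(u_l) give a real signal.
rng = np.random.default_rng(0)
pos = rng.standard_normal(L) + 1j * rng.standard_normal(L)
coeffs = {0: complex(rng.standard_normal())}
for l in range(1, L + 1):
    coeffs[l], coeffs[-l] = pos[l - 1], np.conj(pos[l - 1])

u = sum(c * e(l) for l, c in coeffs.items())

# <u, e_l> = int_0^T u(t) * conj(e_l(t)) dt recovers each coefficient.
rec = {l: np.sum(u * np.conj(e(l))) * dt for l in coeffs}
max_err = max(abs(rec[l] - coeffs[l]) for l in coeffs)
```

The Riemann sum over a uniform grid is exact for these complex exponentials, so the recovered coefficients match to machine precision.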

32 A video signal u_m ∈ H_3 can be written as u_m(x, y, t) = Σ_{l_1=−L_1}^{L_1} Σ_{l_2=−L_2}^{L_2} Σ_{l_3=−L_3}^{L_3} u^m_{l_1 l_2 l_3} e_{l_1 l_2 l_3}(x, y, t), where the coefficients u^m_{l_1 l_2 l_3} ∈ C and the functions e_{l_1 l_2 l_3}(x, y, t) = exp(j l_1 Ω_1 x/L_1 + j l_2 Ω_2 y/L_2 + j l_3 Ω_3 t/L_3)/√(T_1 T_2 T_3). [sent-166, score-0.458]

33 Without loss of generality, we assume that such transformations involve convolution in the time domain (temporal dimension x_{n_m}) and integration in dimensions x_1, ..., x_{n_m−1}. [sent-174, score-0.331]

34 3 Multisensory Decoding Consider an mTEM comprised of a population of N ideal IAF neurons receiving M input signals u_m of dimensions n_m, m = 1, ..., M. [sent-215, score-0.841]

35 Assuming that the multisensory processing is given by kernels h^{im}, m = 1, ..., M, [sent-219, score-0.935]

36 i = 1, ..., N, the t-transform in (1) can be rewritten as T^{i1}_k[u_1] + T^{i2}_k[u_2] + ··· [sent-225, score-0.430]

37 ··· + T^{iM}_k[u_M] = q^i_k, k ∈ Z, (4), where T^{im}_k : H_{n_m} → R are linear functionals defined by T^{im}_k[u_m] = ∫_{t^i_k}^{t^i_{k+1}} ∫_{D_{n_m}} h^{im}(x_1, ... [sent-228, score-0.723]

38 We observe that each q^i_k in (4) is a real number representing a quantal measurement of all M stimuli, taken by neuron i on the interval [t^i_k, t^i_{k+1}). [sent-238, score-1.177]

39 We now demonstrate that it is possible to reconstruct the stimuli u_m, m = 1, ..., M. [sent-240, score-0.352]

40 (Multisensory Time Decoding Machine (mTDM)) Let M signals u_m ∈ H_{n_m} be encoded by a multisensory TEM comprised of N ideal IAF neurons and N × M receptive fields with full spectral support. [sent-248, score-1.405]

41 Then given the filter kernel coefficients h^{im}_{l_1...l_{n_m}}, i = 1, ..., N, [sent-250, score-0.465]

42 all inputs u_m can be perfectly recovered from their coefficients u^m_{l_1...l_{n_m}}. [sent-253, score-0.624]

43 The coefficients u^m_{l_1...l_{n_m}} are elements of u = Φ⁺q, where Φ⁺ denotes the pseudoinverse of Φ. [sent-274, score-0.290]

44 [Φ^{im}]_{kl} = h^{im}_{−l_1, −l_2, ..., −l_{n_m−1}, l_{n_m}} (t^i_{k+1} − t^i_k) for l_{n_m} = 0, [sent-288, score-0.602]

45 and [Φ^{im}]_{kl} = h^{im}_{−l_1, −l_2, ..., −l_{n_m−1}, l_{n_m}} L_{n_m} T_{n_m} [e_{l_{n_m}}(t^i_{k+1}) − e_{l_{n_m}}(t^i_k)] / (j l_{n_m} Ω_{n_m}) for l_{n_m} ≠ 0 (6), where the column index l traverses all possible subscript combinations of l_1, l_2, ..., l_{n_m}. [sent-291, score-0.415]
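To make the structure of the recovery u = Φ⁺q concrete, here is a stripped-down single-modality sketch (our illustration; one ideal IAF neuron, an identity receptive field, and all parameter values are simplifying assumptions): a trigonometric polynomial u is encoded into spikes, Φ is assembled from closed-form integrals of the basis functions over inter-spike intervals, and the coefficients are recovered by the pseudoinverse:

```python
import numpy as np

# Hypothetical 1-D stimulus space: u(t) = sum_l u_l e_l(t) on [0, T].
T, Lb = 1.0, 4
Omega = 2 * np.pi * Lb / T
ls = np.arange(-Lb, Lb + 1)
rng = np.random.default_rng(1)
pos = 0.1 * (rng.standard_normal(Lb) + 1j * rng.standard_normal(Lb))
u_true = np.concatenate([np.conj(pos[::-1]), [0.2 + 0j], pos])  # l = -Lb..Lb

def synth(coeffs, t):
    """Evaluate the (real) trigonometric polynomial with given coefficients."""
    return (np.exp(1j * np.outer(t, ls) * Omega / Lb) / np.sqrt(T) @ coeffs).real

# Encode v(t) = u(t) with an ideal IAF neuron (identity receptive field).
b, C, delta, dt = 1.0, 1.0, 0.02, 1e-5
t = np.arange(0.0, T, dt)
v = synth(u_true, t)
y, tk = 0.0, []
for k, vk in enumerate(v):
    y += dt * (b + vk) / C
    if y >= delta:
        tk.append((k + 1) * dt)
        y -= delta
tk = np.array(tk)
q = C * delta - b * np.diff(tk)          # t-transform measurements

# Phi[k, l] = int_{t_k}^{t_{k+1}} e_l(s) ds, in closed form.
Phi = np.empty((len(q), len(ls)), dtype=complex)
for col, l in enumerate(ls):
    if l == 0:
        Phi[:, col] = np.diff(tk) / np.sqrt(T)
    else:
        el = np.exp(1j * l * Omega * tk / Lb) / np.sqrt(T)
        Phi[:, col] = (el[1:] - el[:-1]) * Lb / (1j * l * Omega)

u_rec = np.linalg.pinv(Phi) @ q          # u = pinv(Phi) q
coef_err = float(np.max(np.abs(u_rec - u_true)))
```

With dozens of spikes against 2Lb + 1 = 9 unknown coefficients, the rank condition is easily met, and the recovery is accurate up to spike-time discretization error.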

46 A necessary condition for recovery is that the total number of spikes generated by all neurons is larger than Σ_{m=1}^{M} Π_{n=1}^{n_m} (2L_n + 1) + N. [sent-295, score-0.587]

47 If each neuron produces ν spikes, a sufficient condition is that the number of neurons N ≥ Σ_{m=1}^{M} ⌈Π_{n=1}^{n_m} (2L_n + 1) / min(ν − 1, 2L_{n_m} + 1)⌉. Proof: Substituting (5) into (4), q^i_k = T^{i1}_k[u_1] + ··· [sent-297, score-0.499]

48 In matrix form the above equality can be written as q^i = Φ^i u, with [q^i]_k = q^i_k and Φ^i = [Φ^{i1}, Φ^{i2}, ... [sent-322, score-0.499]

49 To find the coefficients [Φ^{im}]_{kl}, we note that [Φ^{im}]_{kl} = T^{im}_k(e_{l_1...l_{n_m}}). [sent-332, score-0.530]

50 The stacked vector u = [u_1; ...; u_M], with each vector u_m containing Π_{n=1}^{n_m} (2L_n + 1) entries corresponding to the coefficients u^m_{l_1...l_{n_m}}. [sent-351, score-0.388]

51 This system of linear equations can be solved for u, provided that the rank r(Φ) of the matrix Φ satisfies r(Φ) = Σ_m Π_{n=1}^{n_m} (2L_n + 1). [sent-365, score-0.430]

52 A necessary condition for the latter is that the total number of measurements generated by all N neurons is greater than or equal to Σ_m Π_{n=1}^{n_m} (2L_n + 1). [sent-366, score-0.506]

53 Equivalently, the total number of spikes produced by all N neurons should be greater than Σ_m Π_{n=1}^{n_m} (2L_n + 1) + N. [sent-367, score-0.588]

54 Figure 2: Multimodal TEM & TDM for audio and video integration. (a) Block diagram of the multimodal TEM. [sent-373, score-0.315]

55 Thus each neuron can produce a maximum of only 2L_{n_m} + 1 informative measurements, or equivalently, 2L_{n_m} + 2 informative spikes, on a time interval [0, T_{n_m}]. [sent-376, score-0.626]

56 It follows that for each modality we require at least ⌈Π_{n=1}^{n_m} (2L_n + 1)/(2L_{n_m} + 1)⌉ neurons if ν ≥ 2L_{n_m} + 2, and at least ⌈Π_{n=1}^{n_m} (2L_n + 1)/(ν − 1)⌉ neurons if ν < 2L_{n_m} + 2. [sent-377, score-0.582]
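These counts can be illustrated with made-up orders, say L1 = L2 = 2 spatial and L3 = 5 temporal components for a video-like modality, with ν = 8 spikes per neuron (all values hypothetical):

```python
import math

# Hypothetical bandwidth orders for one (video-like) modality and a
# hypothetical per-neuron spike count nu.
L = [2, 2, 5]        # (L1, L2, L3); the temporal order is the last entry
nu = 8
n_coeffs = math.prod(2 * l + 1 for l in L)   # (2L1+1)(2L2+1)(2L3+1) = 5*5*11 = 275
denom = min(nu - 1, 2 * L[-1] + 1)           # min(nu - 1, 2*L3 + 1) = min(7, 11) = 7
n_neurons = math.ceil(n_coeffs / denom)      # ceil(275 / 7) = 40 neurons
```

So 275 unknown coefficients at 7 informative measurements per neuron require at least 40 neurons for this modality.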

57 4 Multisensory Identification We now investigate the following nonlinear neural identification problem: given stimuli u_m, m = 1, ..., M, [sent-378, score-0.782]

58 at the input to a multisensory neuron i and spikes at its output, find the multisensory receptive field kernels h^{im}, m = 1, ..., M. [sent-381, score-1.258]

59 We will show that this problem is mathematically dual to the decoding problem discussed above. [sent-385, score-0.476]

60 We consider identifying kernels for only one multisensory neuron (identification for multiple neurons can be performed in a serial fashion) and drop the superscript i in h^{im} throughout this section. [sent-390, score-1.147]

61 Instead, we introduce the natural notion of performing multiple experimental trials and use the same superscript i to index stimuli u^{im} on different trials i = 1, ..., N. [sent-391, score-0.286]

62 Since for every trial i, an input signal u^{im}, m = 1, ..., M, ... [sent-397, score-0.588]

63 ..., x_{n_m}) by the reproducing property of the reproducing kernel K_{n_m}. [sent-412, score-0.294]

64 = ∫_{D_{n_m}} u^{im}(s_1, ..., s_{n_m}) ⋯ ds_{n_m−1} ds_{n_m}, [sent-422, score-1.050]

65 where (a) follows from the reproducing property and symmetry of K_{n_m} and Definition 2, and (b) from the definition of Ph_m in (3). [sent-446, score-0.946]

66 The t-transform in (1) can then be written as L^{i1}_k[Ph_1] + L^{i2}_k[Ph_2] + ··· [sent-448, score-0.430]

67 Figure 3: Multimodal CIM for audio and video integration. (a) Time encoding interpretation of the multimodal CIM. [sent-453, score-0.344]

68 The L^{im}_k : H_{n_m} → R, m = 1, ..., M, k ∈ Z, are linear functionals defined by L^{im}_k[Ph_m] = ∫_{t^i_k}^{t^i_{k+1}} ∫_{D_{n_m}} u^{im}(s_1, ... [sent-458, score-0.782]

69 Intuitively, each inter-spike interval [t^i_k, t^i_{k+1}) produced by the IAF neuron yields a time measurement q^i_k of the (weighted) sum of all kernel projections Ph_m, m = 1, ..., M. [sent-469, score-0.396]

70 Each projection Ph_m is determined by the corresponding stimuli u^{im}, i = 1, ..., N, [sent-474, score-0.286]

71 employed during identification, and can be substantially different from the underlying kernel h_m. [sent-477, score-0.948]

72 It follows that we should be able to identify the projections Ph_m, m = 1, ..., M. [sent-478, score-0.452]

73 Let u^{im}, i = 1, ..., N, m = 1, ..., M, be a collection of N linearly independent stimuli at the input to an mTEM circuit comprised of receptive fields with kernels h_m ∈ H_{n_m}, m = 1, ..., M. [sent-490, score-1.287]

74 Then the projections Ph_m, m = 1, ..., M, can be perfectly identified as (Ph_m)(x_1, ... [sent-506, score-0.860]

75 The coefficients h_{l_1...l_{n_m}} are elements of h = Φ⁺q, and Φ⁺ denotes the pseudoinverse of Φ. [sent-524, score-0.290]

76 [Φ^{im}]_{kl} = u^{im}_{−l_1, −l_2, ..., −l_{n_m−1}, l_{n_m}} (t^i_{k+1} − t^i_k) for l_{n_m} = 0, [sent-538, score-0.602]

77 and [Φ^{im}]_{kl} = u^{im}_{−l_1, −l_2, ..., −l_{n_m−1}, l_{n_m}} L_{n_m} T_{n_m} [e_{l_{n_m}}(t_{k+1}) − e_{l_{n_m}}(t_k)] / (j l_{n_m} Ω_{n_m}) for l_{n_m} ≠ 0 (8), where l traverses all subscript combinations of l_1, l_2, ..., l_{n_m}. [sent-541, score-0.415]

78 A necessary condition for identification is that the total number of spikes generated in response to all N trials is larger than Σ_{m=1}^{M} Π_{n=1}^{n_m} (2L_n + 1) + N. [sent-545, score-0.490]

79 If the neuron produces ν spikes on each trial, a sufficient condition is that the number of trials N ≥ Σ_{m=1}^{M} ⌈Π_{n=1}^{n_m} (2L_n + 1) / min(ν − 1, 2L_{n_m} + 1)⌉. [sent-546, score-0.626]

80 Proof: The equivalent representation of the t-transform in equations (4) and (7) implies that the decoding of the stimulus u_m (in Theorem 1) and the identification of the filter projections Ph_m encountered here are dual problems. [sent-547, score-1.150]
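This duality can be demonstrated end-to-end in a stripped-down 1-D setting (our sketch, not the authors' code; a single neuron, a purely temporal kernel, convolution diagonalized by the trigonometric basis, and all parameter values are simplifying assumptions). Known stimuli now play the role that the filter coefficients played in decoding, and the kernel coefficients come out as h = Φ⁺q:

```python
import numpy as np

T, Lb = 1.0, 3
Omega = 2 * np.pi * Lb / T
ls = np.arange(-Lb, Lb + 1)
rng = np.random.default_rng(2)

def real_coeffs(scale):
    """Random Hermitian-symmetric coefficients (so the signal is real)."""
    pos = scale * (rng.standard_normal(Lb) + 1j * rng.standard_normal(Lb))
    return np.concatenate([np.conj(pos[::-1]), [scale + 0j], pos])

def synth(coeffs, t):
    """Evaluate the trigonometric polynomial with given coefficients."""
    return (np.exp(1j * np.outer(t, ls) * Omega / Lb) / np.sqrt(T) @ coeffs).real

h_true = real_coeffs(0.1)                # "unknown" kernel to identify
b, C, delta, dt = 1.0, 1.0, 0.02, 1e-5
t = np.arange(0.0, T, dt)

rows, rhs = [], []
for trial in range(3):                   # N = 3 experimental trials
    u_l = real_coeffs(0.3)               # known stimulus on this trial
    # Periodic convolution diagonalizes: v_l = sqrt(T) * h_l * u_l.
    v = synth(np.sqrt(T) * h_true * u_l, t)
    y, tk = 0.0, []                      # ideal IAF encoder
    for k, vk in enumerate(v):
        y += dt * (b + vk) / C
        if y >= delta:
            tk.append((k + 1) * dt)
            y -= delta
    tk = np.array(tk)
    rhs.append(C * delta - b * np.diff(tk))
    Phi = np.empty((len(tk) - 1, len(ls)), dtype=complex)
    for col, l in enumerate(ls):
        if l == 0:
            Phi[:, col] = np.diff(tk) / np.sqrt(T)
        else:
            el = np.exp(1j * l * Omega * tk / Lb) / np.sqrt(T)
            Phi[:, col] = (el[1:] - el[:-1]) * Lb / (1j * l * Omega)
    rows.append(np.sqrt(T) * u_l[None, :] * Phi)  # stimulus takes the filter's place

h_rec = np.linalg.pinv(np.vstack(rows)) @ np.concatenate(rhs)
ident_err = float(np.max(np.abs(h_rec - h_true)))
```

Structurally this is the decoding computation with the roles of stimulus and filter coefficients swapped: the N trials act as an artificial population of N neurons.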

81 ..., M, are encoded with an mTEM comprised of N neurons and receptive fields u^{im}, i = 1, ..., N. [sent-551, score-0.807]

82 An analog audio signal u_1(t) and an analog video signal u_2(x, y, t) appear as inputs to temporal filters with kernels h^{i1}(t) and spatiotemporal filters with kernels h^{i2}(x, y, t), i = 1, ..., N. [sent-567, score-0.343]

83 In practice, the number of components could be different and would be determined by the bandwidth of input stimuli Ω, or equivalently the order L, and the number of spikes produced (Theorems 1-2). [sent-576, score-0.274]

84 Thus each spike train (t^i_k)_{k∈Z} carries information about two stimuli of completely different modalities (audio and video) and, under certain conditions, the entire collection of spike trains {t^i_k}_{i=1}^{N}, k ∈ Z, can provide a faithful representation of both signals. [sent-581, score-0.360]

85 To demonstrate the performance of the algorithm presented in Theorem 1, we simulated a multisensory TEM with each neuron having a non-separable spatiotemporal receptive field for video stimuli and a temporal receptive field for audio stimuli. [sent-582, score-1.217]

86 The mTEM produced a total of 360,000 spikes in response to a 6-second-long grayscale video and mono audio of Albert Einstein explaining the mass-energy equivalence formula E = mc². [sent-587, score-0.343]

87 A multisensory TDM was then used to reconstruct the video and audio stimuli from the produced set of spikes. [sent-591, score-0.849]

88 Furthermore, the stimuli now appear as kernels describing the filters, and the inputs to the circuit are the kernel projections Ph_m, m = 1, ..., M. [sent-600, score-0.316]

89 In other words, identification of a single neuron has been converted into a population encoding problem, where the artificially constructed population of N neurons is associated with the N spike trains generated in response to N experimental trials. [sent-604, score-0.885]

90 6 Conclusion We presented a spiking neural circuit for multisensory integration that encodes multiple information streams, e.g., audio and video. [sent-609, score-0.645]

91 We derived conditions for inverting the nonlinear operator describing the multiplexing and encoding in the spike domain and developed methods for identifying multisensory processing using concurrent stimulus presentations. [sent-612, score-0.666]

92 We provided explicit algorithms for multisensory decoding and identification and evaluated their performance using natural audio and video stimuli. [sent-613, score-0.715]

93 Our investigations brought to light a key duality between identification of multisensory processing in a single neuron and the recovery of stimuli encoded with a population of multisensory neurons. [sent-614, score-1.376]

94 Figure 5: Multisensory identification. (a) Original, identified, and error maps of the spatiotemporal RF at t = 15, 30, 45 ms. (b) Amplitude of the temporal RF over time [ms]. [sent-633, score-0.268]

95 On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. [sent-685, score-0.537]

96 Comparing bayesian models for o multisensory cue combination without mandatory integration. [sent-695, score-0.476]

97 The influence of visual and auditory receptive field organization on multisensory integration in the superior colliculus. [sent-702, score-0.658]

98 Linking neurons to behavior in multisensory perception: A computational review. [sent-705, score-0.552]

99 Recovery of stimuli encoded with a Hodgkin-Huxley neuron using conditional PRCs. [sent-742, score-0.352]

100 Reconstruction of sensory stimuli encoded with integrate-and-fire neurons with random thresholds. [sent-763, score-0.362]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('multisensory', 0.476), ('nm', 0.43), ('lnm', 0.29), ('xnm', 0.27), ('hnm', 0.207), ('um', 0.194), ('phm', 0.166), ('stimuli', 0.158), ('snm', 0.145), ('neuron', 0.136), ('uim', 0.128), ('audio', 0.123), ('aurel', 0.114), ('ti', 0.112), ('iaf', 0.104), ('im', 0.1), ('mtem', 0.083), ('receptive', 0.081), ('spike', 0.081), ('neurons', 0.076), ('dnm', 0.073), ('lazar', 0.073), ('tnm', 0.073), ('circuit', 0.072), ('video', 0.07), ('sensory', 0.07), ('qk', 0.069), ('identi', 0.062), ('integration', 0.061), ('spikes', 0.06), ('encoding', 0.06), ('encoded', 0.058), ('ude', 0.055), ('hm', 0.053), ('knm', 0.052), ('tem', 0.052), ('population', 0.051), ('spatiotemporal', 0.048), ('decoding', 0.046), ('temporal', 0.044), ('elnm', 0.041), ('mono', 0.041), ('plit', 0.041), ('tkim', 0.041), ('yevgeniy', 0.041), ('auditory', 0.04), ('modalities', 0.04), ('decoded', 0.039), ('row', 0.037), ('tems', 0.037), ('barry', 0.037), ('spiking', 0.036), ('kernel', 0.035), ('comprised', 0.034), ('bandwidth', 0.034), ('tk', 0.033), ('tn', 0.033), ('px', 0.032), ('dsnm', 0.031), ('mcim', 0.031), ('unisensory', 0.031), ('diagram', 0.031), ('multimodal', 0.03), ('trial', 0.03), ('kernels', 0.029), ('rkhs', 0.029), ('signals', 0.028), ('ideal', 0.028), ('stimulus', 0.028), ('frames', 0.028), ('dendritic', 0.028), ('grayscale', 0.027), ('eftychios', 0.025), ('elds', 0.025), ('lter', 0.025), ('circuits', 0.025), ('reproducing', 0.024), ('db', 0.023), ('opinion', 0.023), ('modality', 0.023), ('subscript', 0.022), ('produced', 0.022), ('lters', 0.022), ('projections', 0.022), ('recovery', 0.021), ('brain', 0.021), ('concurrently', 0.021), ('jlnm', 0.021), ('kayser', 0.021), ('konrad', 0.021), ('mplit', 0.021), ('multiplexed', 0.021), ('multiplexing', 0.021), ('tdm', 0.021), ('wallace', 0.021), ('wnm', 0.021), ('ynm', 0.021), ('cients', 0.021), ('el', 0.02), ('mse', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 205 nips-2013-Multisensory Encoding, Decoding, and Identification

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: We investigate a spiking neuron model of multisensory integration. Multiple stimuli from different sensory modalities are encoded by a single neural circuit comprised of a multisensory bank of receptive fields in cascade with a population of biophysical spike generators. We demonstrate that stimuli of different dimensions can be faithfully multiplexed and encoded in the spike domain and derive tractable algorithms for decoding each stimulus from the common pool of spikes. We also show that the identification of multisensory processing in a single neuron is dual to the recovery of stimuli encoded with a population of multisensory neurons, and prove that only a projection of the circuit onto input stimuli can be identified. We provide an example of multisensory integration using natural audio and video and discuss the performance of the proposed decoding and identification algorithms. 1

2 0.12445516 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

Author: David Carlson, Vinayak Rao, Joshua T. Vogelstein, Lawrence Carin

Abstract: With simultaneous measurements from ever increasing populations of neurons, there is a growing need for sophisticated tools to recover signals from individual neurons. In electrophysiology experiments, this classically proceeds in a two-step process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma process model. Importantly, we develop an online approximate inference scheme enabling real-time analysis, with performance exceeding the previous state-of-theart. Via exploratory data analysis—using data with partial ground truth as well as two novel data sets—we find several features of our model collectively contribute to our improved performance including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels. We hope to enable novel experiments simultaneously measuring many thousands of neurons and possibly adapting stimuli dynamically to probe ever deeper into the mysteries of the brain. 1

3 0.11222079 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

Author: Jasper Snoek, Richard Zemel, Ryan P. Adams

Abstract: Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interaction. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data. 1

4 0.098507538 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

Author: Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski

Abstract: With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron to neuron connectivity. In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time stimulating the neurons whose synaptic strength is most ambiguous, therefore we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. 1

5 0.095120125 121 nips-2013-Firing rate predictions in optimal balanced networks

Author: David G. Barrett, Sophie Denève, Christian K. Machens

Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1

6 0.086402051 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

7 0.08157102 173 nips-2013-Least Informative Dimensions

8 0.080431305 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels

9 0.079942331 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

10 0.078733146 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

11 0.077439003 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions

12 0.069155768 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity

13 0.067991182 178 nips-2013-Locally Adaptive Bayesian Multivariate Time Series

14 0.067757517 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models

15 0.063952915 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

16 0.062289562 51 nips-2013-Bayesian entropy estimation for binary spike train data using parametric prior knowledge

17 0.060228698 53 nips-2013-Bayesian inference for low rank spatiotemporal neural receptive fields

18 0.058422685 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively

19 0.053288396 210 nips-2013-Noise-Enhanced Associative Memories

20 0.052409384 353 nips-2013-When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.101), (1, 0.06), (2, -0.067), (3, -0.031), (4, -0.195), (5, -0.037), (6, -0.006), (7, -0.046), (8, 0.01), (9, 0.057), (10, -0.013), (11, -0.002), (12, -0.03), (13, -0.022), (14, 0.041), (15, 0.015), (16, 0.039), (17, 0.012), (18, -0.038), (19, -0.007), (20, -0.013), (21, 0.058), (22, 0.005), (23, -0.059), (24, -0.022), (25, -0.077), (26, -0.047), (27, 0.035), (28, -0.022), (29, -0.082), (30, -0.024), (31, 0.006), (32, -0.027), (33, -0.02), (34, -0.033), (35, -0.041), (36, -0.014), (37, -0.048), (38, -0.001), (39, -0.05), (40, 0.008), (41, 0.009), (42, 0.02), (43, 0.029), (44, -0.041), (45, 0.007), (46, -0.04), (47, 0.056), (48, -0.13), (49, -0.012)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96614695 205 nips-2013-Multisensory Encoding, Decoding, and Identification

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: We investigate a spiking neuron model of multisensory integration. Multiple stimuli from different sensory modalities are encoded by a single neural circuit comprised of a multisensory bank of receptive fields in cascade with a population of biophysical spike generators. We demonstrate that stimuli of different dimensions can be faithfully multiplexed and encoded in the spike domain and derive tractable algorithms for decoding each stimulus from the common pool of spikes. We also show that the identification of multisensory processing in a single neuron is dual to the recovery of stimuli encoded with a population of multisensory neurons, and prove that only a projection of the circuit onto input stimuli can be identified. We provide an example of multisensory integration using natural audio and video and discuss the performance of the proposed decoding and identification algorithms. 1

2 0.73324859 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models

Author: Il M. Park, Evan W. Archer, Nicholas Priebe, Jonathan W. Pillow

Abstract: We describe a set of fast, tractable methods for characterizing neural responses to high-dimensional sensory stimuli using a model we refer to as the generalized quadratic model (GQM). The GQM consists of a low-rank quadratic function followed by a point nonlinearity and exponential-family noise. The quadratic function characterizes the neuron’s stimulus selectivity in terms of a set linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range. Special cases of the GQM include the 2nd-order Volterra model [1, 2] and the elliptical Linear-Nonlinear-Poisson model [3]. Here we show that for “canonical form” GQMs, spectral decomposition of the first two response-weighted moments yields approximate maximumlikelihood estimators via a quantity called the expected log-likelihood. The resulting theory generalizes moment-based estimators such as the spike-triggered covariance, and, in the Gaussian noise case, provides closed-form estimators under a large class of non-Gaussian stimulus distributions. We show that these estimators are fast and provide highly accurate estimates with far lower computational cost than full maximum likelihood. Moreover, the GQM provides a natural framework for combining multi-dimensional stimulus sensitivity and spike-history dependencies within a single model. We show applications to both analog and spiking data using intracellular recordings of V1 membrane potential and extracellular recordings of retinal spike trains. 1

3 0.71749866 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

Author: Zhuo Wang, Alan Stocker, Daniel Lee

Abstract: In many neural systems, information about stimulus variables is often represented in a distributed manner by means of a population code. It is generally assumed that the responses of the neural population are tuned to the stimulus statistics, and most prior work has investigated the optimal tuning characteristics of one or a small number of stimulus variables. In this work, we investigate the optimal tuning for diffeomorphic representations of high-dimensional stimuli. We analytically derive the solution that minimizes the L2 reconstruction loss. We compared our solution with other well-known criteria such as maximal mutual information. Our solution suggests that the optimal weights do not necessarily decorrelate the inputs, and the optimal nonlinearity differs from the conventional equalization solution. Results illustrating these optimal representations are shown for some input distributions that may be relevant for understanding the coding of perceptual pathways. 1

4 0.67754871 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

Author: David Carlson, Vinayak Rao, Joshua T. Vogelstein, Lawrence Carin

Abstract: With simultaneous measurements from ever-increasing populations of neurons, there is a growing need for sophisticated tools to recover signals from individual neurons. In electrophysiology experiments, this classically proceeds in a two-step process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma process model. Importantly, we develop an online approximate inference scheme enabling real-time analysis, with performance exceeding the previous state-of-the-art. Via exploratory data analysis—using data with partial ground truth as well as two novel data sets—we find several features of our model collectively contribute to our improved performance, including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels. We hope to enable novel experiments simultaneously measuring many thousands of neurons and possibly adapting stimuli dynamically to probe ever deeper into the mysteries of the brain.
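The classical two-step pipeline the abstract improves upon (threshold, then cluster) can be sketched on synthetic data. This is only the baseline the paper contrasts with, not the Gamma-process model itself; the templates, thresholds, and trace are invented for illustration, and clustering uses a plain 2-means loop.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical single units with distinct (made-up) waveform templates.
t = np.linspace(0, 1, 32)
templates = np.stack([
    -1.0 * np.exp(-((t - 0.3) / 0.08) ** 2),   # deep, narrow spike
    -0.6 * np.exp(-((t - 0.3) / 0.20) ** 2),   # shallower, broader spike
])

# Synthetic voltage trace: non-overlapping spikes plus white noise.
trace = 0.02 * rng.standard_normal(20_000)
n_events = 0
for start in range(100, 19_800, 150):
    trace[start:start + 32] += templates[rng.integers(2)]
    n_events += 1

# Step (i): downward threshold crossings flag putative spikes.
thresh = -0.3
idx = np.flatnonzero((trace[1:] < thresh) & (trace[:-1] >= thresh)) + 1
snippets = np.stack([trace[i - 4:i + 28] for i in idx])

# Step (ii): cluster the waveforms into single units with plain 2-means,
# initialized at the deepest and shallowest detected waveforms.
centers = snippets[[snippets.min(1).argmin(), snippets.min(1).argmax()]]
for _ in range(20):
    d = ((snippets[:, None, :] - centers[None]) ** 2).sum(-1)
    assign = d.argmin(1)
    centers = np.stack([snippets[assign == k].mean(0) for k in (0, 1)])

print(f"{len(idx)} putative spikes detected, {n_events} events inserted")
```

The paper's point is precisely that detection and clustering done separately like this cannot account for colored noise, overlapping spikes, or waveform drift, which motivates joint Bayesian inference.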

5 0.6726824 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

Author: Jasper Snoek, Richard Zemel, Ryan P. Adams

Abstract: Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interactions. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data.
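The defining property that makes a determinantal point process capture inhibition is that the probability of a set of neurons firing together scales with the determinant of the corresponding kernel submatrix, which shrinks when the set contains similar (near-parallel) items. A minimal L-ensemble sketch, with invented 2-D latent embeddings standing in for the model's learned embeddings:

```python
import numpy as np

# Three "neurons" with 2-D latent embeddings; the kernel entry L[i, j]
# grows with embedding similarity, so determinants of submatrices shrink
# when similar neurons are selected together.
emb = np.array([[0.0, 0.0],
                [0.1, 0.0],    # very close to neuron 0
                [3.0, 3.0]])   # far from both
L = np.exp(-((emb[:, None] - emb[None]) ** 2).sum(-1))

def l_ensemble_prob(subset):
    """Unnormalized probability that exactly `subset` fires: det(L_subset)."""
    sub = np.ix_(subset, subset)
    return np.linalg.det(L[sub])

p_similar = l_ensemble_prob([0, 1])   # two near-identical neurons together
p_diverse = l_ensemble_prob([0, 2])   # two dissimilar neurons together
print(p_similar, p_diverse)
```

Because the near-identical pair gives an almost-singular submatrix, its determinant (and hence its co-firing probability) is close to zero, which is exactly the inhibitory/competitive structure the model exploits.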

6 0.66119957 121 nips-2013-Firing rate predictions in optimal balanced networks

7 0.62798148 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels

8 0.60682237 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

9 0.59848297 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

10 0.48345494 173 nips-2013-Least Informative Dimensions

11 0.47274035 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions

12 0.47003064 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

13 0.45877022 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively

14 0.42902187 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity

15 0.42712116 210 nips-2013-Noise-Enhanced Associative Memories

16 0.41979411 51 nips-2013-Bayesian entropy estimation for binary spike train data using parametric prior knowledge

17 0.41673976 53 nips-2013-Bayesian inference for low rank spatiotemporal neural receptive fields

18 0.38434845 136 nips-2013-Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream

19 0.37477499 284 nips-2013-Robust Spatial Filtering with Beta Divergence

20 0.36784154 178 nips-2013-Locally Adaptive Bayesian Multivariate Time Series


similar papers computed by LDA model

LDA for this paper:

topicId topicWeight

[(2, 0.01), (16, 0.022), (19, 0.015), (33, 0.082), (34, 0.084), (41, 0.025), (49, 0.073), (53, 0.39), (56, 0.064), (70, 0.046), (85, 0.016), (89, 0.051), (93, 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.80085564 205 nips-2013-Multisensory Encoding, Decoding, and Identification

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: We investigate a spiking neuron model of multisensory integration. Multiple stimuli from different sensory modalities are encoded by a single neural circuit comprised of a multisensory bank of receptive fields in cascade with a population of biophysical spike generators. We demonstrate that stimuli of different dimensions can be faithfully multiplexed and encoded in the spike domain and derive tractable algorithms for decoding each stimulus from the common pool of spikes. We also show that the identification of multisensory processing in a single neuron is dual to the recovery of stimuli encoded with a population of multisensory neurons, and prove that only a projection of the circuit onto input stimuli can be identified. We provide an example of multisensory integration using natural audio and video and discuss the performance of the proposed decoding and identification algorithms.
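The "biophysical spike generators" in this line of work are typically time encoding machines such as ideal integrate-and-fire (IAF) neurons. The sketch below is not the paper's multisensory circuit, only the single-neuron encoding step and its t-transform property on a toy stimulus; parameters are illustrative.

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 1, dt)
u = 0.5 * np.sin(2 * np.pi * 3 * t)    # toy bandlimited stimulus

def iaf_encode(u, dt, bias, thresh):
    """Ideal integrate-and-fire encoder: integrate (u + bias), emit a
    spike whenever the accumulator reaches the threshold, then reset
    by subtracting the threshold (carrying over the remainder)."""
    spike_idx, acc = [], 0.0
    for i, ui in enumerate(u):
        acc += (ui + bias) * dt
        if acc >= thresh:
            spike_idx.append(i)
            acc -= thresh
    return np.array(spike_idx)

idx = iaf_encode(u, dt, bias=1.0, thresh=0.05)
tk = idx * dt                          # spike times

# The t-transform: between consecutive spikes, the integral of u + bias
# equals the threshold (up to discretization error), so the spike times
# carry the stimulus and support the decoding machinery.
k = 5
integral = np.sum((u[idx[k] + 1: idx[k + 1] + 1] + 1.0) * dt)
print(f"{len(idx)} spikes; inter-spike integral = {integral:.5f} (threshold 0.05)")
```

Decoding then amounts to solving the linear system the t-transform defines over a suitable stimulus space, which is where the tractable algorithms mentioned in the abstract come in.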

2 0.42145208 137 nips-2013-High-Dimensional Gaussian Process Bandits

Author: Josip Djolonga, Andreas Krause, Volkan Cevher

Abstract: Many applications in machine learning require optimizing unknown functions defined over a high-dimensional space from noisy samples that are expensive to obtain. We address this notoriously hard challenge, under the assumptions that the function varies only along some low-dimensional subspace and is smooth (i.e., it has a low norm in a Reproducing Kernel Hilbert Space). In particular, we present the SI-BO algorithm, which leverages recent low-rank matrix recovery techniques to learn the underlying subspace of the unknown function and applies Gaussian Process Upper Confidence Bound sampling for optimization of the function. We carefully calibrate the exploration–exploitation tradeoff by allocating the sampling budget to subspace estimation and function optimization, and obtain the first subexponential cumulative regret bounds and convergence rates for Bayesian optimization in high dimensions under noisy observations. Numerical results demonstrate the effectiveness of our approach in difficult scenarios.
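The optimization component (GP-UCB) can be sketched in one dimension: fit a Gaussian process posterior to the observations so far, then query the point maximizing posterior mean plus an exploration bonus proportional to posterior standard deviation. This omits the SI-BO subspace-learning stage entirely; the objective, kernel length-scale, and bonus multiplier are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None]) / ell) ** 2)

f = lambda x: np.sin(6 * x)            # unknown objective on [0, 1]
grid = np.linspace(0, 1, 200)
noise = 1e-3

# Start from two noisy observations, then pick points by GP-UCB.
X = np.array([0.05, 0.95])
y = f(X) + np.sqrt(noise) * rng.standard_normal(X.shape)

for _ in range(20):
    K = rbf(X, X) + noise * np.eye(len(X))
    k_star = rbf(grid, X)
    mu = k_star @ np.linalg.solve(K, y)                      # posterior mean
    var = 1.0 - np.einsum('ij,ij->i', k_star,
                          np.linalg.solve(K, k_star.T).T)    # posterior variance
    ucb = mu + 2.0 * np.sqrt(np.clip(var, 0, None))          # exploration bonus
    x_next = grid[ucb.argmax()]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next) + np.sqrt(noise) * rng.standard_normal())

best = X[np.argmax(f(X))]
print(f"best query: {best:.3f}")
```

The tradeoff the abstract calibrates is visible here: a larger bonus multiplier forces more exploration of high-variance regions, a smaller one exploits the current mean estimate sooner.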

3 0.4076865 75 nips-2013-Convex Two-Layer Modeling

Author: Özlem Aslan, Hao Cheng, Xinhua Zhang, Dale Schuurmans

Abstract: Latent variable prediction models, such as multi-layer networks, impose auxiliary latent variables between inputs and outputs to allow automatic inference of implicit features useful for prediction. Unfortunately, such models are difficult to train because inference over latent variables must be performed concurrently with parameter optimization—creating a highly non-convex problem. Instead of proposing another local training method, we develop a convex relaxation of hidden-layer conditional models that admits global training. Our approach extends current convex modeling approaches to handle two nested nonlinearities separated by a non-trivial adaptive latent layer. The resulting methods are able to acquire two-layer models that cannot be represented by any single-layer model over the same features, while improving training quality over local heuristics.

4 0.386154 121 nips-2013-Firing rate predictions in optimal balanced networks

Author: David G. Barrett, Sophie Denève, Christian K. Machens

Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems.

5 0.37887797 303 nips-2013-Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis

Author: Nikhil Rao, Christopher Cox, Rob Nowak, Timothy T. Rogers

Abstract: Multitask learning can be effective when features useful in one task are also useful for other tasks, and the group lasso is a standard method for selecting a common subset of features. In this paper, we are interested in a less restrictive form of multitask learning, wherein (1) the available features can be organized into subsets according to a notion of similarity and (2) features useful in one task are similar, but not necessarily identical, to the features best suited for other tasks. The main contribution of this paper is a new procedure called Sparse Overlapping Sets (SOS) lasso, a convex optimization that automatically selects similar features for related learning tasks. Error bounds are derived for SOSlasso and its consistency is established for squared error loss. In particular, SOSlasso is motivated by multisubject fMRI studies in which functional activity is classified using brain voxels as features. Experiments with real and synthetic data demonstrate the advantages of SOSlasso compared to the lasso and group lasso. 1
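The group-lasso baseline that SOSlasso relaxes can be made concrete through its proximal operator, block soft-thresholding: each feature group is either zeroed out entirely or shrunk as a block. This sketches only the standard group lasso, not the SOS penalty; the helper name and data are illustrative.

```python
import numpy as np

def prox_group_lasso(w, groups, lam):
    """Block soft-thresholding: the proximal operator of the group-lasso
    penalty lam * sum_g ||w_g||_2, applied group by group."""
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        # Groups with small norm are zeroed; the rest shrink as a block.
        out[g] = 0.0 if norm <= lam else (1 - lam / norm) * w[g]
    return out

w = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
shrunk = prox_group_lasso(w, groups, lam=1.0)
print(shrunk)   # first group shrinks toward zero, second group is zeroed out
```

The all-or-nothing behavior per group is exactly the restriction the paper loosens: SOSlasso allows related tasks to pick similar but non-identical features within an overlapping-group structure.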

6 0.37792596 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

7 0.37739083 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

8 0.37657043 221 nips-2013-On the Expressive Power of Restricted Boltzmann Machines

9 0.37254593 131 nips-2013-Geometric optimisation on positive definite matrices for elliptically contoured distributions

10 0.37215185 266 nips-2013-Recurrent linear models of simultaneously-recorded neural populations

11 0.37168667 70 nips-2013-Contrastive Learning Using Spectral Methods

12 0.36985233 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

13 0.36932036 345 nips-2013-Variance Reduction for Stochastic Gradient Optimization

14 0.3691785 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

15 0.3686308 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

16 0.36845151 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

17 0.36640638 64 nips-2013-Compete to Compute

18 0.36579311 22 nips-2013-Action is in the Eye of the Beholder: Eye-gaze Driven Model for Spatio-Temporal Action Localization

19 0.36391175 173 nips-2013-Least Informative Dimensions

20 0.36382625 353 nips-2013-When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity