nips nips2006 nips2006-17 knowledge-graph by maker-knowledge-mining
Source: pdf
Title: A recipe for optimizing a time-histogram
Author: Hideaki Shimazaki, Shigeru Shinomoto
Abstract: The time-histogram method is a handy tool for capturing the instantaneous rate of spike occurrence. In most of the neurophysiological literature, the bin size that critically determines the goodness of the fit of the time-histogram to the underlying rate has been selected by individual researchers in an unsystematic manner. We propose an objective method for selecting the bin size of a time-histogram from the spike data, so that the time-histogram best approximates the unknown underlying rate. The resolution of the histogram increases, or the optimal bin size decreases, with the number of spike sequences sampled. It is notable that the optimal bin size diverges if only a small number of experimental trials are available from a moderately fluctuating rate process. In this case, any attempt to characterize the underlying spike rate will lead to spurious results. Given a paucity of data, our method can also suggest how many more trials are needed until the set of data can be analyzed with the required resolution.
Reference: text
1 Introduction
The shape of a peristimulus time histogram (PSTH) depends on the choice of the bin size. With too large a bin size, one cannot represent the detailed time-dependent rate, while with too small a bin size, the time-histogram fluctuates greatly and one cannot discern the underlying spike rate. There exists an ideal bin size for estimating the spike rate from each set of experimental data. We devised a method of selecting the bin size objectively, so that a PSTH best approximates the underlying rate, which is unknown. In the course of our study, we found an interesting paper that proposed an empirical method of choosing the histogram bin size for a probability density function (Rudemo, M. (1982), Scandinavian Journal of Statistics 9: 65-78 [3]). Given a set of experimental data, we wish not only to determine the optimal bin size, but also to estimate how many more experimental trials should be performed in order to obtain a resolution we deem sufficient. A theoretical analysis revealed that the optimal bin size may diverge for a small number of spike sequences derived from a moderately fluctuating rate [4]. This implies that any attempt to characterize the underlying rate from such data will lead to spurious results. The present method can indicate the divergence of the optimal bin size using only the spike data.
2 Methods
We consider sequences of spikes repeatedly recorded from identical experimental trials. A recent analysis revealed that in vivo spike trains are not simply random, but possess inter-spike-interval distributions intrinsic and specific to individual neurons [5, 6]. However, spikes accumulated from a large number of spike trains recorded from a single neuron are, in the majority, mutually independent. Being free from the intrinsic inter-spike-interval distributions of individual spike trains, the accumulated spikes can be regarded as being derived repeatedly from Poisson processes with an identical time-dependent rate [7, 8].
The goodness of the fit of an estimated rate λ̂t to the underlying rate λt is assessed by the mean integrated squared error,
MISE ≡ (1/T) ∫₀^T E(λ̂t − λt)² dt. (1)
We suggest a method for minimizing the MISE with respect to the bin size ∆. The difficulty of the present problem comes from the fact that the underlying spike rate λt is not known.
2.1 Selection of the bin size
We choose the (bar-graph) PSTH as the estimator λ̂t of the rate λt, and explore a method to select the bin size of a PSTH that minimizes the MISE in Eq. (1). A PSTH is constructed simply by counting the number of spikes that belong to each bin. The number of spikes accumulated from all n sequences in the ith interval is counted as k_i. The bar height at the ith bin is given by k_i/(n∆).
Given a bin of width ∆, the expected height of a bar graph for t ∈ [0, ∆] is the time-averaged rate,
θ = (1/∆) ∫₀^∆ λt dt. (2)
The total number of spikes k from n spike sequences that enter a bin of width ∆ obeys a Poisson distribution with expected count n∆θ:
p(k | n∆θ) = (n∆θ)^k e^(−n∆θ) / k!. (3)
The unbiased estimator for θ is given as θ̂ = k/(n∆), which is the empirical height of the bar graph for t ∈ [0, ∆].
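As a concrete illustration of this construction, the bar heights k_i/(n∆) can be computed in a few lines. The following is a minimal sketch; the function name and argument layout are ours, not from the paper:

```python
import numpy as np

def psth(spike_times, n_trials, delta, t_start, t_end):
    """Bar heights k_i / (n * Delta) of a PSTH with bin width Delta.

    spike_times pools the spikes of all n_trials sequences into one array.
    """
    edges = np.arange(t_start, t_end + delta, delta)
    counts, _ = np.histogram(spike_times, bins=edges)  # k_i, pooled over trials
    return counts / (n_trials * delta), edges
```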
By segmenting the total observation period T into N intervals of width ∆, the MISE defined in Eq. (1) can be expressed as an average over the segments. Hereafter we denote the average over the segmented rates λ_{t+(i−1)∆} as an average ⟨·⟩ over an ensemble of (segmented) rate functions {λt} defined on the interval t ∈ [0, ∆]:
MISE = (1/∆) ∫₀^∆ ⟨E(θ̂ − λt)²⟩ dt. (5)
Table 1: A method for bin size selection for a PSTH
(i) Divide the observation period T into N bins of width ∆, and count the number of spikes k_i from all n sequences that enter the ith bin.
(ii) Construct the mean and variance of the spike counts {k_i}:
k̄ ≡ (1/N) Σ_{i=1}^N k_i, and v ≡ (1/N) Σ_{i=1}^N (k_i − k̄)².
(iii) Compute the cost function
Cn(∆) = (2k̄ − v) / (n∆)².
(iv) Repeat (i) through (iii) while changing the bin size ∆, to search for the ∆* that minimizes Cn(∆).
The expectation E now refers to the average over the spike count, or θ̂ = k/(n∆), given a rate function λt or its mean value θ. The MISE in Eq. (5) then decomposes as
MISE = ⟨E(θ̂ − θ)²⟩ + (1/∆) ∫₀^∆ ⟨(θ − λt)²⟩ dt. (6)
The first and second terms are, respectively, the stochastic fluctuation of the estimator θ̂ around the expected mean rate θ, and the temporal fluctuation of λt around its mean θ over an interval of length ∆, averaged over the segments.
The second term of Eq. (6) can further be decomposed into two parts:
(1/∆) ∫₀^∆ ⟨(λt − ⟨θ⟩ + ⟨θ⟩ − θ)²⟩ dt = (1/∆) ∫₀^∆ ⟨(λt − ⟨θ⟩)²⟩ dt − ⟨(θ − ⟨θ⟩)²⟩. (7)
The first term of Eq. (7) represents the mean squared fluctuation of the underlying rate λt around the mean rate ⟨θ⟩, and is independent of the bin size ∆, because
(1/∆) ∫₀^∆ ⟨(λt − ⟨θ⟩)²⟩ dt = (1/T) ∫₀^T (λt − ⟨θ⟩)² dt. (8)
We define a cost function by subtracting this ∆-independent term from the original MISE:
Cn(∆) ≡ MISE − (1/∆) ∫₀^∆ ⟨(λt − ⟨θ⟩)²⟩ dt = ⟨E(θ̂ − θ)²⟩ − ⟨(θ − ⟨θ⟩)²⟩. (9)
This cost function corresponds to the "risk function" in the report by Rudemo [3]. The second term of Eq. (9) represents the temporal fluctuation of the expected mean rate θ across the individual intervals of length ∆. As the expected mean rate is not an observable quantity, we must replace the fluctuation of the expected mean rate with that of the observable estimator θ̂.
Using the decomposition rule for the unbiased estimator (E θ̂ = θ),
⟨E(θ̂ − ⟨E θ̂⟩)²⟩ = ⟨E(θ̂ − θ + θ − ⟨θ⟩)²⟩ = ⟨E(θ̂ − θ)²⟩ + ⟨(θ − ⟨θ⟩)²⟩, (10)
the cost function is transformed into
Cn(∆) = 2⟨E(θ̂ − θ)²⟩ − ⟨E(θ̂ − ⟨E θ̂⟩)²⟩. (11)
Due to the assumed Poisson nature of the point process, the number of spikes k counted in each bin obeys a Poisson distribution: the variance of k is equal to its mean. For the estimated rate θ̂ = k/(n∆), this variance-mean relation corresponds to
E(θ̂ − θ)² = (1/(n∆)) E θ̂. (12)
Substituting this into Eq. (11), the cost function is given as a function of the estimator θ̂:
Cn(∆) = (2/(n∆)) ⟨E θ̂⟩ − ⟨E(θ̂ − ⟨E θ̂⟩)²⟩. (13)
The optimal bin size is obtained by minimizing the cost function Cn(∆):
∆* ≡ arg min_∆ Cn(∆). (14)
By estimating the expectations in Eq. (13) with the sample spike counts, the method is converted into the user-friendly recipe summarized in Table 1.
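The Table 1 recipe translates directly into code. Below is a minimal Python sketch of steps (i) through (iv); the helper names and the candidate grid of bin sizes are our choices, not part of the paper:

```python
import numpy as np

def cost(spike_times, n_trials, delta, t_start, t_end):
    """Empirical cost C_n(Delta) = (2*kbar - v) / (n*Delta)^2 from Table 1."""
    edges = np.arange(t_start, t_end + delta, delta)
    counts, _ = np.histogram(spike_times, bins=edges)  # spike counts k_i per bin
    kbar = counts.mean()                               # sample mean of {k_i}
    v = counts.var()                                   # sample variance (1/N normalization)
    return (2.0 * kbar - v) / (n_trials * delta) ** 2

def optimal_bin_size(spike_times, n_trials, t_start, t_end, deltas):
    """Step (iv): scan candidate bin sizes and return the minimizer of C_n."""
    costs = [cost(spike_times, n_trials, d, t_start, t_end) for d in deltas]
    return deltas[int(np.argmin(costs))], np.array(costs)
```

Note that np.var uses the 1/N normalization by default, which matches the definition of v in step (ii).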
2.2 Extrapolation of the cost function
With the method developed in the preceding subsection, we can determine the optimal bin size for a given set of experimental data. In this subsection, we develop a method to estimate how the optimal bin size decreases when more experimental trials are added to the data set. Assume that we are in possession of n spike sequences.
The fluctuation of the expected mean rate, ⟨(θ − ⟨θ⟩)²⟩, in Eq. (10) is replaced with the empirical fluctuation of the time-histogram θ̂n, using the decomposition rule for the unbiased estimator θ̂n satisfying E θ̂n = θ:
⟨E(θ̂n − ⟨E θ̂n⟩)²⟩ = ⟨E(θ̂n − θ + θ − ⟨θ⟩)²⟩ = ⟨E(θ̂n − θ)²⟩ + ⟨(θ − ⟨θ⟩)²⟩. (15)
The expected cost function for m sequences can be obtained by substituting the above equation into the cost function. Using Eq. (12),
E(θ̂m − θ)² = (1/(m∆)) E θ̂m = (1/(m∆)) E θ̂n, (17)
and we obtain
Cm(∆|n) = (1/m − 1/n) (1/∆) ⟨E θ̂n⟩ + Cn(∆), (18)
where Cn(∆) is the original cost function computed from the n available sequences.
By replacing the expectations with sample spike-count averages, the cost function for m sequences can be extrapolated as Cm(∆|n) with this formula, using the sample mean k̄ and variance v of the numbers of spikes obtained from the n sequences at bin size ∆. It may come to pass that the original cost function Cn(∆) computed for n spike sequences does not have a minimum, or has a minimum at a bin size comparable to the observation period T. In such a case, with the method summarized in Table 2, one may estimate the critical number of sequences nc above which the cost function has a minimum at a finite bin size ∆*, and consider carrying out more experiments to obtain a reasonable rate estimate.
In the case that the optimal bin size exhibits a continuous divergence, the cost function can be expanded as
Cn(∆) ∼ µ (1/n − 1/nc) (1/∆) + u (1/∆²), (19)
where we have introduced nc and u, which are independent of n. Minimizing Eq. (19) with respect to 1/∆ gives 1/∆* = (µ/2u)(1/nc − 1/n), which is positive only for n > nc: the optimal bin size undergoes a phase transition from a vanishing 1/∆* for n < nc to a finite 1/∆* for n > nc. In this case, the inverse optimal bin size varies linearly in the vicinity of nc, as 1/∆* ∝ (1/nc − 1/n).
Table 2: A method for extrapolating the cost function for a PSTH
(A) Construct the extrapolated cost function,
Cm(∆|n) = (1/m − 1/n) k̄/(n∆²) + Cn(∆),
using the sample mean k̄ and variance v of the number of spikes obtained from the n available sequences of spikes.
(B) Search for the bin size ∆*_m that minimizes Cm(∆|n).
(C) Repeat (A) and (B) while changing m, and plot 1/∆*_m versus 1/m to search for the critical value 1/m = 1/n̂c above which 1/∆*_m practically vanishes.

We can estimate the critical value n̂c by applying this asymptotic relation to the set of ∆*_m estimated from Cm(∆|n) for various values of m:
1/∆*_m ∝ (1/n̂c − 1/m). (20)
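A sketch of this extrapolation follows, reusing the empirical cost from the Table 1 sketch above. The linear-fit step for locating 1/n̂c is one simple way to apply Eq. (20); the paper does not prescribe a particular fitting procedure:

```python
import numpy as np

def extrapolated_cost(m, n_trials, kbar, delta, cost_n):
    """Step (A): C_m(Delta|n) = (1/m - 1/n) * kbar / (n*Delta^2) + C_n(Delta)."""
    return (1.0 / m - 1.0 / n_trials) * kbar / (n_trials * delta ** 2) + cost_n

def estimate_nc(spike_times, n_trials, t_start, t_end, deltas, ms):
    """Steps (B)-(C): find Delta*_m for each m, then locate where 1/Delta*_m vanishes."""
    inv_m, inv_dstar = [], []
    for m in ms:
        costs = []
        for d in deltas:
            edges = np.arange(t_start, t_end + d, d)
            k, _ = np.histogram(spike_times, bins=edges)
            c_n = (2.0 * k.mean() - k.var()) / (n_trials * d) ** 2
            costs.append(extrapolated_cost(m, n_trials, k.mean(), d, c_n))
        inv_m.append(1.0 / m)
        inv_dstar.append(1.0 / deltas[int(np.argmin(costs))])
    # Fit 1/Delta*_m = a*(1/m) + b; the zero crossing 1/m = -b/a estimates 1/n_c.
    a, b = np.polyfit(inv_m, inv_dstar, 1)
    return -a / b  # n_c_hat = 1 / (-b/a)
```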
It should be noted that there are cases in which the optimal bin size exhibits a discontinuous divergence from a finite value. Even in such cases, the plot of {1/m, 1/∆*_m} can be useful in exploring a discontinuous transition from nonvanishing values of 1/∆*_m to practically vanishing values.
2.3 Theoretical cost function
In this subsection, we obtain a "theoretical" cost function directly from a process with a known underlying rate λt, and compare it with the "empirical" cost function, which can be evaluated without knowing the rate process. Note that this theoretical cost function is not available under real experimental conditions, in which the underlying rate is not known.
The present estimator θ̂ ≡ k/(n∆) is a uniformly minimum variance unbiased estimator (UMVUE) of θ, which achieves the lower bound of the Cramér-Rao inequality [9, 10]:
E(θ̂ − θ)² = −[ Σ_{k=0}^∞ p(k|θ) ∂² log p(k|θ)/∂θ² ]⁻¹ = θ/(n∆). (21)
Substituting this into Eq. (9), the cost function is represented as
Cn(∆) = ⟨θ⟩/(n∆) − ⟨(θ − ⟨θ⟩)²⟩ = µ/(n∆) − (1/∆²) ∫₀^∆ ∫₀^∆ φ(t₁ − t₂) dt₁ dt₂, (22)
where µ is the mean rate and φ(t) is the autocorrelation function of the rate fluctuation, λt − µ. Based on the symmetry φ(t) = φ(−t), the cost function can be rewritten as
Cn(∆) = µ/(n∆) − (1/∆²) ∫_{−∆}^{∆} (∆ − |t|) φ(t) dt
      ≈ µ/(n∆) − (1/∆) ∫_{−∞}^{∞} φ(t) dt + (1/∆²) ∫_{−∞}^{∞} |t| φ(t) dt, (23)
which can be identified with Eq. (19), with parameters given by
nc = µ / ∫_{−∞}^{∞} φ(t) dt, (24)
u = ∫_{−∞}^{∞} |t| φ(t) dt. (25)
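For a known rate process, Eq. (22) can be evaluated numerically. The sketch below assumes an exponential autocorrelation φ(t) = σ² exp(−|t|/τ) purely for illustration (the paper does not commit to this form; τ here is an assumed value, while µ and σ follow the Figure 1 caption). With this φ, Eq. (24) gives nc = µ/(2σ²τ):

```python
import numpy as np

def theoretical_cost(delta, n_trials, mu, phi, n_grid=200):
    """Eq. (22): mu/(n*Delta) - (1/Delta^2) * double integral of phi(t1 - t2)."""
    t = np.linspace(0.0, delta, n_grid)
    dt = t[1] - t[0]
    t1, t2 = np.meshgrid(t, t)
    integral = phi(t1 - t2).sum() * dt * dt   # crude Riemann approximation
    return mu / (n_trials * delta) - integral / delta ** 2

mu, sigma, tau, n = 30.0, 10.0, 0.5, 30       # tau is an assumed illustration value
phi = lambda t: sigma ** 2 * np.exp(-np.abs(t) / tau)
print(theoretical_cost(0.1, n, mu, phi))
print("n_c from Eq. (24):", mu / (2.0 * sigma ** 2 * tau))
```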
Figure 1: A: (Dots) The empirical cost function Cn(∆), computed from spike data according to the method in Table 1. (Solid line) The "theoretical" cost function, computed directly from the underlying fluctuating rate with Eq. (22). B: The underlying rate and sample spike sequences (top), with time-histograms made using three types of bin sizes: too small, optimal, and too large (below). Model parameters: the number of sequences n = 30; total observation period T = 30 [sec]; the mean rate µ = 30 [1/s]; the amplitude of rate fluctuation σ = 10 [1/s]; time scale of rate fluctuation τ = 0.
3 Results
Our first objective was to develop a method for selecting the ideal bin size, using spike sequences derived repeatedly from Poisson processes with a given identical rate λt. The MISE of the PSTH from the underlying rate is minimized by minimizing the cost function Cn(∆). Figure 1A displays the cost function computed with the method summarized in Table 1. This "empirical" cost function is compared with the "theoretical" cost function, Eq. (22), which is computed directly from the underlying rate λt. The figure shows that the empirical cost function is consistent with the theoretical cost function. The time-histogram constructed using the optimal bin size is compared with those constructed using non-optimal bin sizes in Fig. 1B, demonstrating the effectiveness of the present method of bin size selection.
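This experiment can be reproduced by drawing spike sequences from an inhomogeneous Poisson process, for example by thinning. A sketch under assumed settings follows; the sinusoidal rate is a stand-in, since the paper's figures specify only the mean µ, fluctuation amplitude σ, and timescale τ of the rate:

```python
import numpy as np

rng = np.random.default_rng(0)

def inhomogeneous_poisson(rate_fn, rate_max, t_end):
    """One spike train by thinning: propose at rate_max, keep with prob rate(t)/rate_max."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t >= t_end:
            return np.array(spikes)
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)

# Assumed stand-in rate with mean 30 [1/s] and fluctuation amplitude 10 [1/s]
T, n = 30.0, 30
rate = lambda t: 30.0 + 10.0 * np.sin(2.0 * np.pi * t)   # hypothetical 1 Hz modulation
trains = [inhomogeneous_poisson(rate, 40.0, T) for _ in range(n)]
pooled = np.concatenate(trains)   # pooled spikes from all n trials, as the method expects
```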
We also tested the method for extrapolating the cost function. Figures 2A and 2B demonstrate the extrapolated cost functions for several values of m, and the plot of {1/m, 1/∆*_m} used to estimate the critical value 1/m = 1/n̂c, above which 1/∆*_m practically vanishes. Figure 2C depicts the critical number n̂c estimated from smaller or larger numbers of spike sequences n. The empirically estimated critical number n̂c approximates the theoretically predicted critical number nc computed using Eq. (24). Note that the critical number is correctly estimated even from a small number of sequences, for which the optimal bin size practically diverges (n < nc).

Figure 2: A: Extrapolated cost functions Cm(∆|n), plotted against 1/∆ for m = 10, 20, and 30 sequences, computed from n = 10 sample sequences. B: The plot of {1/m, 1/∆*_m} used for estimating the critical value 1/m = 1/n̂c, above which 1/∆*_m practically vanishes. C: The estimated critical number n̂c as a function of the number of spike sequences n used to obtain the extrapolated cost function Cm(∆|n); the theoretical nc is depicted as the horizontal dashed line. Model parameters: the number of sequences n = 10; total observation period T = 30 [sec]; the mean rate µ = 30 [1/s]; the amplitude of rate fluctuation σ = 4 [1/s]; time scale of rate fluctuation τ = 0.
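Putting the sketches together, an end-to-end check of the two procedures might look like this (assuming the functions and the pooled data defined in the earlier sketches are in scope):

```python
import numpy as np

deltas = np.linspace(0.02, 2.0, 100)          # candidate bin sizes [s]

# Optimal bin size from the Table 1 recipe
d_star, costs = optimal_bin_size(pooled, n, 0.0, T, deltas)
print("optimal bin size:", d_star)

# Critical number of trials from the Table 2 recipe
nc_hat = estimate_nc(pooled, n, 0.0, T, deltas, ms=range(5, 41, 5))
print("estimated critical number of trials:", nc_hat)
```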
4 Summary
We have developed a method for optimizing the bin size, so that the PSTH best represents the (unknown) underlying spike rate.
For a small number of spike sequences derived from a modestly fluctuating rate, the cost function does not have a minimum, implying the uselessness of the rate estimation.
Our method can nevertheless extrapolate the cost function for any number of spike sequences, and suggest how many trials are needed in order to obtain a meaningful time-histogram with the required accuracy. The suitability of the present method was demonstrated by application to spike sequences generated by time-dependent Poisson processes.