nips nips2011 nips2011-225 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Richard Turner, Maneesh Sahani
Abstract: A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time. Although signal processing provides algorithms for so-called amplitude- and frequency-demodulation (AFD), there are well-known problems with all of the existing methods. Motivated by the fact that AFD is ill-posed, we approach the problem using probabilistic inference. The new approach, called probabilistic amplitude and frequency demodulation (PAFD), models instantaneous frequency using an auto-regressive generalization of the von Mises distribution, and the envelopes using Gaussian auto-regressive dynamics with a positivity constraint. A novel form of expectation propagation is used for inference. We demonstrate that although PAFD is computationally demanding, it outperforms previous approaches on synthetic and real signals in clean, noisy and missing data settings. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Probabilistic amplitude and frequency demodulation Richard E. [sent-1, score-0.665]
2 uk Abstract A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time. [sent-7, score-0.52]
3 Although signal processing provides algorithms for so-called amplitude- and frequency-demodulation (AFD), there are well-known problems with all of the existing methods. [sent-8, score-0.154]
4 The new approach, called probabilistic amplitude and frequency demodulation (PAFD), models instantaneous frequency using an auto-regressive generalization of the von Mises distribution, and the envelopes using Gaussian auto-regressive dynamics with a positivity constraint. [sent-10, score-1.175]
5 We demonstrate that although PAFD is computationally demanding, it outperforms previous approaches on synthetic and real signals in clean, noisy and missing data settings. [sent-12, score-0.19]
6 1 Introduction Amplitude and frequency demodulation (AFD) is the process by which a signal (yt ) is decomposed into the product of a slowly varying envelope or amplitude component (at ) and a quickly varying sinusoidal carrier (cos(φt )), that is yt = at cos(φt ). [sent-13, score-1.147]
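A minimal simulation sketch of this generative decomposition, $y_t = a_t \cos(\phi_t)$; the sample rate, envelope shape and IF trajectory below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Simulation of the AFD signal model y_t = a_t * cos(phi_t): a slowly varying
# positive envelope modulating a carrier whose phase advances at a slowly
# drifting instantaneous frequency. All parameter values are illustrative.
fs = 1000                                        # sample rate (Hz), assumed
t = np.arange(fs) / fs                           # one second of samples
a = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)        # slow positive envelope
inst_freq = 50 + 5 * np.sin(2 * np.pi * 1 * t)   # slowly varying IF (Hz)
phi = 2 * np.pi * np.cumsum(inst_freq) / fs      # phase = integrated IF
y = a * np.cos(phi)                              # observed signal
```

By construction the carrier magnitude never exceeds the envelope, which is the decomposition AFD tries to invert.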
7 In its general form this is an ill-posed problem [1], and so algorithms must impose implicit or explicit assumptions about the form of carrier and envelope to realise a solution. [sent-14, score-0.119]
8 In this paper we make the standard assumption that the amplitude variables are slowly varying positive variables, and the derivatives of the carrier phase, $\omega_t = \phi_t - \phi_{t-1}$, called the instantaneous frequencies (IFs), are also slowly varying variables. [sent-15, score-0.607]
9 It has been argued that the subbands of speech are well characterised by such a representation [2, 3] and so AFD has found a range of applications in audio processing including audio coding [4, 2], speech enhancement [5] and source separation [6], and it is used in hearing devices [5]. [sent-16, score-0.31]
10 AFD is also of importance in neural signal processing applications. [sent-18, score-0.154]
11 Within each such band, both the amplitude of the oscillation and the precise center frequencies may vary with time; and both of these phenomena may reveal important elements of the mechanism by which the field oscillation arises. [sent-20, score-0.408]
12 Because of these problems, the Hilbert method, which recovers an amplitude from the magnitude of the analytic signal, is still considered to be the benchmark despite a number of limitations [11, 12]. [sent-23, score-0.324]
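The Hilbert benchmark can be made concrete: the amplitude estimate is the magnitude of the analytic signal. Below is a minimal FFT-based construction of the analytic signal (equivalent in spirit to `scipy.signal.hilbert`); the test signal is an illustrative assumption:

```python
import numpy as np

# Hilbert-method amplitude demodulation: the envelope estimate is the magnitude
# of the analytic signal, built here by zeroing negative FFT frequencies.
def analytic_signal(y):
    n = len(y)
    Y = np.fft.fft(y)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:            # even length: keep DC and Nyquist, double positives
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(Y * h)

t = np.arange(1000) / 1000.0
a_true = 1.5 + 0.5 * np.sin(2 * np.pi * 3 * t)   # slow positive envelope (assumed)
y = a_true * np.cos(2 * np.pi * 100 * t)
a_hat = np.abs(analytic_signal(y))               # Hilbert envelope estimate
```

For this narrow-band, noiseless example the estimate is essentially exact; the limitations discussed above appear once noise or wide-band modulation enters.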
13 In this paper, we show examples of demodulation of synthetic, audio, and hippocampal theta rhythm signals using various AFD techniques that highlight some of the anomalies associated with existing methods. [sent-24, score-0.33]
14 The limitations of these methods suggest an improved model (section 2) which we demonstrate on a range of synthetic and natural signals (sections 4 and 5). [sent-27, score-0.13]
15 1 Simple models for probabilistic amplitude and frequency demodulation In this paper, we view demodulation as an estimation problem in which a signal is fit with a sinusoid of time-varying amplitude and phase, $y_t = \Re\left(a_t \exp(i\phi_t)\right) + \epsilon_t$. [sent-29, score-1.542]
16 We are interested in the situation where the IF of the sinusoid varies slowly around a mean value $\bar{\omega}$. [sent-31, score-0.115]
17 In this case, the phase can be expressed in terms of the integrated mean frequency and a small perturbation, $\phi_t = \bar{\omega} t + \theta_t$. [sent-32, score-0.262]
Clearly, the problem of inferring $a_t$ and $\theta_t$ from $y_t$ is ill-posed, and results will depend on the specification of prior distributions over the amplitude and phase perturbation variables. [sent-33, score-0.589]
19 A simpler alternative is to generate the sinusoidal signal from a rotating two-dimensional phasor. [sent-35, score-0.184]
20 For example, re-parametrizing the likelihood in terms of the components $x_{1,t} = a_t \cos(\theta_t)$ and $x_{2,t} = a_t \sin(\theta_t)$ yields a linear likelihood function, $y_t = a_t\left(\cos(\bar{\omega} t)\cos(\theta_t) - \sin(\bar{\omega} t)\sin(\theta_t)\right) + \epsilon_t = \cos(\bar{\omega} t)x_{1,t} - \sin(\bar{\omega} t)x_{2,t} + \epsilon_t = \mathbf{w}_t^{\mathsf{T}}\mathbf{x}_t + \epsilon_t$. [sent-36, score-0.183]
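This re-parametrization is easy to check numerically; the sketch below (with arbitrary illustrative amplitudes, phases and mean frequency) verifies that the non-linear form $a_t \cos(\bar{\omega} t + \theta_t)$ and the linear form in $x_{1,t}, x_{2,t}$ agree:

```python
import numpy as np

# Numerical check of the re-parametrization: a*cos(wbar*t + theta) equals
# cos(wbar*t)*x1 - sin(wbar*t)*x2 with x1 = a*cos(theta), x2 = a*sin(theta).
rng = np.random.default_rng(0)
wbar = 2 * np.pi * 50 / 1000          # mean frequency in radians per sample
t = np.arange(200)
a = rng.uniform(0.1, 2.0, size=t.shape)
theta = rng.uniform(-np.pi, np.pi, size=t.shape)

x1, x2 = a * np.cos(theta), a * np.sin(theta)
lhs = a * np.cos(wbar * t + theta)                     # bilinear in (a, theta)
rhs = np.cos(wbar * t) * x1 - np.sin(wbar * t) * x2    # linear in (x1, x2)
```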
21 To complete the model, prior distributions can now be specified over $\mathbf{x}_t$. [sent-38, score-0.076]
22 (2) When the dynamical parameter tends to unity ($\lambda \to 1$) and the dynamical noise variance to zero ($\sigma_x^2 \to 0$), the dynamics become very slow, and this slowness is inherited by the phase perturbations and amplitudes. [sent-40, score-0.252]
23 This model is an instance of the Bayesian Spectrum Estimation (BSE) model [13] (when λ = 1), but re-interpreted in terms of amplitude- and frequency-modulated sinusoids, rather than fixed frequency basis functions. [sent-41, score-0.128]
24 Now the full complex representation of the sinusoid is retained. [sent-45, score-0.06]
25 As before, the real part corresponds to the observed data, but the imaginary part is now treated explicitly as missing data, $y_t = \Re\left(x_{1,t}\cos(\bar{\omega} t) - x_{2,t}\sin(\bar{\omega} t) + i x_{1,t}\sin(\bar{\omega} t) + i x_{2,t}\cos(\bar{\omega} t)\right) + \epsilon_t$. (3) [sent-46, score-0.136]
The new form of the likelihood function can be expressed in vector form, $y_t = [1, 0]\mathbf{z}_t + \epsilon_t$, using a new set of variables, $\mathbf{z}_t$, which are rotated versions of the original variables, $\mathbf{z}_t = R(\bar{\omega} t)\mathbf{x}_t$, where $R(\theta) = \begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}$. (4) [sent-47, score-0.235]
An auto-regressive expression for the new variables, $\mathbf{z}_t$, can now be found using the fact that rotation matrices commute, $R(\theta_1 + \theta_2) = R(\theta_1)R(\theta_2) = R(\theta_2)R(\theta_1)$, together with the expression for the dynamics of the original variables, $\mathbf{x}_t$ (eq. 2): [sent-48, score-0.125]
$\mathbf{z}_t = \lambda R(\bar{\omega})R(\bar{\omega}(t-1))\mathbf{x}_{t-1} + R(\bar{\omega} t)\boldsymbol{\epsilon}_t = \lambda R(\bar{\omega})\mathbf{z}_{t-1} + \boldsymbol{\epsilon}'_t$, (5) where the noise is a zero-mean Gaussian with covariance $\langle\boldsymbol{\epsilon}'_t \boldsymbol{\epsilon}'^{\mathsf{T}}_t\rangle = R(\bar{\omega} t)\langle\boldsymbol{\epsilon}_t \boldsymbol{\epsilon}_t^{\mathsf{T}}\rangle R^{\mathsf{T}}(\bar{\omega} t) = \sigma_x^2 I$. [sent-49, score-0.096]
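Both the commutation identity and the rotated dynamics of eq. (5) can be verified with a short numerical sketch; the parameter values below are illustrative assumptions:

```python
import numpy as np

# Sketch of eq. (4)-(5): the 2-D rotation matrix, the commutation property of
# rotations, and one run of the rotated auto-regressive dynamics.
def R(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

lam, wbar, sigma_x = 0.99, 0.3, 0.05   # illustrative dynamics parameters
rng = np.random.default_rng(1)
z = np.array([1.0, 0.0])
for _ in range(100):                   # z_t = lam * R(wbar) @ z_{t-1} + noise
    z = lam * R(wbar) @ z + sigma_x * rng.standard_normal(2)
```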
29 2 Problems with simple models for probabilistic amplitude and frequency demodulation BSE-PPV is used to demodulate synthetic and natural signals in Figs. [sent-53, score-0.845]
30 Perhaps most unsatisfactory is the fact that the IF estimates are often ill-behaved, to the extent that they go negative, especially in regions where the amplitude of the signal is low. [sent-57, score-0.536]
31 It is easy to understand why this occurs by considering the prior distribution over amplitude and phase implied by our choice of prior distribution over $\mathbf{x}_t$ (or equivalently over $\mathbf{z}_t$), $p(a_t, \phi_t \mid a_{t-1}, \phi_{t-1}) = \frac{a_t}{2\pi\sigma_x^2} \exp\left(-\frac{1}{2\sigma_x^2}\left(a_t^2 + \lambda^2 a_{t-1}^2\right) + \frac{\lambda}{\sigma_x^2} a_t a_{t-1} \cos(\phi_t - \phi_{t-1} - \bar{\omega})\right)$. (6) [sent-58, score-0.632]
Phase and amplitude are dependent in the implied distribution, which is conditionally a uniform distribution over phase when the amplitude is zero and a strongly peaked von Mises distribution [15] when the amplitude is large. [sent-59, score-1.196]
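Concretely, reading eq. (6) as a function of the phase increment alone gives a von Mises factor whose concentration is $k = \lambda a_t a_{t-1} / \sigma_x^2$; the sketch below (illustrative values) shows the concentration collapsing at low amplitude and growing at high amplitude:

```python
# Von Mises concentration of the phase increment implied by eq. (6):
# k = lam * a_t * a_tm1 / sigma_x**2. Parameter values are illustrative.
lam, sigma_x = 0.99, 0.1

def phase_concentration(a_t, a_tm1):
    return lam * a_t * a_tm1 / sigma_x**2

k_small = phase_concentration(1e-3, 1e-3)   # vanishing amplitude: near-uniform phase
k_large = phase_concentration(2.0, 2.0)     # large amplitude: sharply peaked phase
```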
33 In some applications this may be desirable, but for signals like sounds it presents a problem. [sent-61, score-0.125]
34 Second, the same noiseless signal at different intensities will yield different estimated IF content. [sent-63, score-0.178]
35 When the phase-perturbations vary slowly ($\lambda \to 1$), there is no correlation between successive IFs ($\langle\omega_t \omega_{t-1}\rangle - \langle\omega_t\rangle\langle\omega_{t-1}\rangle \to 0$). [sent-66, score-0.055]
In the next section we will propose a new model for PAFD which addresses these problems, retaining the same likelihood function, but modifying the prior to include independent distributions over the phase and amplitude variables. [sent-75, score-0.654]
Figure 1: Comparison of AFD methods on a sinusoidally amplitude- and frequency-modulated sinusoid in broad-band noise. [sent-111, score-0.06]
38 The gray areas show the region where the true amplitude falls below the noise floor (a < σy ) and the estimates become less accurate. [sent-113, score-0.436]
39 2 PAFD using Auto-regressive and generalized von Mises distributions We have argued that the amplitude and phase variables in a model for PAFD should be independently parametrized, but that this introduces difficulties as the likelihood is highly non-linear in these variables. [sent-115, score-0.632]
40 25 3000 2000 1000 frequency /Hz 0 2500 2000 1500 1000 time /s Figure 2: AFD of a starling song. [sent-132, score-0.165]
41 The light gray bar indicates the problematic low amplitude region. [sent-134, score-0.37]
42 Bottom panels: IF estimates superposed onto the spectrum of the signal. [sent-135, score-0.081]
43 An important initial consideration is whether to use a representation for phase which is wrapped, θ ∈ (−π, π], or unwrapped, θ ∈ R. [sent-137, score-0.134]
It is therefore necessary to work with wrapped phases, and a sensible starting point for a prior is thus the von Mises distribution, $p(\theta \mid k, \mu) = \frac{1}{2\pi I_0(k)} \exp\left(k\cos(\theta - \mu)\right) = \mathrm{vonMises}(\theta; k, \mu)$, where $I_0$ is the modified Bessel function of order zero. [sent-139, score-0.22]
45 Crucially for our purposes, the von Mises distribution can be obtained by taking a bivariate isotropic Gaussian with an arbitrary mean, and conditioning onto the unit-circle (this connects with BSE-PPV, see eq. [sent-142, score-0.204]
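This construction can be checked numerically: on the unit circle, the log-density of an isotropic bivariate Gaussian differs from a von Mises log-density with concentration $|\mu|/\sigma^2$ and mean direction $\angle\mu$ only by an additive constant. A sketch with arbitrary illustrative parameters:

```python
import numpy as np

# On the unit circle x = (cos th, sin th), an isotropic bivariate Gaussian with
# mean mu reduces to a von Mises density: the two log-densities differ only by
# a constant. sigma and mu below are arbitrary illustrative values.
sigma, mu = 0.5, np.array([0.8, 0.6])
th = np.linspace(-np.pi, np.pi, 721)
x = np.stack([np.cos(th), np.sin(th)])

log_gauss = -0.5 * np.sum((x - mu[:, None]) ** 2, axis=0) / sigma**2
k = np.linalg.norm(mu) / sigma**2        # implied von Mises concentration
mu_dir = np.arctan2(mu[1], mu[0])        # implied mean direction
log_vm = k * np.cos(th - mu_dir)
diff = log_gauss - log_vm                # should be constant across angles
```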
46 The Generalized von Mises distribution is formed in an identical way when the bivariate Gaussian is anisotropic [16]. [sent-144, score-0.19]
47 These constructions suggest a simple extension to time-series data by conditioning a temporal bivariate Gaussian time-series onto the unit circle at all sample times. [sent-145, score-0.188]
48 One of the attractive features of this prior is that when it is combined with the likelihood (eq. [sent-149, score-0.068]
49 1) the resulting posterior distribution over phase variables is a temporal version of the Generalized von Mises distribution. [sent-150, score-0.274]
50 That is, it can be expressed as a bivariate anisotropic Gaussian, which is constrained to the unit circle. [sent-151, score-0.122]
51 Having established a candidate prior over phases, we turn to the amplitude variables. [sent-153, score-0.359]
With one eye upon the fact that the prior over phases can be interpreted as a product of a Gaussian and a constraint, we employ a prior of a similar form for the amplitude variables; a truncated Gaussian AR($\tau$) process, $p(a_{1:T} \mid \lambda_{1:\tau}, \sigma^2) \propto \prod_{t=1}^{T} \mathrm{Norm}\!\left(a_t;\ \sum_{t'=1}^{\tau} \lambda_{t'} a_{t-t'},\ \sigma^2\right) \mathbb{1}(a_t \geq 0)$. [sent-154, score-0.432]
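A naive forward simulation of this kind of positive AR process; the coefficients and noise level below are illustrative assumptions, and positivity is enforced by clipping at zero, a crude stand-in for exact truncated-Gaussian conditioning:

```python
import numpy as np

# Forward simulation of a positive AR(2) envelope in the spirit of the
# truncated Gaussian AR(tau) prior. Clipping at zero is an approximation,
# not the exact truncated-Gaussian sampler.
rng = np.random.default_rng(2)
lams = np.array([1.7, -0.72])      # stable AR(2): characteristic roots 0.9, 0.8
sigma, tau, T = 0.05, 2, 500

a = np.ones(T)
for t in range(tau, T):
    mean = lams @ a[t - tau:t][::-1]           # lam_1*a_{t-1} + lam_2*a_{t-2}
    a[t] = max(rng.normal(mean, sigma), 0.0)   # clip to keep a_t >= 0
```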
53 Moreover, when the phase variables are drawn from a uniform distribution (k1 = k2 = 0) it reduces to the convex amplitude demodulation model [17], which itself is a form of probabilistic amplitude demodulation [18, 19, 20]. [sent-157, score-1.288]
54 The AR prior over phases has also been used in a regression setting [21]. [sent-158, score-0.073]
55 3 Inference via expectation propagation The PAFD model introduced in the last section contains three separate types of non-linearity: the multiplicative interaction in the likelihood, the unit circle constraint, and the positivity constraint. [sent-159, score-0.109]
56 In this new form the constraints have been incorporated with the non-linear likelihood into the potential ψt , leaving a standard Gaussian dynamical potential πt (st , st−1 ). [sent-168, score-0.065]
Using EP we approximate the posterior distribution using a product of forward, backward and constrained-likelihood messages [24], $q(s_{1:T}) = \prod_{t=1}^{T} \alpha_t(s_t)\beta_t(s_t)\tilde{\psi}_t(a_t, x_{1,t}, x_{2,t}) = \prod_{t=1}^{T} q_t(s_t)$. (11) [sent-169, score-0.08]
58 (11) t=1 t=1 The messages should be interpreted as follows: αt (st ) is the effect of πt (st−1 , st ) and q(st−1 ) on the belief q(st ), whilst βt (st ) is the effect of πt+1 (st , st+1 ) and q(st+1 ) on the belief q(st ). [sent-170, score-0.251]
59 The updates for the messages can be found by removing the messages from q(s1:T ) that correspond to the effect of a particular potential. [sent-173, score-0.16]
60 The deleted messages are then updated by moment matching the two distributions. [sent-175, score-0.08]
61 The updates for the forward and backward messages are a straightforward application of EP and result in updates that are nearly identical to those used for Kalman smoothing. [sent-176, score-0.08]
First, we integrate over the amplitude variable, which involves computing the moments of a truncated Gaussian and is therefore computationally efficient. [sent-179, score-0.324]
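The moments of a Gaussian truncated to the positive half-line have standard closed forms; a stdlib-only sketch (not the authors' implementation):

```python
import math

# Mean and variance of N(mu, sigma^2) truncated to x >= 0, the quantities an
# EP-style amplitude update needs. Standard closed-form expressions.
def trunc_norm_moments(mu, sigma):
    alpha = -mu / sigma                                  # bound 0, standardized
    pdf = math.exp(-0.5 * alpha**2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(-alpha / math.sqrt(2)))    # P(x >= 0)
    ratio = pdf / cdf
    mean = mu + sigma * ratio
    var = sigma**2 * (1 + alpha * ratio - ratio**2)
    return mean, var
```

For a mean far inside the positive region the truncation has almost no effect; for a mean at zero the result matches the half-normal moments.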
63 Second, we numerically integrate over the one dimensional phase variable. [sent-180, score-0.134]
The computational complexity of PAFD is $O(T(N + \tau^3))$, where $N$ is the number of points used to compute the integral over the phase variable. [sent-184, score-0.134]
65 For the experiments we used a second order process over the amplitude variables (τ = 2) and N = 1000 integration points. [sent-185, score-0.354]
66 In this case, the 16-32 forward-backward passes required for convergence took one minute on a modern laptop computer for signals of length T = 1000. [sent-186, score-0.08]
67 4 Application to synthetic signals One of the main challenges posed by the evaluation of AFD algorithms is that the ground truth for real-world signals is unknown. [sent-187, score-0.21]
In particular, we consider amplitude- and frequency-modulated sinusoids, $y_t = a_t \cos(\theta_t)$, where $a_t = 1 + \sin(2\pi f_a t)$ and $\frac{1}{2\pi}\frac{d\theta}{dt} = \bar{f} + \Delta f \sin(2\pi f_f t)$, which have been corrupted by Gaussian noise. [sent-190, score-0.076]
Fig. 1 compares AFD of one such signal ($\bar{f} = 50$ Hz, $f_a = 8$ Hz, $f_f = 5$ Hz and $\Delta f = 25$ Hz) by the Hilbert, BSE-PPV and PAFD methods. [sent-192, score-0.208]
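The synthetic test signal can be reconstructed from these parameters; the sample rate and noise level below are assumptions:

```python
import numpy as np

# Reconstruction of the synthetic test signal: a_t = 1 + sin(2 pi f_a t) and
# IF = fbar + df*sin(2 pi f_f t), with fbar = 50 Hz, f_a = 8 Hz, f_f = 5 Hz,
# df = 25 Hz. Sample rate and noise level are assumptions.
fs = 1000
t = np.arange(fs) / fs
fbar, f_a, f_f, df = 50.0, 8.0, 5.0, 25.0

a = 1 + np.sin(2 * np.pi * f_a * t)                  # envelope, >= 0
inst_freq = fbar + df * np.sin(2 * np.pi * f_f * t)  # IF stays in [25, 75] Hz
theta = 2 * np.pi * np.cumsum(inst_freq) / fs        # phase = integrated IF
rng = np.random.default_rng(3)
y = a * np.cos(theta) + 0.1 * rng.standard_normal(t.shape)
```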
Fig. 3 summarizes the results at different noise levels in terms of the signal-to-noise ratio (SNR) of the estimated variables and the reconstructed signal. [sent-194, score-0.274]
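A minimal sketch of an SNR-in-decibels metric of this kind, 10 log10 of reference power over error power (the paper's exact normalization may differ):

```python
import numpy as np

# Generic SNR in decibels between a reference signal and an estimate:
# 10 * log10(signal power / error power).
def snr_db(ref, est):
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - est) ** 2))

x = np.sin(np.linspace(0, 10, 1000))   # illustrative reference signal
```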
Fig. 4 demonstrates that PAFD can be used to accurately reconstruct missing sections of this signal, outperforming BSE-PPV. [sent-199, score-0.084]
Figure 3: Noisy synthetic data. [sent-200, score-0.512]
73 5 Application to real world signals Having validated PAFD on simple synthetic examples, we now consider real-world signals. [sent-205, score-0.13]
74 Birdsong is used as a prototypical signal as it has strong frequency-modulation content. [sent-206, score-0.154]
Fig. 2 shows that PAFD can track the underlying frequency modulation even though there is noise in the signal which causes the other methods to fail. [sent-209, score-0.376]
In the first, spectrally matched noise is added to the signal and the IFs and amplitudes are re-estimated and compared to those derived from the clean signal. [sent-211, score-0.225]
77 In the second test, regions of the signal are removed and the model’s predictions for the missing regions are compared to the estimates derived from the clean signal (see fig. [sent-214, score-0.489]
78 The EEG signal typically contains broadband noise and so a conventional analysis applies a band-pass filter before using the Hilbert method to estimate the IF. [sent-218, score-0.211]
Although this improves the estimates markedly, the noise component cannot be completely eradicated, which leads to artifacts in the IF estimates (see Fig. [sent-219, score-0.099]
80 TOP: SNR of estimated variables as a function of gap duration in the input signal. [sent-244, score-0.074]
81 Solid markers indicate the examples shown in the bottom rows of the figure. [sent-246, score-0.073]
82 Light gray regions indicate missing sections of the signal. [sent-248, score-0.155]
Figure 5: Noisy bird song experiments. [sent-249, score-0.308]
84 SNR of estimated variables as compared to those estimated from the clean signal, as a function of the SNR of the input signal. [sent-250, score-0.116]
85 6 Conclusion Amplitude and frequency demodulation is a difficult, ill-posed estimation problem. [sent-254, score-0.341]
86 We have developed a new inferential solution called probabilistic amplitude and frequency demodulation which employs a von Mises time-series prior over phase, constructed by conditioning a bivariate Gaussian auto-regressive distribution onto the unit circle. [sent-255, score-0.976]
87 The construction naturally leads to an expectation propagation inference scheme which approximates the hard constraints using soft local Gaussians. [sent-256, score-0.055]
Figure 6: Missing natural data experiments. [sent-293, score-0.84]
89 TOP: SNR of estimated variables as a function of gap duration in the input signal. [sent-294, score-0.074]
90 Solid markers indicate the examples shown in the bottom rows of the figure. [sent-296, score-0.073]
91 Light gray regions indicate missing sections of the signal. [sent-298, score-0.155]
92 The left hand side shows estimates derived from the raw EEG signal, whilst the right shows estimates derived from a band-pass filtered version. [sent-302, score-0.093]
93 The gray areas show the region where the true amplitude falls below the noise floor (a < σy ), where conventional methods fail. [sent-303, score-0.427]
94 We have demonstrated the utility of the new method on synthetic and natural signals, where it outperformed conventional approaches. [sent-304, score-0.074]
95 Coherent modulation spectral filtering for single-channel music source separation. [sent-341, score-0.061]
96 On the upper cutoff frequency of the auditory critical-band envelope detectors in the context of speech perception. [sent-357, score-0.291]
97 On the dichotomy in auditory perception between temporal envelope and fine structure cues (L). [sent-369, score-0.119]
98 On the analytic signal, the Teager-Kaiser energy algorithm, and other methods for defining amplitude and frequency. [sent-373, score-0.324]
99 Statistical inference for single- and multi-band probabilistic amplitude demodulation. [sent-420, score-0.396]
100 Expectation propagation for approximate inference in dynamic bayesian networks. [sent-444, score-0.055]
wordName wordTfidf (topN-words)
[('pafd', 0.503), ('amplitude', 0.324), ('snr', 0.275), ('afd', 0.261), ('demodulation', 0.213), ('cos', 0.185), ('ifs', 0.168), ('mises', 0.168), ('envelopes', 0.164), ('signal', 0.154), ('sin', 0.149), ('st', 0.144), ('phase', 0.134), ('bse', 0.13), ('frequency', 0.128), ('ppv', 0.093), ('von', 0.09), ('turner', 0.09), ('signals', 0.08), ('messages', 0.08), ('yt', 0.076), ('acoustical', 0.075), ('eeg', 0.067), ('bivariate', 0.067), ('sinusoids', 0.065), ('speech', 0.064), ('zt', 0.063), ('hilbert', 0.063), ('audio', 0.062), ('ar', 0.062), ('modulation', 0.061), ('sinusoid', 0.06), ('carrier', 0.06), ('missing', 0.06), ('envelope', 0.059), ('slowly', 0.055), ('synthetic', 0.05), ('probabilistic', 0.05), ('markers', 0.049), ('gray', 0.046), ('denoised', 0.045), ('sounds', 0.045), ('xt', 0.041), ('auditory', 0.04), ('clean', 0.038), ('america', 0.038), ('phases', 0.038), ('hearing', 0.037), ('starling', 0.037), ('theta', 0.037), ('vocoder', 0.037), ('wrapped', 0.037), ('maneesh', 0.037), ('prior', 0.035), ('instantaneous', 0.035), ('ep', 0.034), ('noise', 0.033), ('likelihood', 0.033), ('propagation', 0.033), ('anisotropic', 0.033), ('estimates', 0.033), ('circle', 0.032), ('acoustics', 0.032), ('gaussian', 0.032), ('dynamical', 0.032), ('variables', 0.03), ('oscillation', 0.03), ('sinusoidal', 0.03), ('ff', 0.03), ('kalman', 0.029), ('spectrum', 0.027), ('whilst', 0.027), ('conditioning', 0.026), ('richard', 0.026), ('sahani', 0.026), ('regions', 0.025), ('nonstationary', 0.025), ('circular', 0.025), ('reveal', 0.024), ('varying', 0.024), ('conventional', 0.024), ('bottom', 0.024), ('sections', 0.024), ('estimated', 0.024), ('fa', 0.024), ('oor', 0.023), ('unit', 0.022), ('inference', 0.022), ('positivity', 0.022), ('dynamics', 0.021), ('onto', 0.021), ('gatsby', 0.021), ('norm', 0.021), ('argued', 0.021), ('coherent', 0.021), ('solid', 0.02), ('sensible', 0.02), ('temporal', 0.02), ('perturbation', 0.02), ('duration', 0.02)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999988 225 nips-2011-Probabilistic amplitude and frequency demodulation
Author: Richard Turner, Maneesh Sahani
Abstract: A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time. Although signal processing provides algorithms for so-called amplitude- and frequency-demodulation (AFD), there are well-known problems with all of the existing methods. Motivated by the fact that AFD is ill-posed, we approach the problem using probabilistic inference. The new approach, called probabilistic amplitude and frequency demodulation (PAFD), models instantaneous frequency using an auto-regressive generalization of the von Mises distribution, and the envelopes using Gaussian auto-regressive dynamics with a positivity constraint. A novel form of expectation propagation is used for inference. We demonstrate that although PAFD is computationally demanding, it outperforms previous approaches on synthetic and real signals in clean, noisy and missing data settings. 1
2 0.10021984 13 nips-2011-A blind sparse deconvolution method for neural spike identification
Author: Chaitanya Ekanadham, Daniel Tranchina, Eero P. Simoncelli
Abstract: We consider the problem of estimating neural spikes from extracellular voltage recordings. Most current methods are based on clustering, which requires substantial human supervision and systematically mishandles temporally overlapping spikes. We formulate the problem as one of statistical inference, in which the recorded voltage is a noisy sum of the spike trains of each neuron convolved with its associated spike waveform. Joint maximum-a-posteriori (MAP) estimation of the waveforms and spikes is then a blind deconvolution problem in which the coefficients are sparse. We develop a block-coordinate descent procedure to approximate the MAP solution, based on our recently developed continuous basis pursuit method. We validate our method on simulated data as well as real data for which ground truth is available via simultaneous intracellular recordings. In both cases, our method substantially reduces the number of missed spikes and false positives when compared to a standard clustering algorithm, primarily by recovering overlapping spikes. The method offers a fully automated alternative to clustering methods that is less susceptible to systematic errors. 1
3 0.096511878 253 nips-2011-Signal Estimation Under Random Time-Warpings and Nonlinear Signal Alignment
Author: Sebastian A. Kurtek, Anuj Srivastava, Wei Wu
Abstract: While signal estimation under random amplitudes, phase shifts, and additive noise is studied frequently, the problem of estimating a deterministic signal under random time-warpings has been relatively unexplored. We present a novel framework for estimating the unknown signal that utilizes the action of the warping group to form an equivalence relation between signals. First, we derive an estimator for the equivalence class of the unknown signal using the notion of Karcher mean on the quotient space of equivalence classes. This step requires the use of Fisher-Rao Riemannian metric and a square-root representation of signals to enable computations of distances and means under this metric. Then, we define a notion of the center of a class and show that the center of the estimated class is a consistent estimator of the underlying unknown signal. This estimation algorithm has many applications: (1) registration/alignment of functional data, (2) separation of phase/amplitude components of functional data, (3) joint demodulation and carrier estimation, and (4) sparse modeling of functional data. Here we demonstrate only (1) and (2): Given signals are temporally aligned using nonlinear warpings and, thus, separated into their phase and amplitude components. The proposed method for signal alignment is shown to have state of the art performance using Berkeley growth, handwritten signatures, and neuroscience spike train data. 1
4 0.081945486 38 nips-2011-Anatomically Constrained Decoding of Finger Flexion from Electrocorticographic Signals
Author: Zuoguan Wang, Gerwin Schalk, Qiang Ji
Abstract: Brain-computer interfaces (BCIs) use brain signals to convey a user’s intent. Some BCI approaches begin by decoding kinematic parameters of movements from brain signals, and then proceed to using these signals, in absence of movements, to allow a user to control an output. Recent results have shown that electrocorticographic (ECoG) recordings from the surface of the brain in humans can give information about kinematic parameters (e.g., hand velocity or finger flexion). The decoding approaches in these demonstrations usually employed classical classification/regression algorithms that derive a linear mapping between brain signals and outputs. However, they typically only incorporate little prior information about the target kinematic parameter. In this paper, we show that different types of anatomical constraints that govern finger flexion can be exploited in this context. Specifically, we incorporate these constraints in the construction, structure, and the probabilistic functions of a switched non-parametric dynamic system (SNDS) model. We then apply the resulting SNDS decoder to infer the flexion of individual fingers from the same ECoG dataset used in a recent study. Our results show that the application of the proposed model, which incorporates anatomical constraints, improves decoding performance compared to the results in the previous work. Thus, the results presented in this paper may ultimately lead to neurally controlled hand prostheses with full fine-grained finger articulation. 1
5 0.06693285 298 nips-2011-Unsupervised learning models of primary cortical receptive fields and receptive field plasticity
Author: Maneesh Bhand, Ritvik Mudur, Bipin Suresh, Andrew Saxe, Andrew Y. Ng
Abstract: The efficient coding hypothesis holds that neural receptive fields are adapted to the statistics of the environment, but is agnostic to the timescale of this adaptation, which occurs on both evolutionary and developmental timescales. In this work we focus on that component of adaptation which occurs during an organism’s lifetime, and show that a number of unsupervised feature learning algorithms can account for features of normal receptive field properties across multiple primary sensory cortices. Furthermore, we show that the same algorithms account for altered receptive field properties in response to experimentally altered environmental statistics. Based on these modeling results we propose these models as phenomenological models of receptive field plasticity during an organism’s lifetime. Finally, due to the success of the same models in multiple sensory areas, we suggest that these algorithms may provide a constructive realization of the theory, first proposed by Mountcastle [1], that a qualitatively similar learning algorithm acts throughout primary sensory cortices. 1
6 0.065072648 102 nips-2011-Generalised Coupled Tensor Factorisation
7 0.06388434 170 nips-2011-Message-Passing for Approximate MAP Inference with Latent Variables
8 0.060456496 145 nips-2011-Learning Eigenvectors for Free
9 0.052843761 172 nips-2011-Minimax Localization of Structural Information in Large Noisy Matrices
10 0.052643005 100 nips-2011-Gaussian Process Training with Input Noise
11 0.051778093 118 nips-2011-High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity
12 0.050850078 206 nips-2011-Optimal Reinforcement Learning for Gaussian Systems
13 0.048763566 21 nips-2011-Active Learning with a Drifting Distribution
14 0.047460537 80 nips-2011-Efficient Online Learning via Randomized Rounding
15 0.046807423 257 nips-2011-SpaRCS: Recovering low-rank and sparse matrices from compressive measurements
16 0.04632685 39 nips-2011-Approximating Semidefinite Programs in Sublinear Time
17 0.046250675 70 nips-2011-Dimensionality Reduction Using the Sparse Linear Model
18 0.045353241 204 nips-2011-Online Learning: Stochastic, Constrained, and Smoothed Adversaries
19 0.044511635 75 nips-2011-Dynamical segmentation of single trials from population neural data
20 0.042422082 131 nips-2011-Inference in continuous-time change-point models
topicId topicWeight
[(0, 0.127), (1, -0.009), (2, 0.064), (3, -0.037), (4, -0.013), (5, -0.02), (6, 0.034), (7, -0.019), (8, 0.081), (9, -0.016), (10, 0.054), (11, -0.007), (12, 0.047), (13, -0.016), (14, -0.039), (15, -0.029), (16, 0.026), (17, 0.009), (18, -0.015), (19, -0.042), (20, -0.02), (21, -0.005), (22, 0.017), (23, -0.037), (24, 0.074), (25, 0.046), (26, -0.023), (27, 0.076), (28, -0.025), (29, 0.016), (30, 0.005), (31, 0.006), (32, 0.041), (33, 0.031), (34, -0.071), (35, 0.022), (36, -0.018), (37, 0.052), (38, 0.033), (39, 0.093), (40, -0.014), (41, 0.034), (42, 0.046), (43, -0.07), (44, -0.103), (45, -0.013), (46, 0.035), (47, -0.042), (48, -0.161), (49, 0.049)]
simIndex simValue paperId paperTitle
same-paper 1 0.91834861 225 nips-2011-Probabilistic amplitude and frequency demodulation
2 0.73729116 38 nips-2011-Anatomically Constrained Decoding of Finger Flexion from Electrocorticographic Signals
3 0.50747913 253 nips-2011-Signal Estimation Under Random Time-Warpings and Nonlinear Signal Alignment
Author: Sebastian A. Kurtek, Anuj Srivastava, Wei Wu
Abstract: While signal estimation under random amplitudes, phase shifts, and additive noise is studied frequently, the problem of estimating a deterministic signal under random time-warpings has been relatively unexplored. We present a novel framework for estimating the unknown signal that utilizes the action of the warping group to form an equivalence relation between signals. First, we derive an estimator for the equivalence class of the unknown signal using the notion of the Karcher mean on the quotient space of equivalence classes. This step requires the use of the Fisher-Rao Riemannian metric and a square-root representation of signals to enable computations of distances and means under this metric. Then, we define a notion of the center of a class and show that the center of the estimated class is a consistent estimator of the underlying unknown signal. This estimation algorithm has many applications: (1) registration/alignment of functional data, (2) separation of phase/amplitude components of functional data, (3) joint demodulation and carrier estimation, and (4) sparse modeling of functional data. Here we demonstrate only (1) and (2): given signals are temporally aligned using nonlinear warpings and, thus, separated into their phase and amplitude components. The proposed method for signal alignment is shown to have state-of-the-art performance using Berkeley growth, handwritten signatures, and neuroscience spike train data.
4 0.47099239 306 nips-2011-t-divergence Based Approximate Inference
Author: Nan Ding, Yuan Qi, S.v.n. Vishwanathan
Abstract: Approximate inference is an important technique for dealing with large, intractable graphical models based on the exponential family of distributions. We extend the idea of approximate inference to the t-exponential family by defining a new t-divergence. This divergence measure is obtained via convex duality between the log-partition function of the t-exponential family and a new t-entropy. We illustrate our approach on the Bayes Point Machine with a Student’s t-prior.
5 0.46826467 191 nips-2011-Nonnegative dictionary learning in the exponential noise model for adaptive music signal representation
Author: Onur Dikmen, Cédric Févotte
Abstract: In this paper we describe a maximum likelihood approach for dictionary learning in the multiplicative exponential noise model. This model is prevalent in audio signal processing where it underlies a generative composite model of the power spectrogram. Maximum joint likelihood estimation of the dictionary and expansion coefficients leads to a nonnegative matrix factorization problem where the Itakura-Saito divergence is used. The optimality of this approach is in question because the number of parameters (which include the expansion coefficients) grows with the number of observations. In this paper we describe a variational procedure for optimization of the marginal likelihood, i.e., the likelihood of the dictionary where the activation coefficients have been integrated out (given a specific prior). We compare the output of both maximum joint likelihood estimation (i.e., standard Itakura-Saito NMF) and maximum marginal likelihood estimation (MMLE) on real and synthetic datasets. The MMLE approach is shown to embed automatic model order selection, akin to automatic relevance determination.
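The maximum joint likelihood baseline this abstract compares against is standard Itakura-Saito NMF, commonly fit with multiplicative updates. The sketch below is that widely used baseline, not the paper's MMLE variational procedure; the heuristic multiplicative update rules and the small-epsilon safeguard are standard practice rather than anything specified in the abstract.

```python
import numpy as np

def is_nmf(V, K, n_iter=200, eps=1e-12, seed=0):
    """Nonnegative matrix factorization V ~ W @ H under the
    Itakura-Saito divergence, via multiplicative updates.

    This is the maximum-joint-likelihood baseline (standard IS-NMF),
    not the marginal-likelihood (MMLE) procedure the paper proposes.
    """
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, K)) + eps
    H = rng.random((K, N)) + eps
    for _ in range(n_iter):
        Vh = W @ H
        W *= ((V / (Vh**2 + eps)) @ H.T) / ((1.0 / (Vh + eps)) @ H.T)
        Vh = W @ H
        H *= (W.T @ (V / (Vh**2 + eps))) / (W.T @ (1.0 / (Vh + eps)))
    return W, H
```

In the audio setting described above, V would be a power spectrogram (frequency bins by frames) and the K columns of W a learned spectral dictionary.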
6 0.43234071 167 nips-2011-Maximum Covariance Unfolding : Manifold Learning for Bimodal Data
7 0.42329288 21 nips-2011-Active Learning with a Drifting Distribution
8 0.40267345 102 nips-2011-Generalised Coupled Tensor Factorisation
9 0.40209132 100 nips-2011-Gaussian Process Training with Input Noise
10 0.39959344 2 nips-2011-A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm
11 0.38992539 70 nips-2011-Dimensionality Reduction Using the Sparse Linear Model
12 0.3887144 300 nips-2011-Variance Reduction in Monte-Carlo Tree Search
13 0.38590074 188 nips-2011-Non-conjugate Variational Message Passing for Multinomial and Binary Regression
14 0.38401449 13 nips-2011-A blind sparse deconvolution method for neural spike identification
15 0.36915529 192 nips-2011-Nonstandard Interpretations of Probabilistic Programs for Efficient Inference
16 0.36799926 139 nips-2011-Kernel Bayes' Rule
17 0.36397085 285 nips-2011-The Kernel Beta Process
18 0.36309022 238 nips-2011-Relative Density-Ratio Estimation for Robust Distribution Comparison
19 0.36214149 176 nips-2011-Multi-View Learning of Word Embeddings via CCA
20 0.36056137 179 nips-2011-Multilinear Subspace Regression: An Orthogonal Tensor Decomposition Approach
topicId topicWeight
[(0, 0.013), (4, 0.026), (20, 0.03), (26, 0.016), (31, 0.595), (43, 0.049), (45, 0.048), (57, 0.026), (74, 0.039), (83, 0.029), (84, 0.01), (99, 0.023)]
simIndex simValue paperId paperTitle
1 0.97811967 94 nips-2011-Facial Expression Transfer with Input-Output Temporal Restricted Boltzmann Machines
Author: Matthew D. Zeiler, Graham W. Taylor, Leonid Sigal, Iain Matthews, Rob Fergus
Abstract: We present a type of Temporal Restricted Boltzmann Machine that defines a probability distribution over an output sequence conditional on an input sequence. It shares the desirable properties of RBMs: efficient exact inference, an exponentially more expressive latent state than HMMs, and the ability to model nonlinear structure and dynamics. We apply our model to a challenging real-world graphics problem: facial expression transfer. Our results demonstrate improved performance over several baselines modeling high-dimensional 2D and 3D data.
2 0.97605473 88 nips-2011-Environmental statistics and the trade-off between model-based and TD learning in humans
Author: Dylan A. Simon, Nathaniel D. Daw
Abstract: There is much evidence that humans and other animals utilize a combination of model-based and model-free RL methods. Although it has been proposed that these systems may dominate according to their relative statistical efficiency in different circumstances, there is little specific evidence — especially in humans — as to the details of this trade-off. Accordingly, we examine the relative performance of different RL approaches under situations in which the statistics of reward are differentially noisy and volatile. Using theory and simulation, we show that model-free TD learning is relatively most disadvantaged in cases of high volatility and low noise. We present data from a decision-making experiment manipulating these parameters, showing that humans shift learning strategies in accord with these predictions. The statistical circumstances favoring model-based RL are also those that promote a high learning rate, which helps explain why, in psychology, the distinction between these strategies is traditionally conceived in terms of rule-based vs. incremental learning.
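The model-free TD learning this abstract contrasts with model-based RL can be illustrated by a minimal TD(0) value-learning sketch on a toy deterministic chain. The environment, learning rate, and discount below are hypothetical and unrelated to the paper's experiment; the point is only the incremental bootstrapped update that characterizes TD.

```python
import numpy as np

n_states = 3                  # states 0 -> 1 -> 2, state 2 terminal
V = np.zeros(n_states)        # value estimates, learned incrementally
alpha, gamma = 0.1, 0.9       # learning rate and discount (illustrative)

for _ in range(5000):         # repeated episodes through the chain
    s = 0
    while s < n_states - 1:
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward on reaching the end
        # TD(0): nudge V(s) toward the bootstrapped target r + gamma * V(s').
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next
```

On this chain the estimates converge to V(1) = 1 and V(0) = gamma * V(1) = 0.9, entirely from sampled transitions, with no explicit model of the environment.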
same-paper 3 0.9723891 225 nips-2011-Probabilistic amplitude and frequency demodulation
Author: Richard Turner, Maneesh Sahani
Abstract: A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time. Although signal processing provides algorithms for so-called amplitude- and frequency-demodulation (AFD), there are well-known problems with all of the existing methods. Motivated by the fact that AFD is ill-posed, we approach the problem using probabilistic inference. The new approach, called probabilistic amplitude and frequency demodulation (PAFD), models instantaneous frequency using an auto-regressive generalization of the von Mises distribution, and the envelopes using Gaussian auto-regressive dynamics with a positivity constraint. A novel form of expectation propagation is used for inference. We demonstrate that although PAFD is computationally demanding, it outperforms previous approaches on synthetic and real signals in clean, noisy and missing data settings.
4 0.94641179 23 nips-2011-Active dendrites: adaptation to spike-based communication
Author: Balazs B. Ujfalussy, Máté Lengyel
Abstract: Computational analyses of dendritic computations often assume stationary inputs to neurons, ignoring the pulsatile nature of spike-based communication between neurons and the moment-to-moment fluctuations caused by such spiking inputs. Conversely, circuit computations with spiking neurons are usually formalized without regard to the rich nonlinear nature of dendritic processing. Here we address the computational challenge faced by neurons that compute and represent analogue quantities but communicate with digital spikes, and show that reliable computation of even purely linear functions of inputs can require the interplay of strongly nonlinear subunits within the postsynaptic dendritic tree. Our theory predicts a matching of dendritic nonlinearities and synaptic weight distributions to the joint statistics of presynaptic inputs. This approach suggests normative roles for some puzzling forms of nonlinear dendritic dynamics and plasticity.
5 0.9112677 137 nips-2011-Iterative Learning for Reliable Crowdsourcing Systems
Author: David R. Karger, Sewoong Oh, Devavrat Shah
Abstract: Crowdsourcing systems, in which tasks are electronically distributed to numerous “information piece-workers”, have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks, and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers’ answers. We show that our algorithm significantly outperforms majority voting and, in fact, is asymptotically optimal through comparison to an oracle that knows the reliability of every worker.
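The majority-voting baseline this abstract compares against can be sketched in a few lines. The task identifiers and ±1 answers below are made up for illustration; the paper's own iterative task-assignment and inference algorithm is not reproduced here.

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate redundant worker answers for one task by majority vote,
    the simple baseline the abstract compares against. Ties are broken
    by the order answers were first seen (Counter.most_common order)."""
    return Counter(answers).most_common(1)[0][0]

# Each task was assigned to several (possibly unreliable) workers,
# each giving a binary +1 / -1 answer.
tasks = {
    "t1": [+1, +1, -1],
    "t2": [-1, -1, -1, +1],
}
labels = {t: majority_vote(a) for t, a in tasks.items()}
```

Because every answer is weighted equally, majority voting wastes redundancy on reliable workers and is easily swamped by unreliable ones, which is the gap the paper's algorithm targets.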
6 0.90241951 240 nips-2011-Robust Multi-Class Gaussian Process Classification
7 0.80997324 241 nips-2011-Scalable Training of Mixture Models via Coresets
8 0.80250442 249 nips-2011-Sequence learning with hidden units in spiking neural networks
9 0.79084903 131 nips-2011-Inference in continuous-time change-point models
10 0.76807141 75 nips-2011-Dynamical segmentation of single trials from population neural data
11 0.76517689 87 nips-2011-Energetically Optimal Action Potentials
12 0.76020724 229 nips-2011-Query-Aware MCMC
13 0.75799429 221 nips-2011-Priors over Recurrent Continuous Time Processes
14 0.75478381 243 nips-2011-Select and Sample - A Model of Efficient Neural Inference and Learning
15 0.74596703 38 nips-2011-Anatomically Constrained Decoding of Finger Flexion from Electrocorticographic Signals
16 0.74570853 292 nips-2011-Two is better than one: distinct roles for familiarity and recollection in retrieving palimpsest memories
17 0.73249465 11 nips-2011-A Reinforcement Learning Theory for Homeostatic Regulation
18 0.73067564 184 nips-2011-Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability
19 0.71728563 215 nips-2011-Policy Gradient Coagent Networks
20 0.71588749 86 nips-2011-Empirical models of spiking in neural populations