nips nips2008 nips2008-75 knowledge-graph by maker-knowledge-mining

75 nips-2008-Estimating vector fields using sparse basis field expansions


Source: pdf

Author: Stefan Haufe, Vadim V. Nikulin, Andreas Ziehe, Klaus-Robert Müller, Guido Nolte

Abstract: We introduce a novel framework for estimating vector fields using sparse basis field expansions (S-FLEX). The notion of basis fields, which are an extension of scalar basis functions, arises naturally in our framework from a rotational invariance requirement. We consider a regression setting as well as inverse problems. All variants discussed lead to second-order cone programming formulations. While our framework is generally applicable to any type of vector field, we focus in this paper on applying it to solving the EEG/MEG inverse problem. It is shown that significantly more precise and neurophysiologically more plausible location and shape estimates of cerebral current sources from EEG/MEG measurements become possible with our method when comparing to the state-of-the-art. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Estimating vector fields using sparse basis field expansions Stefan Haufe1, 2, * Vadim V. [sent-1, score-0.371]

2 Abstract: We introduce a novel framework for estimating vector fields using sparse basis field expansions (S-FLEX). [sent-6, score-0.371]

3 The notion of basis fields, which are an extension of scalar basis functions, arises naturally in our framework from a rotational invariance requirement. [sent-7, score-0.654]

4 We consider a regression setting as well as inverse problems. [sent-8, score-0.125]

5 All variants discussed lead to second-order cone programming formulations. [sent-9, score-0.111]

6 While our framework is generally applicable to any type of vector field, we focus in this paper on applying it to solving the EEG/MEG inverse problem. [sent-10, score-0.212]

7 It is shown that significantly more precise and neurophysiologically more plausible location and shape estimates of cerebral current sources from EEG/MEG measurements become possible with our method when comparing to the state-of-the-art. [sent-11, score-0.414]

8 Such “truly” vectorial functions are called vector fields and become manifest for example in optical flow fields, electromagnetic fields and wind fields in meteorology. [sent-16, score-0.257]

9 The first type consists of direct samples (x_n, y_n), x_n ∈ R^P, y_n ∈ R^Q, n = 1, … [sent-22, score-0.28]

10 The second case occurs if only indirect measurements z_m ∈ R, m = 1, … [sent-26, score-0.127]

11 …, M are available, which we assume to be generated by a known linear transformation of the vector field outputs y_n belonging to the nodes x_n, n = 1, … [sent-29, score-0.368]

12 This kind of estimation problem is known as an inverse problem. [sent-33, score-0.125]

13 … Y = (y_1, …, y_N)^T denotes the N × Q matrix of vector field outputs, and vec(Y) a column vector containing the stacked transposed rows of Y. [sent-40, score-0.153]

14 As an example of an inverse problem, consider the way humans localize acoustic sources. [sent-43, score-0.162]

15 Here z comprises the signal arriving at the ears, v is the spatial distribution of the sound sources and F is given by physical equations of sound propagation. [sent-44, score-0.186]

16 … regularization) is indeed the most effective strategy for solving inverse problems [13], which are inherently ambiguous. [sent-52, score-0.158]

17 …, regression may be applied to cope with the ambiguity of inverse problems. [sent-55, score-0.125]

18 For the estimation of scalar functions, methods that utilize sparse linear combinations of basis functions have gained considerable attention recently (e. [sent-56, score-0.342]

19 Apart from the computational tractability that comes with the sparsity of the learned model, the possibility of interpreting the estimates in terms of their basis functions is a particularly appealing feature of these methods. [sent-59, score-0.339]

20 While sparse expansions are also desirable in vector field estimation, lasso and similar methods cannot be used for that purpose, as they break rotational invariance in the output space RQ . [sent-60, score-0.466]

21 This is easily seen as sparse methods tend to select different basis functions in each of the Q dimensions. [sent-61, score-0.298]

22 Only few attempts have been made on rotation-invariant sparse vector field expansions so far. [sent-62, score-0.187]

23 In [8] a dense expansion is discussed, which could be modified to a sparse version maintaining rotational invariance. [sent-63, score-0.258]

24 In section 3 we will apply the (appropriately customized) method for solving the EEG/MEG inverse problem. [sent-67, score-0.158]

25 2 Method Our model is based on the assumption that v can be well approximated by a linear combination of some basis fields. [sent-69, score-0.184]

26 A basis field is defined here (unlike in [8]) as a vector field, in which all output vectors point in the same direction, while the magnitudes are proportional to a scalar (basis) function b : RP → R. [sent-70, score-0.362]

27 … Fig. 1, this model has an expressive power which is comparable to a basis function expansion of scalar functions. [sent-72, score-0.284]

28 Given a set (dictionary) of basis functions b_l(x), l = 1, … [sent-73, score-0.337]

29 …, L, the basis field expansion is written as v(x) = Σ_{l=1}^{L} c_l b_l(x), (1) with coefficients c_l ∈ R^Q, l = 1, … [sent-76, score-0.582]
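Equation (1) can be sketched numerically in a few lines. This is an illustrative sketch, not the authors' implementation; the function and variable names are invented for the example:

```python
import numpy as np

def basis_field_expansion(x, coeffs, basis_funcs):
    """Evaluate v(x) = sum_l c_l * b_l(x), cf. Eq. (1).

    coeffs:      (L, Q) array whose rows are the coefficient vectors c_l.
    basis_funcs: list of L scalar basis functions b_l: R^P -> R.
    """
    amplitudes = np.array([b(x) for b in basis_funcs])  # (L,) scalars b_l(x)
    return amplitudes @ coeffs                          # (Q,) output vector

# Toy dictionary: two Gaussian bumps on a 1-D input, Q = 2 output dimensions.
basis = [lambda x, c=c: np.exp(-0.5 * (x - c) ** 2) for c in (0.0, 1.0)]
C = np.array([[1.0, 0.0],   # c_1: basis field pointing along the first axis
              [0.0, 2.0]])  # c_2: basis field pointing along the second axis
v = basis_field_expansion(0.0, C, basis)
```

Note how each basis field contributes a fixed direction c_l; only its magnitude varies with x through b_l(x).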

30 Note that by including one coefficient for each output dimension, both orientations and proportionality factors are learned in this model (the term “basis field” thus refers to a basis function with learned coefficients). [sent-80, score-0.184]

31 In order to select a small set of fields, most of the coefficient vectors c_l have to vanish. [sent-81, score-0.117]

32 However, care has to be taken in order to maintain rotational invariance of the solution. [sent-83, score-0.242]

33 We here propose to use a regularizer that imposes sparsity and is invariant with respect to rotations, namely the ℓ_1-norm of the magnitudes of the coefficient vectors. [sent-84, score-0.241]
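The proposed penalty, the ℓ1-norm of the coefficient-vector magnitudes (an ℓ1,2-type norm), is simple to compute; the helper name below is illustrative:

```python
import numpy as np

def l12_penalty(C):
    """R(C) = sum_l ||c_l||_2: the l1-norm of the row magnitudes of C.

    Because the penalty acts on whole rows c_l, sparsity switches entire
    basis fields on or off rather than individual output components."""
    return np.linalg.norm(C, axis=1).sum()

C = np.array([[3.0, 4.0],    # ||c_1||_2 = 5
              [0.0, 0.0]])   # this basis field has vanished
penalty = l12_penalty(C)     # 5.0
```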

34 … b_L(x_N): the basis functions evaluated at the x_n. [sent-100, score-0.339]

35 Figure 1: Complicated vector field (SUM) as a sum of three basis fields (1–3). [sent-106, score-0.238]

36 One has to distinguish invariance in input- from invariance in output space. [sent-109, score-0.218]

37 The former requirement may arise in many estimation settings and can be fulfilled by the choice of appropriate basis functions b_l(x). [sent-110, score-0.337]

38 The latter one is specific to vector field estimation and has to be assured by formulating a rotationally invariant cost function. [sent-111, score-0.178]

39 For an orthogonal matrix R ∈ R^{Q×Q} with R^T R = I: Σ_{l=1}^{L} ||R c_l||_2 = Σ_{l=1}^{L} √(tr(c_l^T R^T R c_l)) = Σ_{l=1}^{L} ||c_l||_2. (4) [sent-117, score-0.117]
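The identity in Eq. (4) is easy to verify numerically for a random orthogonal matrix; this is a sanity check, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((6, 3))        # six coefficient vectors c_l in R^3

# Draw a random orthogonal matrix R (the Q factor of a QR decomposition).
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(R.T @ R, np.eye(3))       # R^T R = I

lhs = np.linalg.norm(C @ R.T, axis=1).sum()  # sum_l ||R c_l||_2
rhs = np.linalg.norm(C, axis=1).sum()        # sum_l ||c_l||_2
assert np.isclose(lhs, rhs)                  # rotational invariance, Eq. (4)
```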

40 By the same argument, additional regularizers R_*(C) = ||vec(D_* C)||_2^2 (the well-known Tikhonov regularizer) or R_+(C) = ||D_+ C||_{1,2} (promoting sparsity of the linearly transformed vectors) may be introduced without breaking the rotational invariance in R^Q. [sent-118, score-0.342]

41 … Eq. (5) is an instance of second-order cone programming (SOCP), a standard class of convex programs for which efficient interior-point based solvers are available. [sent-133, score-0.111]

42 This approach also requires complex coefficients, by which it is then possible not only to optimally scale the basis functions, but also to optimally shift their phase. [sent-142, score-0.184]

43 Similarly, it is possible to reconstruct complex vector fields from complex measurements using real-valued basis functions. [sent-143, score-0.367]

44 3 Application to the EEG/MEG inverse problem: Vector fields occur, for example, in the form of electrical currents in the brain, which are produced by postsynaptic neuronal processes. [sent-144, score-0.211]

45 Invasive measurements allow very local assessment of neuronal activations, but such a procedure in humans is only possible when electrodes are implanted for the treatment/diagnosis of neurological diseases, e. [sent-146, score-0.201]

46 The reconstruction of the current density from such measurements is an inverse problem. [sent-150, score-0.396]

47 3.1 Method specification: In the following, the task is to infer the generating cerebral current density given an EEG measurement z ∈ R^M. [sent-152, score-0.138]

48 The current density is a vector field v : R3 → R3 assigning a vectorial current source to each location in the brain. [sent-153, score-0.308]

49 Inside the brain, we arranged 2142 nodes in a regular grid with 1 cm spacing. [sent-155, score-0.12]

50 The forward mapping F ∈ R^{M × (2142·3)} from these nodes to the electrodes was constructed according to [9], taking into account the realistic geometry and conductive properties of brain, skull and skin. [sent-156, score-0.155]

51 Dictionary: In most applications the “true” sources are expected to be small in number and spatial extent. [sent-157, score-0.186]

52 However, many commonly used methods estimate sources that almost cover the whole brain (e. [sent-158, score-0.197]

53 Another group of methods delivers source estimates that are spatially sparse, but usually not rotationally invariant (e. [sent-161, score-0.169]

54 Both the very smooth and the very sparse estimates are unrealistic from a physiological point of view. [sent-165, score-0.114]

55 For achieving a similar effect we here propose a sparse basis field expansion using radial basis functions. [sent-167, score-0.493]

56 More specifically, we consider spherical Gaussians b_{n,s}(x) = (2πσ_s^2)^{−3/2} exp(−||x − x_n||_2^2 / (2σ_s^2)) (6) with s = 1, … [sent-168, score-0.11]

57 …5 cm, σ_4 = 2 cm, and centered at the nodes x_n, n = 1, … [sent-173, score-0.23]
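The spherical Gaussian of Eq. (6) can be transcribed directly; the function name is illustrative:

```python
import numpy as np

def gaussian_basis(x, center, sigma):
    """Spherical Gaussian b_{n,s}(x) in R^3, cf. Eq. (6):
    (2*pi*sigma^2)^(-3/2) * exp(-||x - x_n||_2^2 / (2*sigma^2))."""
    x, center = np.asarray(x, float), np.asarray(center, float)
    d2 = np.sum((x - center) ** 2)
    return (2 * np.pi * sigma ** 2) ** -1.5 * np.exp(-d2 / (2 * sigma ** 2))

# At its center the basis function attains its normalization constant,
# and it decays monotonically with distance from x_n.
peak = gaussian_basis([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], 1.0)
```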

58 Using this redundant dictionary, our expectation is that sources of different spatial extent can be reconstructed by selecting the appropriate basis functions. [sent-178, score-0.417]

59 Figure 2: Gaussian basis functions with fixed center and standard deviations 0. [sent-180, score-0.229]

60 Normalization: Our ℓ_{1,2}-norm based regularization is a heuristic for selecting the smallest possible number of basis fields necessary to explain the measurement. [sent-182, score-0.184]

61 It is therefore important to normalize the basis functions in order not to a priori prefer some of them. [sent-184, score-0.229]

62 Let B_s be the N × N matrix containing the basis functions with standard deviation σ_s. [sent-185, score-0.229]
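The summary does not spell out the exact normalization used in the paper; one natural choice, shown here purely as an assumption, is to give every basis function (column of B_s) unit ℓ2-norm so that none is a priori preferred by the penalty:

```python
import numpy as np

def normalize_columns(B):
    """Rescale each basis function (a column of B_s) to unit l2-norm.

    NOTE: this is an assumed normalization scheme; the paper's exact
    recipe is not given in this summary. Unit column norm prevents the
    l_{1,2} penalty from a-priori preferring broad or narrow Gaussians."""
    return B / np.linalg.norm(B, axis=0, keepdims=True)

B = np.array([[1.0, 2.0],
              [0.0, 2.0]])
Bn = normalize_columns(B)
```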

63 Due to volume conduction, the signal captured at the sensors is much stronger for superficial sources than for deep sources. [sent-192, score-0.142]

64 This can be done by either penalizing activity at locations with high variance or by penalizing basis functions with high variance in the center. [sent-195, score-0.229]

65 We here employ the former approach, as the latter may be problematic for basis functions with large extent. [sent-196, score-0.229]

66 Therefore, we restrict ourselves here to nodes xn , n = 1, . [sent-198, score-0.149]

67 Let W_n ∈ R^{3×3} denote the inverse matrix square root of the part of Ŝ belonging to node x_n. [sent-202, score-0.27]

68 … W_N; the coefficients are estimated using Ĉ = arg min_C …, and the estimated current density at node x_n is v̂(x_n) = W_n … [sent-218, score-0.215]

69 Experiments: Validation of methods for inverse reconstruction is generally difficult due to the lack of a “ground truth”. [sent-223, score-0.199]

70 The measurements z cannot be used in this respect, as the main goal is not to predict the EEG/MEG measurements, but to estimate the vector field v(x) as accurately as possible. [sent-224, score-0.146]

71 Therefore, the only way to evaluate inverse methods is to assess their ability to reconstruct known functions. [sent-225, score-0.162]

72 We do this by reconstructing a) simulated current sources and b) sources of real EEG data that are already well-localized by other studies. [sent-226, score-0.433]

73 … we perform 25 inverse reconstructions based on different training sets containing 80% of the electrodes. [sent-229, score-0.16]

74 Most important is the reconstruction error, defined as C_y = || vec(Ŷ)/||vec(Ŷ)||_2 − vec(Y^tr)/||vec(Y^tr)||_2 ||_2^2, where Y^tr are the vector field outputs at the nodes x_n, n = 1, … [sent-231, score-0.493]
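The reconstruction error C_y compares the shapes of the normalized fields and is therefore blind to a global amplitude scaling. A minimal sketch (names illustrative, not from the paper):

```python
import numpy as np

def reconstruction_error(Y_hat, Y_true):
    """C_y = || vec(Yh)/||vec(Yh)||_2 - vec(Yt)/||vec(Yt)||_2 ||_2^2.

    Both fields are normalized before comparison, so C_y measures the
    mismatch in shape and ignores overall amplitude. (Any fixed flattening
    order works here, since the same order is used for both fields.)"""
    a = Y_hat.ravel() / np.linalg.norm(Y_hat)
    b = Y_true.ravel() / np.linalg.norm(Y_true)
    return float(np.sum((a - b) ** 2))

Y = np.array([[1.0, 2.0], [3.0, 4.0]])
err_scaled = reconstruction_error(2.0 * Y, Y)   # scale-invariant: ~0
err_flipped = reconstruction_error(-Y, Y)       # sign flip: maximal, ~4
```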

75 This is defined as C_z = ||z^te − F^te vec(Y^tr)||_2^2, where z^te and F^te are the parts of z and F belonging to the test set. [sent-239, score-0.27]

76 We compared the sparse basis field expansion (S-FLEX) approach using Gaussian basis functions (see section 3. [sent-240, score-0.538]

77 All three competitors correspond to using unit impulses as basis functions while employing different regularizers. [sent-242, score-0.272]

78 … is a Tikhonov regularized least-squares estimate, while MCE is equivalent to applying the lasso to each dimension separately, yielding current vectors that are biased towards being axis-parallel. [sent-245, score-0.105]

79 Interestingly, FVR can be interpreted as a special case of S-FLEX employing the rotation-invariant regularizer R+ (C) to enforce both sparsity and smoothness. [sent-248, score-0.117]

80 Simulated data: We simulated current densities in the following way. [sent-252, score-0.149]

81 Finally, each y_n was shortened by the 90th percentile of the magnitudes of all y_n, leaving only 10% of the current vectors active. [sent-259, score-0.318]

82 We simulated five densities and computed respective pseudo-measurements for 118 channels using the forward model F . [sent-263, score-0.125]

83 Real data: We recorded 113-channel EEG of one healthy subject (male, 26 years) during electrical median nerve stimulation. [sent-265, score-0.121]

84 Artifactual trials as well as artifactual electrodes were excluded from the analysis. [sent-275, score-0.115]

85 Finally, a single measurement vector was constructed by averaging the EEG amplitudes at 21 ms across 1946 trials (50% left hand, 50% right hand). [sent-277, score-0.153]

86 Fig. 3 shows a simulated current density along with reconstructions according to LORETA, MCE, FVR and S-FLEX. [sent-282, score-0.221]

87 From the figure it becomes apparent that LORETA and MCE do not approximate the true current density very well. [sent-283, score-0.105]

88 The MCE solution consists of eight spikes scattered across the whole somatosensory area. [sent-298, score-0.132]

89 This is due to the fact that the parameter of FVR controlling the tradeoff between sparsity and smoothness was fixed here to a value promoting “maximally sparse sources which are still smooth”. [sent-304, score-0.319]

90 Table 1: Ability of LORETA, FVR, S-FLEX and MCE to reconstruct simulated currents (C_y SIM) and generalization performance with respect to the EEG measurements (C_z SIM/REAL). [sent-331, score-0.253]

91 Figure 3: Simulated current density (SIM) and reconstruction according to LORETA, FVR, S-FLEX and MCE. [sent-333, score-0.179]

92 Figure 4: Localization of somatosensory evoked N20 generators according to LORETA, FVR, S-FLEX and MCE. [sent-335, score-0.125]

93 4 Conclusion and Outlook: This paper contributes a novel and general methodology for obtaining sparse decompositions of vector fields. [sent-337, score-0.123]

94 Interestingly, the latter constraint together with sparsity leads to a second-order cone programming formulation. [sent-339, score-0.176]

95 We have focused here on solving the EEG/MEG inverse problem, where our proposed S-FLEX approach outperformed the state-of-the-art in approximating the true shape of the current sources. [sent-340, score-0.226]

96 Combining sparsity and rotational invariance in EEG/MEG source reconstruction. [sent-371, score-0.198]

97 From basis functions to basis fields: vector field approximation from sparse data. [sent-417, score-0.536]

98 Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain. [sent-445, score-0.107]

99 Penalized least squares methods for solving the EEG inverse problem. [sent-486, score-0.158]

100 An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. [sent-495, score-0.102]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('fvr', 0.346), ('loreta', 0.32), ('mce', 0.28), ('vec', 0.252), ('eeg', 0.234), ('basis', 0.184), ('eld', 0.172), ('sources', 0.142), ('rotational', 0.133), ('elds', 0.132), ('inverse', 0.125), ('cl', 0.117), ('rq', 0.116), ('xn', 0.11), ('invariance', 0.109), ('bl', 0.108), ('sim', 0.1), ('cz', 0.093), ('measurements', 0.092), ('somatosensory', 0.085), ('yn', 0.085), ('cm', 0.081), ('simulated', 0.081), ('magnitudes', 0.08), ('rotationally', 0.08), ('cone', 0.075), ('reconstruction', 0.074), ('electrodes', 0.072), ('cy', 0.07), ('ul', 0.07), ('sparse', 0.069), ('current', 0.068), ('coef', 0.065), ('sparsity', 0.065), ('berlin', 0.065), ('expansions', 0.064), ('electromagnetic', 0.064), ('fkz', 0.064), ('socp', 0.064), ('ms', 0.059), ('tr', 0.057), ('expansion', 0.056), ('brain', 0.055), ('vector', 0.054), ('haufe', 0.053), ('rcl', 0.053), ('zte', 0.053), ('regularizer', 0.052), ('wn', 0.052), ('vectorial', 0.047), ('scattered', 0.047), ('tomioka', 0.047), ('wind', 0.047), ('dictionary', 0.047), ('outputs', 0.045), ('estimates', 0.045), ('functions', 0.045), ('forward', 0.044), ('scalar', 0.044), ('germany', 0.044), ('spatial', 0.044), ('bc', 0.044), ('invariant', 0.044), ('electrical', 0.043), ('artifactual', 0.043), ('nerve', 0.043), ('pulses', 0.043), ('competitors', 0.043), ('currents', 0.043), ('ears', 0.043), ('promoting', 0.043), ('meg', 0.043), ('tikhonov', 0.043), ('amplitudes', 0.04), ('generators', 0.04), ('neuroimage', 0.04), ('localization', 0.04), ('nodes', 0.039), ('head', 0.039), ('rp', 0.039), ('lasso', 0.037), ('humans', 0.037), ('cients', 0.037), ('density', 0.037), ('reconstruct', 0.037), ('te', 0.036), ('focal', 0.036), ('programming', 0.036), ('median', 0.035), ('rm', 0.035), ('belonging', 0.035), ('reconstructions', 0.035), ('tomography', 0.035), ('indirect', 0.035), ('regularizers', 0.035), ('location', 0.034), ('bs', 0.033), ('crossvalidation', 0.033), ('cerebral', 0.033), ('solving', 0.033)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 75 nips-2008-Estimating vector fields using sparse basis field expansions

Author: Stefan Haufe, Vadim V. Nikulin, Andreas Ziehe, Klaus-Robert Müller, Guido Nolte

Abstract: We introduce a novel framework for estimating vector fields using sparse basis field expansions (S-FLEX). The notion of basis fields, which are an extension of scalar basis functions, arises naturally in our framework from a rotational invariance requirement. We consider a regression setting as well as inverse problems. All variants discussed lead to second-order cone programming formulations. While our framework is generally applicable to any type of vector field, we focus in this paper on applying it to solving the EEG/MEG inverse problem. It is shown that significantly more precise and neurophysiologically more plausible location and shape estimates of cerebral current sources from EEG/MEG measurements become possible with our method when comparing to the state-of-the-art. 1

2 0.18754764 243 nips-2008-Understanding Brain Connectivity Patterns during Motor Imagery for Brain-Computer Interfacing

Author: Moritz Grosse-wentrup

Abstract: EEG connectivity measures could provide a new type of feature space for inferring a subject’s intention in Brain-Computer Interfaces (BCIs). However, very little is known on EEG connectivity patterns for BCIs. In this study, EEG connectivity during motor imagery (MI) of the left and right is investigated in a broad frequency range across the whole scalp by combining Beamforming with Transfer Entropy and taking into account possible volume conduction effects. Observed connectivity patterns indicate that modulation intentionally induced by MI is strongest in the γ-band, i.e., above 35 Hz. Furthermore, modulation between MI and rest is found to be more pronounced than between MI of different hands. This is in contrast to results on MI obtained with bandpower features, and might provide an explanation for the so far only moderate success of connectivity features in BCIs. It is concluded that future studies on connectivity based BCIs should focus on high frequency bands and consider experimental paradigms that maximally vary cognitive demands between conditions. 1

3 0.16124064 74 nips-2008-Estimating the Location and Orientation of Complex, Correlated Neural Activity using MEG

Author: Julia Owen, Hagai T. Attias, Kensuke Sekihara, Srikantan S. Nagarajan, David P. Wipf

Abstract: The synchronous brain activity measured via MEG (or EEG) can be interpreted as arising from a collection (possibly large) of current dipoles or sources located throughout the cortex. Estimating the number, location, and orientation of these sources remains a challenging task, one that is significantly compounded by the effects of source correlations and the presence of interference from spontaneous brain activity, sensor noise, and other artifacts. This paper derives an empirical Bayesian method for addressing each of these issues in a principled fashion. The resulting algorithm guarantees descent of a cost function uniquely designed to handle unknown orientations and arbitrary correlations. Robust interference suppression is also easily incorporated. In a restricted setting, the proposed method is shown to have theoretically zero bias estimating both the location and orientation of multi-component dipoles even in the presence of correlations, unlike a variety of existing Bayesian localization methods or common signal processing techniques such as beamforming and sLORETA. Empirical results on both simulated and real data sets verify the efficacy of this approach. 1

4 0.12054706 21 nips-2008-An Homotopy Algorithm for the Lasso with Online Observations

Author: Pierre Garrigues, Laurent E. Ghaoui

Abstract: It has been shown that the problem of 1 -penalized least-square regression commonly referred to as the Lasso or Basis Pursuit DeNoising leads to solutions that are sparse and therefore achieves model selection. We propose in this paper RecLasso, an algorithm to solve the Lasso with online (sequential) observations. We introduce an optimization problem that allows us to compute an homotopy from the current solution to the solution after observing a new data point. We compare our method to Lars and Coordinate Descent, and present an application to compressive sensing with sequential observations. Our approach can easily be extended to compute an homotopy from the current solution to the solution that corresponds to removing a data point, which leads to an efficient algorithm for leave-one-out cross-validation. We also propose an algorithm to automatically update the regularization parameter after observing a new data point. 1

5 0.1086015 138 nips-2008-Modeling human function learning with Gaussian processes

Author: Thomas L. Griffiths, Chris Lucas, Joseph Williams, Michael L. Kalish

Abstract: Accounts of how people learn functional relationships between continuous variables have tended to focus on two possibilities: that people are estimating explicit functions, or that they are performing associative learning supported by similarity. We provide a rational analysis of function learning, drawing on work on regression in machine learning and statistics. Using the equivalence of Bayesian linear regression and Gaussian processes, we show that learning explicit rules and using similarity can be seen as two views of one solution to this problem. We use this insight to define a Gaussian process model of human function learning that combines the strengths of both approaches. 1

6 0.10857469 226 nips-2008-Supervised Dictionary Learning

7 0.10220225 180 nips-2008-Playing Pinball with non-invasive BCI

8 0.090752162 118 nips-2008-Learning Transformational Invariants from Natural Movies

9 0.088781692 62 nips-2008-Differentiable Sparse Coding

10 0.083013579 157 nips-2008-Nonrigid Structure from Motion in Trajectory Space

11 0.076799296 30 nips-2008-Bayesian Experimental Design of Magnetic Resonance Imaging Sequences

12 0.075251743 202 nips-2008-Robust Regression and Lasso

13 0.074925169 145 nips-2008-Multi-stage Convex Relaxation for Learning with Sparse Regularization

14 0.072853938 135 nips-2008-Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of \boldmath$\ell 1$-regularized MLE

15 0.071827054 215 nips-2008-Sparse Signal Recovery Using Markov Random Fields

16 0.070270687 143 nips-2008-Multi-label Multiple Kernel Learning

17 0.069767572 99 nips-2008-High-dimensional support union recovery in multivariate regression

18 0.066860959 193 nips-2008-Regularized Co-Clustering with Dual Supervision

19 0.065903746 216 nips-2008-Sparse probabilistic projections

20 0.065499038 179 nips-2008-Phase transitions for high-dimensional joint support recovery


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.211), (1, -0.018), (2, 0.039), (3, 0.092), (4, 0.112), (5, 0.015), (6, -0.128), (7, 0.018), (8, 0.043), (9, 0.127), (10, -0.035), (11, -0.039), (12, -0.066), (13, 0.083), (14, 0.006), (15, -0.007), (16, 0.015), (17, 0.122), (18, -0.073), (19, 0.023), (20, 0.104), (21, 0.019), (22, 0.014), (23, -0.002), (24, -0.028), (25, -0.056), (26, -0.027), (27, 0.027), (28, -0.08), (29, -0.169), (30, 0.067), (31, -0.039), (32, 0.067), (33, -0.014), (34, -0.066), (35, 0.109), (36, 0.099), (37, 0.017), (38, -0.045), (39, -0.072), (40, -0.233), (41, -0.13), (42, -0.082), (43, -0.0), (44, 0.03), (45, 0.051), (46, -0.015), (47, 0.015), (48, -0.088), (49, -0.135)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9426825 75 nips-2008-Estimating vector fields using sparse basis field expansions

Author: Stefan Haufe, Vadim V. Nikulin, Andreas Ziehe, Klaus-Robert Müller, Guido Nolte

Abstract: We introduce a novel framework for estimating vector fields using sparse basis field expansions (S-FLEX). The notion of basis fields, which are an extension of scalar basis functions, arises naturally in our framework from a rotational invariance requirement. We consider a regression setting as well as inverse problems. All variants discussed lead to second-order cone programming formulations. While our framework is generally applicable to any type of vector field, we focus in this paper on applying it to solving the EEG/MEG inverse problem. It is shown that significantly more precise and neurophysiologically more plausible location and shape estimates of cerebral current sources from EEG/MEG measurements become possible with our method when comparing to the state-of-the-art. 1

2 0.77332121 243 nips-2008-Understanding Brain Connectivity Patterns during Motor Imagery for Brain-Computer Interfacing

Author: Moritz Grosse-wentrup

Abstract: EEG connectivity measures could provide a new type of feature space for inferring a subject’s intention in Brain-Computer Interfaces (BCIs). However, very little is known on EEG connectivity patterns for BCIs. In this study, EEG connectivity during motor imagery (MI) of the left and right is investigated in a broad frequency range across the whole scalp by combining Beamforming with Transfer Entropy and taking into account possible volume conduction effects. Observed connectivity patterns indicate that modulation intentionally induced by MI is strongest in the γ-band, i.e., above 35 Hz. Furthermore, modulation between MI and rest is found to be more pronounced than between MI of different hands. This is in contrast to results on MI obtained with bandpower features, and might provide an explanation for the so far only moderate success of connectivity features in BCIs. It is concluded that future studies on connectivity based BCIs should focus on high frequency bands and consider experimental paradigms that maximally vary cognitive demands between conditions. 1

3 0.69290805 180 nips-2008-Playing Pinball with non-invasive BCI

Author: Matthias Krauledat, Konrad Grzeska, Max Sagebaum, Benjamin Blankertz, Carmen Vidaurre, Klaus-Robert Müller, Michael Schröder

Abstract: Compared to invasive Brain-Computer Interfaces (BCI), non-invasive BCI systems based on Electroencephalogram (EEG) signals have not been applied successfully for precisely timed control tasks. In the present study, however, we demonstrate and report on the interaction of subjects with a real device: a pinball machine. Results of this study clearly show that fast and well-timed control well beyond chance level is possible, even though the environment is extremely rich and requires precisely timed and complex predictive behavior. Using machine learning methods for mental state decoding, BCI-based pinball control is possible within the first session without the necessity to employ lengthy subject training. The current study shows clearly that very compelling control with excellent timing and dynamics is possible for a non-invasive BCI. 1

4 0.65453762 74 nips-2008-Estimating the Location and Orientation of Complex, Correlated Neural Activity using MEG

Author: Julia Owen, Hagai T. Attias, Kensuke Sekihara, Srikantan S. Nagarajan, David P. Wipf

Abstract: The synchronous brain activity measured via MEG (or EEG) can be interpreted as arising from a collection (possibly large) of current dipoles or sources located throughout the cortex. Estimating the number, location, and orientation of these sources remains a challenging task, one that is significantly compounded by the effects of source correlations and the presence of interference from spontaneous brain activity, sensor noise, and other artifacts. This paper derives an empirical Bayesian method for addressing each of these issues in a principled fashion. The resulting algorithm guarantees descent of a cost function uniquely designed to handle unknown orientations and arbitrary correlations. Robust interference suppression is also easily incorporated. In a restricted setting, the proposed method is shown to have theoretically zero bias estimating both the location and orientation of multi-component dipoles even in the presence of correlations, unlike a variety of existing Bayesian localization methods or common signal processing techniques such as beamforming and sLORETA. Empirical results on both simulated and real data sets verify the efficacy of this approach. 1

5 0.57311261 30 nips-2008-Bayesian Experimental Design of Magnetic Resonance Imaging Sequences

Author: Hannes Nickisch, Rolf Pohmann, Bernhard Schölkopf, Matthias Seeger

Abstract: We show how improved sequences for magnetic resonance imaging can be found through optimization of Bayesian design scores. Combining approximate Bayesian inference and natural image statistics with high-performance numerical computation, we propose the first Bayesian experimental design framework for this problem of high relevance to clinical and brain research. Our solution requires large-scale approximate inference for dense, non-Gaussian models. We propose a novel scalable variational inference algorithm, and show how powerful methods of numerical mathematics can be modified to compute primitives in our framework. Our approach is evaluated on raw data from a 3T MR scanner. 1

6 0.52127278 21 nips-2008-An Homotopy Algorithm for the Lasso with Online Observations

7 0.46383578 215 nips-2008-Sparse Signal Recovery Using Markov Random Fields

8 0.44202998 68 nips-2008-Efficient Direct Density Ratio Estimation for Non-stationarity Adaptation and Outlier Detection

9 0.42860752 14 nips-2008-Adaptive Forward-Backward Greedy Algorithm for Sparse Learning with Linear Models

10 0.40929824 138 nips-2008-Modeling human function learning with Gaussian processes

11 0.38778526 155 nips-2008-Nonparametric regression and classification with joint sparsity constraints

12 0.37421149 25 nips-2008-An interior-point stochastic approximation method and an L1-regularized delta rule

13 0.36870742 145 nips-2008-Multi-stage Convex Relaxation for Learning with Sparse Regularization

14 0.36688492 62 nips-2008-Differentiable Sparse Coding

15 0.3645044 185 nips-2008-Privacy-preserving logistic regression

16 0.36123559 110 nips-2008-Kernel-ARMA for Hand Tracking and Brain-Machine interfacing During 3D Motor Control

17 0.35974309 149 nips-2008-Near-minimax recursive density estimation on the binary hypercube

18 0.35916907 226 nips-2008-Supervised Dictionary Learning

19 0.35593751 118 nips-2008-Learning Transformational Invariants from Natural Movies

20 0.35132018 106 nips-2008-Inferring rankings under constrained sensing


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(6, 0.122), (7, 0.072), (12, 0.051), (15, 0.018), (28, 0.131), (57, 0.086), (59, 0.027), (63, 0.029), (71, 0.023), (77, 0.043), (81, 0.03), (83, 0.06), (98, 0.225)]
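The page does not state how the lda similarity values below are derived from these topic weights; a common choice is cosine similarity between the papers' sparse topic-weight vectors. The sketch below uses this paper's weights as listed above; the second paper's weights are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse {topicId: weight} vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Topic weights of this paper, as listed on this page.
paper = {6: 0.122, 7: 0.072, 12: 0.051, 15: 0.018, 28: 0.131, 57: 0.086,
         59: 0.027, 63: 0.029, 71: 0.023, 77: 0.043, 81: 0.03, 83: 0.06,
         98: 0.225}
other = {6: 0.09, 28: 0.2, 98: 0.31, 42: 0.05}   # made-up second paper

print(round(cosine(paper, other), 4))
```

Papers sharing mass on the same heavy topics (here 28 and 98) score high even when their remaining topics differ, which matches how the simValue ranking below behaves.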

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.83048475 75 nips-2008-Estimating vector fields using sparse basis field expansions

Author: Stefan Haufe, Vadim V. Nikulin, Andreas Ziehe, Klaus-Robert Müller, Guido Nolte

Abstract: We introduce a novel framework for estimating vector fields using sparse basis field expansions (S-FLEX). The notion of basis fields, which are an extension of scalar basis functions, arises naturally in our framework from a rotational invariance requirement. We consider a regression setting as well as inverse problems. All variants discussed lead to second-order cone programming formulations. While our framework is generally applicable to any type of vector field, we focus in this paper on applying it to solving the EEG/MEG inverse problem. It is shown that significantly more precise and neurophysiologically more plausible location and shape estimates of cerebral current sources from EEG/MEG measurements become possible with our method when compared to the state of the art.
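The sparse expansion at the core of this abstract can be illustrated with a group-sparse (L1/L2) regression: penalizing the Euclidean norm of each basis field's coefficient vector selects few basis fields while treating a field's components jointly (the rotational-invariance idea), and this penalty is exactly what makes the problem a second-order cone program. The toy sketch below solves such an instance by proximal gradient (ISTA) rather than an SOCP solver, so it approximates the formulation and is not the authors' implementation; all sizes and names are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# A 2-D vector field sampled at n points, modeled as a sum over k basis
# fields; the group penalty lam * sum_j ||C_j||_2 on each coefficient pair
# C_j makes the expansion sparse over whole basis fields.
n, k, lam = 40, 10, 0.05
Phi = rng.standard_normal((n, k))            # scalar basis functions at the points
C_true = np.zeros((k, 2))
C_true[1] = [1.0, -0.5]                      # only two basis fields are active
C_true[7] = [0.5, 1.5]
Y = Phi @ C_true                             # observed vector field (n x 2)

step = 1.0 / np.linalg.norm(Phi, 2) ** 2
C = np.zeros((k, 2))
for _ in range(500):
    Z = C - step * (Phi.T @ (Phi @ C - Y))                  # gradient step
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    C = np.maximum(0, 1 - step * lam / np.maximum(norms, 1e-12)) * Z  # group soft-threshold

active = np.flatnonzero(np.linalg.norm(C, axis=1) > 1e-3)
print(sorted(active.tolist()))
```

The group soft-threshold either zeroes a basis field's entire coefficient pair or shrinks it radially, which is why the recovered support consists of whole basis fields rather than individual components.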

2 0.76982528 234 nips-2008-The Infinite Factorial Hidden Markov Model

Author: Jurgen V. Gael, Yee W. Teh, Zoubin Ghahramani

Abstract: We introduce a new probability distribution over a potentially infinite number of binary Markov chains which we call the Markov Indian buffet process. This process extends the IBP to allow temporal dependencies in the hidden variables. We use this stochastic process to build a nonparametric extension of the factorial hidden Markov model. After constructing an inference scheme which combines slice sampling and dynamic programming we demonstrate how the infinite factorial hidden Markov model can be used for blind source separation.

3 0.73686457 87 nips-2008-Fitted Q-iteration by Advantage Weighted Regression

Author: Gerhard Neumann, Jan R. Peters

Abstract: Recently, fitted Q-iteration (FQI) based methods have become more popular due to their increased sample efficiency, a more stable learning process and the higher quality of the resulting policy. However, these methods remain hard to use for continuous action spaces which frequently occur in real-world tasks, e.g., in robotics and other technical applications. The greedy action selection commonly used for the policy improvement step is particularly problematic as it is expensive for continuous actions, can cause an unstable learning process, introduces an optimization bias and results in highly non-smooth policies unsuitable for real-world systems. In this paper, we show that by using a soft-greedy action selection the policy improvement step used in FQI can be simplified to an inexpensive advantage-weighted regression. With this result, we are able to derive a new, computationally efficient FQI algorithm which can even deal with high-dimensional action spaces.
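The advantage-weighted regression step itself is compact: actions are fit by weighted least squares with weights exp(A/β), so high-advantage actions dominate the fit. The sketch below shows only this regression on synthetic data with a linear policy, not the paper's full fitted Q-iteration loop; the advantages and all parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear-Gaussian policy fit by advantage-weighted regression:
#   theta = argmin sum_i w_i * (a_i - theta^T s_i)^2,  w_i = exp(A_i / beta).
n, d, beta = 200, 3, 0.5
S = rng.standard_normal((n, d))              # states (features)
theta_good = np.array([1.0, -2.0, 0.5])
a_good = S @ theta_good                      # "good" action per state
actions = a_good + rng.standard_normal(n)    # noisy behavior actions
adv = -(actions - a_good) ** 2               # synthetic advantage: higher when closer
w = np.exp(adv / beta)

# Weighted least squares: theta = (S^T W S)^{-1} S^T W a
W = np.diag(w)
theta = np.linalg.solve(S.T @ W @ S, S.T @ W @ actions)
print(theta)
```

Because the weights concentrate on the near-optimal actions, the recovered theta lands close to theta_good even though the behavior actions are noisy, which is the sense in which the regression performs policy improvement.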

4 0.68315041 62 nips-2008-Differentiable Sparse Coding

Author: J. A. Bagnell, David M. Bradley

Abstract: Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior such as a Laplacian (L1) that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate that makes it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance.
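The implicit-differentiation trick in this abstract can be demonstrated with any smooth sparsity prior. The sketch below uses a smoothed L1 prior r(x) = Σᵢ√(xᵢ² + ε) rather than the paper's KL-regularization: because r is twice differentiable, the implicit function theorem gives the Jacobian of the MAP code in closed form, dx*/dy = H⁻¹Dᵀ with H = DᵀD + λ·diag(r''(x*)). Everything here is a hypothetical toy instance.

```python
import numpy as np

rng = np.random.default_rng(4)

# MAP sparse coding: x*(y) = argmin 0.5*||y - D x||^2 + lam * sum sqrt(x^2 + eps).
m, k, lam, eps = 6, 4, 0.1, 1e-3
D = rng.standard_normal((m, k))
y = rng.standard_normal(m)
lr = 1.0 / (np.linalg.norm(D, 2) ** 2 + lam / np.sqrt(eps))  # stable step size

def solve_map(y, iters=5000):
    x = np.zeros(k)
    for _ in range(iters):
        x -= lr * (D.T @ (D @ x - y) + lam * x / np.sqrt(x**2 + eps))
    return x

x_star = solve_map(y)
# Implicit function theorem: dx*/dy = H^{-1} D^T with r''(x) = eps / (x^2+eps)^{3/2}.
H = D.T @ D + lam * np.diag(eps / (x_star**2 + eps) ** 1.5)
J = np.linalg.solve(H, D.T)

# Sanity check against a finite difference in y[0].
dy = np.zeros(m); dy[0] = 1e-5
J_fd = (solve_map(y + dy) - x_star) / 1e-5
print(np.max(np.abs(J[:, 0] - J_fd)))
```

With a true L1 prior this Jacobian would not exist at the kinks; the smoothing (ε > 0) is exactly what buys the stable, differentiable MAP estimate the abstract argues for.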

5 0.67530084 226 nips-2008-Supervised Dictionary Learning

Author: Julien Mairal, Jean Ponce, Guillermo Sapiro, Andrew Zisserman, Francis R. Bach

Abstract: It is now well established that sparse signal models are well suited for restoration tasks and can be effectively learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and discriminative class models. The linear version of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.

6 0.67136526 202 nips-2008-Robust Regression and Lasso

7 0.66533631 27 nips-2008-Artificial Olfactory Brain for Mixture Identification

8 0.66334569 79 nips-2008-Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning

9 0.66324586 245 nips-2008-Unlabeled data: Now it helps, now it doesn't

10 0.66205716 194 nips-2008-Regularized Learning with Networks of Features

11 0.66050273 91 nips-2008-Generative and Discriminative Learning with Unknown Labeling Bias

12 0.65976828 143 nips-2008-Multi-label Multiple Kernel Learning

13 0.65607691 133 nips-2008-Mind the Duality Gap: Logarithmic regret algorithms for online optimization

14 0.65564895 149 nips-2008-Near-minimax recursive density estimation on the binary hypercube

15 0.65372026 14 nips-2008-Adaptive Forward-Backward Greedy Algorithm for Sparse Learning with Linear Models

16 0.65347803 116 nips-2008-Learning Hybrid Models for Image Annotation with Partially Labeled Data

17 0.65310663 205 nips-2008-Semi-supervised Learning with Weakly-Related Unlabeled Data : Towards Better Text Categorization

18 0.65146881 42 nips-2008-Cascaded Classification Models: Combining Models for Holistic Scene Understanding

19 0.65074551 164 nips-2008-On the Generalization Ability of Online Strongly Convex Programming Algorithms

20 0.65073752 162 nips-2008-On the Design of Loss Functions for Classification: theory, robustness to outliers, and SavageBoost