nips nips2007 nips2007-122 knowledge-graph by maker-knowledge-mining

122 nips-2007-Locality and low-dimensions in the prediction of natural experience from fMRI


Source: pdf

Author: Francois Meyer, Greg Stephens

Abstract: Functional Magnetic Resonance Imaging (fMRI) provides dynamical access into the complex functioning of the human brain, detailing the hemodynamic activity of thousands of voxels during hundreds of sequential time points. One approach towards illuminating the connection between fMRI and cognitive function is through decoding; how do the time series of voxel activities combine to provide information about internal and external experience? Here we seek models of fMRI decoding which are balanced between the simplicity of their interpretation and the effectiveness of their prediction. We use signals from a subject immersed in virtual reality to compare global and local methods of prediction applying both linear and nonlinear techniques of dimensionality reduction. We find that the prediction of complex stimuli is remarkably low-dimensional, saturating with less than 100 features. In particular, we build effective models based on the decorrelated components of cognitive activity in the classically-defined Brodmann areas. For some of the stimuli, the top predictive areas were surprisingly transparent, including Wernicke’s area for verbal instructions, visual cortex for facial and body features, and visual-temporal regions for velocity. Direct sensory experience resulted in the most robust predictions, with the highest correlation (c ∼ 0.8) between the predicted and experienced time series of verbal instructions. Techniques based on non-linear dimensionality reduction (Laplacian eigenmaps) performed similarly. The interpretability and relative simplicity of our approach provides a conceptual basis upon which to build more sophisticated techniques for fMRI decoding and offers a window into cognitive function during dynamic, natural experience. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Locality and low-dimensions in the prediction of natural experience from fMRI Francois G. [sent-1, score-0.209]

2 Abstract Functional Magnetic Resonance Imaging (fMRI) provides dynamical access into the complex functioning of the human brain, detailing the hemodynamic activity of thousands of voxels during hundreds of sequential time points. [sent-6, score-0.161]

3 One approach towards illuminating the connection between fMRI and cognitive function is through decoding; how do the time series of voxel activities combine to provide information about internal and external experience? [sent-7, score-0.276]

4 We use signals from a subject immersed in virtual reality to compare global and local methods of prediction applying both linear and nonlinear techniques of dimensionality reduction. [sent-9, score-0.455]

5 We find that the prediction of complex stimuli is remarkably low-dimensional, saturating with less than 100 features. [sent-10, score-0.282]

6 In particular, we build effective models based on the decorrelated components of cognitive activity in the classically-defined Brodmann areas. [sent-11, score-0.253]

7 For some of the stimuli, the top predictive areas were surprisingly transparent, including Wernicke’s area for verbal instructions, visual cortex for facial and body features, and visual-temporal regions for velocity. [sent-12, score-0.423]

8 Direct sensory experience resulted in the most robust predictions, with the highest correlation (c ∼ 0. [sent-13, score-0.164]

9 8) between the predicted and experienced time series of verbal instructions. [sent-14, score-0.088]

10 The interpretability and relative simplicity of our approach provides a conceptual basis upon which to build more sophisticated techniques for fMRI decoding and offers a window into cognitive function during dynamic, natural experience. [sent-16, score-0.346]

11 1 Introduction Functional Magnetic Resonance Imaging (fMRI) is a non-invasive imaging technique that can quantify changes in cerebral venous oxygen concentration. [sent-17, score-0.138]

12 Changes in the fMRI signal that occur during brain activation are very small (1-5%) and are often contaminated by noise (created by the imaging system hardware or physiological processes). [sent-18, score-0.329]

13 The Experience Based Cognition competition (EBC) [3] offers an opportunity to study complex responses to natural environments, and to test new ideas and new methods for the analysis of fMRI collected in natural environments. [sent-21, score-0.126]

14 Figure 1: We study the variation of the set of features fk (t), k = 1, · · · , K as a function of the dynamical changes in the fMRI signal X(t) = [x1 (t), · · · , xN (t)] during natural experience. [sent-28, score-0.844]

15 The features represent both external stimuli such as the presence of faces and internal emotional states encountered during the exploration of a virtual urban environment (left and right images). [sent-29, score-0.599]

16 We predict the feature functions fk for t = Tl+1 , · · · T , from the knowledge of the entire fMRI dataset X , and the partial knowledge of fk (t) for t = 1, · · · , Tl . [sent-30, score-0.693]

17 In this work we seek a low dimensional representation of the entire fMRI dataset that provides a new set of “voxel-free” coordinates to study cognitive and sensory features. [sent-34, score-0.268]

18 We denote the three-dimensional fMRI volume, composed of a total of N voxels, by X(t) = [x1 (t), · · · , xN (t)]. [sent-35, score-0.136]

19 The dataset forms an N × T matrix X whose entry (n, j) is xn (tj ): each row n represents the time series xn generated from voxel n, and each column j represents a scan acquired at time tj . [sent-46, score-0.261]

20 We call the set of features to be predicted fk , k = 1, · · · , K. [sent-47, score-0.376]

21 We are interested in studying the variation of the set of features fk (t), k = 1, · · · , K describing the subject experience as a function of the dynamical changes of the brain, as measured by X(t). [sent-48, score-0.555]

22 Formally, we need to build predictions of fk (t) for t = Tl+1 , · · · T , from the knowledge of the entire fMRI dataset X , and the partial knowledge of fk (t) for the training time samples t = 1, · · · , Tl (see Fig. [sent-49, score-0.77]

23 Figure 2: Low-dimensional parametrization of the set of “brain states”. [sent-51, score-0.268]

24 The parametrization is constructed from the samples provided by the fMRI data at different times, and in different states. [sent-52, score-0.1]

25 2 A voxel-free parametrization of brain states We use here the global information provided by the dynamical evolution of X(t) over time, both during the training times and the test times. [sent-53, score-0.58]

26 We would like to effectively replace each fMRI dataset X(t) by a small set of features that facilitates the identification of the brain states and makes the prediction of the features easier. [sent-54, score-0.48]

27 Formally, our goal is to construct a map φ from the voxel space to low dimensional space. [sent-55, score-0.094]

28 As t varies over the training and the test sets, we hope that we explore most of the possible brain configurations that are useful for predicting the features. [sent-57, score-0.277]

29 The map φ provides a parametrization of the brain states. [sent-58, score-0.343]

30 The set D, represented in Fig. 2 as a smooth surface, is parametrized by y1 , · · · , yL , which characterize the brain dynamics. [sent-61, score-0.243]

31 Note that time does not play any role on D, and neighboring points on D correspond to similar brain states. [sent-63, score-0.243]

32 Equipped with this re-parametrization of the dataset X , the goal is to learn the evolution of the feature time series as a function of the new coordinates [y1 (t), · · · , yL (t)]T . [sent-64, score-0.165]

33 Each feature function is an implicit function of the brain state measured by [y1 (t), · · · , yL (t)]. [sent-65, score-0.243]

34 For a given feature fk , the training data provide us with samples of fk at certain locations in D. [sent-66, score-0.68]

35 The map φ is built by globally computing a new parametrization of the set {X(1), · · · , X(T )}. [sent-67, score-0.143]

36 1 The graph of brain states We represent the fMRI dataset for the training times and test times by a graph. [sent-73, score-0.405]

37 Each vertex i corresponds to a time sample ti , and we compute the distance between two vertices i and j by measuring a distance between X(ti ) and X(tj ). [sent-74, score-0.141]

38 Global changes in the signal due to residual head motion, or global blood flow changes, were removed by computing a principal components analysis (PCA) of the dataset X and removing a small number of components. [sent-75, score-0.337]
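As an illustration, this preprocessing step can be sketched in a few lines of numpy; the function name and the choice of how many components to remove are ours, not the paper's:

```python
import numpy as np

def remove_global_components(X, n_remove=3):
    """Illustrative preprocessing: remove the leading principal components of
    the N x T dataset X, approximating the removal of global signal changes
    (residual head motion, blood flow). The number removed is an assumption."""
    Xc = X - X.mean(axis=1, keepdims=True)          # center each voxel time series
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    U_r = U[:, :n_remove]                           # leading spatial components
    return Xc - U_r @ (U_r.T @ Xc)                  # project them out
```

The projection leaves the residual exactly orthogonal to the removed spatial components.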

39 This distance compares all the voxels (white and gray matter, as well as CSF) inside the brain. [sent-77, score-0.113]

40 The Euclidean distance used to construct the graph is only useful locally: we can use it to compare brain states that are very similar, but it is unfortunately very sensitive to short-circuits created by the noise in the data. [sent-80, score-0.381]

41 The commute time can be conveniently computed from the eigenfunctions φ1 , · · · , φN of N = D^{1/2} P D^{−1/2}, with the eigenvalues −1 ≤ λN ≤ · · · ≤ λ2 < λ1 = 1. [sent-83, score-0.152]

42 As proposed in [4, 5, 6], we define an embedding i → Ik (i) = φk (i) / (√(1 − λk ) √di ), k = 2, · · · , N. (4) Because −1 ≤ λN ≤ · · · ≤ λ2 < λ1 = 1, we have 1/√(1 − λ2 ) > 1/√(1 − λ3 ) > · · · > 1/√(1 − λN ). [sent-85, score-0.189]

43 We can therefore neglect φk (j)/√(1 − λk ) for large k, and reduce the dimensionality of the embedding by using only the first K coordinates in (4). [sent-86, score-0.262]

44 The algorithm for the construction of the embedding is summarized in Fig. 3. [sent-89, score-0.192]

45 Algorithm 1: Construction of the embedding Input: – X(t), t = 1, · · · , T , K: number of eigenfunctions. [sent-91, score-0.161]

46 – find the first K eigenfunctions, φk , of N. Output: for ti = 1 : T , the new co-ordinates of X(ti ) are yk (ti ) = φk (i) / (√πi √(1 − λk )), k = 2, · · · , K + 1. Figure 3: Construction of the embedding. A parameter of the embedding (Fig. 3) is the number K of eigenfunctions. [sent-94, score-0.403]
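A minimal numpy sketch of this embedding construction follows. The Gaussian kernel and its median-distance width are our illustrative assumptions, since the excerpt specifies only that the graph uses local Euclidean distances between scans; degrees stand in for the stationary weights up to a constant factor:

```python
import numpy as np

def embed_brain_states(X, K=5, sigma=None):
    """Illustrative sketch: embed the T scans (columns of X) via the
    eigenvectors of the symmetrized affinity Ns = D^{-1/2} W D^{-1/2}."""
    # pairwise squared Euclidean distances between scans X(t_i), X(t_j)
    sq = np.sum(X ** 2, axis=0)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X.T @ X, 0.0)
    if sigma is None:
        sigma = np.sqrt(np.median(d2[d2 > 0]))     # heuristic kernel width (assumption)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    d = W.sum(axis=1)                              # vertex degrees
    Ns = W / np.sqrt(np.outer(d, d))               # D^{-1/2} W D^{-1/2}
    lam, phi = np.linalg.eigh(Ns)                  # eigh returns ascending eigenvalues
    lam, phi = lam[::-1], phi[:, ::-1]             # so that lambda_1 = 1 comes first
    # drop the trivial first eigenvector; scale by 1/(sqrt(d_i) sqrt(1 - lambda_k))
    Y = phi[:, 1:K + 1] / (np.sqrt(d)[:, None] * np.sqrt(1.0 - lam[1:K + 1])[None, :])
    return Y                                       # shape (T, K): coordinates y_k(t_i)
```

The first eigenvector of Ns has eigenvalue 1 and carries no information, which is why the coordinates start at k = 2.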

47 We expect that for small values of K the embedding will not describe the data with enough precision, and the prediction will be inaccurate. [sent-97, score-0.245]

48 If K is too large, some of the new coordinates will be describing the noise in the dataset, and the algorithm will overfit the training data. [sent-98, score-0.105]

49 The quality of the prediction for the features faces, instructions and velocity is plotted against K. [sent-101, score-0.217]

50 Instructions elicit a strong response in the auditory cortex that can be decoded with as few as 20 coordinates. [sent-102, score-0.124]

51 As expected the performance eventually drops when additional coordinates are used to describe variability that is not related to the features to be decoded. [sent-104, score-0.124]

52 3 Semi-supervised learning of the features The problem of predicting a feature fk at an unknown time tu is formulated as a kernel ridge regression problem. [sent-107, score-0.439]

53 The training set {fk (t) for t = 1, · · · , Tl } is used to estimate the optimal choice of weights in the following model, f̂(tu ) = Σ_{t=1}^{Tl} α̂(t) K(y(tu ), y(t)), where K is a kernel and tu is a time point where we need to predict. [sent-108, score-0.097]

54 4 Results We compared the nonlinear embedding approach (referred to as global Laplacian) to dimension reduction obtained with a PCA of X . [sent-110, score-0.328]

55 Here the principal components are principal volumes, and for each time t we can expand X(t) onto the principal components. [sent-111, score-0.217]

56 We use fk (t) in a subset to predict fk (t) in the other subset. [sent-113, score-0.646]

57 In order to quantify the stability of the prediction we randomly selected 85 % of the training set (first subset), and predicted 85 % of the testing set (other subset). [sent-114, score-0.118]

58 The performance was quantified with the normalized correlation between the model prediction and the real value of fk , r = ⟨δfk^est (t) δfk (t)⟩ / √(⟨(δfk^est )2 ⟩ ⟨(δfk )2 ⟩), (5) where δfk = fk (t) − ⟨fk ⟩. [sent-117, score-1.215]
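The normalized correlation of Eq. (5) translates directly into numpy:

```python
import numpy as np

def normalized_correlation(f_est, f_true):
    """Normalized correlation r of Eq. (5): correlate the mean-subtracted
    prediction with the mean-subtracted true feature time series."""
    d_est = f_est - f_est.mean()
    d_true = f_true - f_true.mean()
    return (d_est @ d_true) / np.sqrt((d_est @ d_est) * (d_true @ d_true))
```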

59 The approach based on the nonlinear embedding yields very stable results, with low variance. [sent-121, score-0.225]

60 For both global methods the optimal performance is reached with less than 50 coordinates. [sent-122, score-0.103]

61 For most features, the nonlinear embedding performed better than global PCA. [sent-125, score-0.328]

62 3 From global to local While models based on global features leverage predictive components from across the brain, cognitive function is often localized within specific regions. [sent-126, score-0.453]

63 The Brodmann areas were defined almost a century ago (see e. [sent-128, score-0.128]

64 While the areas are characterized structurally, many also have distinct functional roles, and we use these roles to provide useful interpretations of our predictive models. [sent-130, score-0.243]

65 Though the partitioning of cortical regions remains an open and challenging problem, the Brodmann areas represent a transparent compromise between dimensionality, function and structure. [sent-131, score-0.229]

66 Using data supplied by the competition, we warp each brain into standard Talairach coordinates and locate the Brodmann area corresponding to each voxel. [sent-132, score-0.383]

67 Within each Brodmann region, differing in size from tens to thousands of elements, we build the covariance matrix of voxel time series using all three virtual reality episodes. [sent-133, score-0.326]

68 We then project the voxel time series onto the eigenvectors of the covariance matrix (principal components) and build a simple, linear stimulus decoding model using the top n modes ranked by their eigenvalues, fk^est (t) = Σ_{i=1}^{n} wi^k mi^k (t). (6) [sent-134, score-0.875]
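A sketch of this per-area decoder; for illustration the model is fit and reconstructed on the same data, and the variable names are ours:

```python
import numpy as np

def local_linear_decoder(X_area, S, n_modes=10):
    """Sketch of the per-area linear model: project the voxel time series of
    one Brodmann area onto its top principal modes m_k(t) and decode the
    stimulus S linearly. With orthonormal (decorrelated) modes the
    least-squares weights reduce to w_k = sum_t S(t) m_k(t), as in the text."""
    Xc = X_area - X_area.mean(axis=1, keepdims=True)
    M = np.linalg.svd(Xc, full_matrices=False)[2][:n_modes]   # modes m_k(t)
    w = M @ S                                                 # w_k = <S, m_k>
    return w @ M                                              # decoded stimulus estimate
```

Because the rows of M are orthonormal over time, the returned estimate is simply the projection of S onto the span of the area's top modes.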

69 The weights are chosen to minimize the RMS error on the training set and have a particularly simple form here as the modes are decorrelated, wi^k = ⟨S(t) mi^k (t)⟩. [sent-136, score-0.168]

70 Figure 4: prediction quality 〈r〉 for the features faces, instructions and velocity as a function of the number of modes, for the global Laplacian and for global eigenmodes. [sent-140, score-1.167]

71 (a) nonlinear embedding, (b) global principal components, (c) local (Brodmann area) principal components. [sent-142, score-0.325]

72 In all cases we find that the prediction is remarkably low-dimensional, saturating with less than 100 features. [sent-143, score-0.167]

73 (d) stability and interpretability of the optimal Brodmann areas used for decoding the presence of faces. [sent-144, score-0.313]

74 All three areas are functionally associated with visual processing. [sent-145, score-0.158]

75 Brodmann area 22 (Wernicke’s area) is the best predictor of instructions (not shown). [sent-146, score-0.252]

76 The connections between anatomy, function and prediction add an important measure of interpretability to our decoding models. [sent-147, score-0.24]

77 Here S(t) is the real stimulus averaged over the two virtual reality episodes, and we use the region with the lowest training error to make the prediction. [sent-148, score-0.283]

78 In principle, we could use a large number of modes to make a prediction with n limited only by the number of training samples. [sent-149, score-0.205]

79 In practice the predictive power of our linear model saturates for a remarkably low number of modes in each region. [sent-150, score-0.155]

80 In Fig 4(c) we show the performance of the model as a function of the number of local modes for three stimuli that are predicted rather well (faces, instructions and velocity). [sent-151, score-0.417]

81 For many of the well-predicted stimuli, the best Brodmann areas were also stable across subjects and episodes, offering important interpretability. [sent-152, score-0.206]

82 For example, in the prediction of instructions (which the subjects received through headphones), the top region was Brodmann Area 22, Wernicke’s area, which has long been associated with the processing of human language. [sent-153, score-0.341]

83 For the prediction of the face stimulus the best region was usually visual cortex (Brodmann Areas 17 and 19) and for the prediction of velocity it was Brodmann Area 7, known to be important for the coordination of visual and motor activity. [sent-154, score-0.502]

84 Using modes derived from Laplacian eigenmaps we were also able to predict an emotional state, the self-reporting of fear and anxiety. [sent-155, score-0.175]

85 Interestingly, in this case the best predictions came from higher cognitive areas in frontal cortex, Brodmann Area 11. [sent-156, score-0.236]

86 While the above discussion highlights the usefulness of classical anatomical location, many aspects of cognitive experience are not likely to be so simple. [sent-157, score-0.194]

87 Local decoders do well on stimuli related to objects while nonlinear global methods better capture stimuli related to emotion. [sent-160, score-0.397]

88 It is therefore natural to look for ways of combining the intuition derived from a single classical location with more global methods that are likely to do better in prediction. [sent-161, score-0.103]

89 As a step in this direction, we modify our model to include multiple Brodmann areas, fk^est (t) = Σ_{l∈A} Σ_{i=1}^{n} wi^l mi^l (t), (7) where A represents a collection of areas. [sent-162, score-0.561]

90 To make a prediction using the modified model we find the top three Brodmann areas as before (ranked by their training correlation with the stimulus) and then incorporate all of the modes in these areas (nA in total) in the linear model of Eq 7. [sent-163, score-0.497]
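A sketch of this combined model, ranking areas by training correlation and then pooling their modes; the helper names and the synthetic setup are illustrative:

```python
import numpy as np

def multi_area_decoder(areas, S, n_modes=5, top=3):
    """Sketch of Eq. (7): rank candidate areas by the training correlation of
    their single-area decoding, then pool the modes of the top areas into one
    least-squares model. 'areas' is a list of (voxels x T) arrays."""
    def top_modes(Xa):
        Xc = Xa - Xa.mean(axis=1, keepdims=True)
        return np.linalg.svd(Xc, full_matrices=False)[2][:n_modes]
    scored = []
    for Xa in areas:
        M = top_modes(Xa)
        est = (M @ S) @ M                          # single-area training fit
        scored.append((np.corrcoef(est, S)[0, 1], M))
    scored.sort(key=lambda p: -p[0])               # best-correlated areas first
    M_all = np.vstack([M for _, M in scored[:top]])    # n_A pooled modes
    w = np.linalg.lstsq(M_all.T, S, rcond=None)[0]     # joint weights
    return w @ M_all
```

Because modes from different areas are generally not mutually decorrelated, the joint weights are obtained by least squares rather than by the single-area closed form.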

91 The combined model leverages both the interpretive power of single areas and some of the interactions between them. [sent-165, score-0.128]

92 For ease of comparison, we also show the best global results (both nonlinear Laplacian and global principal components). [sent-168, score-0.333]

93 For many (but not all) of the stimuli, the local, low-dimensional linear model is significantly better than both linear and nonlinear global methods. [sent-169, score-0.167]

94 4 Discussion Incorporating the knowledge of functional, cortical regions, we used fMRI to build low-dimensional models of natural experience that performed surprisingly well at predicting many of the complex stimuli in the EBC competition. [sent-170, score-0.283]

95 In addition, the regional basis of our models allows for transparent cognitive interpretation, such as the emergence of Wernicke’s area for the prediction of auditory instructions in the virtual environment. [sent-171, score-0.634]

96 Other well-predicted experiences include the presence of body parts and faces, both of which were decoded by areas in visual cortex. [sent-172, score-0.276]

97 In future work, it will be interesting to examine whether there is a well-defined cognitive difference between stimuli that can be decoded with local brain function and those that appear to require more global techniques. [sent-173, score-0.658]

98 We also learned in this work that nonlinear methods for embedding datasets, inspired by manifold learning methods [4, 5, 6], outperform linear techniques in their ability to capture the complex dynamics of fMRI. [sent-174, score-0.225]

99 Finally, our particular use of Brodmann areas and linear methods represent only a first step towards combining prior knowledge of broad regional brain function with the construction of models for the decoding of natural experience. [sent-175, score-0.564]

100 Extrinsic and intrinsic systems in the posterior cortex of the human brain revealed during natural sensory stimulation. [sent-187, score-0.391]


similar papers computed by tfidf model


similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 122 nips-2007-Locality and low-dimensions in the prediction of natural experience from fMRI


2 0.41535515 154 nips-2007-Predicting Brain States from fMRI Data: Incremental Functional Principal Component Regression

Author: Sennay Ghebreab, Arnold Smeulders, Pieter Adriaans

Abstract: We propose a method for reconstruction of human brain states directly from functional neuroimaging data. The method extends the traditional multivariate regression analysis of discretized fMRI data to the domain of stochastic functional measurements, facilitating evaluation of brain responses to complex stimuli and boosting the power of functional imaging. The method searches for sets of voxel time courses that optimize a multivariate functional linear model in terms of R2 statistic. Population based incremental learning is used to identify spatially distributed brain responses to complex stimuli without attempting to localize function first. Variation in hemodynamic lag across brain areas and among subjects is taken into account by voxel-wise non-linear registration of stimulus pattern to fMRI data. Application of the method on an international test benchmark for prediction of naturalistic stimuli from new and unknown fMRI data shows that the method successfully uncovers spatially distributed parts of the brain that are highly predictive of a given stimulus. 1

3 0.12890594 59 nips-2007-Continuous Time Particle Filtering for fMRI

Author: Lawrence Murray, Amos J. Storkey

Abstract: We construct a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed Blood Oxygen Level Dependent (BOLD) signal in Functional Magnetic Resonance Imaging (fMRI). The model poses a difficult parameter estimation problem, both theoretically due to the nonlinearity and divergence of the differential system, and computationally due to its time and space complexity. We adapt a particle filter and smoother to the task, and discuss some of the practical approaches used to tackle the difficulties, including use of sparse matrices and parallelisation. Results demonstrate the tractability of the approach in its application to an effective connectivity study. 1

4 0.11594935 115 nips-2007-Learning the 2-D Topology of Images

Author: Nicolas L. Roux, Yoshua Bengio, Pascal Lamblin, Marc Joliveau, Balázs Kégl

Abstract: We study the following question: is the two-dimensional structure of images a very strong prior or is it something that can be learned with a few examples of natural images? If someone gave us a learning task involving images for which the two-dimensional topology of pixels was not known, could we discover it automatically and exploit it? For example suppose that the pixels had been permuted in a fixed but unknown way, could we recover the relative two-dimensional location of pixels on images? The surprising result presented here is that not only the answer is yes, but that about as few as a thousand images are enough to approximately recover the relative locations of about a thousand pixels. This is achieved using a manifold learning algorithm applied to pixels associated with a measure of distributional similarity between pixel intensities. We compare different topologyextraction approaches and show how having the two-dimensional topology can be exploited.

5 0.11460064 127 nips-2007-Measuring Neural Synchrony by Message Passing

Author: Justin Dauwels, François Vialatte, Tomasz Rutkowski, Andrzej S. Cichocki

Abstract: A novel approach to measure the interdependence of two time series is proposed, referred to as “stochastic event synchrony” (SES); it quantifies the alignment of two point processes by means of the following parameters: time delay, variance of the timing jitter, fraction of “spurious” events, and average similarity of events. SES may be applied to generic one-dimensional and multi-dimensional point processes, however, the paper mainly focusses on point processes in time-frequency domain. The average event similarity is in that case described by two parameters: the average frequency offset between events in the time-frequency plane, and the variance of the frequency offset (“frequency jitter”); SES then consists of five parameters in total. Those parameters quantify the synchrony of oscillatory events, and hence, they provide an alternative to existing synchrony measures that quantify amplitude or phase synchrony. The pairwise alignment of point processes is cast as a statistical inference problem, which is solved by applying the maxproduct algorithm on a graphical model. The SES parameters are determined from the resulting pairwise alignment by maximum a posteriori (MAP) estimation. The proposed interdependence measure is applied to the problem of detecting anomalies in EEG synchrony of Mild Cognitive Impairment (MCI) patients; the results indicate that SES significantly improves the sensitivity of EEG in detecting MCI.

6 0.10568865 58 nips-2007-Consistent Minimization of Clustering Objective Functions

7 0.082145005 74 nips-2007-EEG-Based Brain-Computer Interaction: Improved Accuracy by Automatic Single-Trial Error Detection

8 0.070514858 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

9 0.068629473 106 nips-2007-Invariant Common Spatial Patterns: Alleviating Nonstationarities in Brain-Computer Interfacing

10 0.066908158 155 nips-2007-Predicting human gaze using low-level saliency combined with face detection

11 0.065964043 164 nips-2007-Receptive Fields without Spike-Triggering

12 0.065110281 182 nips-2007-Sparse deep belief net model for visual area V2

13 0.064559683 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

14 0.063305721 49 nips-2007-Colored Maximum Variance Unfolding

15 0.059505969 173 nips-2007-Second Order Bilinear Discriminant Analysis for single trial EEG analysis

16 0.058358029 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

17 0.057408869 125 nips-2007-Markov Chain Monte Carlo with People

18 0.056162868 93 nips-2007-GRIFT: A graphical model for inferring visual classification features from human data

19 0.054736279 103 nips-2007-Inferring Elapsed Time from Stochastic Neural Processes

20 0.054021597 44 nips-2007-Catching Up Faster in Bayesian Model Selection and Model Averaging


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.186), (1, 0.074), (2, 0.082), (3, -0.036), (4, -0.027), (5, 0.169), (6, 0.002), (7, 0.19), (8, -0.155), (9, -0.088), (10, -0.106), (11, 0.032), (12, 0.004), (13, -0.013), (14, 0.249), (15, 0.055), (16, -0.038), (17, -0.152), (18, 0.054), (19, 0.258), (20, 0.034), (21, 0.004), (22, -0.151), (23, -0.209), (24, -0.139), (25, 0.204), (26, -0.126), (27, 0.044), (28, -0.279), (29, -0.012), (30, -0.132), (31, 0.006), (32, 0.043), (33, 0.119), (34, -0.118), (35, 0.046), (36, -0.108), (37, 0.035), (38, -0.075), (39, -0.003), (40, 0.013), (41, -0.027), (42, -0.006), (43, -0.048), (44, 0.016), (45, -0.021), (46, 0.035), (47, -0.028), (48, -0.066), (49, 0.061)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96591419 122 nips-2007-Locality and low-dimensions in the prediction of natural experience from fMRI

Author: Francois Meyer, Greg Stephens

Abstract: Functional Magnetic Resonance Imaging (fMRI) provides dynamical access into the complex functioning of the human brain, detailing the hemodynamic activity of thousands of voxels during hundreds of sequential time points. One approach towards illuminating the connection between fMRI and cognitive function is through decoding; how do the time series of voxel activities combine to provide information about internal and external experience? Here we seek models of fMRI decoding which are balanced between the simplicity of their interpretation and the effectiveness of their prediction. We use signals from a subject immersed in virtual reality to compare global and local methods of prediction applying both linear and nonlinear techniques of dimensionality reduction. We find that the prediction of complex stimuli is remarkably low-dimensional, saturating with less than 100 features. In particular, we build effective models based on the decorrelated components of cognitive activity in the classically-defined Brodmann areas. For some of the stimuli, the top predictive areas were surprisingly transparent, including Wernicke’s area for verbal instructions, visual cortex for facial and body features, and visual-temporal regions for velocity. Direct sensory experience resulted in the most robust predictions, with the highest correlation (c ∼ 0.8) between the predicted and experienced time series of verbal instructions. Techniques based on non-linear dimensionality reduction (Laplacian eigenmaps) performed similarly. The interpretability and relative simplicity of our approach provides a conceptual basis upon which to build more sophisticated techniques for fMRI decoding and offers a window into cognitive function during dynamic, natural experience. 1

2 0.92395681 154 nips-2007-Predicting Brain States from fMRI Data: Incremental Functional Principal Component Regression

Author: Sennay Ghebreab, Arnold Smeulders, Pieter Adriaans

Abstract: We propose a method for reconstruction of human brain states directly from functional neuroimaging data. The method extends the traditional multivariate regression analysis of discretized fMRI data to the domain of stochastic functional measurements, facilitating evaluation of brain responses to complex stimuli and boosting the power of functional imaging. The method searches for sets of voxel time courses that optimize a multivariate functional linear model in terms of the R2 statistic. Population based incremental learning is used to identify spatially distributed brain responses to complex stimuli without attempting to localize function first. Variation in hemodynamic lag across brain areas and among subjects is taken into account by voxel-wise non-linear registration of stimulus pattern to fMRI data. Application of the method on an international test benchmark for prediction of naturalistic stimuli from new and unknown fMRI data shows that the method successfully uncovers spatially distributed parts of the brain that are highly predictive of a given stimulus. 1

3 0.53159499 59 nips-2007-Continuous Time Particle Filtering for fMRI

Author: Lawrence Murray, Amos J. Storkey

Abstract: We construct a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed Blood Oxygen Level Dependent (BOLD) signal in Functional Magnetic Resonance Imaging (fMRI). The model poses a difficult parameter estimation problem, both theoretically due to the nonlinearity and divergence of the differential system, and computationally due to its time and space complexity. We adapt a particle filter and smoother to the task, and discuss some of the practical approaches used to tackle the difficulties, including use of sparse matrices and parallelisation. Results demonstrate the tractability of the approach in its application to an effective connectivity study. 1
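The particle-filtering machinery referenced above can be sketched with a generic bootstrap filter on a toy scalar nonlinear state-space model; this is an illustration of the filtering technique only, not the paper's multivariate hemodynamic differential system, and all model constants here are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D nonlinear state-space model with Gaussian observation noise.
T, N = 100, 500                      # time steps, particle count
x = np.zeros(T)                      # true latent state
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + 0.3 * np.sin(x[t - 1]) + 0.2 * rng.standard_normal()
y = x + 0.3 * rng.standard_normal(T)  # noisy observations

particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    # Propagate every particle through the transition model
    particles = 0.9 * particles + 0.3 * np.sin(particles) + 0.2 * rng.standard_normal(N)
    # Weight by the Gaussian observation likelihood, then resample
    w = np.exp(-0.5 * ((y[t] - particles) / 0.3) ** 2)
    w /= w.sum()
    est[t] = np.sum(w * particles)   # posterior-mean state estimate
    particles = rng.choice(particles, size=N, p=w)

rmse = np.sqrt(np.mean((est - x) ** 2))
print("RMSE:", round(float(rmse), 3))
```

The resampling step is what keeps the particle set concentrated on probable states; the practical difficulties the abstract mentions (divergent dynamics, time and space complexity) arise when this loop is scaled to stiff multivariate differential models.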

4 0.32177421 127 nips-2007-Measuring Neural Synchrony by Message Passing

Author: Justin Dauwels, François Vialatte, Tomasz Rutkowski, Andrzej S. Cichocki

Abstract: A novel approach to measure the interdependence of two time series is proposed, referred to as “stochastic event synchrony” (SES); it quantifies the alignment of two point processes by means of the following parameters: time delay, variance of the timing jitter, fraction of “spurious” events, and average similarity of events. SES may be applied to generic one-dimensional and multi-dimensional point processes; however, the paper mainly focuses on point processes in the time-frequency domain. The average event similarity is in that case described by two parameters: the average frequency offset between events in the time-frequency plane, and the variance of the frequency offset (“frequency jitter”); SES then consists of five parameters in total. Those parameters quantify the synchrony of oscillatory events, and hence, they provide an alternative to existing synchrony measures that quantify amplitude or phase synchrony. The pairwise alignment of point processes is cast as a statistical inference problem, which is solved by applying the max-product algorithm on a graphical model. The SES parameters are determined from the resulting pairwise alignment by maximum a posteriori (MAP) estimation. The proposed interdependence measure is applied to the problem of detecting anomalies in EEG synchrony of Mild Cognitive Impairment (MCI) patients; the results indicate that SES significantly improves the sensitivity of EEG in detecting MCI.

5 0.26456201 89 nips-2007-Feature Selection Methods for Improving Protein Structure Prediction with Rosetta

Author: Ben Blum, Rhiju Das, Philip Bradley, David Baker, Michael I. Jordan, David Tax

Abstract: Rosetta is one of the leading algorithms for protein structure prediction today. It is a Monte Carlo energy minimization method requiring many random restarts to find structures with low energy. In this paper we present a resampling technique for structure prediction of small alpha/beta proteins using Rosetta. From an initial round of Rosetta sampling, we learn properties of the energy landscape that guide a subsequent round of sampling toward lower-energy structures. Rather than attempt to fit the full energy landscape, we use feature selection methods—both L1-regularized linear regression and decision trees—to identify structural features that give rise to low energy. We then enrich these structural features in the second sampling round. Results are presented across a benchmark set of nine small alpha/beta proteins demonstrating that our methods seldom impair, and frequently improve, Rosetta’s performance. 1

6 0.26126423 58 nips-2007-Consistent Minimization of Clustering Objective Functions

7 0.254767 93 nips-2007-GRIFT: A graphical model for inferring visual classification features from human data

8 0.24721159 115 nips-2007-Learning the 2-D Topology of Images

9 0.23946923 74 nips-2007-EEG-Based Brain-Computer Interaction: Improved Accuracy by Automatic Single-Trial Error Detection

10 0.23822737 164 nips-2007-Receptive Fields without Spike-Triggering

11 0.23347764 26 nips-2007-An online Hebbian learning rule that performs Independent Component Analysis

12 0.23290564 106 nips-2007-Invariant Common Spatial Patterns: Alleviating Nonstationarities in Brain-Computer Interfacing

13 0.22211528 188 nips-2007-Subspace-Based Face Recognition in Analog VLSI

14 0.2194175 158 nips-2007-Probabilistic Matrix Factorization

15 0.21706666 171 nips-2007-Scan Strategies for Meteorological Radars

16 0.21198955 49 nips-2007-Colored Maximum Variance Unfolding

17 0.21141392 173 nips-2007-Second Order Bilinear Discriminant Analysis for single trial EEG analysis

18 0.20913926 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

19 0.20469634 109 nips-2007-Kernels on Attributed Pointsets with Applications

20 0.20346887 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.045), (13, 0.042), (16, 0.038), (18, 0.03), (19, 0.021), (20, 0.264), (21, 0.068), (31, 0.04), (34, 0.033), (35, 0.025), (47, 0.085), (49, 0.026), (50, 0.01), (83, 0.103), (87, 0.032), (90, 0.072)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77071178 122 nips-2007-Locality and low-dimensions in the prediction of natural experience from fMRI


2 0.6123358 63 nips-2007-Convex Relaxations of Latent Variable Training

Author: Yuhong Guo, Dale Schuurmans

Abstract: We investigate a new, convex relaxation of an expectation-maximization (EM) variant that approximates a standard objective while eliminating local minima. First, a cautionary result is presented, showing that any convex relaxation of EM over hidden variables must give trivial results if any dependence on the missing values is retained. Although this appears to be a strong negative outcome, we then demonstrate how the problem can be bypassed by using equivalence relations instead of value assignments over hidden variables. In particular, we develop new algorithms for estimating exponential conditional models that only require equivalence relation information over the variable values. This reformulation leads to an exact expression for EM variants in a wide range of problems. We then develop a semidefinite relaxation that yields global training by eliminating local minima. 1

3 0.55025542 93 nips-2007-GRIFT: A graphical model for inferring visual classification features from human data

Author: Michael Ross, Andrew Cohen

Abstract: This paper describes a new model for human visual classification that enables the recovery of image features that explain human subjects’ performance on different visual classification tasks. Unlike previous methods, this algorithm does not model their performance with a single linear classifier operating on raw image pixels. Instead, it represents classification as the combination of multiple feature detectors. This approach extracts more information about human visual classification than previous methods and provides a foundation for further exploration. 1

4 0.54507744 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

Author: Matthias Bethge, Philipp Berens

Abstract: Maximum entropy analysis of binary variables provides an elegant way for studying the role of pairwise correlations in neural populations. Unfortunately, these approaches suffer from their poor scalability to high dimensions. In sensory coding, however, high-dimensional data is ubiquitous. Here, we introduce a new approach using a near-maximum entropy model, that makes this type of analysis feasible for very high-dimensional data—the model parameters can be derived in closed form and sampling is easy. Therefore, our NearMaxEnt approach can serve as a tool for testing predictions from a pairwise maximum entropy model not only for low-dimensional marginals, but also for high dimensional measurements of more than a thousand units. We demonstrate its usefulness by studying natural images with dichotomized pixel intensities. Our results indicate that the statistics of such higher-dimensional measurements exhibit additional structure that is not predicted by pairwise correlations, despite the fact that pairwise correlations explain the lower-dimensional marginal statistics surprisingly well up to the limit of dimensionality where estimation of the full joint distribution is feasible. 1

5 0.54383588 154 nips-2007-Predicting Brain States from fMRI Data: Incremental Functional Principal Component Regression


6 0.54340643 18 nips-2007-A probabilistic model for generating realistic lip movements from speech

7 0.5429489 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model

8 0.54060268 34 nips-2007-Bayesian Policy Learning with Trans-Dimensional MCMC

9 0.53879732 86 nips-2007-Exponential Family Predictive Representations of State

10 0.53877056 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

11 0.5379771 156 nips-2007-Predictive Matrix-Variate t Models

12 0.53781837 174 nips-2007-Selecting Observations against Adversarial Objectives

13 0.53717619 79 nips-2007-Efficient multiple hyperparameter learning for log-linear models

14 0.53641808 73 nips-2007-Distributed Inference for Latent Dirichlet Allocation

15 0.53508294 94 nips-2007-Gaussian Process Models for Link Analysis and Transfer Learning

16 0.53460014 168 nips-2007-Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods

17 0.53454959 45 nips-2007-Classification via Minimum Incremental Coding Length (MICL)

18 0.53451079 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

19 0.53349066 189 nips-2007-Supervised Topic Models

20 0.53189093 100 nips-2007-Hippocampal Contributions to Control: The Third Way