nips nips2010 nips2010-249 knowledge-graph by maker-knowledge-mining

249 nips-2010-Spatial and anatomical regularization of SVM for brain image analysis


Source: pdf

Author: Remi Cuingnet, Marie Chupin, Habib Benali, Olivier Colliot

Abstract: Support vector machines (SVM) are increasingly used in brain image analyses since they allow capturing complex multivariate relationships in the data. Moreover, when the kernel is linear, SVMs can be used to localize spatial patterns of discrimination between two groups of subjects. However, the features’ spatial distribution is not taken into account. As a consequence, the optimal margin hyperplane is often scattered and lacks spatial coherence, making its anatomical interpretation difficult. This paper introduces a framework to spatially regularize SVM for brain image analysis. We show that Laplacian regularization provides a flexible framework to integrate various types of constraints and can be applied to both cortical surfaces and 3D brain images. The proposed framework is applied to the classification of MR images based on gray matter concentration maps and cortical thickness measures from 30 patients with Alzheimer’s disease and 30 elderly controls. The results demonstrate that the proposed method enables natural spatial and anatomical regularization of the classifier. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Spatial and anatomical regularization of SVM for brain image analysis. Rémi Cuingnet, CRICM (UPMC/Inserm/CNRS), Paris, France; Inserm - LIF (UMR S 678), Paris, France; remi. [sent-1, score-0.575]

2 Abstract: Support vector machines (SVM) are increasingly used in brain image analyses since they allow capturing complex multivariate relationships in the data. [sent-11, score-0.247]

3 Moreover, when the kernel is linear, SVMs can be used to localize spatial patterns of discrimination between two groups of subjects. [sent-12, score-0.271]

4 As a consequence, the optimal margin hyperplane is often scattered and lacks spatial coherence, making its anatomical interpretation difficult. [sent-14, score-0.49]

5 This paper introduces a framework to spatially regularize SVM for brain image analysis. [sent-15, score-0.305]

6 We show that Laplacian regularization provides a flexible framework to integrate various types of constraints and can be applied to both cortical surfaces and 3D brain images. [sent-16, score-0.512]

7 The proposed framework is applied to the classification of MR images based on gray matter concentration maps and cortical thickness measures from 30 patients with Alzheimer’s disease and 30 elderly controls. [sent-17, score-0.69]

8 The results demonstrate that the proposed method enables natural spatial and anatomical regularization of the classifier. [sent-18, score-0.568]

9 In such analyses, brain images are first spatially registered to a common stereotaxic space, and then mass univariate statistical tests are performed in each voxel to detect significant group differences. [sent-20, score-0.454]

10 However, the sensitivity of these approaches is limited when the differences are spatially complex and involve a combination of different voxels or brain structures [2]. [sent-21, score-0.631]

11 However, one of the problems with directly analyzing the OMH coefficients is that the corresponding maps are scattered and lack spatial coherence. [sent-25, score-0.294]

12 In this paper, we address this issue by proposing a framework to introduce spatial consistency into SVMs by using regularization operators. [sent-27, score-0.308]

13 We then show that the regularization operator framework provides a flexible approach to model different types of proximity (section 3). [sent-29, score-0.518]

14 We then present in section 5 a more complex type of constraint, called anatomical proximity. [sent-33, score-0.26]

15 In the latter case, two features are considered close if they belong to the same brain network; for instance two voxels are close if they belong to the same anatomical or functional region or if they are anatomically or functionally connected (based on fMRI networks or white matter tracts). [sent-34, score-1.159]

16 Finally, in section 6, the proposed framework is illustrated on the analysis of MR images using gray matter concentration maps and cortical thickness measures from 30 patients with AD and 30 elderly controls from the ADNI database (www. [sent-35, score-0.629]

17 Brain imaging data: In this contribution, we consider any feature computed either at each voxel of a 3D brain image or at any vertex of the cortical surface. [sent-41, score-0.47]

18 Typically, for anatomical studies, the features could be tissue concentration maps such as gray matter (GM) or white matter (WM) for the 3D case or cortical thickness maps for the surface case. [sent-42, score-1.21]

19 We further assume that 3D images or cortical surfaces were spatially normalized to a common stereotaxic space (e. [sent-44, score-0.441]

20 Similarly, in the surface case, xs can be viewed either as an element of Rd where d denotes the number of vertices or as a real-valued function on a 2-dimensional compact Riemannian manifold. [sent-55, score-0.263]

21 Linear SVM: The linear SVM solves the following optimization problem [3, 4, 11]: $w^{opt}, b^{opt} = \arg\min_{w \in \mathcal{X},\, b \in \mathbb{R}} \sum_{s=1}^{N} \ell_{hinge}\left(y_s [\langle w, x_s \rangle + b]\right) + \lambda \|w\|^2$ (1), where $\lambda \in \mathbb{R}^+$ is the regularization parameter and $\ell_{hinge}$ is the hinge loss function defined as $\ell_{hinge}: u \in \mathbb{R} \mapsto \max(0, 1 - u)$. [sent-61, score-0.764]
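
To make the objective in equation (1) concrete, here is a minimal NumPy sketch (not the authors' code) of evaluating the primal hinge-loss objective; the array names X, y and the parameter lam are placeholders.

```python
import numpy as np

def svm_objective(w, b, X, y, lam):
    """Primal objective of eq. (1): sum of hinge losses plus lambda * ||w||^2.

    X : (N, d) array, one subject's feature vector per row
    y : (N,) array of labels in {-1, +1}
    """
    margins = y * (X @ w + b)               # y_s * (<w, x_s> + b)
    hinge = np.maximum(0.0, 1.0 - margins)  # l_hinge(u) = max(0, 1 - u)
    return hinge.sum() + lam * np.dot(w, w)
```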

22 Thus, when the input features are the voxels of a 3D image, each element of $w^{opt} = (w_v^{opt})_{v \in V}$ also corresponds to a voxel. [sent-63, score-0.584]

23 Similarly, for the surface-based methods, the elements of wopt can be represented on the vertices of the cortical surface. [sent-64, score-0.328]

24 To be anatomically consistent, if $v^{(1)} \in V$ and $v^{(2)} \in V$ are close according to the topology of $V$, their weights in the SVM classifier, $w^{opt}_{v^{(1)}}$ and $w^{opt}_{v^{(2)}}$ respectively, should be similar. [sent-65, score-0.26]

25 However, this is not guaranteed with the standard linear SVM (as for example in [7]) because the regularization term is not a spatial regularization. [sent-67, score-0.308]

26 The aim of the present paper is to propose methods to ensure that wopt is spatially regularized. [sent-68, score-0.269]

27 How to include priors in SVM: To spatially regularize the SVM, one has to include some prior knowledge on the proximity of features. [sent-70, score-0.444]

28 Regularization operators: Our aim is to introduce a spatial regularization on the classifier function of the SVM, which can be written as $\mathrm{sgn}(f(x_s) + b)$ where $f \in \mathbb{R}^{\mathcal{X}}$. [sent-82, score-0.308]

29 Therefore, the optimisation problem (3) is very convenient to include spatial regularization on f via the definition of P . [sent-87, score-0.308]

30 One has to define the regularization operator P so as to obtain the suitable regularization for the problem. [sent-90, score-0.302]

31 Laplacian regularization: Spatial regularization requires the notion of proximity between elements of V. [sent-91, score-0.573]

32 In this section, we propose spatial regularizations based on the Laplacian for both of these proximity models. [sent-93, score-0.524]

33 Voxels of a brain image can be considered as nodes of a graph which models the voxels’ proximity. [sent-97, score-0.244]

34 The new kernel $K_\beta$ is given by: $K_\beta(x_1, x_2) = x_1^T e^{-\beta L} x_2$ (6). This is a heat or diffusion kernel on a graph. [sent-103, score-0.295]
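
As an illustration of equation (6), a small sketch of the graph heat kernel follows; it assumes a dense adjacency matrix W of the voxel graph and is only feasible for small graphs, since scipy.linalg.expm forms the full matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def heat_kernel_gram(X, W, beta):
    """Gram matrix with entries K_beta(x_i, x_j) = x_i^T exp(-beta * L) x_j.

    X : (N, d) data matrix, one subject per row
    W : (d, d) symmetric edge-weight (adjacency) matrix of the voxel graph
    """
    L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian
    S = expm(-beta * L)              # heat / diffusion operator on the graph
    return X @ S @ X.T               # N x N kernel matrix for the SVM
```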

35 To our knowledge, such spectral regularization has not been applied to brain images but only to the classification of microarray data [20]. [sent-108, score-0.323]

36 Therefore, the Laplacian regularization presented in the previous paragraph can be extended to compact Riemannian manifolds [22]. [sent-117, score-0.233]

37 In sections 4 and 5, we present different types of proximity models which correspond to different types of graphs or distances. [sent-122, score-0.37]

38 Spatial proximity: In this section, we consider the case of regularization based on spatial proximity, i. [sent-123, score-0.643]

39 two voxels (or vertices) are close if they are spatially close. [sent-125, score-0.441]

40 The 3D case: When V are the image voxels (discrete case), the simplest option to encode the spatial proximity is to use the image connectivity (e.g., 6-connectivity). [sent-127, score-0.984]

41 Similarly, when V is a compact subset of R3 (continuous case), the proximity is encoded by a Euclidean distance. [sent-130, score-0.39]

42 However, smoothing the data with a Gaussian kernel would mix gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). [sent-132, score-0.46]

43 Instead, we propose a graph which takes into consideration both the spatial localization and the tissue types. [sent-133, score-0.415]

44 Based on tissue probability maps, in each voxel v, we have the set of probabilities pv that this voxel belongs to GM, WM or CSF. [sent-134, score-0.39]

45 Two voxels are connected if and only if they are neighbors in the image (6-connectivity). [sent-136, score-0.416]

46 The weight $a_{u,v}$ of the edge between two connected voxels $u$ and $v$ is $a_{u,v} = e^{-d_{\chi^2}(p_u, p_v)^2 / (2\sigma^2)}$, where $d_{\chi^2}$ is the $\chi^2$-distance between two distributions. [sent-137, score-0.37]
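
The following sketch shows one way such edge weights could be computed; it assumes 3-component tissue probability vectors (GM, WM, CSF) per voxel and uses one common convention for the chi-squared distance, so treat it as an illustration rather than the authors' exact implementation.

```python
import numpy as np

def chi2_distance(p, q, eps=1e-12):
    # One common convention: d(p, q) = sqrt( 0.5 * sum_i (p_i - q_i)^2 / (p_i + q_i) )
    return np.sqrt(0.5 * np.sum((p - q) ** 2 / (p + q + eps)))

def edge_weight(p_u, p_v, sigma):
    """Weight a_{u,v} = exp(-d_chi2(p_u, p_v)^2 / (2 sigma^2)) between two
    6-connected voxels, given their tissue probability vectors (GM, WM, CSF)."""
    d = chi2_distance(np.asarray(p_u, float), np.asarray(p_v, float))
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))
```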

47 In this paper, we chose a different approach by adopting the continuous viewpoint: we consider the cortical surface as a 2-dimensional Riemannian manifold and use the regularization operator defined by equation (7). [sent-144, score-0.506]

48 The heat kernel has already been used for cortical smoothing for example in [23, 24, 25, 26]. [sent-146, score-0.377]

49 Anatomical proximity: In this section, we consider a different type of proximity, which we call anatomical proximity. [sent-149, score-0.595]

50 Two voxels are considered close if they belong to the same brain network. [sent-150, score-0.537]

51 For example, two voxels can be close if they belong to the same anatomical or functional region (defined for example by a probabilistic atlas). [sent-151, score-0.647]

52 Another example is that of “long-range” proximity which models the fact that distant voxels can be anatomically (through white matter tracts) or functionally connected (based on fMRI networks). [sent-153, score-0.974]

53 Therefore, we then show a continuous formulation which makes it possible to consider both spatial and anatomical proximity. [sent-157, score-0.486]

54 On graphs: atlas and connectivity. Let $(A_1, \cdots, A_R)$ be the $R$ regions of interest (ROI) of an atlas and $p(v \in A_r)$ the probability that the voxel $v$ belongs to region $A_r$. [sent-159, score-0.446]

55 Then the probability that two voxels $v^{(i)}$ and $v^{(j)}$ belong to the same region is: $\sum_{r=1}^{R} P\left( (v^{(i)}, v^{(j)}) \in A_r^2 \right)$. [sent-160, score-0.387]

56 Then, for $i \neq j$, the $(i,j)$-th entry of the adjacency matrix $EE^T$ is the probability that the voxels $v^{(i)}$ and $v^{(j)}$ belong to the same region. [sent-163, score-0.387]
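
A one-line sketch of this construction, assuming a membership matrix E of shape (number of voxels, R) with E[v, r] = p(v in A_r) and independent region assignments for distinct voxels:

```python
import numpy as np

def same_region_probability(E):
    """Return the matrix whose (i, j) entry (i != j) is the probability that
    voxels v_i and v_j belong to the same atlas region, i.e. (E E^T)_{ij}."""
    E = np.asarray(E, dtype=float)
    return E @ E.T
```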

57 The next section presents a combination of anatomical and spatial proximity using the continuous viewpoint. [sent-170, score-0.821]

58 On statistical manifolds: In this section, the goal is to take into account various types of prior information, such as tissue information, atlas information and spatial proximity. [sent-172, score-0.578]

59 Fisher metric: We assume that we are given an anatomical or a functional atlas $\mathcal{A}$ composed of $R$ regions: $\{A_r\}_{r=1 \cdots R}$. [sent-175, score-0.461]

60 At each point $v \in V$, we have a probability distribution $p_{atlas}(\cdot|v) \in \mathbb{R}^{\mathcal{T} \times \mathcal{A}}$ which informs about the tissue type and the atlas region at $v$. [sent-177, score-0.41]

61 We also consider a probability distribution $p_{loc}(\cdot|v) \in \mathbb{R}^{V}$ which encodes the spatial proximity. [sent-180, score-0.329]

62 A natural way to encode proximity on $\mathcal{M}$ is to use the Fisher metric as in [22]. [sent-183, score-0.384]

63 For clarity, we present this framework only for 3D images but it could be applied to cortical surfaces with minor changes. [sent-185, score-0.297]

64 When $p_{loc}(\cdot|v) \sim \mathcal{N}(v, \sigma_{loc}^2 I_3)$, we have: $g_{ij}(v) = g_{ij}^{atlas}(v) + \frac{\delta_{ij}}{\sigma_{loc}^2}$. [sent-187, score-0.444]

65 Computing the kernel: Once the notion of proximity is defined, one has to compute the kernel matrix. [sent-188, score-0.499]

66 The computation of the kernel matrix requires the computation of $e^{-\beta\Delta} x_s$ for all the subjects of the training set. [sent-189, score-0.263]

67 The eigendecomposition of the Laplace-Beltrami operator is intractable since the number of voxels in a brain image is about $10^6$. [sent-190, score-0.6]
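
One practical way around the intractable eigendecomposition (an assumption on our part, not necessarily the authors' implementation) is to apply the matrix exponential directly to the data with a routine such as scipy.sparse.linalg.expm_multiply, which never forms exp(-beta * L) explicitly:

```python
from scipy.sparse.linalg import expm_multiply

def smooth_subjects(L, X, beta):
    """Compute exp(-beta * L) x_s for every subject without eigendecomposition.

    L : (d, d) sparse graph Laplacian (or a discretized Laplace-Beltrami operator)
    X : (N, d) dense data matrix, one subject per row
    Returns the (N, d) matrix of smoothed feature vectors.
    """
    # expm_multiply acts on columns, so pass the transposed data matrix.
    return expm_multiply(-beta * L, X.T).T

# The precomputed SVM kernel is then the Gram matrix of the smoothed data:
# K_beta = X @ smooth_subjects(L, X, beta).T
```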

68 Subjects were excluded if their scan revealed major artifacts or gross structural abnormalities of the white matter, as these cause the tissue segmentation step to fail. [sent-211, score-0.226]

69 Feature extraction: For the 3D image analyses, all T1-weighted MR images were segmented into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) using the SPM5 (Statistical Parametric Mapping, London, UK) unified segmentation routine [30] and spatially normalized with DARTEL [9]. [sent-215, score-0.546]

70 For the surface-based analyses, the features are the cortical thickness values at each vertex of the cortical surface. [sent-217, score-0.432]

71 The classification function obtained with a linear SVM is the sign of the inner product of the features with $w^{opt}$, a vector orthogonal to the OMH [3, 4]. [sent-222, score-0.252]

72 Therefore, if the absolute value of the $i$-th component of $w^{opt}$, $|w_i^{opt}|$, is small compared to the other components $(|w_j^{opt}|)_{j \neq i}$, the $i$-th feature will have a small influence on the classification. [sent-223, score-0.344]

73 Thus the optimal weights $w^{opt}$ allow us to evaluate the anatomical consistency of the classifier. [sent-225, score-0.42]
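
As an illustrative, hedged sketch of this inspection step, one could fit a linear SVM with scikit-learn and map the learned weights back to voxel space; LinearSVC, the boolean mask and the image shape are our own assumptions, not details given in the text.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_and_map_weights(X, y, image_shape, mask, C=1.0):
    """Fit a linear SVM and place each component of w_opt back at its voxel.

    X    : (N, d) matrix of masked voxel features, one subject per row
    y    : (N,) labels, e.g. +1 = patient, -1 = control
    mask : boolean array of shape `image_shape` selecting the d in-brain voxels
    """
    clf = LinearSVC(C=C).fit(X, y)
    w_opt = clf.coef_.ravel()      # one weight per voxel feature
    w_map = np.zeros(image_shape)
    w_map[mask] = w_opt            # voxels with large |w| drive the decision
    return w_map
```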

74 Results: spatial proximity. In this section, we present the results for the spatial proximity in the 3D case (method presented in section 4.1). [sent-228, score-1.048]

75 Fig. 1(a) presents the OMH when no spatial regularization is performed. [sent-232, score-0.308]

76 Fig. 1(b) shows the results with spatial proximity but without tissue probability maps. [sent-234, score-0.702]

77 For instance, it mixes tissues of the temporal lobe with tissues of the frontal and parietal lobes. [sent-237, score-0.253]

78 The results with both spatial proximity and tissue maps are shown in Fig. 1(c). [sent-238, score-0.766]

79 β controls the amount of spatial regularization and was chosen to be equivalent to a Gaussian smoothing of 4mm FWHM. [sent-241, score-0.308]

80 The classifiers were able to distinguish AD from CN with similar accuracies (83% with no spatial priors and 85% with spatial priors). [sent-243, score-0.378]

81 Results: anatomical proximity. In this section, we present the results for the anatomical proximity. [sent-245, score-0.855]

82 Discrete case: For the discrete case, we used “short-range” proximity, defined by the cortical atlas of Desikan et al. [sent-250, score-0.367]

83 When the amount of regularization is increased, voxels of the same region tend to be considered as similar by the classifier (Fig. 2). [sent-259, score-0.451]

84 Note how the anatomical coherence of the OMH varies with β. [sent-261, score-0.26]

85 The atlas information used was only the tissue types. [sent-264, score-0.33]

86 Figure 1: Normalized w coefficients: (a) no spatial prior, (b) spatial proximity: FWHM=4mm, (c) spatial proximity and tissues: FWHM∼4mm, (d) Fisher metric using tissue maps. [sent-277, score-1.129]

87 Figure 2: Normalized w of the left hemisphere when the SVM is regularized with a cortical atlas [31]: (a) β = 0 (no prior), (b) β = 1, (c) β = 2, (d) β = 3. [sent-280, score-0.32]

88 Discussion: In this contribution, we proposed to use regularization operators to add spatial consistency to SVMs for brain image analysis. [sent-281, score-0.504]

89 We show that this provides a flexible approach to model different types of proximity between the features. [sent-282, score-0.335]

90 We proposed derivations for both 3D image features, such as tissue maps, and surface characteristics, such as cortical thickness. [sent-283, score-0.475]

91 We considered two different types of formulations: a discrete viewpoint in which the proximity is encoded via a graph, and a continuous viewpoint in which the data lies on a Riemannian manifold. [sent-284, score-0.518]

92 In particular, the latter viewpoint is useful for surface cases because it overcomes problems due to surface parameterization. [sent-285, score-0.239]

93 We first considered the case of regularization based on spatial proximity, which results in spatially consistent OMHs, making their anatomical interpretation more meaningful. [sent-287, score-0.677]

94 We then considered a different type of proximity which allows modeling higher-level knowledge, which we call anatomical proximity. [sent-288, score-0.595]

95 In this model, two voxels are considered close if they belong to the same brain network. [sent-289, score-0.537]

96 For example, two voxels can be close if they belong to the same anatomical region. [sent-290, score-0.647]

97 Another example is that of “long-range” proximity which models the fact that distant voxels can be anatomically connected, through white matter tracts, or functionally connected, based on fMRI networks. [sent-292, score-0.936]

98 Early diagnosis of Alzheimer’s disease using cortical thickness: impact of cognitive reserve. [sent-350, score-0.267]

99 Heat kernel smoothing and its application to cortical manifolds. [sent-431, score-0.291]

100 Cortical thickness analysis in autism with heat kernel smoothing. [sent-437, score-0.264]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('proximity', 0.335), ('voxels', 0.332), ('anatomical', 0.26), ('spatial', 0.189), ('tissue', 0.178), ('cortical', 0.168), ('omh', 0.16), ('wopt', 0.16), ('atlas', 0.152), ('brain', 0.15), ('adni', 0.14), ('ploc', 0.14), ('svm', 0.137), ('xs', 0.125), ('mri', 0.123), ('regularization', 0.119), ('spatially', 0.109), ('riemannian', 0.107), ('alzheimer', 0.106), ('voxel', 0.106), ('neuroimage', 0.105), ('matter', 0.104), ('lhinge', 0.1), ('thickness', 0.096), ('opt', 0.092), ('heat', 0.086), ('surface', 0.083), ('mr', 0.083), ('ar', 0.082), ('kernel', 0.082), ('initiative', 0.08), ('patlas', 0.08), ('laplacian', 0.077), ('anatomically', 0.076), ('gij', 0.076), ('surfaces', 0.075), ('viewpoint', 0.073), ('loc', 0.07), ('ad', 0.066), ('operator', 0.064), ('tissues', 0.064), ('maps', 0.064), ('disease', 0.061), ('gm', 0.061), ('neuroimaging', 0.06), ('bopt', 0.06), ('cricm', 0.06), ('elderly', 0.06), ('manifolds', 0.059), ('classi', 0.059), ('paris', 0.058), ('subjects', 0.056), ('belong', 0.055), ('rv', 0.055), ('compact', 0.055), ('images', 0.054), ('tracts', 0.053), ('ashburner', 0.053), ('analyses', 0.051), ('ys', 0.051), ('wm', 0.049), ('fmri', 0.049), ('metric', 0.049), ('lobe', 0.048), ('france', 0.048), ('white', 0.048), ('graph', 0.048), ('et', 0.047), ('image', 0.046), ('diffusion', 0.045), ('scans', 0.045), ('wv', 0.043), ('svms', 0.042), ('patients', 0.042), ('functionally', 0.041), ('scattered', 0.041), ('gray', 0.041), ('smoothing', 0.041), ('cerebrospinal', 0.04), ('desikan', 0.04), ('morphometry', 0.04), ('theses', 0.04), ('tmi', 0.04), ('weiner', 0.04), ('parietal', 0.039), ('diagnosis', 0.038), ('frontal', 0.038), ('connected', 0.038), ('er', 0.037), ('continuous', 0.037), ('connectivity', 0.036), ('ce', 0.036), ('univ', 0.036), ('csf', 0.035), ('inserm', 0.035), ('stereotaxic', 0.035), ('fwhm', 0.035), ('mms', 0.035), ('chose', 0.035), ('graphs', 0.035)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 249 nips-2010-Spatial and anatomical regularization of SVM for brain image analysis

Author: Remi Cuingnet, Marie Chupin, Habib Benali, Olivier Colliot

Abstract: Support vector machines (SVM) are increasingly used in brain image analyses since they allow capturing complex multivariate relationships in the data. Moreover, when the kernel is linear, SVMs can be used to localize spatial patterns of discrimination between two groups of subjects. However, the features’ spatial distribution is not taken into account. As a consequence, the optimal margin hyperplane is often scattered and lacks spatial coherence, making its anatomical interpretation difficult. This paper introduces a framework to spatially regularize SVM for brain image analysis. We show that Laplacian regularization provides a flexible framework to integrate various types of constraints and can be applied to both cortical surfaces and 3D brain images. The proposed framework is applied to the classification of MR images based on gray matter concentration maps and cortical thickness measures from 30 patients with Alzheimer’s disease and 30 elderly controls. The results demonstrate that the proposed method enables natural spatial and anatomical regularization of the classifier. 1

2 0.24629711 97 nips-2010-Functional Geometry Alignment and Localization of Brain Areas

Author: Georg Langs, Yanmei Tie, Laura Rigolo, Alexandra Golby, Polina Golland

Abstract: Matching functional brain regions across individuals is a challenging task, largely due to the variability in their location and extent. It is particularly difficult, but highly relevant, for patients with pathologies such as brain tumors, which can cause substantial reorganization of functional systems. In such cases spatial registration based on anatomical data is only of limited value if the goal is to establish correspondences of functional areas among different individuals, or to localize potentially displaced active regions. Rather than rely on spatial alignment, we propose to perform registration in an alternative space whose geometry is governed by the functional interaction patterns in the brain. We first embed each brain into a functional map that reflects connectivity patterns during a fMRI experiment. The resulting functional maps are then registered, and the obtained correspondences are propagated back to the two brains. In application to a language fMRI experiment, our preliminary results suggest that the proposed method yields improved functional correspondences across subjects. This advantage is pronounced for subjects with tumors that affect the language areas and thus cause spatial reorganization of the functional regions. 1

3 0.23786257 128 nips-2010-Infinite Relational Modeling of Functional Connectivity in Resting State fMRI

Author: Morten Mørup, Kristoffer Madsen, Anne-marie Dogonowski, Hartwig Siebner, Lars K. Hansen

Abstract: Functional magnetic resonance imaging (fMRI) can be applied to study the functional connectivity of the neural elements which form complex network at a whole brain level. Most analyses of functional resting state networks (RSN) have been based on the analysis of correlation between the temporal dynamics of various regions of the brain. While these models can identify coherently behaving groups in terms of correlation they give little insight into how these groups interact. In this paper we take a different view on the analysis of functional resting state networks. Starting from the definition of resting state as functional coherent groups we search for functional units of the brain that communicate with other parts of the brain in a coherent manner as measured by mutual information. We use the infinite relational model (IRM) to quantify functional coherent groups of resting state networks and demonstrate how the extracted component interactions can be used to discriminate between functional resting state activity in multiple sclerosis and normal subjects. 1

4 0.23464441 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

Author: Gael Varoquaux, Alexandre Gramfort, Jean-baptiste Poline, Bertrand Thirion

Abstract: Spontaneous brain activity, as observed in functional neuroimaging, has been shown to display reproducible structure that expresses brain architecture and carries markers of brain pathologies. An important view of modern neuroscience is that such large-scale structure of coherent activity reflects modularity properties of brain connectivity graphs. However, to date, there has been no demonstration that the limited and noisy data available in spontaneous activity observations could be used to learn full-brain probabilistic models that generalize to new data. Learning such models entails two main challenges: i) modeling full brain connectivity is a difficult estimation problem that faces the curse of dimensionality and ii) variability between subjects, coupled with the variability of functional signals between experimental runs, makes the use of multiple datasets challenging. We describe subject-level brain functional connectivity structure as a multivariate Gaussian process and introduce a new strategy to estimate it from group data, by imposing a common structure on the graphical model in the population. We show that individual models learned from functional Magnetic Resonance Imaging (fMRI) data using this population prior generalize better to unseen data than models based on alternative regularization schemes. To our knowledge, this is the first report of a cross-validated model of spontaneous brain activity. Finally, we use the estimated graphical model to explore the large-scale characteristics of functional architecture and show for the first time that known cognitive networks appear as the integrated communities of functional connectivity graph. 1

5 0.21482569 77 nips-2010-Epitome driven 3-D Diffusion Tensor image segmentation: on extracting specific structures

Author: Kamiya Motwani, Nagesh Adluru, Chris Hinrichs, Andrew Alexander, Vikas Singh

Abstract: We study the problem of segmenting specific white matter structures of interest from Diffusion Tensor (DT-MR) images of the human brain. This is an important requirement in many Neuroimaging studies: for instance, to evaluate whether a brain structure exhibits group level differences as a function of disease in a set of images. Typically, interactive expert guided segmentation has been the method of choice for such applications, but this is tedious for large datasets common today. To address this problem, we endow an image segmentation algorithm with “advice” encoding some global characteristics of the region(s) we want to extract. This is accomplished by constructing (using expert-segmented images) an epitome of a specific region – as a histogram over a bag of ‘words’ (e.g., suitable feature descriptors). Now, given such a representation, the problem reduces to segmenting a new brain image with additional constraints that enforce consistency between the segmented foreground and the pre-specified histogram over features. We present combinatorial approximation algorithms to incorporate such domain specific constraints for Markov Random Field (MRF) segmentation. Making use of recent results on image co-segmentation, we derive effective solution strategies for our problem. We provide an analysis of solution quality, and present promising experimental evidence showing that many structures of interest in Neuroscience can be extracted reliably from 3-D brain image volumes using our algorithm. 1

6 0.18009724 123 nips-2010-Individualized ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles

7 0.088963278 133 nips-2010-Kernel Descriptors for Visual Recognition

8 0.087517545 12 nips-2010-A Primal-Dual Algorithm for Group Sparse Regularization with Overlapping Groups

9 0.087100074 250 nips-2010-Spectral Regularization for Support Estimation

10 0.085142002 279 nips-2010-Universal Kernels on Non-Standard Input Spaces

11 0.079047203 145 nips-2010-Learning Kernels with Radiuses of Minimum Enclosing Balls

12 0.069986887 148 nips-2010-Learning Networks of Stochastic Differential Equations

13 0.067633875 174 nips-2010-Multi-label Multiple Kernel Learning by Stochastic Approximation: Application to Visual Object Recognition

14 0.066365406 79 nips-2010-Estimating Spatial Layout of Rooms using Volumetric Reasoning about Objects and Surfaces

15 0.065455541 57 nips-2010-Decoding Ipsilateral Finger Movements from ECoG Signals in Humans

16 0.065448284 127 nips-2010-Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

17 0.061209619 232 nips-2010-Sample Complexity of Testing the Manifold Hypothesis

18 0.06035405 117 nips-2010-Identifying graph-structured activation patterns in networks

19 0.060044065 86 nips-2010-Exploiting weakly-labeled Web images to improve object classification: a domain adaptation approach

20 0.058156878 195 nips-2010-Online Learning in The Manifold of Low-Rank Matrices


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.197), (1, 0.093), (2, -0.123), (3, 0.075), (4, 0.099), (5, -0.151), (6, 0.099), (7, -0.395), (8, 0.003), (9, 0.045), (10, 0.04), (11, 0.096), (12, -0.044), (13, 0.042), (14, -0.087), (15, 0.029), (16, 0.013), (17, 0.034), (18, 0.055), (19, -0.06), (20, -0.001), (21, -0.008), (22, -0.077), (23, -0.043), (24, -0.028), (25, 0.012), (26, -0.004), (27, -0.036), (28, -0.098), (29, 0.032), (30, 0.026), (31, 0.025), (32, 0.034), (33, -0.051), (34, -0.02), (35, -0.034), (36, -0.002), (37, 0.004), (38, 0.074), (39, -0.023), (40, 0.034), (41, 0.013), (42, -0.023), (43, -0.002), (44, -0.049), (45, 0.009), (46, -0.022), (47, -0.078), (48, -0.013), (49, -0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92724788 249 nips-2010-Spatial and anatomical regularization of SVM for brain image analysis

Author: Remi Cuingnet, Marie Chupin, Habib Benali, Olivier Colliot

Abstract: Support vector machines (SVM) are increasingly used in brain image analyses since they allow capturing complex multivariate relationships in the data. Moreover, when the kernel is linear, SVMs can be used to localize spatial patterns of discrimination between two groups of subjects. However, the features’ spatial distribution is not taken into account. As a consequence, the optimal margin hyperplane is often scattered and lacks spatial coherence, making its anatomical interpretation difficult. This paper introduces a framework to spatially regularize SVM for brain image analysis. We show that Laplacian regularization provides a flexible framework to integrate various types of constraints and can be applied to both cortical surfaces and 3D brain images. The proposed framework is applied to the classification of MR images based on gray matter concentration maps and cortical thickness measures from 30 patients with Alzheimer’s disease and 30 elderly controls. The results demonstrate that the proposed method enables natural spatial and anatomical regularization of the classifier. 1

2 0.86709505 97 nips-2010-Functional Geometry Alignment and Localization of Brain Areas

Author: Georg Langs, Yanmei Tie, Laura Rigolo, Alexandra Golby, Polina Golland

Abstract: Matching functional brain regions across individuals is a challenging task, largely due to the variability in their location and extent. It is particularly difficult, but highly relevant, for patients with pathologies such as brain tumors, which can cause substantial reorganization of functional systems. In such cases spatial registration based on anatomical data is only of limited value if the goal is to establish correspondences of functional areas among different individuals, or to localize potentially displaced active regions. Rather than rely on spatial alignment, we propose to perform registration in an alternative space whose geometry is governed by the functional interaction patterns in the brain. We first embed each brain into a functional map that reflects connectivity patterns during a fMRI experiment. The resulting functional maps are then registered, and the obtained correspondences are propagated back to the two brains. In application to a language fMRI experiment, our preliminary results suggest that the proposed method yields improved functional correspondences across subjects. This advantage is pronounced for subjects with tumors that affect the language areas and thus cause spatial reorganization of the functional regions. 1

3 0.85625488 123 nips-2010-Individualized ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles

Author: Kaiming Li, Lei Guo, Carlos Faraco, Dajiang Zhu, Fan Deng, Tuo Zhang, Xi Jiang, Degang Zhang, Hanbo Chen, Xintao Hu, Steve Miller, Tianming Liu

Abstract: Functional segregation and integration are fundamental characteristics of the human brain. Studying the connectivity among segregated regions and the dynamics of integrated brain networks has drawn increasing interest. A very controversial, yet fundamental issue in these studies is how to determine the best functional brain regions or ROIs (regions of interests) for individuals. Essentially, the computed connectivity patterns and dynamics of brain networks are very sensitive to the locations, sizes, and shapes of the ROIs. This paper presents a novel methodology to optimize the locations of an individual's ROIs in the working memory system. Our strategy is to formulate the individual ROI optimization as a group variance minimization problem, in which group-wise functional and structural connectivity patterns, and anatomic profiles are defined as optimization constraints. The optimization problem is solved via the simulated annealing approach. Our experimental results show that the optimized ROIs have significantly improved consistency in structural and functional profiles across subjects, and have more reasonable localizations and more consistent morphological and anatomic profiles. 1

4 0.79440659 128 nips-2010-Infinite Relational Modeling of Functional Connectivity in Resting State fMRI

Author: Morten Mørup, Kristoffer Madsen, Anne-marie Dogonowski, Hartwig Siebner, Lars K. Hansen

Abstract: Functional magnetic resonance imaging (fMRI) can be applied to study the functional connectivity of the neural elements which form complex network at a whole brain level. Most analyses of functional resting state networks (RSN) have been based on the analysis of correlation between the temporal dynamics of various regions of the brain. While these models can identify coherently behaving groups in terms of correlation they give little insight into how these groups interact. In this paper we take a different view on the analysis of functional resting state networks. Starting from the definition of resting state as functional coherent groups we search for functional units of the brain that communicate with other parts of the brain in a coherent manner as measured by mutual information. We use the infinite relational model (IRM) to quantify functional coherent groups of resting state networks and demonstrate how the extracted component interactions can be used to discriminate between functional resting state activity in multiple sclerosis and normal subjects. 1

5 0.78496987 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

Author: Gael Varoquaux, Alexandre Gramfort, Jean-baptiste Poline, Bertrand Thirion

Abstract: Spontaneous brain activity, as observed in functional neuroimaging, has been shown to display reproducible structure that expresses brain architecture and carries markers of brain pathologies. An important view of modern neuroscience is that such large-scale structure of coherent activity reflects modularity properties of brain connectivity graphs. However, to date, there has been no demonstration that the limited and noisy data available in spontaneous activity observations could be used to learn full-brain probabilistic models that generalize to new data. Learning such models entails two main challenges: i) modeling full brain connectivity is a difficult estimation problem that faces the curse of dimensionality and ii) variability between subjects, coupled with the variability of functional signals between experimental runs, makes the use of multiple datasets challenging. We describe subject-level brain functional connectivity structure as a multivariate Gaussian process and introduce a new strategy to estimate it from group data, by imposing a common structure on the graphical model in the population. We show that individual models learned from functional Magnetic Resonance Imaging (fMRI) data using this population prior generalize better to unseen data than models based on alternative regularization schemes. To our knowledge, this is the first report of a cross-validated model of spontaneous brain activity. Finally, we use the estimated graphical model to explore the large-scale characteristics of functional architecture and show for the first time that known cognitive networks appear as the integrated communities of functional connectivity graph. 1

6 0.63677156 77 nips-2010-Epitome driven 3-D Diffusion Tensor image segmentation: on extracting specific structures

7 0.4114852 57 nips-2010-Decoding Ipsilateral Finger Movements from ECoG Signals in Humans

8 0.37441418 250 nips-2010-Spectral Regularization for Support Estimation

9 0.36774978 234 nips-2010-Segmentation as Maximum-Weight Independent Set

10 0.35429037 280 nips-2010-Unsupervised Kernel Dimension Reduction

11 0.34581092 133 nips-2010-Kernel Descriptors for Visual Recognition

12 0.33679169 279 nips-2010-Universal Kernels on Non-Standard Input Spaces

13 0.33254555 105 nips-2010-Getting lost in space: Large sample analysis of the resistance distance

14 0.3286182 124 nips-2010-Inductive Regularized Learning of Kernel Functions

15 0.31989193 256 nips-2010-Structural epitome: a way to summarize one’s visual experience

16 0.30720124 149 nips-2010-Learning To Count Objects in Images

17 0.30316782 259 nips-2010-Subgraph Detection Using Eigenvector L1 Norms

18 0.29665777 145 nips-2010-Learning Kernels with Radiuses of Minimum Enclosing Balls

19 0.29664695 174 nips-2010-Multi-label Multiple Kernel Learning by Stochastic Approximation: Application to Visual Object Recognition

20 0.29431143 12 nips-2010-A Primal-Dual Algorithm for Group Sparse Regularization with Overlapping Groups


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.032), (17, 0.011), (27, 0.149), (30, 0.045), (35, 0.073), (45, 0.146), (50, 0.042), (52, 0.042), (56, 0.267), (60, 0.025), (77, 0.03), (78, 0.013), (90, 0.056)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77395546 249 nips-2010-Spatial and anatomical regularization of SVM for brain image analysis

Author: Remi Cuingnet, Marie Chupin, Habib Benali, Olivier Colliot

Abstract: Support vector machines (SVM) are increasingly used in brain image analyses since they allow capturing complex multivariate relationships in the data. Moreover, when the kernel is linear, SVMs can be used to localize spatial patterns of discrimination between two groups of subjects. However, the features’ spatial distribution is not taken into account. As a consequence, the optimal margin hyperplane is often scattered and lacks spatial coherence, making its anatomical interpretation difficult. This paper introduces a framework to spatially regularize SVM for brain image analysis. We show that Laplacian regularization provides a flexible framework to integrate various types of constraints and can be applied to both cortical surfaces and 3D brain images. The proposed framework is applied to the classification of MR images based on gray matter concentration maps and cortical thickness measures from 30 patients with Alzheimer’s disease and 30 elderly controls. The results demonstrate that the proposed method enables natural spatial and anatomical regularization of the classifier. 1

2 0.65573633 215 nips-2010-Probabilistic Deterministic Infinite Automata

Author: Nicholas Bartlett, Frank Wood, David Tax

Abstract: We propose a novel Bayesian nonparametric approach to learning with probabilistic deterministic finite automata (PDFA). We define and develop a sampler for a PDFA with an infinite number of states which we call the probabilistic deterministic infinite automata (PDIA). Posterior predictive inference in this model, given a finite training sequence, can be interpreted as averaging over multiple PDFAs of varying structure, where each PDFA is biased towards having few states. We suggest that our method for averaging over PDFAs is a novel approach to predictive distribution smoothing. We test PDIA inference both on PDFA structure learning and on both natural language and DNA data prediction tasks. The results suggest that the PDIA presents an attractive compromise between the computational cost of hidden Markov models and the storage requirements of hierarchically smoothed Markov models. 1

3 0.6489417 97 nips-2010-Functional Geometry Alignment and Localization of Brain Areas

Author: Georg Langs, Yanmei Tie, Laura Rigolo, Alexandra Golby, Polina Golland

Abstract: Matching functional brain regions across individuals is a challenging task, largely due to the variability in their location and extent. It is particularly difficult, but highly relevant, for patients with pathologies such as brain tumors, which can cause substantial reorganization of functional systems. In such cases spatial registration based on anatomical data is only of limited value if the goal is to establish correspondences of functional areas among different individuals, or to localize potentially displaced active regions. Rather than rely on spatial alignment, we propose to perform registration in an alternative space whose geometry is governed by the functional interaction patterns in the brain. We first embed each brain into a functional map that reflects connectivity patterns during a fMRI experiment. The resulting functional maps are then registered, and the obtained correspondences are propagated back to the two brains. In application to a language fMRI experiment, our preliminary results suggest that the proposed method yields improved functional correspondences across subjects. This advantage is pronounced for subjects with tumors that affect the language areas and thus cause spatial reorganization of the functional regions. 1

4 0.63614762 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

Author: Haefner Ralf, Matthias Bethge

Abstract: Many studies have explored the impact of response variability on the quality of sensory codes. The source of this variability is almost always assumed to be intrinsic to the brain. However, when inferring a particular stimulus property, variability associated with other stimulus attributes also effectively act as noise. Here we study the impact of such stimulus-induced response variability for the case of binocular disparity inference. We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed. We then compute the Fisher information with respect to binocular disparity, present in the monocular inputs to the standard model of early binocular processing, and thereby obtain an upper bound on how much information a model could theoretically extract from them. Then we analyze the information loss incurred by the different ways of combining those inputs to produce a scalar single-neuron response. We find that in the case of depth inference, monocular stimulus variability places a greater limit on the extractable information than intrinsic neuronal noise for typical spike counts. Furthermore, the largest loss of information is incurred by the standard model for position disparity neurons (tuned-excitatory), that are the most ubiquitous in monkey primary visual cortex, while more information from the inputs is preserved in phase-disparity neurons (tuned-near or tuned-far) primarily found in higher cortical regions. 1
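For reference, the Fisher information invoked in this abstract is the standard quantity below; this is the general definition and the associated Cramér-Rao bound for an unbiased estimator, not the specific estimator the paper derives for the binocular energy model.

```latex
I(s) = \mathbb{E}_{r \mid s}\!\left[\left(\frac{\partial}{\partial s}\log p(r \mid s)\right)^{2}\right],
\qquad
\operatorname{Var}(\hat{s}) \;\ge\; \frac{1}{I(s)}
```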

5 0.63412613 161 nips-2010-Linear readout from a neural population with partial correlation data

Author: Adrien Wohrer, Ranulfo Romo, Christian K. Machens

Abstract: How much information does a neural population convey about a stimulus? Answers to this question are known to strongly depend on the correlation of response variability in neural populations. These noise correlations, however, are essentially immeasurable as the number of parameters in a noise correlation matrix grows quadratically with population size. Here, we suggest to bypass this problem by imposing a parametric model on a noise correlation matrix. Our basic assumption is that noise correlations arise due to common inputs between neurons. On average, noise correlations will therefore reflect signal correlations, which can be measured in neural populations. We suggest an explicit parametric dependency between signal and noise correlations. We show how this dependency can be used to "fill the gaps" in noise correlation matrices using an iterative application of the Wishart distribution over positive definite matrices. We apply our method to data from the primary somatosensory cortex of monkeys performing a two-alternative forced choice task. We compare the discrimination thresholds read out from the population of recorded neurons with the discrimination threshold of the monkey and show that our method predicts different results than simpler, average schemes of noise correlations. 1

6 0.62985468 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

7 0.62950373 123 nips-2010-Individualized ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles

8 0.62464947 121 nips-2010-Improving Human Judgments by Decontaminating Sequential Dependencies

9 0.62407899 39 nips-2010-Bayesian Action-Graph Games

10 0.62008423 266 nips-2010-The Maximal Causes of Natural Scenes are Edge Filters

11 0.61492079 128 nips-2010-Infinite Relational Modeling of Functional Connectivity in Resting State fMRI

12 0.61269093 127 nips-2010-Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

13 0.61018288 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

14 0.60971999 98 nips-2010-Functional form of motion priors in human motion perception

15 0.60898697 6 nips-2010-A Discriminative Latent Model of Image Region and Object Tag Correspondence

16 0.60842234 268 nips-2010-The Neural Costs of Optimal Control

17 0.60772097 60 nips-2010-Deterministic Single-Pass Algorithm for LDA

18 0.60741687 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

19 0.60143763 194 nips-2010-Online Learning for Latent Dirichlet Allocation

20 0.59873563 17 nips-2010-A biologically plausible network for the computation of orientation dominance