nips nips2005 nips2005-135 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Ofer Pasternak, Nathan Intrator, Nir Sochen, Yaniv Assaf
Abstract: Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is a non-invasive method for brain neuronal fibers delineation. Here we show a modification for DT-MRI that allows delineation of neuronal fibers which are infiltrated by edema. We use the Multiple Tensor Variational (MTV) framework which replaces the diffusion model of DT-MRI with a multiple component model and fits it to the signal attenuation with a variational regularization mechanism. In order to reduce free water contamination we estimate the free water compartment volume fraction in each voxel, remove it, and then calculate the anisotropy of the remaining compartment. The variational framework was applied on data collected with conventional clinical parameters, containing only six diffusion directions. By using the variational framework we were able to overcome the highly ill-posed fitting. The results show that we were able to find fibers that were not found by DT-MRI.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is a non-invasive method for brain neuronal fibers delineation. [sent-13, score-0.218]
2 Here we show a modification for DT-MRI that allows delineation of neuronal fibers which are infiltrated by edema. [sent-14, score-0.162]
3 We use the Multiple Tensor Variational (MTV) framework which replaces the diffusion model of DT-MRI with a multiple component model and fits it to the signal attenuation with a variational regularization mechanism. [sent-15, score-0.652]
4 In order to reduce free water contamination we estimate the free water compartment volume fraction in each voxel, remove it, and then calculate the anisotropy of the remaining compartment. [sent-16, score-0.734]
5 The variational framework was applied on data collected with conventional clinical parameters, containing only six diffusion directions. [sent-17, score-0.634]
6 By using the variational framework we were able to overcome the highly ill-posed fitting. [sent-18, score-0.108]
7 1 Introduction Diffusion weighted Magnetic Resonance Imaging (DT-MRI) enables the measurement of the apparent water self-diffusion along a specified direction [1]. [sent-20, score-0.137]
8 Using a series of Diffusion Weighted Images (DWIs) DT-MRI can extract quantitative measures of water molecule diffusion anisotropy which characterize tissue microstructure [2]. [sent-21, score-0.748]
9 Such measures are particularly useful for the segmentation of neuronal fibers from other brain tissue, which then allows a noninvasive delineation and visualization of major brain neuronal fiber bundles in vivo [3]. [sent-22, score-0.665]
10 Based on the assumptions that each voxel can be represented by a single diffusion compartment and that the diffusion within this compartment has a Gaussian distribution (∗ http://www. il/∼oferpas), [sent-23, score-1.602]
11 DT-MRI states the relation between the signal attenuation, E, and the diffusion tensor, D, as follows [4, 5, 6]:
$$E(q_k) = \frac{A(q_k)}{A(0)} = \exp\!\left(-b\, q_k^T D\, q_k\right), \qquad (1)$$
where A(q_k) is the DWI for the k'th applied diffusion gradient direction q_k. [sent-27, score-1.069]
12 The notation A(0) is for the non-weighted image and b is a constant reflecting the experimental diffusion weighting [2]. [sent-28, score-0.539]
13 The diffusion tensor D is a 3 × 3 positive semidefinite matrix that requires at least 6 DWIs from different non-collinear applied gradient directions to uniquely determine it. [sent-31, score-0.026]
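As a side note, the least-squares tensor estimation implied by equation (1) can be sketched in a few lines. This is a generic NumPy sketch (our own function and variable names, not the authors' code), exploiting the fact that log E(q_k) is linear in the six unique entries of D:

```python
import numpy as np

def fit_diffusion_tensor(dwis, a0, b, directions):
    """Least-squares fit of the diffusion tensor D implied by equation (1).

    dwis       : (K,) measured signals A(q_k), K >= 6
    a0         : non-weighted signal A(0)
    b          : diffusion weighting constant
    directions : (K, 3) unit gradient directions q_k
    Returns a symmetric 3x3 tensor D.
    """
    # log E(q_k) = -b q_k^T D q_k is linear in the 6 unique entries of D
    y = -np.log(np.asarray(dwis, dtype=float) / a0) / b
    q = np.asarray(directions, dtype=float)
    design = np.column_stack([
        q[:, 0] ** 2, q[:, 1] ** 2, q[:, 2] ** 2,
        2 * q[:, 0] * q[:, 1], 2 * q[:, 0] * q[:, 2], 2 * q[:, 1] * q[:, 2],
    ])
    d, *_ = np.linalg.lstsq(design, y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])
```

With exactly six non-collinear directions the linear system is fully determined; with more directions the least-squares solution also averages out measurement noise.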
14 The symmetric diffusion tensor has a spectral decomposition with three eigenvectors $U^a$ and three positive eigenvalues $\lambda_a$. [sent-32, score-0.892]
15 The relation between the eigenvalues determines the diffusion anisotropy using measures such as Fractional Anisotropy (FA) [5]:
$$\mathrm{FA} = \sqrt{\frac{3\left((\lambda_1 - \langle D \rangle)^2 + (\lambda_2 - \langle D \rangle)^2 + (\lambda_3 - \langle D \rangle)^2\right)}{2\left(\lambda_1^2 + \lambda_2^2 + \lambda_3^2\right)}}, \qquad (2)$$
where $\langle D \rangle = (\lambda_1 + \lambda_2 + \lambda_3)/3$. [sent-33, score-0.573]
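For concreteness, equation (2) can be evaluated directly from the eigenvalues of a fitted tensor; the snippet below is a minimal sketch in the same spirit as the fit above:

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Fractional Anisotropy (equation 2) of a symmetric 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(tensor)       # eigenvalues lambda_1, lambda_2, lambda_3
    mean_d = lam.mean()                    # <D> = (lambda_1 + lambda_2 + lambda_3) / 3
    num = 3.0 * np.sum((lam - mean_d) ** 2)
    den = 2.0 * np.sum(lam ** 2)
    return float(np.sqrt(num / den)) if den > 0 else 0.0
```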
16 FA is relatively high in neuronal fiber bundles (white matter), where the cylindrical geometry of fibers causes the diffusion perpendicular to the fibers to be much smaller than the diffusion parallel to them. [sent-34, score-0.637]
17 Other brain tissues, such as gray matter and Cerebrospinal Fluid (CSF), are less confined in their diffusion direction and exhibit isotropic diffusion. [sent-35, score-0.667]
18 In cases of partial volume, where neuronal fibers reside with other tissue types in the same voxel or present a complex architecture, the diffusion no longer has a single pronounced orientation and therefore the FA value of the fitted tensor is decreased. [sent-36, score-1.13]
19 The decreased FA values cause errors in segmentation and in any subsequent fiber analysis. [sent-37, score-0.018]
20 In this paper we focus on the case where partial volume occurs when fiber bundles are infiltrated with edema. [sent-38, score-0.175]
21 Edema might occur in response to brain trauma, or surrounding a tumor. [sent-39, score-0.129]
22 The brain tissue accumulates water, which creates pressure and might change the fiber architecture, or infiltrate it. [sent-40, score-0.292]
23 Since the edema consists mostly of relatively freely diffusing water molecules, the diffusion attenuation increases and the anisotropy decreases. [sent-41, score-0.936]
24 We chose to reduce the effect of edema by changing the diffusion model to a dual compartment model, assuming an isotropic compartment added to a tensor compartment. [sent-42, score-1.919]
25 2 Theory The method we offer is based on the dual compartment model, which has already been demonstrated to reduce CSF contamination [7], although it required a large number of diffusion measurements with different diffusion times. [sent-43, score-1.457]
26 Here we require only the conventional DT-MRI data of six diffusion measurements, and apply the model to the edema case. [sent-44, score-0.786]
27 2.1 The Dual Compartment Model The dual compartment model is described as follows:
$$E(q_k) = f \exp\!\left(-b\, q_k^T D_1\, q_k\right) + (1 - f)\exp(-b D_2). \qquad (3)$$
[sent-46, score-0.498]
28 The diffusion tensor for the tensor compartment is denoted by D1, and the diffusion coefficient of the isotropic water compartment is denoted by D2. [sent-47, score-2.466]
29 The compartments have relative volumes of f and 1 − f. [sent-48, score-0.124]
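The forward model of equation (3) is straightforward to evaluate; the sketch below is illustrative only (array shapes and names are our own), while the actual fitting is handled by the regularized framework of the next subsection:

```python
import numpy as np

def dual_compartment_signal(f, d1, d2, b, directions):
    """Predicted attenuation E(q_k) of the dual compartment model, equation (3).

    f          : relative volume of the tensor compartment
    d1         : 3x3 diffusion tensor of the tissue compartment (D1)
    d2         : scalar diffusivity of the isotropic free-water compartment (D2)
    b          : diffusion weighting
    directions : (K, 3) gradient directions q_k
    """
    q = np.asarray(directions, dtype=float)
    quad = np.einsum('ki,ij,kj->k', q, d1, q)       # q_k^T D1 q_k for every direction
    return f * np.exp(-b * quad) + (1.0 - f) * np.exp(-b * d2)
```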
30 Finding the best fitting parameters D1, D2 and f is highly ill-posed, especially in the case of six measurements, where for any arbitrarily chosen isotropic compartment a tensor compartment can be found that exactly fits the data. [sent-49, score-1.126]
31 In addition to the DWI data, MTV uses the T2 image to initialize f . [sent-51, score-0.029]
32 The initial orientations for the tensor compartment are those calculated by DT-MRI. [sent-52, score-0.7]
33 2.2 The Variational Framework In order to stabilize the fitting process we chose to use the Multiple Tensor Variational (MTV) framework [8], which was previously used to resolve partial volume caused by complex fiber architecture [9] and to reduce CSF contamination in cases of hydrocephalus [10]. [sent-54, score-0.385]
34 We note that the dual compartment model is a special case of the more general multiple tensor model, where the number of compartments is restricted to two and one of the compartments is restricted to have equal eigenvalues (isotropy). [sent-55, score-0.93]
35 Therefore the MTV framework, adapted for separation of fiber compartments from edema, is composed of the following functional, whose minima should provide the wanted diffusion parameters:
$$S(f, D_1, D_2) = \int_{\Omega} \left[\, \alpha \sum_{k=1}^{d} \left(\hat{E}(q_k) - E(q_k)\right)^2 + \phi\!\left(|\nabla U_i^1|\right) \right] d\Omega. \qquad (4)$$
[sent-56, score-0.851]
36 The notation $\hat{E}$ is for the observed diffusion signal attenuation, and E is calculated using (3) for the d different acquisition directions. [sent-57, score-0.514]
37 Ω is the image domain with 3D axes (x, y, z), and $|\nabla I| = \sqrt{(\partial I/\partial x)^2 + (\partial I/\partial y)^2 + (\partial I/\partial z)^2}$ is defined as the vector gradient norm. [sent-58, score-0.055]
38 The notation $U_i^1$ stands for the principal eigenvector of the i'th diffusion tensor. [sent-59, score-0.458]
39 The fixed parameter α is set to keep the solution closer to the observed diffusion signal. [sent-60, score-0.458]
40 The function φ is a diffusion flow function, which controls the regularization behavior. [sent-61, score-0.488]
41 Here we chose to use $\phi_i(s) = \sqrt{1 + s^2/K_i^2}$, which leads to an anisotropic diffusion-like flow while preserving discontinuities [11]. [sent-62, score-0.025]
42 The regularized fitting allows the identification of smoothed fiber compartments and reduces noise. [sent-63, score-0.074]
43 The minimum of (4) solves the Euler-Lagrange equations and can be found by a gradient descent scheme. [sent-64, score-0.026]
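To make the minimized quantity concrete, the following sketch discretizes the functional (4) over a voxel grid. It is a rough illustration under our own assumptions (the gradient of the eigenvector field is taken component-wise, eigenvector sign ambiguity is ignored, and φ and the constant K follow the form quoted above); it is not the authors' solver.

```python
import numpy as np

def phi(s, k_const):
    # discontinuity-preserving flow function, phi(s) = sqrt(1 + s^2 / K^2)
    return np.sqrt(1.0 + (s / k_const) ** 2)

def mtv_objective(f, d1_field, d2, e_obs, b, directions, alpha, k_const):
    """Discretized value of S(f, D1, D2) in equation (4).

    f, d2    : (X, Y, Z) volume-fraction and isotropic-diffusivity fields
    d1_field : (X, Y, Z, 3, 3) tensor-compartment field
    e_obs    : (X, Y, Z, K) observed attenuations for the K gradient directions
    """
    q = np.asarray(directions, dtype=float)
    quad = np.einsum('ki,xyzij,kj->xyzk', q, d1_field, q)
    e_model = (f[..., None] * np.exp(-b * quad)
               + (1.0 - f[..., None]) * np.exp(-b * d2[..., None]))
    data_term = alpha * np.sum((e_obs - e_model) ** 2, axis=-1)

    # regularizer on the principal eigenvector field U^1 of D1
    # (eigenvector sign ambiguity is ignored in this crude sketch)
    _, v = np.linalg.eigh(d1_field)                 # eigenvalues in ascending order
    u1 = v[..., :, -1]                              # (X, Y, Z, 3) principal eigenvector
    grads = np.stack(np.gradient(u1, axis=(0, 1, 2)), axis=-1)   # spatial derivatives
    grad_norm = np.sqrt(np.sum(grads ** 2, axis=(-2, -1)))
    reg_term = phi(grad_norm, k_const)

    return float(np.sum(data_term + reg_term))      # integral over Omega as a voxel sum
```

A descent on f, D1 and D2 would then repeatedly lower this objective (e.g. with finite-difference or automatic gradients), which is the discrete counterpart of the gradient descent on the Euler-Lagrange equations described above.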
44 2.3 Initialization Scheme Since the functional space is highly irregular (not enough measurements), the minimization process requires an initial guess (figure 1) that is as close as possible to the global minimum. [sent-66, score-0.035]
45 In order to a priori estimate the relative volume of the isotropic compartment, we used a normalized diffusion non-weighted image, where high contrast correlates with larger fluid volume. [sent-67, score-0.935]
46 In order to a priori estimate the parameters of D1, we used the result of conventional DT-MRI fitting on the original data. [sent-68, score-0.069]
47 The DT-MRI results were spectrally decomposed and the eigenvectors were used as the initial guess for the eigenvectors of D1. [sent-69, score-0.098]
48 The initial guess for the eigenvalues of D1 was set to λ1 = 1. [sent-70, score-0.063]
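One possible reading of this initialization, as a sketch only: f comes from the normalized non-weighted image (the exact intensity-to-fraction mapping below is our assumption), the eigenvectors of D1 come from a conventional per-voxel DT-MRI fit (reusing the fit_diffusion_tensor sketch from the introduction), and the initial eigenvalues are placeholders, since their exact values are truncated in the text above.

```python
import numpy as np

def initialize_mtv(a0, dwis, b, directions,
                   lam_init=(1.0e-3, 0.4e-3, 0.4e-3)):   # placeholder eigenvalues
    """Initial guess (f, D1) for the MTV minimization, per the scheme above.

    a0   : (X, Y, Z) non-weighted image A(0)
    dwis : (X, Y, Z, K) diffusion weighted images A(q_k)
    """
    # isotropic-compartment volume ~ normalized non-weighted intensity, so the
    # tensor-compartment fraction f starts at 1 - normalized intensity
    # (this particular mapping is an assumption, not spelled out in the text)
    a0 = np.asarray(a0, dtype=float)
    a0_norm = (a0 - a0.min()) / (a0.max() - a0.min() + 1e-12)
    f_init = 1.0 - a0_norm

    # D1: eigenvectors from a per-voxel conventional DT-MRI fit, eigenvalues reset
    d1_init = np.empty(a0.shape + (3, 3))
    lam = np.diag(lam_init)
    for idx in np.ndindex(a0.shape):
        d = fit_diffusion_tensor(dwis[idx], a0[idx], b, directions)
        _, v = np.linalg.eigh(d)
        v = v[:, ::-1]                     # order eigenvectors by descending eigenvalue
        d1_init[idx] = v @ lam @ v.T
    return f_init, d1_init
```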
49 3 Methods We demonstrate how partial volume of neuronal fibers and edema can be reduced by applying the modified MTV framework to a brain slice taken from a patient with severe edema surrounding a brain tumor. [sent-73, score-1.073]
50 The experimental parameters were as follows: TR/TE = 10000/98 ms, Δ/δ = 31/25 ms, b = 1000 s/mm², with six diffusion gradient directions. [sent-77, score-0.518]
51 48 slices with a thickness of 3 mm and no gap were acquired, covering the whole brain with a FOV of 240 mm² and a matrix of 128×128. [sent-78, score-0.106]
52 Head movement and image distortions were corrected using a mutual information based registration algorithm [12]. [sent-80, score-0.073]
53 The corrected DWIs were fitted to the dual compartment model via the modified MTV framework, then the isotropic compartment was omitted. [sent-81, score-0.79]
54 FA was calculated for the remaining tensor for which FA higher than 0. [sent-82, score-0.383]
55 We compared these results to single component DT-MRI with no regularization, which was also used for initialization of the MTV fitting. [sent-84, score-0.027]
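Putting the post-processing together: after the fit the isotropic compartment is simply dropped and FA is computed from the remaining tensor field. The sketch below is generic, and the FA cutoff is left as a parameter because its exact value is truncated in this text.

```python
import numpy as np

def fa_of_tensor_compartment(d1_field, fa_threshold):
    """FA map of the tensor compartment after discarding the isotropic compartment.

    d1_field     : (X, Y, Z, 3, 3) fitted tensor-compartment field D1
    fa_threshold : cutoff below which voxels are zeroed for display
    """
    lam = np.linalg.eigvalsh(d1_field)                     # (X, Y, Z, 3)
    mean_d = lam.mean(axis=-1, keepdims=True)
    num = 3.0 * np.sum((lam - mean_d) ** 2, axis=-1)
    den = 2.0 * np.sum(lam ** 2, axis=-1)
    fa = np.sqrt(np.divide(num, den, out=np.zeros_like(num), where=den > 0))
    return np.where(fa > fa_threshold, fa, 0.0)
```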
56 4 Results and Discussion. Figure 2: A single slice of a patient with edema. [sent-85, score-0.067]
57 (A) A non-diffusion-weighted image with the ROI marked. [sent-86, score-0.539]
58 Showing the tumor in black, surrounded by severe edema, which appears bright. [sent-87, score-0.3]
59 A much larger part of the corpus callosum is revealed. Figure 2 shows the edema case, where DTI was unable to delineate large parts of the corpus callosum. [sent-95, score-0.233]
60 Since the corpus callosum is one of the largest fiber bundles in the brain, it is highly unlikely that the fibers were disconnected or had disappeared. [sent-96, score-0.324]
61 The expected FA should have been on the same order as on the opposite side of the brain, where the corpus callosum shows high FA values. [sent-97, score-0.139]
62 Applying the MTV on the slice and mapping the FA value of the tensor compartment reveals considerably more pixels of higher FA in the area of the corpus callosum. [sent-98, score-0.796]
63 In general the FA values of most pixels increased, as predicted, since by removing a sphere of any size (the isotropic compartment) we should be left with a shape that is less spherical, and therefore with increased FA. [sent-99, score-0.021]
64 The benefit of using the MTV framework over an overall reduction of the FA threshold in recognizing neuronal fiber voxels is that the amount of FA increase is not uniform across tissue types. [sent-100, score-0.304]
65 In areas where the partial volume effect due to edema was small, the increase was much lower than in areas contaminated with edema. [sent-101, score-0.114]
66 This preserves the contrast in FA values between neuronal fibers and other tissue types. [sent-102, score-0.193]
67 Reducing the FA threshold on original DT-MRI results would cause a less clear separation between the fiber bundles and other tissue types. [sent-103, score-0.216]
68 This tool could be used for fiber tracking in the vicinity of brain tumors, or in stroke, where edema contaminates the fibers and prevents fiber delineation with conventional DT-MRI. [sent-104, score-0.497]
69 5 Conclusions We show that by modifying the MTV framework to fit the dual compartment model we can reduce the contamination caused by edema, and delineate much larger fiber bundle areas. [sent-105, score-0.569]
70 By using the MTV framework we stabilize the fitting process, and also include some biological constraints, such as the piecewise-smooth nature of neuronal fibers in the brain. [sent-106, score-0.15]
71 There is no doubt that using a much larger number of diffusion measurements would further stabilize the process and increase its accuracy. [sent-107, score-0.518]
72 However, more measurements require much more scan time, which might not be available in some cases. [sent-108, score-0.04]
73 The variational framework is a powerful tool for the modeling and regularization of various mappings. [sent-109, score-0.138]
74 It is applied, with great success, to scalar and vector fields in image processing and computer vision. [sent-110, score-0.029]
75 Recently it has been generalized to deal with tensor fields which are of great interest to brain research via the analysis of DWIs and DT-MRI. [sent-111, score-0.489]
76 We show that the more realistic model of multi-compartment voxels conjugated with the variational framework provides much improved results. [sent-112, score-0.138]
77 Spin diffusion measurements: Spin echoes in the presence of a time-dependent field gradient. [sent-115, score-0.458]
78 Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. [sent-150, score-0.04]
79 Removing CSF contamination in brain DT-MRIs by using a two-compartment tensor model. [sent-165, score-0.589]
80 International Society for Magnetic Resonance in Medicine 12th Scientific meeting ISMRM04, page 1215, Kyoto, Japan, 2004. [sent-167, score-0.066]
81 Separation of white matter fascicles from diffusion MRI using φ-functional regularization. [sent-181, score-0.507]
82 In Proceedings of 12th Annual Meeting of the ISMRM, page 1227, 2004. [sent-182, score-0.032]
83 CSF partial volume reduction in hydrocephalus using a variational framework. [sent-187, score-0.211]
84 In Proceedings of 13th Annual Meeting of the ISMRM, page 1100, 2005. [sent-188, score-0.032]
85 Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, volume 147 of Applied Mathematical Sciences. [sent-192, score-0.05]
wordName wordTfidf (topN-words)
[('diffusion', 0.458), ('tensor', 0.383), ('compartment', 0.317), ('fa', 0.265), ('edema', 0.26), ('mtv', 0.24), ('ber', 0.222), ('bers', 0.175), ('qk', 0.127), ('tissue', 0.111), ('brain', 0.106), ('contamination', 0.1), ('csf', 0.1), ('sochen', 0.1), ('resonance', 0.089), ('magnetic', 0.089), ('anisotropy', 0.087), ('neuronal', 0.082), ('basser', 0.08), ('callosum', 0.08), ('delineation', 0.08), ('dwis', 0.08), ('pasternak', 0.08), ('bundles', 0.079), ('variational', 0.075), ('isotropic', 0.075), ('water', 0.075), ('compartments', 0.074), ('mri', 0.059), ('corpus', 0.059), ('attenuation', 0.056), ('imaging', 0.055), ('dual', 0.054), ('voxel', 0.052), ('volume', 0.05), ('partial', 0.046), ('barnett', 0.04), ('bqk', 0.04), ('dwi', 0.04), ('fiber', 0.04), ('hydrocephalus', 0.04), ('ismrm', 0.04), ('ltrated', 0.04), ('oferpas', 0.04), ('pierpaoli', 0.04), ('sever', 0.04), ('tissues', 0.04), ('measurement', 0.04), ('slice', 0.037), ('guess', 0.035), ('stabilize', 0.035), ('roi', 0.035), ('apriori', 0.035), ('delineate', 0.035), ('conventional', 0.034), ('six', 0.034), ('meeting', 0.034), ('framework', 0.033), ('page', 0.032), ('mr', 0.032), ('israel', 0.031), ('tting', 0.031), ('reduce', 0.03), ('non', 0.03), ('regularization', 0.03), ('patient', 0.03), ('voxels', 0.03), ('image', 0.029), ('eigenvalues', 0.028), ('matter', 0.028), ('initialization', 0.027), ('corrected', 0.027), ('spin', 0.027), ('gradient', 0.026), ('architecture', 0.026), ('separation', 0.026), ('medicine', 0.025), ('chose', 0.025), ('measurements', 0.024), ('eigenvectors', 0.023), ('ow', 0.023), ('surrounding', 0.023), ('weighted', 0.022), ('white', 0.021), ('removing', 0.021), ('visualization', 0.019), ('tted', 0.019), ('increase', 0.018), ('causes', 0.018), ('tracking', 0.017), ('hagen', 0.017), ('fluid', 0.017), ('adams', 0.017), ('mori', 0.017), ('spectrally', 0.017), ('microstructure', 0.017), ('clark', 0.017), ('distortions', 0.017), ('je', 0.017), ('molecules', 0.017)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999994 135 nips-2005-Neuronal Fiber Delineation in Area of Edema from Diffusion Weighted MRI
Author: Ofer Pasternak, Nathan Intrator, Nir Sochen, Yaniv Assaf
Abstract: Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is a non-invasive method for brain neuronal fibers delineation. Here we show a modification for DT-MRI that allows delineation of neuronal fibers which are infiltrated by edema. We use the Multiple Tensor Variational (MTV) framework which replaces the diffusion model of DT-MRI with a multiple component model and fits it to the signal attenuation with a variational regularization mechanism. In order to reduce free water contamination we estimate the free water compartment volume fraction in each voxel, remove it, and then calculate the anisotropy of the remaining compartment. The variational framework was applied on data collected with conventional clinical parameters, containing only six diffusion directions. By using the variational framework we were able to overcome the highly ill-posed fitting. The results show that we were able to find fibers that were not found by DT-MRI.
2 0.24565256 56 nips-2005-Diffusion Maps, Spectral Clustering and Eigenfunctions of Fokker-Planck Operators
Author: Boaz Nadler, Stephane Lafon, Ioannis Kevrekidis, Ronald R. Coifman
Abstract: This paper presents a diffusion based probabilistic interpretation of spectral clustering and dimensionality reduction algorithms that use the eigenvectors of the normalized graph Laplacian. Given the pairwise adjacency matrix of all points, we define a diffusion distance between any two data points and show that the low dimensional representation of the data by the first few eigenvectors of the corresponding Markov matrix is optimal under a certain mean squared error criterion. Furthermore, assuming that data points are random samples from a density p(x) = e−U (x) we identify these eigenvectors as discrete approximations of eigenfunctions of a Fokker-Planck operator in a potential 2U (x) with reflecting boundary conditions. Finally, applying known results regarding the eigenvalues and eigenfunctions of the continuous Fokker-Planck operator, we provide a mathematical justification for the success of spectral clustering and dimensional reduction algorithms based on these first few eigenvectors. This analysis elucidates, in terms of the characteristics of diffusion processes, many empirical findings regarding spectral clustering algorithms. Keywords: Algorithms and architectures, learning theory. 1
3 0.19576272 199 nips-2005-Value Function Approximation with Diffusion Wavelets and Laplacian Eigenfunctions
Author: Sridhar Mahadevan, Mauro Maggioni
Abstract: We investigate the problem of automatically constructing efficient representations or basis functions for approximating value functions based on analyzing the structure and topology of the state space. In particular, two novel approaches to value function approximation are explored based on automatically constructing basis functions on state spaces that can be represented as graphs or manifolds: one approach uses the eigenfunctions of the Laplacian, in effect performing a global Fourier analysis on the graph; the second approach is based on diffusion wavelets, which generalize classical wavelets to graphs using multiscale dilations induced by powers of a diffusion operator or random walk on the graph. Together, these approaches form the foundation of a new generation of methods for solving large Markov decision processes, in which the underlying representation and policies are simultaneously learned.
4 0.14545535 189 nips-2005-Tensor Subspace Analysis
Author: Xiaofei He, Deng Cai, Partha Niyogi
Abstract: Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low dimensional linear spaces. The typical linear subspace learning algorithms include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projection (LPP). All of these methods consider an n1 × n2 image as a high dimensional vector in Rn1 ×n2 , while an image represented in the plane is intrinsically a matrix. In this paper, we propose a new algorithm called Tensor Subspace Analysis (TSA). TSA considers an image as the second order tensor in Rn1 ⊗ Rn2 , where Rn1 and Rn2 are two vector spaces. The relationship between the column vectors of the image matrix and that between the row vectors can be naturally characterized by TSA. TSA detects the intrinsic local geometrical structure of the tensor space by learning a lower dimensional tensor subspace. We compare our proposed approach with PCA, LDA and LPP methods on two standard databases. Experimental results demonstrate that TSA achieves better recognition rate, while being much more efficient. 1
5 0.071504459 66 nips-2005-Estimation of Intrinsic Dimensionality Using High-Rate Vector Quantization
Author: Maxim Raginsky, Svetlana Lazebnik
Abstract: We introduce a technique for dimensionality estimation based on the notion of quantization dimension, which connects the asymptotic optimal quantization error for a probability distribution on a manifold to its intrinsic dimension. The definition of quantization dimension yields a family of estimation algorithms, whose limiting case is equivalent to a recent method based on packing numbers. Using the formalism of high-rate vector quantization, we address issues of statistical consistency and analyze the behavior of our scheme in the presence of noise.
6 0.062432379 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression
7 0.062246379 23 nips-2005-An Application of Markov Random Fields to Range Sensing
8 0.059695333 130 nips-2005-Modeling Neuronal Interactivity using Dynamic Bayesian Networks
9 0.053992316 94 nips-2005-Identifying Distributed Object Representations in Human Extrastriate Visual Cortex
10 0.047243662 81 nips-2005-Gaussian Processes for Multiuser Detection in CDMA receivers
11 0.047213752 202 nips-2005-Variational EM Algorithms for Non-Gaussian Latent Variable Models
12 0.045611877 128 nips-2005-Modeling Memory Transfer and Saving in Cerebellar Motor Learning
13 0.036585283 141 nips-2005-Norepinephrine and Neural Interrupts
14 0.036174908 201 nips-2005-Variational Bayesian Stochastic Complexity of Mixture Models
15 0.034014974 52 nips-2005-Correlated Topic Models
16 0.033419181 29 nips-2005-Analyzing Coupled Brain Sources: Distinguishing True from Spurious Interaction
17 0.030189674 42 nips-2005-Combining Graph Laplacians for Semi--Supervised Learning
18 0.029239729 150 nips-2005-Optimizing spatio-temporal filters for improving Brain-Computer Interfacing
19 0.028990928 119 nips-2005-Learning to Control an Octopus Arm with Gaussian Process Temporal Difference Methods
20 0.027244169 161 nips-2005-Radial Basis Function Network for Multi-task Learning
topicId topicWeight
[(0, 0.1), (1, 0.016), (2, -0.005), (3, 0.016), (4, -0.147), (5, -0.029), (6, -0.041), (7, -0.173), (8, 0.094), (9, -0.136), (10, 0.133), (11, 0.076), (12, 0.067), (13, -0.193), (14, -0.161), (15, -0.107), (16, -0.249), (17, -0.117), (18, 0.102), (19, -0.021), (20, -0.038), (21, 0.074), (22, 0.175), (23, -0.09), (24, -0.233), (25, 0.096), (26, 0.028), (27, 0.186), (28, 0.12), (29, 0.053), (30, -0.024), (31, -0.052), (32, 0.015), (33, -0.037), (34, -0.057), (35, 0.033), (36, 0.112), (37, 0.014), (38, 0.023), (39, 0.055), (40, -0.058), (41, -0.01), (42, 0.124), (43, -0.038), (44, 0.104), (45, 0.055), (46, -0.032), (47, 0.0), (48, -0.071), (49, 0.052)]
simIndex simValue paperId paperTitle
same-paper 1 0.98344797 135 nips-2005-Neuronal Fiber Delineation in Area of Edema from Diffusion Weighted MRI
Author: Ofer Pasternak, Nathan Intrator, Nir Sochen, Yaniv Assaf
Abstract: Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is a non-invasive method for brain neuronal fibers delineation. Here we show a modification for DT-MRI that allows delineation of neuronal fibers which are infiltrated by edema. We use the Multiple Tensor Variational (MTV) framework which replaces the diffusion model of DT-MRI with a multiple component model and fits it to the signal attenuation with a variational regularization mechanism. In order to reduce free water contamination we estimate the free water compartment volume fraction in each voxel, remove it, and then calculate the anisotropy of the remaining compartment. The variational framework was applied on data collected with conventional clinical parameters, containing only six diffusion directions. By using the variational framework we were able to overcome the highly ill-posed fitting. The results show that we were able to find fibers that were not found by DT-MRI.
2 0.64482599 56 nips-2005-Diffusion Maps, Spectral Clustering and Eigenfunctions of Fokker-Planck Operators
Author: Boaz Nadler, Stephane Lafon, Ioannis Kevrekidis, Ronald R. Coifman
Abstract: This paper presents a diffusion based probabilistic interpretation of spectral clustering and dimensionality reduction algorithms that use the eigenvectors of the normalized graph Laplacian. Given the pairwise adjacency matrix of all points, we define a diffusion distance between any two data points and show that the low dimensional representation of the data by the first few eigenvectors of the corresponding Markov matrix is optimal under a certain mean squared error criterion. Furthermore, assuming that data points are random samples from a density p(x) = e−U (x) we identify these eigenvectors as discrete approximations of eigenfunctions of a Fokker-Planck operator in a potential 2U (x) with reflecting boundary conditions. Finally, applying known results regarding the eigenvalues and eigenfunctions of the continuous Fokker-Planck operator, we provide a mathematical justification for the success of spectral clustering and dimensional reduction algorithms based on these first few eigenvectors. This analysis elucidates, in terms of the characteristics of diffusion processes, many empirical findings regarding spectral clustering algorithms. Keywords: Algorithms and architectures, learning theory. 1
3 0.61159611 199 nips-2005-Value Function Approximation with Diffusion Wavelets and Laplacian Eigenfunctions
Author: Sridhar Mahadevan, Mauro Maggioni
Abstract: We investigate the problem of automatically constructing efficient representations or basis functions for approximating value functions based on analyzing the structure and topology of the state space. In particular, two novel approaches to value function approximation are explored based on automatically constructing basis functions on state spaces that can be represented as graphs or manifolds: one approach uses the eigenfunctions of the Laplacian, in effect performing a global Fourier analysis on the graph; the second approach is based on diffusion wavelets, which generalize classical wavelets to graphs using multiscale dilations induced by powers of a diffusion operator or random walk on the graph. Together, these approaches form the foundation of a new generation of methods for solving large Markov decision processes, in which the underlying representation and policies are simultaneously learned.
4 0.35105234 189 nips-2005-Tensor Subspace Analysis
Author: Xiaofei He, Deng Cai, Partha Niyogi
Abstract: Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low dimensional linear spaces. The typical linear subspace learning algorithms include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projection (LPP). All of these methods consider an n1 × n2 image as a high dimensional vector in Rn1 ×n2 , while an image represented in the plane is intrinsically a matrix. In this paper, we propose a new algorithm called Tensor Subspace Analysis (TSA). TSA considers an image as the second order tensor in Rn1 ⊗ Rn2 , where Rn1 and Rn2 are two vector spaces. The relationship between the column vectors of the image matrix and that between the row vectors can be naturally characterized by TSA. TSA detects the intrinsic local geometrical structure of the tensor space by learning a lower dimensional tensor subspace. We compare our proposed approach with PCA, LDA and LPP methods on two standard databases. Experimental results demonstrate that TSA achieves better recognition rate, while being much more efficient. 1
5 0.27173278 130 nips-2005-Modeling Neuronal Interactivity using Dynamic Bayesian Networks
Author: Lei Zhang, Dimitris Samaras, Nelly Alia-klein, Nora Volkow, Rita Goldstein
Abstract: Functional Magnetic Resonance Imaging (fMRI) has enabled scientists to look into the active brain. However, interactivity between functional brain regions, is still little studied. In this paper, we contribute a novel framework for modeling the interactions between multiple active brain regions, using Dynamic Bayesian Networks (DBNs) as generative models for brain activation patterns. This framework is applied to modeling of neuronal circuits associated with reward. The novelty of our framework from a Machine Learning perspective lies in the use of DBNs to reveal the brain connectivity and interactivity. Such interactivity models which are derived from fMRI data are then validated through a group classification task. We employ and compare four different types of DBNs: Parallel Hidden Markov Models, Coupled Hidden Markov Models, Fully-linked Hidden Markov Models and Dynamically MultiLinked HMMs (DML-HMM). Moreover, we propose and compare two schemes of learning DML-HMMs. Experimental results show that by using DBNs, group classification can be performed even if the DBNs are constructed from as few as 5 brain regions. We also demonstrate that, by using the proposed learning algorithms, different DBN structures characterize drug addicted subjects vs. control subjects. This finding provides an independent test for the effect of psychopathology on brain function. In general, we demonstrate that incorporation of computer science principles into functional neuroimaging clinical studies provides a novel approach for probing human brain function.
6 0.25408828 128 nips-2005-Modeling Memory Transfer and Saving in Cerebellar Motor Learning
7 0.23332234 71 nips-2005-Fast Krylov Methods for N-Body Learning
8 0.21944743 94 nips-2005-Identifying Distributed Object Representations in Human Extrastriate Visual Cortex
9 0.2188632 81 nips-2005-Gaussian Processes for Multiuser Detection in CDMA receivers
10 0.19521332 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression
11 0.1684224 183 nips-2005-Stimulus Evoked Independent Factor Analysis of MEG Data with Large Background Activity
12 0.14704378 161 nips-2005-Radial Basis Function Network for Multi-task Learning
13 0.14702435 24 nips-2005-An Approximate Inference Approach for the PCA Reconstruction Error
14 0.14204727 4 nips-2005-A Bayesian Spatial Scan Statistic
15 0.13910842 143 nips-2005-Off-Road Obstacle Avoidance through End-to-End Learning
16 0.13837738 188 nips-2005-Temporally changing synaptic plasticity
17 0.13641199 66 nips-2005-Estimation of Intrinsic Dimensionality Using High-Rate Vector Quantization
18 0.1336751 68 nips-2005-Factorial Switching Kalman Filters for Condition Monitoring in Neonatal Intensive Care
19 0.13185495 29 nips-2005-Analyzing Coupled Brain Sources: Distinguishing True from Spurious Interaction
20 0.13111074 23 nips-2005-An Application of Markov Random Fields to Range Sensing
topicId topicWeight
[(3, 0.029), (10, 0.031), (12, 0.51), (27, 0.021), (31, 0.035), (34, 0.058), (39, 0.013), (41, 0.016), (55, 0.027), (69, 0.032), (73, 0.022), (88, 0.05), (91, 0.022)]
simIndex simValue paperId paperTitle
same-paper 1 0.87214535 135 nips-2005-Neuronal Fiber Delineation in Area of Edema from Diffusion Weighted MRI
Author: Ofer Pasternak, Nathan Intrator, Nir Sochen, Yaniv Assaf
Abstract: Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is a non-invasive method for brain neuronal fibers delineation. Here we show a modification for DT-MRI that allows delineation of neuronal fibers which are infiltrated by edema. We use the Multiple Tensor Variational (MTV) framework which replaces the diffusion model of DT-MRI with a multiple component model and fits it to the signal attenuation with a variational regularization mechanism. In order to reduce free water contamination we estimate the free water compartment volume fraction in each voxel, remove it, and then calculate the anisotropy of the remaining compartment. The variational framework was applied on data collected with conventional clinical parameters, containing only six diffusion directions. By using the variational framework we were able to overcome the highly ill-posed fitting. The results show that we were able to find fibers that were not found by DT-MRI.
2 0.6400485 86 nips-2005-Generalized Nonnegative Matrix Approximations with Bregman Divergences
Author: Suvrit Sra, Inderjit S. Dhillon
Abstract: Nonnegative matrix approximation (NNMA) is a recent technique for dimensionality reduction and data analysis that yields a parts based, sparse nonnegative representation for nonnegative input data. NNMA has found a wide variety of applications, including text analysis, document clustering, face/image recognition, language modeling, speech processing and many others. Despite these numerous applications, the algorithmic development for computing the NNMA factors has been relatively deficient. This paper makes algorithmic progress by modeling and solving (using multiplicative updates) new generalized NNMA problems that minimize Bregman divergences between the input matrix and its lowrank approximation. The multiplicative update formulae in the pioneering work by Lee and Seung [11] arise as a special case of our algorithms. In addition, the paper shows how to use penalty functions for incorporating constraints other than nonnegativity into the problem. Further, some interesting extensions to the use of “link” functions for modeling nonlinear relationships are also discussed. 1
3 0.60290271 116 nips-2005-Learning Topology with the Generative Gaussian Graph and the EM Algorithm
Author: Michaël Aupetit
Abstract: Given a set of points and a set of prototypes representing them, how to create a graph of the prototypes whose topology accounts for that of the points? This problem had not yet been explored in the framework of statistical learning theory. In this work, we propose a generative model based on the Delaunay graph of the prototypes and the ExpectationMaximization algorithm to learn the parameters. This work is a first step towards the construction of a topological model of a set of points grounded on statistics. 1 1.1
4 0.52549511 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions
Author: Inna Weiner, Tomer Hertz, Israel Nelken, Daphna Weinshall
Abstract: We present a novel approach to the characterization of complex sensory neurons. One of the main goals of characterizing sensory neurons is to characterize dimensions in stimulus space to which the neurons are highly sensitive (causing large gradients in the neural responses) or alternatively dimensions in stimulus space to which the neuronal response are invariant (defining iso-response manifolds). We formulate this problem as that of learning a geometry on stimulus space that is compatible with the neural responses: the distance between stimuli should be large when the responses they evoke are very different, and small when the responses they evoke are similar. Here we show how to successfully train such distance functions using rather limited amount of information. The data consisted of the responses of neurons in primary auditory cortex (A1) of anesthetized cats to 32 stimuli derived from natural sounds. For each neuron, a subset of all pairs of stimuli was selected such that the responses of the two stimuli in a pair were either very similar or very dissimilar. The distance function was trained to fit these constraints. The resulting distance functions generalized to predict the distances between the responses of a test stimulus and the trained stimuli. 1
5 0.23398368 94 nips-2005-Identifying Distributed Object Representations in Human Extrastriate Visual Cortex
Author: Rory Sayres, David Ress, Kalanit Grill-spector
Abstract: The category of visual stimuli has been reliably decoded from patterns of neural activity in extrastriate visual cortex [1]. It has yet to be seen whether object identity can be inferred from this activity. We present fMRI data measuring responses in human extrastriate cortex to a set of 12 distinct object images. We use a simple winner-take-all classifier, using half the data from each recording session as a training set, to evaluate encoding of object identity across fMRI voxels. Since this approach is sensitive to the inclusion of noisy voxels, we describe two methods for identifying subsets of voxels in the data which optimally distinguish object identity. One method characterizes the reliability of each voxel within subsets of the data, while another estimates the mutual information of each voxel with the stimulus set. We find that both metrics can identify subsets of the data which reliably encode object identity, even when noisy measurements are artificially added to the data. The mutual information metric is less efficient at this task, likely due to constraints in fMRI data. 1
6 0.20154217 154 nips-2005-Preconditioner Approximations for Probabilistic Graphical Models
7 0.20147455 144 nips-2005-Off-policy Learning with Options and Recognizers
8 0.20069069 78 nips-2005-From Weighted Classification to Policy Search
9 0.20031314 23 nips-2005-An Application of Markov Random Fields to Range Sensing
10 0.20018961 184 nips-2005-Structured Prediction via the Extragradient Method
11 0.19955763 132 nips-2005-Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity
12 0.19856539 177 nips-2005-Size Regularized Cut for Data Clustering
13 0.19736198 50 nips-2005-Convex Neural Networks
14 0.19702539 43 nips-2005-Comparing the Effects of Different Weight Distributions on Finding Sparse Representations
15 0.19684345 96 nips-2005-Inference with Minimal Communication: a Decision-Theoretic Variational Approach
16 0.19680184 200 nips-2005-Variable KD-Tree Algorithms for Spatial Pattern Search and Discovery
17 0.19627692 92 nips-2005-Hyperparameter and Kernel Learning for Graph Based Semi-Supervised Classification
18 0.19577406 72 nips-2005-Fast Online Policy Gradient Learning with SMD Gain Vector Adaptation
19 0.19566236 30 nips-2005-Assessing Approximations for Gaussian Process Classification
20 0.19541207 90 nips-2005-Hot Coupling: A Particle Approach to Inference and Normalization on Pairwise Undirected Graphs