nips nips2012 nips2012-78 knowledge-graph by maker-knowledge-mining

78 nips-2012-Compressive Sensing MRI with Wavelet Tree Sparsity


Source: pdf

Author: Chen Chen, Junzhou Huang

Abstract: In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct an MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K + K log n) for standard K-sparse data with length n. However, few existing algorithms have utilized this for CS-MRI; most of them model the problem with total variation and wavelet sparse regularization. On the other hand, some algorithms have been proposed for tree-sparse regularization, but few of them have validated the benefit of the wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems; each subproblem can then be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms and obtains better reconstruction results on real MR images than general tree-based solvers or algorithms. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct an MR image with good quality from only a small number of measurements. [sent-4, score-0.105]

2 According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K + K log n) for standard K-sparse data with length n. [sent-6, score-0.232]

3 However, few existing algorithms have utilized this for CS-MRI; most of them model the problem with total variation and wavelet sparse regularization. [sent-7, score-0.558]

4 On the other hand, some algorithms have been proposed for tree-sparse regularization, but few of them have validated the benefit of the wavelet tree structure in CS-MRI. [sent-8, score-0.955]

5 Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. [sent-10, score-0.482]

6 The original complex problem is decomposed into three simpler subproblems; each subproblem can then be efficiently solved with an iterative scheme. [sent-11, score-0.166]

7 Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms and obtains better reconstruction results on real MR images than general tree-based solvers or algorithms. [sent-12, score-0.401]

8 One limitation of MRI is its imaging speed, including both scanning speed and reconstruction speed. [sent-14, score-0.307]

9 Long waiting times and slow scanning may annoy patients and blur images due to local motion such as breathing and heartbeat. [sent-15, score-0.194]

10 According to compressive sensing (CS) [1,2] theory, only a small number of measurements is enough to recover an image with good quality. [sent-16, score-0.44]

11 This is an extension of the Nyquist-Shannon sampling theorem to the case where data is sparse or can be sparsely represented. [sent-17, score-0.099]

12 Compressive Sensing Magnetic Resonance Imaging (CS-MRI) has become one of the most successful applications of compressive sensing, since MR scanning time is directly related to the number of sampling measurements [3]. [sent-18, score-0.339]

13 As most images can be transformed into some sparse domain (wavelets, etc. [sent-19, score-0.125]

14 ), only O(K + K log n) samples are enough to obtain robust MR image reconstruction. [sent-20, score-0.105]

15 Recent works on structured sparsity show that the required number of sampling measurements could be further reduced to O(K + log n) by exploiting the tree structure [4-6]. [sent-22, score-0.517]

16 A typical relationship in tree sparsity is that, if a parent coefficient has a large/small value, its children also tend to be large/small. [sent-23, score-0.362]

17 Some methods have been proposed to improve standard CS reconstruction by utilizing this prior. [sent-24, score-0.141]

18 Specifically, two convex models have been proposed to handle the tree-based reconstruction problem [7]. [sent-25, score-0.175]

19 In Bayesian compressive sensing, Markov Chain Monte Carlo (MCMC) and variational Bayesian (VB) are used to solve the tree-based hierarchical models [8][9]. [sent-27, score-0.2]

20 Turbo AMP [10] also exploits tree sparsity well for compressive sensing, with an iterative approximate message passing approach. [sent-28, score-0.594]

21 However, none of them has conducted extensive experiments on MR images to validate its superiority. [sent-29, score-0.198]

22 In existing CS-MRI models, the linear combination of total variation and wavelet sparse regularization is very popular [3,12-15]. [sent-30, score-0.528]

23 They are the state-of-the-art algorithms for CS-MRI, but none of them utilizes the tree sparsity prior to enhance its performance. [sent-34, score-0.398]

24 In this paper, we propose a new model for CS-MRI, which combines wavelet sparsity, gradient sparsity and tree sparsity seamlessly. [sent-35, score-0.856]

25 In tree structure modeling, we assign each pair of parent-child wavelet coefficients to one group, which forces them to be zero or nonzero simultaneously (see the index-building sketch below). [sent-36, score-0.662]
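As a concrete illustration of this grouping, a minimal sketch that builds parent-child index pairs for a dyadic 1D wavelet layout might look as follows (the 2D quadtree case follows the same idea). The dyadic indexing convention used here is an assumption for illustration, not necessarily the paper's exact layout.

```python
def parent_child_groups(n):
    """Build (parent, child) index pairs for a dyadic 1D wavelet layout.

    Assumes the standard layout where detail coefficients at level j occupy
    indices [2**j, 2**(j+1)) and coefficient k has children 2k and 2k+1;
    this indexing convention is an illustrative assumption.
    """
    groups = []
    j = 1
    while 2 ** (j + 1) < n:
        for k in range(2 ** j, 2 ** (j + 1)):
            for child in (2 * k, 2 * k + 1):
                if child < n:
                    groups.append((k, child))  # one group per parent-child pair
        j += 1
    return groups

print(parent_child_groups(16)[:4])  # [(2, 4), (2, 5), (3, 6), (3, 7)]
```

Each pair then forms one group; since a parent appears in one group per child, the groups overlap.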

26 This is an overlapping group problem that is hard to solve directly. [sent-37, score-0.11]

27 Each of the subproblems then has a closed-form solution or can be solved efficiently with existing techniques. [sent-39, score-0.084]

28 We conduct extensive experiments to compare the proposed algorithm with the state-of-the-art CS-MRI algorithms and several tree sparsity algorithms. [sent-40, score-0.365]

29 (3) Numerous experiments have been conducted to compare the proposed algorithm with the state-of-the-art CS-MRI algorithms and several general tree-based algorithms or solvers. [sent-44, score-0.098]

30 Related work: tree-based compressive sensing. If a signal is sparse or can be sparsely represented, the number of samples necessary to reconstruct it can be significantly smaller than that required by the Nyquist-Shannon sampling theorem. [sent-47, score-0.39]

31 Moreover, if we have prior knowledge about the structure of the original signal, such as a group or graph structure, the measurements can be further reduced [4,5]. [sent-48, score-0.184]

32 Some previous algorithms have utilized the tree structure of wavelet coefficients to improve CS reconstruction [7-10]. [sent-49, score-0.833]

33 OGL [7] is a convex approach to model the tree structure: $\hat{\theta} = \arg\min_\theta \{F(\theta) = \frac{1}{2}\|b - A\Phi^T\theta\|_2^2 + \lambda\sum_{g\in G}\|\theta_g\|_2 + \frac{\tau}{2}\sum_{i=1}^{n}\sum_{j\in J_i}(\theta_i - \theta_i^j)^2\}$ (1), where θ is the set of wavelet coefficients. [sent-50, score-0.69]

34 A represents a partial Fourier transform for the MR reconstruction problem, and b is the measurement data. [sent-51, score-0.239]

35 When the wavelet coefficients are recovered, they can be transformed into the recovered image by an inverse wavelet transform. [sent-55, score-0.917]

36 This method explores the tree structure assumption well, but may be slow in general for the following reasons: a) the parent-child relationship in their model is hard to maintain; b) it applies SpaRSA [11] to solve (1). [sent-56, score-0.334]

37 In (2), x is the original image to be reconstructed and w is Gaussian white noise. [sent-59, score-0.17]

38 In these approaches, graphical models are used to represent the wavelet tree structure and the distribution of each coefficient is decided by its parent’s value. [sent-60, score-0.662]

39 Efficient MR image reconstruction algorithms: In existing CS-MRI algorithms, the linear combination of total variation and wavelet sparsity constraints has shown good properties for MR images. [sent-62, score-0.876]

40 α and β are two positive parameters, and Φ denotes the wavelet transform. [sent-64, score-0.406]

41 These approaches are very effective for real MR image reconstruction, but none of them utilizes the wavelet tree structure for further enhancement. [sent-69, score-0.832]

42 $\hat{x} = \arg\min_x\{\frac{1}{2}\|Ax - b\|_2^2 + \alpha\|x\|_{TV} + \beta\|\Phi x\|_1\}$ (3). Convex overlapped group sparsity solvers: SLEP [18] (Sparse Learning with Efficient Projections) provides a package for the tree-structured group lasso (4). [sent-71, score-0.503]

43 Its main function iteratively solves the tree-structured denoising problem. [sent-72, score-0.33]

44 For the reconstruction problem, it applies FISTA to reduce the problem to denoising, as in the generic sketch below. [sent-73, score-0.141]
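For reference, a generic FISTA loop of this kind can be sketched as below; `prox` stands in for the tree-structured denoising operator, and all names and signatures here are illustrative assumptions rather than SLEP's actual interface.

```python
import numpy as np

def fista(A, At, b, prox, step, n_iter=50):
    """Generic FISTA sketch: a gradient step on 0.5*||Ax - b||^2 followed
    by a prox (denoising) step. prox(v, step) is assumed to solve the
    regularized denoising subproblem at v; step should be <= 1/L, where L
    is the Lipschitz constant of At(A(.)).
    """
    x = At(b)                      # zero-filled starting point
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = At(A(y) - b)        # gradient of the data-fidelity term
        x_new = prox(y - step * grad, step)
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x
```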

45 $\hat{x} = \arg\min_x\{\frac{1}{2}\|Ax - b\|_2^2 + \beta\|\Phi x\|_{tree}\}$ (4). YALL1 [19] (Your Algorithms for L1) can solve the general overlapping group sparse problem efficiently. [sent-74, score-0.423]

46 It first relaxes the constrained overlapping group minimization to an unconstrained problem via the Lagrangian method. [sent-76, score-0.085]

47 Then the minimization over the x and z subproblems can be written as: $(\hat{x},\hat{z}) = \arg\min_{x,z}\{\sum_{i=1}^{s} w_i\|z_i\|_2 - \lambda_1^T(z - G\Phi x) + \frac{\beta_1}{2}\|z - G\Phi x\|_2^2 - \lambda_2^T(Ax - b) + \frac{\beta_2}{2}\|Ax - b\|_2^2\}$ (5), where G indicates the grouping index matrix with all its elements being 1 or 0. [sent-77, score-0.089]

48 Algorithm: Observations tell us that the wavelet coefficients of real MR images tend to be quadtree-structured [20], although not strictly. [sent-80, score-0.561]

49 Moreover, they are generally sparse in the wavelet and gradient domains. [sent-81, score-0.449]

50 The tree-based MRI problem can be formulated as follows: $\hat{x} = \arg\min_x\{F(x) = \frac{1}{2}\|Ax - b\|_2^2 + \alpha\|x\|_{TV} + \beta(\|\Phi x\|_1 + \sum_{g\in G}\|(\Phi x)_g\|_2)\}$ (6). The total variation and L1 terms in fact complement the tree structure assumption, which makes our model more robust on real MR images. [sent-84, score-0.367]
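To make the composite objective (6) concrete, the following numpy sketch evaluates a discrete analogue of F(x). The callables A and Phi, the group index arrays, and the isotropic forward-difference TV discretization are all assumptions supplied for illustration, not the paper's implementation.

```python
import numpy as np

def objective(x, A, b, Phi, groups, alpha, beta):
    """Discrete analogue of (6):
    0.5*||Ax - b||^2 + alpha*TV(x) + beta*(||Phi x||_1 + sum_g ||(Phi x)_g||_2).

    A, Phi : callables (e.g. partial Fourier and wavelet transforms);
    groups : list of index arrays into the flattened wavelet coefficients.
    """
    fidelity = 0.5 * np.linalg.norm(A(x) - b) ** 2
    # isotropic total variation via forward differences (replicate boundary)
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    tv = np.sum(np.sqrt(dx ** 2 + dy ** 2))
    theta = Phi(x).ravel()
    l1 = np.sum(np.abs(theta))
    tree = sum(np.linalg.norm(theta[g]) for g in groups)
    return fidelity + alpha * tv + beta * (l1 + tree)
```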

51 This is a main difference from previous tree-structured algorithms and solvers. [sent-85, score-0.291]

52 Solution: As mentioned above, the z-subproblem in (7) can be written as $z_{g_i} = \arg\min_{z_{g_i}}\{\beta\|z_{g_i}\|_2 + \frac{\lambda}{2}\|z_{g_i} - (G\Phi x)_{g_i}\|_2^2\},\ i = 1, 2, \ldots, s$ (8) [sent-93, score-0.276]

53 where $g_i$ is the i-th group and s is the total number of groups. [sent-96, score-0.112]

54 It has a closed-form solution by group soft thresholding: $z_{g_i} = \max(\|r_i\|_2 - \tfrac{\beta}{\lambda},\ 0)\,\tfrac{r_i}{\|r_i\|_2},\ i = 1, 2, \ldots, s$ (9), where $r_i = (G\Phi x)_{g_i}$. [sent-97, score-0.149]
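A minimal numpy version of this shrinkage step might look as follows; the threshold beta/lambda and the index-array representation of the groups come from the reconstruction of (8)-(9) above and are assumptions, not the authors' released code.

```python
import numpy as np

def group_soft_threshold(r, thresh):
    """Closed-form z-subproblem solution for one group (sketch of eq. 9).

    r      : the group slice r_i = (G Phi x)_{g_i}
    thresh : beta / lambda, the shrinkage threshold (an assumed value)
    """
    norm = np.linalg.norm(r)
    if norm == 0:
        return np.zeros_like(r)
    return max(norm - thresh, 0.0) * r / norm

def shrink_groups(coeffs, groups, thresh):
    """Shrink every group independently; in the lifted z-space the
    duplicated parent-child groups are disjoint, so no overlap occurs."""
    z = np.zeros_like(coeffs)
    for g in groups:  # g is an index array for one group
        z[g] = group_soft_threshold(coeffs[g], thresh)
    return z
```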

55 Algorithm analysis: Suppose x represents an image with n pixels and z contains n′ elements. [sent-106, score-0.105]

56 Step 4 takes O(n log n) when the fast wavelet transform is applied. [sent-111, score-0.45]

57 Note that n′ ≤ 2n, since we assign every parent-child pair of coefficients to one group and leave every wavelet scaling coefficient in its own group. [sent-113, score-0.455]

58 We introduce the wavelet tree structure constraint into our model without increasing the total computational complexity. [sent-115, score-0.694]

59 In the MR imaging problem, A is a partial Fourier transform with m rows and n columns. [sent-119, score-0.109]
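A common way to realize such a partial Fourier operator in code is an orthonormal FFT followed by a binary k-space sampling mask; the random mask below is only one possible sampling pattern and is an assumption for illustration.

```python
import numpy as np

def make_partial_fourier(mask):
    """Return the sampling operator A and its adjoint At for a masked
    2D Fourier transform. mask is a boolean array with m True entries
    (sampled k-space locations) out of n pixels; the orthonormal FFT
    makes At the exact adjoint of A.
    """
    def A(x):
        return np.fft.fft2(x, norm="ortho")[mask]

    def At(y):
        full = np.zeros(mask.shape, dtype=complex)
        full[mask] = y
        return np.fft.ifft2(full, norm="ortho")  # take .real for real images

    return A, At

rng = np.random.default_rng(0)
mask = rng.random((256, 256)) < 0.2  # roughly a 20% sampling ratio
A, At = make_partial_fourier(mask)
```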

60 The fewer measurements we sample, the less MR scanning time is needed. [sent-121, score-0.155]

61 MR imaging is thus especially interested in low-sampling-ratio cases. [sent-122, score-0.116]

62 We conduct experiments on four MR images: "Cardiac", "Brain", "Chest" and "Shoulder" (Figure 1). [sent-131, score-0.082]

63 We first compare our algorithm with the classical and fastest MR image reconstruction algorithms: CG[3], TVCMRI[12], RecPF[13], FCSA[14,15], and then with general tree based algorithms or solvers: AMP[10], VB[9], YALL1[19], SLEP[18]. [sent-132, score-0.496]

64 Comparisons with MR image reconstruction algorithms: We first compare our method with the state-of-the-art MR image reconstruction algorithms. [sent-145, score-0.522]

65 For convenience, all test images are resized to 256×256. [sent-146, score-0.082]

66 We decompose the wavelet coefficients into 4 levels here, since more levels would increase the computational cost and fewer levels would weaken the tree structure benefit. [sent-149, score-0.687]
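For reference, a 4-level decomposition of this kind can be obtained with the PyWavelets package; the Daubechies-4 wavelet below is an assumed choice, since the extracted text does not name the wavelet family used.

```python
import numpy as np
import pywt  # PyWavelets

img = np.random.rand(256, 256)               # stand-in for a 256x256 MR image
coeffs = pywt.wavedec2(img, "db4", level=4)  # 4-level 2D wavelet decomposition
arr, slices = pywt.coeffs_to_array(coeffs)   # flatten, usable as Phi(x)
rec = pywt.waverec2(coeffs, "db4")           # inverse transform for recovery
```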

67 Although the tree structure costs a little more time to solve, our algorithm always achieves the best performance in terms of SNR and CPU time. [sent-151, score-0.256]

68 We have conducted experiments on other images and obtained similar results: the proposed algorithm always has the best performance in terms of SNR and CPU time. [sent-152, score-0.12]

69 This result is reasonable because we exploit the wavelet tree structure in our model, which can reduce the required number of measurements or increase the accuracy of the solution with the same measurements. [sent-153, score-0.738]

70 Figure 2: Brain image reconstruction with 20% sampling. [sent-154, score-0.246]

71 Visual results from left to right, top to bottom are original image, images reconstructed by CG [3], TVCMRI [12], RecPF [13], FCSA [14,15], and the proposed algorithm. [sent-155, score-0.171]

72 Comparisons with general tree-structure algorithms: We also compare our algorithm with existing tree-sparsity algorithms based on statistical inference and convex optimization. [sent-164, score-0.685]

73 Due to the higher space requirement and time complexity of VB, we resize all images to 128 × 128. [sent-169, score-0.082]

74 Figure 3 shows the reconstruction results on the "Brain" image with only 20% measurements. [sent-171, score-0.246]

75 These results are reasonable because none of the other algorithms uses the sparse priors of MR images in the wavelet and gradient domains simultaneously. [sent-176, score-0.594]

76 Table 1: Comparisons of SNR (dB) on four MR images. Columns: Cardiac, Brain, Chest, Shoulder; rows (iterations): AMP [10] (10), VB [9] (10), SLEP [18] (50), YALL1 [19] (50), Proposed (50). [sent-177, score-0.082]

77 Although statistical algorithms are slow in general, they have the convenience of not requiring parameter tuning, as all parameters are learned from the data. [sent-218, score-0.086]

78 Fortunately, good parameters for MR image reconstruction are easy to tune in our model. [sent-219, score-0.246]

79 Except for the proposed algorithm, all other algorithms make a strong assumption about the tree structure. [sent-220, score-0.25]

80 However, for real MR data, many images do not strictly follow this assumption. [sent-221, score-0.114]

81 Figure 3: Brain image reconstruction with 20% sampling. [sent-262, score-0.246]

82 Visual results from left to right, top to bottom are original image, images reconstructed by AMP [10], VB [9], SLEP [18], YALL1 [19], and the proposed algorithm. [sent-263, score-0.171]

83 To show the benefit of the proposed model, we design another experiment on a toy MR image, which more strictly follows the tree structure assumption. [sent-273, score-0.284]

84 First we set the wavelet coefficients that have the smallest 0. [sent-274, score-0.406]

85 Figure 4 shows the original toy brain image and corresponding results of different algorithms. [sent-278, score-0.227]

86 From Figures 4 and 3, we can see that the proposed algorithm has great superiority on real MR images, because we combine TV and wavelet sparsity, which "soften" and complement the tree structure assumption for real MR data. [sent-280, score-0.726]

87 Other tree-based algorithms depend on the "hard" tree structure only, which makes it hard for them to perform well on CS-MRI. [sent-281, score-0.506]

88 Figure 4: Toy image reconstruction with 20% sampling. [sent-287, score-0.246]

89 Visual results from left to right, top to bottom are original image, images reconstructed by AMP [10], VB [9], SLEP [18], YALL1 [19], and the proposed algorithm. [sent-288, score-0.171]

90 All algorithms terminate after 50 iterations, except AMP [10] and VB [9], which terminate after 10 iterations. [sent-297, score-0.128]

91 5 Conclusions: Real MR images not only tend to be tree-structured sparse, but are also sparse in the wavelet and gradient domains. [sent-299, score-0.792]

92 To solve this model, we decompose the original problem into three simpler ones and solve each of them very efficiently. [sent-301, score-0.138]

93 Compared with the state-of-the-art algorithms in CS-MRI, the tree structure in our model can help reduce the required measurements, and leads to better performance. [sent-304, score-0.286]

94 Compared with general tree sparsity algorithms, our algorithm can obtain more robust results on real MR data. [sent-305, score-0.367]

95 Future work will combine the proposed algorithm with nonlocal total variation [22] for multi-contrast MRI [21]. [sent-306, score-0.112]

96 (2006) Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. [sent-314, score-0.173]

97 (2007) Sparse MRI: The application of compressed sensing for rapid MR imaging. [sent-320, score-0.158]

98 (2008) An efficient algorithm for compressed MR imaging using total variation and wavelets. [sent-376, score-0.22]

99 (2010) A fast alternating direction method for tvl1-l2 signal reconstruction from partial fourier data. [sent-382, score-0.244]

100 (2009) Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. [sent-401, score-0.238]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('mr', 0.541), ('wavelet', 0.406), ('tree', 0.22), ('snr', 0.19), ('vb', 0.171), ('amp', 0.167), ('slep', 0.164), ('compressive', 0.155), ('reconstruction', 0.141), ('fcsa', 0.123), ('recpf', 0.123), ('tvcmri', 0.123), ('zgi', 0.123), ('sparsity', 0.115), ('mri', 0.106), ('image', 0.105), ('sensing', 0.104), ('ax', 0.091), ('shoulder', 0.091), ('cardiac', 0.091), ('imaging', 0.087), ('chest', 0.084), ('images', 0.082), ('fista', 0.08), ('scanning', 0.079), ('measurements', 0.076), ('brain', 0.071), ('xg', 0.067), ('magnetic', 0.063), ('ogl', 0.062), ('shrinkgroup', 0.062), ('subproblems', 0.059), ('tv', 0.058), ('resonance', 0.055), ('compressed', 0.054), ('cpu', 0.054), ('coef', 0.051), ('sparsa', 0.05), ('group', 0.049), ('fourier', 0.049), ('metaxas', 0.047), ('variation', 0.047), ('cients', 0.047), ('numerous', 0.045), ('solve', 0.045), ('comparisons', 0.044), ('sparse', 0.043), ('reconstructed', 0.042), ('prox', 0.041), ('huang', 0.041), ('assisted', 0.041), ('watmri', 0.041), ('structured', 0.041), ('conducted', 0.038), ('yin', 0.038), ('rk', 0.037), ('structure', 0.036), ('overlapping', 0.036), ('teboulle', 0.036), ('convex', 0.034), ('zhang', 0.034), ('iterations', 0.034), ('miccai', 0.033), ('nonlocal', 0.033), ('medical', 0.033), ('slow', 0.033), ('none', 0.033), ('xk', 0.032), ('terminate', 0.032), ('signal', 0.032), ('total', 0.032), ('real', 0.032), ('replicates', 0.031), ('gi', 0.031), ('arg', 0.03), ('algorithms', 0.03), ('sampling', 0.029), ('solvers', 0.029), ('toy', 0.028), ('parent', 0.027), ('intervention', 0.027), ('sparsely', 0.027), ('cs', 0.026), ('ri', 0.026), ('nowak', 0.026), ('carin', 0.025), ('cg', 0.025), ('solved', 0.025), ('decompose', 0.025), ('donoho', 0.024), ('denoising', 0.024), ('bottom', 0.024), ('original', 0.023), ('chen', 0.023), ('wright', 0.023), ('convenience', 0.023), ('fast', 0.022), ('ratios', 0.022), ('patients', 0.022), ('transform', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999928 78 nips-2012-Compressive Sensing MRI with Wavelet Tree Sparsity

Author: Chen Chen, Junzhou Huang

Abstract: In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct an MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K + K log n) for standard K-sparse data with length n. However, few existing algorithms have utilized this for CS-MRI; most of them model the problem with total variation and wavelet sparse regularization. On the other hand, some algorithms have been proposed for tree-sparse regularization, but few of them have validated the benefit of the wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems; each subproblem can then be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms and obtains better reconstruction results on real MR images than general tree-based solvers or algorithms. 1

2 0.17499648 234 nips-2012-Multiresolution analysis on the symmetric group

Author: Risi Kondor, Walter Dempsey

Abstract: There is no generally accepted way to define wavelets on permutations. We address this issue by introducing the notion of coset based multiresolution analysis (CMRA) on the symmetric group, find the corresponding wavelet functions, and describe a fast wavelet transform for sparse signals. We discuss potential applications in ranking, sparse approximation, and multi-object tracking. 1

3 0.17154312 363 nips-2012-Wavelet based multi-scale shape features on arbitrary surfaces for cortical thickness discrimination

Author: Won H. Kim, Deepti Pachauri, Charles Hatt, Moo. K. Chung, Sterling Johnson, Vikas Singh

Abstract: Hypothesis testing on signals defined on surfaces (such as the cortical surface) is a fundamental component of a variety of studies in Neuroscience. The goal here is to identify regions that exhibit changes as a function of the clinical condition under study. As the clinical questions of interest move towards identifying very early signs of diseases, the corresponding statistical differences at the group level invariably become weaker and increasingly hard to identify. Indeed, after a multiple comparisons correction is adopted (to account for correlated statistical tests over all surface points), very few regions may survive. In contrast to hypothesis tests on point-wise measurements, in this paper, we make the case for performing statistical analysis on multi-scale shape descriptors that characterize the local topological context of the signal around each surface vertex. Our descriptors are based on recent results from harmonic analysis that show how wavelet theory extends to non-Euclidean settings (i.e., irregular weighted graphs). We provide strong evidence that these descriptors successfully pick up group-wise differences, where traditional methods either fail or yield unsatisfactory results. Other than this primary application, we show how the framework allows performing cortical surface smoothing in the native space without mapping to a unit sphere. 1

4 0.16785878 79 nips-2012-Compressive neural representation of sparse, high-dimensional probabilities

Author: Xaq Pitkow

Abstract: This paper shows how sparse, high-dimensional probability distributions could be represented by neurons with exponential compression. The representation is a novel application of compressive sensing to sparse probability distributions rather than to the usual sparse signals. The compressive measurements correspond to expected values of nonlinear functions of the probabilistically distributed variables. When these expected values are estimated by sampling, the quality of the compressed representation is limited only by the quality of sampling. Since the compression preserves the geometric structure of the space of sparse probability distributions, probabilistic computation can be performed in the compressed domain. Interestingly, functions satisfying the requirements of compressive sensing can be implemented as simple perceptrons. If we use perceptrons as a simple model of feedforward computation by neurons, these results show that the mean activity of a relatively small number of neurons can accurately represent a highdimensional joint distribution implicitly, even without accounting for any noise correlations. This comprises a novel hypothesis for how neurons could encode probabilities in the brain. 1

5 0.087472431 260 nips-2012-Online Sum-Product Computation Over Trees

Author: Mark Herbster, Stephen Pasteris, Fabio Vitale

Abstract: We consider the problem of performing efficient sum-product computations in an online setting over a tree. A natural application of our methods is to compute the marginal distribution at a vertex in a tree-structured Markov random field. Belief propagation can be used to solve this problem, but requires time linear in the size of the tree, and is therefore too slow in an online setting where we are continuously receiving new data and computing individual marginals. With our method we aim to update the data and compute marginals in time that is no more than logarithmic in the size of the tree, and is often significantly less. We accomplish this via a hierarchical covering structure that caches previous local sum-product computations. Our contribution is three-fold: we i) give a linear time algorithm to find an optimal hierarchical cover of a tree; ii) give a sum-productlike algorithm to efficiently compute marginals with respect to this cover; and iii) apply “i” and “ii” to find an efficient algorithm with a regret bound for the online allocation problem in a multi-task setting. 1

6 0.086008631 104 nips-2012-Dual-Space Analysis of the Sparse Linear Model

7 0.085900292 339 nips-2012-The Time-Marginalized Coalescent Prior for Hierarchical Clustering

8 0.079984546 228 nips-2012-Multilabel Classification using Bayesian Compressed Sensing

9 0.079615451 282 nips-2012-Proximal Newton-type methods for convex optimization

10 0.076144911 81 nips-2012-Context-Sensitive Decision Forests for Object Detection

11 0.074462548 63 nips-2012-CPRL -- An Extension of Compressive Sensing to the Phase Retrieval Problem

12 0.073443778 159 nips-2012-Image Denoising and Inpainting with Deep Neural Networks

13 0.073303252 120 nips-2012-Exact and Stable Recovery of Sequences of Signals with Sparse Increments via Differential 1-Minimization

14 0.071837433 9 nips-2012-A Geometric take on Metric Learning

15 0.069949515 290 nips-2012-Recovery of Sparse Probability Measures via Convex Programming

16 0.06784749 268 nips-2012-Perfect Dimensionality Recovery by Variational Bayesian PCA

17 0.066035137 180 nips-2012-Learning Mixtures of Tree Graphical Models

18 0.063779488 172 nips-2012-Latent Graphical Model Selection: Efficient Methods for Locally Tree-like Graphs

19 0.06198065 6 nips-2012-A Convex Formulation for Learning Scale-Free Networks via Submodular Relaxation

20 0.061250828 284 nips-2012-Q-MKL: Matrix-induced Regularization in Multi-Kernel Learning with Applications to Neuroimaging


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.153), (1, 0.053), (2, -0.023), (3, -0.053), (4, 0.024), (5, 0.04), (6, -0.017), (7, -0.086), (8, 0.001), (9, 0.021), (10, -0.065), (11, 0.013), (12, 0.044), (13, -0.038), (14, -0.053), (15, 0.035), (16, -0.002), (17, 0.068), (18, 0.092), (19, -0.151), (20, -0.036), (21, 0.202), (22, -0.057), (23, -0.035), (24, 0.033), (25, 0.083), (26, -0.107), (27, 0.085), (28, 0.018), (29, 0.177), (30, -0.049), (31, -0.145), (32, -0.04), (33, -0.236), (34, -0.031), (35, -0.052), (36, 0.004), (37, 0.177), (38, -0.062), (39, -0.084), (40, -0.036), (41, -0.012), (42, 0.035), (43, 0.052), (44, 0.01), (45, -0.016), (46, -0.01), (47, 0.109), (48, 0.04), (49, 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95109475 78 nips-2012-Compressive Sensing MRI with Wavelet Tree Sparsity

Author: Chen Chen, Junzhou Huang

Abstract: In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct an MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K + K log n) for standard K-sparse data with length n. However, few existing algorithms have utilized this for CS-MRI; most of them model the problem with total variation and wavelet sparse regularization. On the other hand, some algorithms have been proposed for tree-sparse regularization, but few of them have validated the benefit of the wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems; each subproblem can then be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms and obtains better reconstruction results on real MR images than general tree-based solvers or algorithms. 1

2 0.69563609 363 nips-2012-Wavelet based multi-scale shape features on arbitrary surfaces for cortical thickness discrimination

Author: Won H. Kim, Deepti Pachauri, Charles Hatt, Moo. K. Chung, Sterling Johnson, Vikas Singh

Abstract: Hypothesis testing on signals defined on surfaces (such as the cortical surface) is a fundamental component of a variety of studies in Neuroscience. The goal here is to identify regions that exhibit changes as a function of the clinical condition under study. As the clinical questions of interest move towards identifying very early signs of diseases, the corresponding statistical differences at the group level invariably become weaker and increasingly hard to identify. Indeed, after a multiple comparisons correction is adopted (to account for correlated statistical tests over all surface points), very few regions may survive. In contrast to hypothesis tests on point-wise measurements, in this paper, we make the case for performing statistical analysis on multi-scale shape descriptors that characterize the local topological context of the signal around each surface vertex. Our descriptors are based on recent results from harmonic analysis that show how wavelet theory extends to non-Euclidean settings (i.e., irregular weighted graphs). We provide strong evidence that these descriptors successfully pick up group-wise differences, where traditional methods either fail or yield unsatisfactory results. Other than this primary application, we show how the framework allows performing cortical surface smoothing in the native space without mapping to a unit sphere. 1

3 0.65669423 234 nips-2012-Multiresolution analysis on the symmetric group

Author: Risi Kondor, Walter Dempsey

Abstract: There is no generally accepted way to define wavelets on permutations. We address this issue by introducing the notion of coset based multiresolution analysis (CMRA) on the symmetric group, find the corresponding wavelet functions, and describe a fast wavelet transform for sparse signals. We discuss potential applications in ranking, sparse approximation, and multi-object tracking. 1

4 0.58222568 63 nips-2012-CPRL -- An Extension of Compressive Sensing to the Phase Retrieval Problem

Author: Henrik Ohlsson, Allen Yang, Roy Dong, Shankar Sastry

Abstract: While compressive sensing (CS) has been one of the most vibrant research fields in the past few years, most development only applies to linear models. This limits its application in many areas where CS could make a difference. This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal. We propose a novel solution using a lifting technique – CPRL, which relaxes the NP-hard problem to a nonsmooth semidefinite program. Our analysis shows that CPRL inherits many desirable properties from CS, such as guarantees for exact recovery. We further provide scalable numerical solvers to accelerate its implementation. 1

5 0.54934376 79 nips-2012-Compressive neural representation of sparse, high-dimensional probabilities

Author: Xaq Pitkow

Abstract: This paper shows how sparse, high-dimensional probability distributions could be represented by neurons with exponential compression. The representation is a novel application of compressive sensing to sparse probability distributions rather than to the usual sparse signals. The compressive measurements correspond to expected values of nonlinear functions of the probabilistically distributed variables. When these expected values are estimated by sampling, the quality of the compressed representation is limited only by the quality of sampling. Since the compression preserves the geometric structure of the space of sparse probability distributions, probabilistic computation can be performed in the compressed domain. Interestingly, functions satisfying the requirements of compressive sensing can be implemented as simple perceptrons. If we use perceptrons as a simple model of feedforward computation by neurons, these results show that the mean activity of a relatively small number of neurons can accurately represent a highdimensional joint distribution implicitly, even without accounting for any noise correlations. This comprises a novel hypothesis for how neurons could encode probabilities in the brain. 1

6 0.46700191 260 nips-2012-Online Sum-Product Computation Over Trees

7 0.44777122 104 nips-2012-Dual-Space Analysis of the Sparse Linear Model

8 0.44626731 81 nips-2012-Context-Sensitive Decision Forests for Object Detection

9 0.42081872 159 nips-2012-Image Denoising and Inpainting with Deep Neural Networks

10 0.39250088 339 nips-2012-The Time-Marginalized Coalescent Prior for Hierarchical Clustering

11 0.36719567 101 nips-2012-Discriminatively Trained Sparse Code Gradients for Contour Detection

12 0.36193025 232 nips-2012-Multiplicative Forests for Continuous-Time Processes

13 0.35560694 210 nips-2012-Memorability of Image Regions

14 0.35502994 120 nips-2012-Exact and Stable Recovery of Sequences of Signals with Sparse Increments via Differential 1-Minimization

15 0.35022703 235 nips-2012-Natural Images, Gaussian Mixtures and Dead Leaves

16 0.33829278 43 nips-2012-Approximate Message Passing with Consistent Parameter Estimation and Applications to Sparse Learning

17 0.33686557 92 nips-2012-Deep Representations and Codes for Image Auto-Annotation

18 0.3343896 46 nips-2012-Assessing Blinding in Clinical Trials

19 0.31682923 365 nips-2012-Why MCA? Nonlinear sparse coding with spike-and-slab prior for neurally plausible image encoding

20 0.31503457 207 nips-2012-Mandatory Leaf Node Prediction in Hierarchical Multilabel Classification


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.094), (17, 0.01), (21, 0.017), (38, 0.143), (39, 0.011), (42, 0.016), (54, 0.013), (55, 0.017), (63, 0.27), (74, 0.062), (76, 0.128), (80, 0.083), (92, 0.027), (94, 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.82578075 89 nips-2012-Coupling Nonparametric Mixtures via Latent Dirichlet Processes

Author: Dahua Lin, John W. Fisher

Abstract: Mixture distributions are often used to model complex data. In this paper, we develop a new method that jointly estimates mixture models over multiple data sets by exploiting the statistical dependencies between them. Specifically, we introduce a set of latent Dirichlet processes as sources of component models (atoms), and for each data set, we construct a nonparametric mixture model by combining sub-sampled versions of the latent DPs. Each mixture model may acquire atoms from different latent DPs, while each atom may be shared by multiple mixtures. This multi-to-multi association distinguishes the proposed method from previous ones that require the model structure to be a tree or a chain, allowing more flexible designs. We also derive a sampling algorithm that jointly infers the model parameters and present experiments on both document analysis and image modeling. 1

same-paper 2 0.8118788 78 nips-2012-Compressive Sensing MRI with Wavelet Tree Sparsity

Author: Chen Chen, Junzhou Huang

Abstract: In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct an MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K + K log n) for standard K-sparse data with length n. However, few existing algorithms have utilized this for CS-MRI; most of them model the problem with total variation and wavelet sparse regularization. On the other hand, some algorithms have been proposed for tree-sparse regularization, but few of them have validated the benefit of the wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems; each subproblem can then be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms and obtains better reconstruction results on real MR images than general tree-based solvers or algorithms. 1

3 0.71241444 316 nips-2012-Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models

Author: Ke Jiang, Brian Kulis, Michael I. Jordan

Abstract: Sampling and variational inference techniques are two standard methods for inference in probabilistic models, but for many problems, neither approach scales effectively to large-scale data. An alternative is to relax the probabilistic model into a non-probabilistic formulation which has a scalable associated algorithm. This can often be fulfilled by performing small-variance asymptotics, i.e., letting the variance of particular distributions in the model go to zero. For instance, in the context of clustering, such an approach yields connections between the kmeans and EM algorithms. In this paper, we explore small-variance asymptotics for exponential family Dirichlet process (DP) and hierarchical Dirichlet process (HDP) mixture models. Utilizing connections between exponential family distributions and Bregman divergences, we derive novel clustering algorithms from the asymptotic limit of the DP and HDP mixtures that features the scalability of existing hard clustering methods as well as the flexibility of Bayesian nonparametric models. We focus on special cases of our analysis for discrete-data problems, including topic modeling, and we demonstrate the utility of our results by applying variants of our algorithms to problems arising in vision and document analysis. 1

4 0.64933175 47 nips-2012-Augment-and-Conquer Negative Binomial Processes

Author: Mingyuan Zhou, Lawrence Carin

Abstract: By developing data augmentation methods unique to the negative binomial (NB) distribution, we unite seemingly disjoint count and mixture models under the NB process framework. We develop fundamental properties of the models and derive efficient Gibbs sampling inference. We show that the gamma-NB process can be reduced to the hierarchical Dirichlet process with normalization, highlighting its unique theoretical, structural and computational advantages. A variety of NB processes with distinct sharing mechanisms are constructed and applied to topic modeling, with connections to existing algorithms, showing the importance of inferring both the NB dispersion and probability parameters. 1

5 0.64539558 192 nips-2012-Learning the Dependency Structure of Latent Factors

Author: Yunlong He, Yanjun Qi, Koray Kavukcuoglu, Haesun Park

Abstract: In this paper, we study latent factor models with dependency structure in the latent space. We propose a general learning framework which induces sparsity on the undirected graphical model imposed on the vector of latent factors. A novel latent factor model SLFA is then proposed as a matrix factorization problem with a special regularization term that encourages collaborative reconstruction. The main benefit (novelty) of the model is that we can simultaneously learn the lowerdimensional representation for data and model the pairwise relationships between latent factors explicitly. An on-line learning algorithm is devised to make the model feasible for large-scale learning problems. Experimental results on two synthetic data and two real-world data sets demonstrate that pairwise relationships and latent factors learned by our model provide a more structured way of exploring high-dimensional data, and the learned representations achieve the state-of-the-art classification performance. 1

6 0.64355224 172 nips-2012-Latent Graphical Model Selection: Efficient Methods for Locally Tree-like Graphs

7 0.64226723 342 nips-2012-The variational hierarchical EM algorithm for clustering hidden Markov models

8 0.64087605 142 nips-2012-Generalization Bounds for Domain Adaptation

9 0.6392222 19 nips-2012-A Spectral Algorithm for Latent Dirichlet Allocation

10 0.63828284 355 nips-2012-Truncation-free Online Variational Inference for Bayesian Nonparametric Models

11 0.63781381 69 nips-2012-Clustering Sparse Graphs

12 0.63765377 354 nips-2012-Truly Nonparametric Online Variational Inference for Hierarchical Dirichlet Processes

13 0.63671178 68 nips-2012-Clustering Aggregation as Maximum-Weight Independent Set

14 0.63524806 104 nips-2012-Dual-Space Analysis of the Sparse Linear Model

15 0.63410819 233 nips-2012-Multiresolution Gaussian Processes

16 0.63400781 282 nips-2012-Proximal Newton-type methods for convex optimization

17 0.63393551 149 nips-2012-Hierarchical Optimistic Region Selection driven by Curiosity

18 0.63339573 7 nips-2012-A Divide-and-Conquer Method for Sparse Inverse Covariance Estimation

19 0.63137287 83 nips-2012-Controlled Recognition Bounds for Visual Learning and Exploration

20 0.63106984 135 nips-2012-Forging The Graphs: A Low Rank and Positive Semidefinite Graph Learning Approach