nips nips2012 nips2012-63 knowledge-graph by maker-knowledge-mining

63 nips-2012-CPRL -- An Extension of Compressive Sensing to the Phase Retrieval Problem


Source: pdf

Author: Henrik Ohlsson, Allen Yang, Roy Dong, Shankar Sastry

Abstract: While compressive sensing (CS) has been one of the most vibrant research fields in the past few years, most development only applies to linear models. This limits its application in many areas where CS could make a difference. This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal. We propose a novel solution using a lifting technique – CPRL, which relaxes the NP-hard problem to a nonsmooth semidefinite program. Our analysis shows that CPRL inherits many desirable properties from CS, such as guarantees for exact recovery. We further provide scalable numerical solvers to accelerate its implementation. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 While compressive sensing (CS) has been one of the most vibrant research fields in the past few years, most development only applies to linear models. [sent-6, score-0.14]

2 This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal. [sent-8, score-0.451]

3 We propose a novel solution using a lifting technique – CPRL, which relaxes the NP-hard problem to a nonsmooth semidefinite program. [sent-9, score-0.225]

4 1 Introduction In the area of X-ray imaging, phase retrieval (PR) refers to the problem of recovering a complex multivariate signal from the squared magnitude of its Fourier transform. [sent-12, score-0.329]

5 Existing sensor devices for collecting X-ray images are only sensitive to signal intensities but not the phases. [sent-13, score-0.098]

6 However, it is very important to be able to recover the missing phase information as it reveals finer structures of the subjects than using the intensities alone. [sent-14, score-0.203]

7 If the complex measurements y are available and the matrix A is assumed given, it is well known that the least-squares (LS) solution recovers the model parameter x that minimizes the squared estimation error: $\frac{1}{2}\|y - Ax\|_2^2$. [sent-17, score-0.238]

8 In PR, we assume that the phase of the coefficients of y is omitted and only the squared magnitude of the output is observed: $b_i = |y_i|^2 = |\langle x, a_i \rangle|^2, \; i = 1, \cdots, N$, (1) where $A^H = [a_1, \cdots, a_N] \in \mathbb{C}^{n \times N}$, $y^T = [y_1, \cdots, y_N] \in \mathbb{C}^N$, and $A^H$ denotes the Hermitian transpose of A. [sent-18, score-0.222]
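
To make the measurement model (1) concrete, the following NumPy sketch (our illustration, not the authors' code) generates intensity-only measurements of a sparse complex signal; the dimensions and the complex Gaussian A are assumptions chosen to mirror the experiments later in the paper.

```python
# A minimal sketch of model (1): b_i = |<x, a_i>|^2. The sizes and the random
# complex Gaussian A are our assumptions, mirroring the n = 64, N = 32,
# 2-sparse setup used in the experiments section.
import numpy as np

rng = np.random.default_rng(0)
n, N, k = 64, 32, 2

# k-sparse complex ground truth x
x = np.zeros(n, dtype=complex)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# Rows of A are the measurement vectors a_i^H, so (A @ x)[i] = <x, a_i>
A = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
y = A @ x               # complex outputs; their phases are not observed in PR
b = np.abs(y) ** 2      # observed intensities b_i = |<x, a_i>|^2
```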

9 The problem is known as compressive phase retrieval (CPR) [25, 27, 28]. [sent-20, score-0.282]

10 In many X-ray imaging applications, for instance, if the complex source signal is indeed sparse under a proper basis, CPR provides a viable solution to exactly recover the signal while collecting much fewer measurements than the traditional non-compressive solutions. [sent-21, score-0.422]

11 Clearly, the PR problem and its CPR extension are much more challenging than the LS problem, as the phase of y is lost while only its squared magnitude is available. [sent-22, score-0.162]

12 For example, if $x_0 \in \mathbb{C}^n$ is a solution to y = Ax, then the product of $x_0$ with any scalar $c \in \mathbb{C}$, |c| = 1, leads to the same squared output b. [sent-24, score-0.072]

13 As mentioned in [10], when the dictionary A represents the unitary discrete Fourier transform (DFT), the ambiguities may represent time-reversed or time-shifted solutions of the ground-truth signal. [sent-25, score-0.113]

14 In this paper, when we talk about a unique solution to PR, it is indeed a representative of a family of solutions up to a global phase ambiguity. [sent-27, score-0.142]

15 Using the lifting technique, the NP-hard problem is relaxed as a semidefinite program (SDP). [sent-30, score-0.09]

16 We will briefly summarize several theoretical bounds for guaranteed recovery of the complex input signal, which are presented in full detail in our technical report [26]. [sent-31, score-0.082]

17 In the experiment, we will present a comprehensive comparison of the new algorithm with the traditional interior-point method, other state-of-the-art sparse optimization techniques, and a greedy algorithm proposed in [26]. [sent-36, score-0.067]

18 Finally, the paper also provides practical guidelines to practitioners at large working on other similar nonsmooth SDP applications. [sent-38, score-0.118]

19 2 Compressive Phase Retrieval via Lifting (CPRL) Since (1) is nonlinear in the unknown x, N ≥ n measurements are in general needed for a unique solution. [sent-44, score-0.175]

20 When the number of measurements N is fewer than necessary for such a unique solution, additional assumptions are needed as regularization to select one of the solutions. [sent-45, score-0.183]

21 In classical CS, the ability to find the sparsest solution to a linear equation system enables reconstruction of signals from far fewer measurements than previously thought possible. [sent-46, score-0.325]

22 Classical CS is however only applicable to systems with linear relations between measurements and unknowns. [sent-47, score-0.125]

23 To extend classical CS to the nonlinear PR problem, we seek the sparsest solution satisfying (1): $\min_x \|x\|_0$ subj. to $b_i = |\langle x, a_i \rangle|^2,\ i = 1, \cdots, N$. (3) [sent-48, score-0.192]

24 In the literature, a lifting technique has been extensively used to reframe problems such as (3) to a standard form in SDP, such as in Sparse PCA [15]. [sent-53, score-0.09]

25 The lifting approach addresses this issue by replacing rank(X) with Tr(X). [sent-58, score-0.09]

26 This leads to the nonsmooth SDP $\min_{X \succeq 0} \mathrm{Tr}(X) + \lambda \|X\|_1$, subj. [sent-60, score-0.096]

27 to $b_i = \mathrm{Tr}(\Phi_i X),\ i = 1, \cdots, N$, (5) where we further denote $\Phi_i \triangleq a_i a_i^H \in \mathbb{C}^{n \times n}$ and λ ≥ 0 is a design parameter. [sent-61, score-0.06]
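
The SDP (5) maps directly onto an off-the-shelf convex modeling tool. Below is a hedged sketch using CVXPY; CVXPY and its Hermitian-variable support are our tooling assumptions, not the paper's setup (the paper uses CVX and the ADMM solver of Section 4).

```python
# A sketch of the CPRL program (5), assuming CVXPY as the modeling layer.
import cvxpy as cp
import numpy as np

def cprl(A, b, lam=1.0):
    """Solve (5): min Tr(X) + lam * ||X||_1  s.t.  Tr(Phi_i X) = b_i, X PSD,
    where Phi_i = a_i a_i^H and the rows of A are a_i^H."""
    N, n = A.shape
    X = cp.Variable((n, n), hermitian=True)
    constraints = [X >> 0]
    for i in range(N):
        a_i = A[i, :].conj()                 # a_i, since row i of A is a_i^H
        Phi_i = np.outer(a_i, a_i.conj())    # Phi_i = a_i a_i^H
        constraints.append(cp.real(cp.trace(Phi_i @ X)) == b[i])
    objective = cp.Minimize(cp.real(cp.trace(X)) + lam * cp.sum(cp.abs(X)))
    cp.Problem(objective, constraints).solve()
    return X.value
```

On small instances such as n = 64 this solves in seconds; the dedicated ADMM solver of Section 4 exists precisely because generic SDP solvers do not scale well here.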

28 We refer to the approach as compressive phase retrieval via lifting (CPRL). [sent-63, score-0.372]

29 Consider now the case that the measurements are contaminated by data noise. [sent-64, score-0.125]

30 However, in phase retrieval, we follow closely a more special noise model used in [13]: $b_i = |\langle x, a_i \rangle|^2 + e_i$. (6) [sent-66, score-0.163]

31 This nonstandard model avoids the need to calculate the squared magnitude output $|y|^2$ with the added noise term. [sent-67, score-0.059]

32 More importantly, in most practical phase retrieval applications, measurement noise is introduced when the squared magnitudes or intensities of the linear system are measured on the sensing device, but not y itself. [sent-68, score-0.297]

33 Accordingly, we denote a linear operator $\mathcal{B}$ of X as $\mathcal{B}: X \in \mathbb{C}^{n \times n} \mapsto \{\mathrm{Tr}(\Phi_i X)\}_{1 \le i \le N} \in \mathbb{R}^N$, (7) which measures the noise-free squared output. [sent-69, score-0.059]

34 Then the approximate CPR problem with a bounded $\ell_2$-norm error model can be solved by the following nonsmooth SDP program: $\min_{X \succeq 0} \mathrm{Tr}(X) + \lambda \|X\|_1$, subj. to $\|\mathcal{B}(X) - b\|_2 \le \epsilon$. [sent-70, score-0.096]

35 A linear operator $\mathcal{B}(\cdot)$ is $(\epsilon, k)$-RIP if $\big|\, \|\mathcal{B}(X)\|_2^2 / \|X\|_2^2 - 1 \,\big| < \epsilon$ for all $\|X\|_0 \le k$, $X \ne 0$. We can now state the following theorem: Theorem 2 (Recoverability/Uniqueness) Let $\mathcal{B}(\cdot)$ be an $(\epsilon, 2\|X^*\|_0)$-RIP linear operator with $\epsilon < 1$ and let $\bar{x}$ be the sparsest solution to (1). [sent-80, score-0.166]

36 We can also give a bound on the sparsity of $\bar{x}$: Theorem 3 (Bound on $\|\bar{x}\bar{x}^H\|_0$ from above) Let $\bar{x}$ be the sparsest solution to (1) and let $\tilde{X}$ be the solution of CPRL (5). If $\tilde{X}$ has rank 1 then $\|\tilde{X}\|_0 \ge \|\bar{x}\bar{x}^H\|_0$. [sent-82, score-0.183]

37 Corollary 4 (Guaranteed recovery using RIP) Let $\bar{x}$ be the sparsest solution to (1). [sent-85, score-0.181]

38 The solution $\tilde{X}$ of CPRL (5) is equal to $\bar{x}\bar{x}^H$ if it has rank 1 and $\mathcal{B}(\cdot)$ is $(\epsilon, 2\|\tilde{X}\|_0)$-RIP with $\epsilon < 1$. [sent-86, score-0.082]

39 Instead we summarize the most important property in the following theorem: Theorem 7 (Upper bound & recoverability through $\ell_1$) Let $\bar{x}$ be the sparsest solution to (1). [sent-93, score-0.14]

40 The solution of CPRL (5), $\tilde{X}$, is equal to $\bar{x}\bar{x}^H$ if it has rank 1 and $\mathcal{B}(\cdot)$ is $(\epsilon, 2\|\tilde{X}\|_0)$-RIP-1 with $\epsilon < 1$. [sent-94, score-0.082]

41 Two alternative arguments are spark [14] and mutual coherence [17, 11]. [sent-98, score-0.15]

42 On the other hand, mutual coherence may give less tight bounds, but is more tractable. [sent-100, score-0.118]

43 We will focus on mutual coherence, which is defined as: Definition 8 (Mutual coherence) For a matrix A, define the mutual coherence as $\mu(A) = \max_{1 \le i,j \le n,\, i \ne j} \frac{|a_i^H a_j|}{\|a_i\|_2 \|a_j\|_2}$. [sent-101, score-0.159]
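
Definition 8 is straightforward to evaluate numerically; the sketch below is our illustration and treats the $a_i$ as the columns of A.

```python
# A small sketch of Definition 8 (mutual coherence), with a_i the columns of A.
import numpy as np

def mutual_coherence(A):
    """mu(A) = max over i != j of |a_i^H a_j| / (||a_i||_2 ||a_j||_2)."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)  # unit-norm columns
    G = np.abs(An.conj().T @ An)                       # normalized |a_i^H a_j|
    np.fill_diagonal(G, 0.0)                           # exclude i == j terms
    return G.max()
```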

44 We are now ready to state the following theorem: Theorem 9 (Recovery using mutual coherence) Let $\bar{x}$ be the sparsest solution to (1). [sent-103, score-0.181]

45 The solution of CPRL (5), $\tilde{X}$, is equal to $\bar{x}\bar{x}^H$ if it has rank 1 and $\|\tilde{X}\|_0 < 0.5(1 + 1/\mu(\mathcal{B}))$. [sent-104, score-0.082]
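
The theorems identify a rank-1 solution $\tilde{X}$ with $\bar{x}\bar{x}^H$; in practice one reads off the signal, up to the global phase ambiguity noted earlier, from the leading eigenpair. This step is standard for lifting methods rather than specific to this paper; a small sketch:

```python
# Recover x from a (numerically) rank-1 Hermitian PSD matrix X ~ x x^H.
import numpy as np

def extract_signal(X):
    """Leading eigenvector scaled by sqrt of the leading eigenvalue;
    the result is defined only up to a global phase factor."""
    w, V = np.linalg.eigh(X)                 # eigenvalues in ascending order
    return np.sqrt(max(w[-1], 0.0)) * V[:, -1]
```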

46 4 Numerical Implementation via ADMM In addition to the above analysis of guaranteed recovery properties, a critical issue for practitioners is the availability of efficient numerical solvers. [sent-106, score-0.088]

47 Several numerical solvers used in CS may be applied to solve nonsmooth SDPs, which include interior-point methods (e. [sent-107, score-0.121]

48 Gradient projection methods also fail to meaningfully accelerate the CPRL implementation due to the complexity of the projection operator. [sent-111, score-0.056]

49 In summary, as we will demonstrate in Section 5, CPRL as a nonsmooth complex SDP is categorically more expensive to solve compared to the linear programs underlying CS, and the task exceeds the capability of many popular sparse optimization techniques. [sent-114, score-0.169]

50 In this paper, we propose a novel solver to the nonsmooth SDP underlying CPRL via the alternating direction method of multipliers (ADMM, see for instance [6] and [5, Sec. [sent-115, score-0.125]

51 (12) Assuming that a feasible solution exists, and defining $\Pi_{\mathcal{A}}$ as the projection onto the convex set given by the linear constraints, the solution is: $X_1^{l+1} = \Pi_{\mathcal{A}}\big(Z^l - (I + Y_1^l)/\rho^l\big)$. [sent-133, score-0.128]
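
The paper does not spell out how $\Pi_{\mathcal{A}}$ is implemented; since the constraint set is affine, it admits a closed form. The sketch below is our reconstruction of one such implementation, assuming the $\Phi_i$ are linearly independent so the Gram matrix is invertible.

```python
# Project a Hermitian X onto {X : Tr(Phi_i X) = b_i} in the Frobenius metric.
# Our reconstruction, not the paper's code; requires the Phi_i to be linearly
# independent so G is invertible.
import numpy as np

def project_affine(X, Phis, b):
    G = np.array([[np.trace(P @ Q).real for Q in Phis] for P in Phis])
    c = np.array([np.trace(P @ X).real for P in Phis])
    alpha = np.linalg.solve(G, c - b)        # correction coefficients
    return X - sum(a * P for a, P in zip(alpha, Phis))
```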

52 (13) We note that the soft operator in the complex domain must be coded with care. [sent-145, score-0.067]

53 One does not simply check the sign of the difference, as in the real case, but rather the magnitude of the complex number: $\mathrm{soft}(x, q) = \begin{cases} 0 & \text{if } |x| \le q, \\ \frac{|x| - q}{|x|}\, x & \text{otherwise,} \end{cases}$ (14) where q is a positive real number. [sent-146, score-0.067]
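
A direct transcription of (14) into NumPy (our illustration): the operator shrinks the magnitude by q and keeps the phase, instead of checking a sign as in the real case.

```python
import numpy as np

def soft(x, q):
    """Elementwise complex soft-thresholding (14); entries with |x| <= q
    are set to zero, all others keep their phase with magnitude reduced by q."""
    mag = np.abs(x)
    scale = np.where(mag <= q, 0.0, (mag - q) / np.maximum(mag, 1e-300))
    return scale * x
```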

54 We also update ρ according to the rule discussed in [6]: $\rho^{l+1} = \begin{cases} \tau_{\mathrm{incr}}\, \rho^l & \text{if } \|r^l\|_2 > \mu \|s^l\|_2, \\ \rho^l / \tau_{\mathrm{decr}} & \text{if } \|s^l\|_2 > \mu \|r^l\|_2, \\ \rho^l & \text{otherwise,} \end{cases}$ (16) where $\tau_{\mathrm{incr}}$, $\tau_{\mathrm{decr}}$, and µ are algorithm parameters. [sent-149, score-0.197]
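
Rule (16) is easy to implement given the primal and dual residual norms; in the sketch below the default parameter values are the common choices suggested in [6], not values taken from this paper.

```python
def update_rho(rho, r_norm, s_norm, tau_incr=2.0, tau_decr=2.0, mu=10.0):
    """Adaptive ADMM penalty rule (16): grow rho when the primal residual
    dominates, shrink it when the dual residual dominates."""
    if r_norm > mu * s_norm:
        return tau_incr * rho
    if s_norm > mu * r_norm:
        return rho / tau_decr
    return rho
```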

55 [28] proposes to alternate between fitting the estimate to the measurements and thresholding. [sent-157, score-0.125]

56 GCPRL, which stands for greedy CPRL, is a new greedy approximate algorithm tailored to the lifting technique in (5). [sent-158, score-0.16]

57 We have observed that if the number of nonzero elements in x is expected to be low, the algorithm can successfully recover the ground-truth sparse signal while consuming less time compared to interior-point methods for the original SDP. [sent-161, score-0.18]

58 In general, greedy algorithms for solving CPR problems work well when a good guess for the true solution is available and are often computationally efficient, but they lack theoretical recovery guarantees. [sent-162, score-0.115]

59 In general, nonlinear CS can be solved locally by greedy simplex pursuit algorithms. [sent-164, score-0.06]

60 Let $x \in \mathbb{C}^{64}$ be a 2-sparse complex signal, $A \triangleq RF$ where $F \in \mathbb{C}^{64 \times 64}$ is the Fourier transform matrix and $R \in \mathbb{C}^{32 \times 64}$ a random projection matrix (generated by sampling a unit complex Gaussian), and let the measurements b satisfy the PR relation (1). [sent-180, score-0.281]
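
This setup can be reproduced in a few lines; the following sketch is our reconstruction of the experiment, with the random seed and the exact Gaussian normalization as assumptions.

```python
# Our reconstruction of the n = 64, N = 32 experiment setup; seed is assumed.
import numpy as np
from scipy.linalg import dft

rng = np.random.default_rng(1)
n, N = 64, 32
F = dft(n)                            # 64 x 64 Fourier transform matrix
R = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
A = R @ F                             # A = R F as in the experiment

x = np.zeros(n, dtype=complex)        # 2-sparse complex ground truth
x[rng.choice(n, 2, replace=False)] = rng.standard_normal(2) + 1j * rng.standard_normal(2)
b = np.abs(A @ x) ** 2                # measurements satisfying the PR relation (1)
```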

61 The left plot of Figure 1 gives the recovered signal x using CPRL, GCPRL and PhaseLift. [sent-181, score-0.09]

62 As seen, CPRL and GCPRL correctly identify the two nonzero elements in x while PhaseLift fails to identify the true signal and gives a dense estimate. [sent-182, score-0.096]

63 For very sparse examples, like this one, CPRL and GCPRL often both succeed in finding the ground truth (even though we have twice as many unknowns as measurements). [sent-184, score-0.063]

64 PhaseLift, on the other hand, does not favor sparse solutions and would need considerably more measurements to recover the 2-sparse signal. [sent-185, score-0.209]

65 The middle plot of Figure 1 shows the computational time needed to solve the nonsmooth SDP of CPRL using CVX, ADMM, and GCPRL. [sent-186, score-0.15]

66 The right plot of Figure 1 shows the mutual coherence bound $0.5(1 + 1/\mu(\mathcal{B}))$. [sent-188, score-0.118]

67 The bound is evaluated for a number of different N's and n's, with $A \triangleq RF$, $F \in \mathbb{C}^{n \times n}$ the Fourier transform matrix and $R \in \mathbb{C}^{N \times n}$ a random projection matrix. [sent-189, score-0.074]

68 This is of interest since Theorem 9 states that when the CPRL solution $\tilde{X}$ satisfies $\|\tilde{X}\|_0 < 0.5(1 + 1/\mu(\mathcal{B}))$ and has rank 1, then $\tilde{X} = \bar{x}\bar{x}^H$, where $\bar{x}$ is the sparsest solution to (1). [sent-190, score-0.079]

69 We have also tested an off-the-shelf toolbox that solves convex cone problems, called TFOCS [2]. [sent-192, score-0.087]

70 Unfortunately, TFOCS cannot be applied directly to solving the nonsmooth SDP in CPRL. [sent-193, score-0.096]

71 From the plot it can be concluded that if the CPRL solution $\tilde{X}$ has rank 1 and only a single nonzero component for a choice of 125 ≥ n, N ≥ 5, Theorem 9 guarantees that $\tilde{X} = \bar{x}\bar{x}^H$. [sent-194, score-0.128]

72 We also observe that Theorem 9 is conservative, since we previously saw that 2 nonzero components could be recovered correctly for n = 64 and N = 32. [sent-195, score-0.086]

73 In fact, numerical simulation can be used to show that N = 30 suffices to recover the ground truth in 95 out of 100 runs [26]. [sent-196, score-0.077]

74 Figure 1: Left: the magnitude of the estimated signal provided by CPRL, GCPRL and PhaseLift. Middle: computation time [s]. Right: the mutual coherence bound versus n. [sent-215, score-0.076]

75 5.2 Compressive sampling and PR One of the motivations of the presented work and CPRL is that it enables compressive sensing for PR problems. [sent-221, score-0.14]

76 To illustrate this, consider the 20 × 20 complex image in Figure 2 Left. [sent-222, score-0.076]

77 It has been shown that the original image can be recovered from far fewer samples than the total number of pixels in the image. [sent-226, score-0.108]

78 However, traditional CS only discusses linear relations between measurements and unknowns. [sent-228, score-0.125]

79 To extend CS to PR applications, consider again the complex image in Figure 2 Left and assume that we can only measure intensities of pixels or intensities of linear combinations of pixels. [sent-229, score-0.172]

80 Let $R \in \mathbb{C}^{N \times 400}$ capture how intensity measurements b are formed from linear combinations of pixels in the image, $b = |Rz|^2$ (z is a vectorized version of the image). [sent-230, score-0.147]

81 An essential part of CS is also to find a dictionary (possibly overcomplete) in which the image can be represented using only a few basis images. [sent-231, score-0.076]

82 We will use a 2D inverse Fourier transform dictionary in our example and arrange the basis vectors as columns in $F \in \mathbb{C}^{400 \times 400}$. [sent-234, score-0.087]
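
For concreteness, here is a sketch of one way to build such a dictionary; this is our construction, and the exact normalization and vectorization ordering used in the paper may differ.

```python
# Build a 400 x 400 2D inverse-DFT dictionary for the 20 x 20 image as a
# Kronecker product of 1D inverse DFT matrices (ordering convention assumed).
import numpy as np
from scipy.linalg import dft

m = 20                                   # 20 x 20 image, 400 pixels
F1 = np.conj(dft(m)) / m                 # 1D inverse DFT matrix
F = np.kron(F1, F1)                      # 2D inverse DFT acting on vec(image)
# A vectorized image is then z = F @ c with c sparse, and the intensity
# measurements for the lifted problem use A = R @ F, b = |R @ z|**2.
```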

83 This is rather remarkable since the PR relation (1) is nonlinear in the unknown x and N ≥ n measurements are in general needed for a unique solution. [sent-236, score-0.175]

84 If we instead sample the intensity of each pixel, one-by-one, neither CPRL nor PhaseLift recovers the true image. [sent-237, score-0.074]

85 If we set A = R and do not care about finding a dictionary, we can use a classical PR algorithm to recover the true image. [sent-238, score-0.079]

86 If PhaseLift is used, N = 1600 measurements are sufficient to recover the true image. [sent-239, score-0.177]

87 The main reasons for the low number of samples needed by CPRL are that we managed to find a good dictionary (20 basis images were needed to recover the true image) and CPRL's ability to recover the sparsest solution. [sent-240, score-0.296]

88 In fact, setting A = RF , PhaseLift still needs 1600 measurements to recover the true solution. [sent-241, score-0.177]

89 5.3 The Shepp-Logan phantom In this last example, we again consider the recovery of complex-valued images from random samples. [sent-243, score-0.108]

90 This 30 × 30 Shepp-Logan phantom has a 2D Fourier transform with 100 nonzero coefficients. [sent-248, score-0.118]

91 The middle image in Figure 2 shows the recovered result using PhaseLift with N = 2400, the second image from the right shows the recovered result using CPRL with the same number N = 2400, and the right image is the recovered result using CPRL with N = 1500. [sent-250, score-0.225]

92 The number of measurements with respect to the sparsity in x is too low for both CPRL and PhaseLift to perfectly recover z. [sent-251, score-0.177]

93 Templates for convex cone problems with applications to sparse signal recovery. [sent-281, score-0.126]

94 Combining geometry and combinatorics: A unified approach to sparse signal recovery. [sent-289, score-0.082]

95 From sparse solutions of systems of equations to sparse modeling of signals and images. [sent-315, score-0.064]

96 PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. [sent-350, score-0.238]

97 Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_1$-minimization. [sent-373, score-0.057]

98 Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint. [sent-377, score-0.072]

99 Source reconstruction from the modulus of the correlation function: a practical approach to the phase problem of optical coherence theory. [sent-389, score-0.264]

100 Alternating direction methods for classical and ptychographic phase retrieval. [sent-471, score-0.13]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('cprl', 0.679), ('cpr', 0.21), ('phaselift', 0.21), ('gcprl', 0.194), ('xxh', 0.194), ('admm', 0.178), ('cs', 0.16), ('measurements', 0.125), ('sdp', 0.125), ('pr', 0.117), ('compressive', 0.103), ('cn', 0.103), ('phase', 0.103), ('sparsest', 0.101), ('nonsmooth', 0.096), ('fourier', 0.092), ('lifting', 0.09), ('rip', 0.087), ('yil', 0.081), ('cvx', 0.079), ('coherence', 0.077), ('retrieval', 0.076), ('hermitian', 0.071), ('ah', 0.065), ('decr', 0.065), ('ohlsson', 0.065), ('aij', 0.059), ('tr', 0.058), ('optical', 0.058), ('incr', 0.057), ('cand', 0.055), ('recover', 0.052), ('bij', 0.051), ('signal', 0.05), ('imag', 0.049), ('sdps', 0.049), ('minx', 0.048), ('intensities', 0.048), ('ax', 0.048), ('transform', 0.046), ('nonzero', 0.046), ('rank', 0.043), ('complex', 0.041), ('recovery', 0.041), ('mutual', 0.041), ('dictionary', 0.041), ('recovered', 0.04), ('rel', 0.039), ('swedish', 0.039), ('solution', 0.039), ('rf', 0.038), ('sl', 0.038), ('semide', 0.038), ('sensing', 0.037), ('abs', 0.037), ('eldar', 0.037), ('bi', 0.037), ('greedy', 0.035), ('aii', 0.035), ('image', 0.035), ('america', 0.033), ('squared', 0.033), ('fewer', 0.033), ('spark', 0.032), ('strohmer', 0.032), ('szameit', 0.032), ('electrical', 0.032), ('sparse', 0.032), ('unknowns', 0.031), ('berkeley', 0.031), ('middle', 0.029), ('donoho', 0.029), ('alternating', 0.029), ('rl', 0.028), ('projection', 0.028), ('classical', 0.027), ('california', 0.026), ('ambiguities', 0.026), ('bii', 0.026), ('modulus', 0.026), ('phantom', 0.026), ('tfocs', 0.026), ('dong', 0.026), ('magnitude', 0.026), ('operator', 0.026), ('needed', 0.025), ('numerical', 0.025), ('dictionaries', 0.025), ('nonlinear', 0.025), ('argminx', 0.025), ('wen', 0.025), ('romberg', 0.024), ('ai', 0.023), ('badly', 0.023), ('convex', 0.022), ('cone', 0.022), ('intensity', 0.022), ('september', 0.022), ('practitioners', 0.022), ('av', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 63 nips-2012-CPRL -- An Extension of Compressive Sensing to the Phase Retrieval Problem

Author: Henrik Ohlsson, Allen Yang, Roy Dong, Shankar Sastry

Abstract: While compressive sensing (CS) has been one of the most vibrant research fields in the past few years, most development only applies to linear models. This limits its application in many areas where CS could make a difference. This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal. We propose a novel solution using a lifting technique – CPRL, which relaxes the NP-hard problem to a nonsmooth semidefinite program. Our analysis shows that CPRL inherits many desirable properties from CS, such as guarantees for exact recovery. We further provide scalable numerical solvers to accelerate its implementation. 1

2 0.13033879 79 nips-2012-Compressive neural representation of sparse, high-dimensional probabilities

Author: Xaq Pitkow

Abstract: This paper shows how sparse, high-dimensional probability distributions could be represented by neurons with exponential compression. The representation is a novel application of compressive sensing to sparse probability distributions rather than to the usual sparse signals. The compressive measurements correspond to expected values of nonlinear functions of the probabilistically distributed variables. When these expected values are estimated by sampling, the quality of the compressed representation is limited only by the quality of sampling. Since the compression preserves the geometric structure of the space of sparse probability distributions, probabilistic computation can be performed in the compressed domain. Interestingly, functions satisfying the requirements of compressive sensing can be implemented as simple perceptrons. If we use perceptrons as a simple model of feedforward computation by neurons, these results show that the mean activity of a relatively small number of neurons can accurately represent a highdimensional joint distribution implicitly, even without accounting for any noise correlations. This comprises a novel hypothesis for how neurons could encode probabilities in the brain. 1

3 0.12453844 120 nips-2012-Exact and Stable Recovery of Sequences of Signals with Sparse Increments via Differential 1-Minimization

Author: Demba Ba, Behtash Babadi, Patrick Purdon, Emery Brown

Abstract: We consider the problem of recovering a sequence of vectors, (xk )K , for which k=0 the increments xk − xk−1 are Sk -sparse (with Sk typically smaller than S1 ), based on linear measurements (yk = Ak xk + ek )K , where Ak and ek denote the meak=1 surement matrix and noise, respectively. Assuming each Ak obeys the restricted isometry property (RIP) of a certain order—depending only on Sk —we show that in the absence of noise a convex program, which minimizes the weighted sum of the ℓ1 -norm of successive differences subject to the linear measurement constraints, recovers the sequence (xk )K exactly. This is an interesting result bek=1 cause this convex program is equivalent to a standard compressive sensing problem with a highly-structured aggregate measurement matrix which does not satisfy the RIP requirements in the standard sense, and yet we can achieve exact recovery. In the presence of bounded noise, we propose a quadratically-constrained convex program for recovery and derive bounds on the reconstruction error of the sequence. We supplement our theoretical analysis with simulations and an application to real video data. These further support the validity of the proposed approach for acquisition and recovery of signals with time-varying sparsity.

4 0.09449178 304 nips-2012-Selecting Diverse Features via Spectral Regularization

Author: Abhimanyu Das, Anirban Dasgupta, Ravi Kumar

Abstract: We study the problem of diverse feature selection in linear regression: selecting a small subset of diverse features that can predict a given objective. Diversity is useful for several reasons such as interpretability, robustness to noise, etc. We propose several spectral regularizers that capture a notion of diversity of features and show that these are all submodular set functions. These regularizers, when added to the objective function for linear regression, result in approximately submodular functions, which can then be maximized by efficient greedy and local search algorithms, with provable guarantees. We compare our algorithms to traditional greedy and 1 -regularization schemes and show that we obtain a more diverse set of features that result in the regression problem being stable under perturbations. 1

5 0.09301389 290 nips-2012-Recovery of Sparse Probability Measures via Convex Programming

Author: Mert Pilanci, Laurent E. Ghaoui, Venkat Chandrasekaran

Abstract: We consider the problem of cardinality penalized optimization of a convex function over the probability simplex with additional convex constraints. The classical 1 regularizer fails to promote sparsity on the probability simplex since 1 norm on the probability simplex is trivially constant. We propose a direct relaxation of the minimum cardinality problem and show that it can be efficiently solved using convex programming. As a first application we consider recovering a sparse probability measure given moment constraints, in which our formulation becomes linear programming, hence can be solved very efficiently. A sufficient condition for exact recovery of the minimum cardinality solution is derived for arbitrary affine constraints. We then develop a penalized version for the noisy setting which can be solved using second order cone programs. The proposed method outperforms known rescaling heuristics based on 1 norm. As a second application we consider convex clustering using a sparse Gaussian mixture and compare our results with the well known soft k-means algorithm. 1

6 0.082671165 104 nips-2012-Dual-Space Analysis of the Sparse Linear Model

7 0.074462548 78 nips-2012-Compressive Sensing MRI with Wavelet Tree Sparsity

8 0.072893493 228 nips-2012-Multilabel Classification using Bayesian Compressed Sensing

9 0.069801271 6 nips-2012-A Convex Formulation for Learning Scale-Free Networks via Submodular Relaxation

10 0.066995092 258 nips-2012-Online L1-Dictionary Learning with Application to Novel Document Detection

11 0.062150232 34 nips-2012-Active Learning of Multi-Index Function Models

12 0.061301541 327 nips-2012-Structured Learning of Gaussian Graphical Models

13 0.058710251 249 nips-2012-Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison

14 0.053303991 153 nips-2012-How Prior Probability Influences Decision Making: A Unifying Probabilistic Model

15 0.052023623 29 nips-2012-Accelerated Training for Matrix-norm Regularization: A Boosting Approach

16 0.051581226 164 nips-2012-Iterative Thresholding Algorithm for Sparse Inverse Covariance Estimation

17 0.050801087 199 nips-2012-Link Prediction in Graphs with Autoregressive Features

18 0.045920115 86 nips-2012-Convex Multi-view Subspace Learning

19 0.045212787 69 nips-2012-Clustering Sparse Graphs

20 0.04397095 139 nips-2012-Fused sparsity and robust estimation for linear models with unknown variance


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.137), (1, 0.034), (2, 0.034), (3, -0.055), (4, 0.042), (5, 0.069), (6, -0.019), (7, -0.036), (8, 0.049), (9, 0.006), (10, -0.025), (11, -0.016), (12, -0.008), (13, 0.012), (14, 0.012), (15, 0.026), (16, 0.017), (17, -0.006), (18, 0.008), (19, -0.088), (20, -0.015), (21, 0.137), (22, -0.094), (23, -0.005), (24, 0.106), (25, 0.029), (26, -0.047), (27, 0.088), (28, 0.041), (29, 0.089), (30, -0.098), (31, -0.042), (32, 0.082), (33, -0.016), (34, -0.089), (35, -0.018), (36, 0.055), (37, 0.069), (38, -0.017), (39, -0.009), (40, -0.023), (41, 0.009), (42, 0.023), (43, 0.035), (44, -0.085), (45, -0.035), (46, 0.108), (47, -0.024), (48, 0.002), (49, 0.075)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.90483826 63 nips-2012-CPRL -- An Extension of Compressive Sensing to the Phase Retrieval Problem

Author: Henrik Ohlsson, Allen Yang, Roy Dong, Shankar Sastry

Abstract: While compressive sensing (CS) has been one of the most vibrant research fields in the past few years, most development only applies to linear models. This limits its application in many areas where CS could make a difference. This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal. We propose a novel solution using a lifting technique – CPRL, which relaxes the NP-hard problem to a nonsmooth semidefinite program. Our analysis shows that CPRL inherits many desirable properties from CS, such as guarantees for exact recovery. We further provide scalable numerical solvers to accelerate its implementation. 1

2 0.73984927 79 nips-2012-Compressive neural representation of sparse, high-dimensional probabilities

Author: Xaq Pitkow

Abstract: This paper shows how sparse, high-dimensional probability distributions could be represented by neurons with exponential compression. The representation is a novel application of compressive sensing to sparse probability distributions rather than to the usual sparse signals. The compressive measurements correspond to expected values of nonlinear functions of the probabilistically distributed variables. When these expected values are estimated by sampling, the quality of the compressed representation is limited only by the quality of sampling. Since the compression preserves the geometric structure of the space of sparse probability distributions, probabilistic computation can be performed in the compressed domain. Interestingly, functions satisfying the requirements of compressive sensing can be implemented as simple perceptrons. If we use perceptrons as a simple model of feedforward computation by neurons, these results show that the mean activity of a relatively small number of neurons can accurately represent a highdimensional joint distribution implicitly, even without accounting for any noise correlations. This comprises a novel hypothesis for how neurons could encode probabilities in the brain. 1

3 0.63328308 104 nips-2012-Dual-Space Analysis of the Sparse Linear Model

Author: Yi Wu, David P. Wipf

Abstract: Sparse linear (or generalized linear) models combine a standard likelihood function with a sparse prior on the unknown coefficients. These priors can conveniently be expressed as a maximization over zero-mean Gaussians with different variance hyperparameters. Standard MAP estimation (Type I) involves maximizing over both the hyperparameters and coefficients, while an empirical Bayesian alternative (Type II) first marginalizes the coefficients and then maximizes over the hyperparameters, leading to a tractable posterior approximation. The underlying cost functions can be related via a dual-space framework from [22], which allows both the Type I or Type II objectives to be expressed in either coefficient or hyperparmeter space. This perspective is useful because some analyses or extensions are more conducive to development in one space or the other. Herein we consider the estimation of a trade-off parameter balancing sparsity and data fit. As this parameter is effectively a variance, natural estimators exist by assessing the problem in hyperparameter (variance) space, transitioning natural ideas from Type II to solve what is much less intuitive for Type I. In contrast, for analyses of update rules and sparsity properties of local and global solutions, as well as extensions to more general likelihood models, we can leverage coefficient-space techniques developed for Type I and apply them to Type II. For example, this allows us to prove that Type II-inspired techniques can be successful recovering sparse coefficients when unfavorable restricted isometry properties (RIP) lead to failure of popular ℓ1 reconstructions. It also facilitates the analysis of Type II when non-Gaussian likelihood models lead to intractable integrations. 1

4 0.61736101 78 nips-2012-Compressive Sensing MRI with Wavelet Tree Sparsity

Author: Chen Chen, Junzhou Huang

Abstract: In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct a MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K + K log n) for standard K-sparse data with length n. However, few of existing algorithms have utilized this for CS-MRI, while most of them model the problem with total variation and wavelet sparse regularization. On the other side, some algorithms have been proposed for tree sparse regularization, but few of them have validated the benefit of wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems then each of the subproblems can be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms, and gain better reconstructions results on real MR images than general tree based solvers or algorithms. 1

5 0.52130872 34 nips-2012-Active Learning of Multi-Index Function Models

Author: Tyagi Hemant, Volkan Cevher

Abstract: We consider the problem of actively learning multi-index functions of the form k f (x) = g(Ax) = i=1 gi (aT x) from point evaluations of f . We assume that i the function f is defined on an 2 -ball in Rd , g is twice continuously differentiable almost everywhere, and A ∈ Rk×d is a rank k matrix, where k d. We propose a randomized, active sampling scheme for estimating such functions with uniform approximation guarantees. Our theoretical developments leverage recent techniques from low rank matrix recovery, which enables us to derive an estimator of the function f along with sample complexity bounds. We also characterize the noise robustness of the scheme, and provide empirical evidence that the highdimensional scaling of our sample complexity bounds are quite accurate. 1

6 0.504583 120 nips-2012-Exact and Stable Recovery of Sequences of Signals with Sparse Increments via Differential 1-Minimization

7 0.49135846 304 nips-2012-Selecting Diverse Features via Spectral Regularization

8 0.47291553 290 nips-2012-Recovery of Sparse Probability Measures via Convex Programming

9 0.46348861 258 nips-2012-Online L1-Dictionary Learning with Application to Novel Document Detection

10 0.44653916 43 nips-2012-Approximate Message Passing with Consistent Parameter Estimation and Applications to Sparse Learning

11 0.43800429 103 nips-2012-Distributed Probabilistic Learning for Camera Networks with Missing Data

12 0.43318498 228 nips-2012-Multilabel Classification using Bayesian Compressed Sensing

13 0.42489612 234 nips-2012-Multiresolution analysis on the symmetric group

14 0.40811586 363 nips-2012-Wavelet based multi-scale shape features on arbitrary surfaces for cortical thickness discrimination

15 0.40574548 31 nips-2012-Action-Model Based Multi-agent Plan Recognition

16 0.40548301 302 nips-2012-Scaling MPE Inference for Constrained Continuous Markov Random Fields with Consensus Optimization

17 0.4017742 17 nips-2012-A Scalable CUR Matrix Decomposition Algorithm: Lower Time Complexity and Tighter Bound

18 0.40053529 6 nips-2012-A Convex Formulation for Learning Scale-Free Networks via Submodular Relaxation

19 0.39233077 86 nips-2012-Convex Multi-view Subspace Learning

20 0.38496217 268 nips-2012-Perfect Dimensionality Recovery by Variational Bayesian PCA


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.04), (17, 0.011), (21, 0.02), (30, 0.239), (38, 0.155), (39, 0.017), (42, 0.024), (54, 0.026), (55, 0.031), (68, 0.014), (74, 0.062), (76, 0.133), (80, 0.081), (92, 0.033), (94, 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.86268175 71 nips-2012-Co-Regularized Hashing for Multimodal Data

Author: Yi Zhen, Dit-Yan Yeung

Abstract: Hashing-based methods provide a very promising approach to large-scale similarity search. To obtain compact hash codes, a recent trend seeks to learn the hash functions from data automatically. In this paper, we study hash function learning in the context of multimodal data. We propose a novel multimodal hash function learning method, called Co-Regularized Hashing (CRH), based on a boosted coregularization framework. The hash functions for each bit of the hash codes are learned by solving DC (difference of convex functions) programs, while the learning for multiple bits proceeds via a boosting procedure so that the bias introduced by the hash functions can be sequentially minimized. We empirically compare CRH with two state-of-the-art multimodal hash function learning methods on two publicly available data sets. 1

same-paper 2 0.79813218 63 nips-2012-CPRL -- An Extension of Compressive Sensing to the Phase Retrieval Problem

Author: Henrik Ohlsson, Allen Yang, Roy Dong, Shankar Sastry

Abstract: While compressive sensing (CS) has been one of the most vibrant research fields in the past few years, most development only applies to linear models. This limits its application in many areas where CS could make a difference. This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal. We propose a novel solution using a lifting technique – CPRL, which relaxes the NP-hard problem to a nonsmooth semidefinite program. Our analysis shows that CPRL inherits many desirable properties from CS, such as guarantees for exact recovery. We further provide scalable numerical solvers to accelerate its implementation. 1

3 0.75181109 117 nips-2012-Ensemble weighted kernel estimators for multivariate entropy estimation

Author: Kumar Sricharan, Alfred O. Hero

Abstract: The problem of estimation of entropy functionals of probability densities has received much attention in the information theory, machine learning and statistics communities. Kernel density plug-in estimators are simple, easy to implement and widely used for estimation of entropy. However, for large feature dimension d, kernel plug-in estimators suffer from the curse of dimensionality: the MSE rate of convergence is glacially slow - of order O(T −γ/d ), where T is the number of samples, and γ > 0 is a rate parameter. In this paper, it is shown that for sufficiently smooth densities, an ensemble of kernel plug-in estimators can be combined via a weighted convex combination, such that the resulting weighted estimator has a superior parametric MSE rate of convergence of order O(T −1 ). Furthermore, it is shown that these optimal weights can be determined by solving a convex optimization problem which does not require training data or knowledge of the underlying density, and therefore can be performed offline. This novel result is remarkable in that, while each of the individual kernel plug-in estimators belonging to the ensemble suffer from the curse of dimensionality, by appropriate ensemble averaging we can achieve parametric convergence rates. 1

4 0.69595116 226 nips-2012-Multiclass Learning Approaches: A Theoretical Comparison with Implications

Author: Amit Daniely, Sivan Sabato, Shai S. Shwartz

Abstract: We theoretically analyze and compare the following five popular multiclass classification methods: One vs. All, All Pairs, Tree-based classifiers, Error Correcting Output Codes (ECOC) with randomly generated code matrices, and Multiclass SVM. In the first four methods, the classification is based on a reduction to binary classification. We consider the case where the binary classifier comes from a class of VC dimension d, and in particular from the class of halfspaces over Rd . We analyze both the estimation error and the approximation error of these methods. Our analysis reveals interesting conclusions of practical relevance, regarding the success of the different approaches under various conditions. Our proof technique employs tools from VC theory to analyze the approximation error of hypothesis classes. This is in contrast to most previous uses of VC theory, which only deal with estimation error. 1

5 0.69425678 83 nips-2012-Controlled Recognition Bounds for Visual Learning and Exploration

Author: Vasiliy Karasev, Alessandro Chiuso, Stefano Soatto

Abstract: We describe the tradeoff between the performance in a visual recognition problem and the control authority that the agent can exercise on the sensing process. We focus on the problem of “visual search” of an object in an otherwise known and static scene, propose a measure of control authority, and relate it to the expected risk and its proxy (conditional entropy of the posterior density). We show this analytically, as well as empirically by simulation using the simplest known model that captures the phenomenology of image formation, including scaling and occlusions. We show that a “passive” agent given a training set can provide no guarantees on performance beyond what is afforded by the priors, and that an “omnipotent” agent, capable of infinite control authority, can achieve arbitrarily good performance (asymptotically). In between these limiting cases, the tradeoff can be characterized empirically. 1

6 0.69191992 199 nips-2012-Link Prediction in Graphs with Autoregressive Features

7 0.69015926 178 nips-2012-Learning Label Trees for Probabilistic Modelling of Implicit Feedback

8 0.68988192 149 nips-2012-Hierarchical Optimistic Region Selection driven by Curiosity

9 0.68842882 236 nips-2012-Near-Optimal MAP Inference for Determinantal Point Processes

10 0.68806136 172 nips-2012-Latent Graphical Model Selection: Efficient Methods for Locally Tree-like Graphs

11 0.68701684 355 nips-2012-Truncation-free Online Variational Inference for Bayesian Nonparametric Models

12 0.68691194 364 nips-2012-Weighted Likelihood Policy Search with Model Selection

13 0.68677759 38 nips-2012-Algorithms for Learning Markov Field Policies

14 0.68677151 335 nips-2012-The Bethe Partition Function of Log-supermodular Graphical Models

15 0.68636149 316 nips-2012-Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models

16 0.68555599 162 nips-2012-Inverse Reinforcement Learning through Structured Classification

17 0.68548185 16 nips-2012-A Polynomial-time Form of Robust Regression

18 0.6853323 187 nips-2012-Learning curves for multi-task Gaussian process regression

19 0.68525428 179 nips-2012-Learning Manifolds with K-Means and K-Flats

20 0.6852411 96 nips-2012-Density Propagation and Improved Bounds on the Partition Function