
93 nips-2009-Fast Image Deconvolution using Hyper-Laplacian Priors


Source: pdf

Author: Dilip Krishnan, Rob Fergus

Abstract: The heavy-tailed distributions of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. These distributions are well modeled by a hyper-Laplacian $p(x) \propto e^{-k|x|^{\alpha}}$, typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem non-convex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme where one of the two phases is a non-convex problem that is separable over pixels. This per-pixel sub-problem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1/2 and 2/3, an analytic solution can be found by finding the roots of a cubic and quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1 megapixel image in less than ∼3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take ∼20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems, beyond the deconvolution application demonstrated.
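
For reference, the prior from the abstract written out cleanly, with a one-line note on why α < 1 breaks convexity (a restatement of the abstract's claim, not new material from the paper):

```latex
% Hyper-Laplacian prior on a gradient value x, and its negative log.
% For 0 < \alpha < 1 the penalty |x|^\alpha is concave on x > 0,
% so the MAP objective built from it is non-convex.
p(x) \propto e^{-k|x|^{\alpha}}, \qquad
-\log p(x) = k\,|x|^{\alpha} + \mathrm{const}, \qquad 0.5 \le \alpha \le 0.8
```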

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The heavy-tailed distributions of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. [sent-7, score-0.218]

2 In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. [sent-12, score-0.348]

3 We adopt an alternating minimization scheme where one of the two phases is a non-convex problem that is separable over pixels. [sent-13, score-0.193]

4 This per-pixel sub-problem may be solved with a lookup table (LUT). [sent-14, score-0.157]

5 Alternatively, for two specific values of α, 1/2 and 2/3, an analytic solution can be found by finding the roots of a cubic and quartic polynomial, respectively. [sent-15, score-0.816]

6 Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1 megapixel image in less than ∼3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take ∼20 minutes. [sent-16, score-0.422]

7 Furthermore, our method is quite general and can easily be extended to related image processing problems, beyond the deconvolution application demonstrated. [sent-17, score-0.469]

8 1 Introduction Natural image statistics are a powerful tool in image processing, computer vision and computational photography. [sent-18, score-0.298]

9 Denoising [14], deblurring [3], transparency separation [11] and super-resolution [20], are all tasks that are inherently ill-posed. [sent-19, score-0.169]

10 Priors based on natural image statistics can regularize these problems to yield high-quality results. [sent-20, score-0.182]

11 In this paper we focus on one particular problem: non-blind deconvolution, and propose an algorithm that is practical for very large images while still yielding high quality results. [sent-27, score-0.126]

12 Numerous deconvolution approaches exist, varying greatly in their speed and sophistication. [sent-28, score-0.344]

13 Most of the best-performing approaches solve globally for the corrected image, encouraging the marginal statistics of a set of filter outputs to match those of uncorrupted images, which act as a prior to regularize the problem. [sent-30, score-0.158]

14 For these methods, a trade-off exists between accurately modeling the image statistics and being able to solve the ensuing optimization problem efficiently. [sent-31, score-0.205]

15 If the marginal distributions are assumed to be Gaussian, a closed-form solution exists in the frequency domain and FFTs can be used to recover the image very quickly. [sent-32, score-0.149]

16 Figure 1: A hyper-Laplacian with exponent α = 2/3 is a better model of image gradients than a Laplacian or a Gaussian. [sent-38, score-0.208]

17 Although such priors give the best quality results, they are typically far slower than methods that use either Gaussian or Laplacian priors. [sent-43, score-0.106]

18 In both cases, typically hundreds of CG iterations are needed, each involving an expensive convolution of the blur kernel with the current image estimate. [sent-47, score-0.34]

19 In this paper we introduce an efficient scheme for non-blind deconvolution of images using a hyper-Laplacian image prior for 0 < α ≤ 1. [sent-48, score-0.62]

20 Our algorithm uses an alternating minimization scheme where the non-convex part of the problem is solved in one phase, followed by a quadratic phase which can be efficiently solved in the frequency domain using FFTs. [sent-49, score-0.222]
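
Sentence 20's quadratic phase has a closed-form solution in the frequency domain under circular boundary conditions. Below is a minimal numpy sketch of that x-update; the helper names (psf2otf, solve_x) and the first-order gradient filters are our own illustrative choices, not the paper's code.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a point-spread function to `shape` and circularly
    shift its center to the origin before taking the 2-D FFT."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=axis)
    return np.fft.fft2(pad)

def solve_x(y, k, w1, w2, lam, beta):
    """Quadratic x-update: minimize (lam/2)||k*x - y||^2
    + (beta/2)(||f1*x - w1||^2 + ||f2*x - w2||^2) in the Fourier domain."""
    shape = y.shape
    K  = psf2otf(k, shape)
    G1 = psf2otf(np.array([[1.0, -1.0]]), shape)    # horizontal gradient
    G2 = psf2otf(np.array([[1.0], [-1.0]]), shape)  # vertical gradient
    num = (lam / beta) * np.conj(K) * np.fft.fft2(y) \
        + np.conj(G1) * np.fft.fft2(w1) + np.conj(G2) * np.fft.fft2(w2)
    den = (lam / beta) * np.abs(K) ** 2 + np.abs(G1) ** 2 + np.abs(G2) ** 2
    return np.real(np.fft.ifft2(num / den))
```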

21 We focus on the first phase where at each pixel we are required to solve a non-convex separable minimization. [sent-50, score-0.095]
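
Concretely, the separable minimization of sentence 21 reduces at each pixel to a scalar problem of the form below (our transcription, with v the filter response at that pixel and w the auxiliary variable introduced in sentence 38):

```latex
w^{*} \;=\; \arg\min_{w}\; |w|^{\alpha} \;+\; \frac{\beta}{2}\,(w - v)^{2}
```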

22 The first uses a lookup table (LUT); the second is an analytic approach specific to two values of α. [sent-52, score-0.294]

23 For α = 1/2 the global minimum can be determined by finding the roots of a cubic polynomial analytically. [sent-53, score-0.53]

24 In the α = 2/3 case, the polynomial is a quartic whose roots can also be found efficiently in closed-form. [sent-54, score-0.565]
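
As a concrete sketch of the α = 1/2 case: setting the derivative of |w|^{1/2} + (β/2)(w − v)² to zero and squaring (for v > 0, 0 < w < v) yields the cubic w³ − 2vw² + v²w − 1/(4β²) = 0. The coefficients follow our own derivation from the stated objective, so they may differ in normalization from the paper's Eqn. 10; numpy.roots stands in for the paper's closed-form formulae.

```python
import numpy as np

def solve_w_half(v, beta):
    """Minimize |w|^(1/2) + (beta/2)(w - v)^2 over scalar w (alpha = 1/2).

    Solves the squared stationarity condition as a cubic and selects the
    lowest-cost candidate, always including w = 0. Odd symmetry in v lets
    us work with |v| and restore the sign at the end."""
    s, a = np.sign(v), abs(v)
    # Cubic for v > 0: w^3 - 2a w^2 + a^2 w - 1/(4 beta^2) = 0.
    roots = np.roots([1.0, -2.0 * a, a * a, -1.0 / (4.0 * beta ** 2)])
    # Keep real roots in (0, a); squaring can introduce spurious roots.
    cands = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 < r.real < a]
    cands.append(0.0)
    cost = lambda w: np.sqrt(w) + 0.5 * beta * (w - a) ** 2
    return s * min(cands, key=cost)
```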

25 However, in our method each approximation is solved by alternating between the two phases above a few times, thus avoiding the expensive CG descent used by IRLS. [sent-56, score-0.103]

26 This allows our scheme to operate several orders of magnitude faster. [sent-57, score-0.083]

27 Although we focus on the problem of non-blind deconvolution, it would be straightforward to adapt our algorithm to other related problems, such as denoising or super-resolution. [sent-58, score-0.083]

28 1 Related Work Hyper-Laplacian image priors have been used in a range of settings: super-resolution [20], transparency separation [11] and motion deblurring [9]. [sent-60, score-0.368]

29 [7] have applied them to non-blind deconvolution problems using IRLS to solve for the deblurred image. [sent-63, score-0.376]

30 Other types of sparse image prior include: Gaussian Scale Mixtures (GSM) [21], which have been used for image deblurring [3] and denoising [14], and Student-t distributions for denoising [25, 16]. [sent-64, score-0.622]

31 The alternating minimization that we adopt is a common technique, known as half-quadratic splitting, originally proposed by Geman and colleagues [5, 6]. [sent-66, score-0.099]

32 Similar to our approach, a splitting scheme is used, resulting in a non-convex per-pixel sub-problem. [sent-73, score-0.085]

33 To solve this, a Huber approximation (see [1]) to the quasi-norm is used, allowing the derivation of a generalized shrinkage operator to solve the sub-problem efficiently. [sent-74, score-0.138]
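
For context on the shrinkage operator mentioned in sentence 33: when α = 1 (the ℓ1/TV setting of [22]), the same scalar sub-problem has the classical closed-form soft-shrinkage solution, which the generalized operator extends; this is a standard result stated here for orientation, not a formula from this paper:

```latex
\arg\min_{w}\; |w| + \frac{\beta}{2}\,(w - v)^{2}
  \;=\; \operatorname{sign}(v)\,\max\!\left(|v| - \tfrac{1}{\beta},\; 0\right)
```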

34 2 Algorithm We now introduce the non-blind deconvolution problem. [sent-76, score-0.32]

35 x is the original uncorrupted linear grayscale image of N pixels; y is an image degraded by blur and/or noise, which we assume to be produced by convolving x with a blur kernel k and adding zero-mean Gaussian noise. [sent-77, score-0.619]
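
Sentence 35's image-formation model, written out in our notation (σ² is the noise variance):

```latex
y \;=\; k \oplus x \;+\; n, \qquad n \sim \mathcal{N}(0,\, \sigma^{2} I)
```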

36 From a probabilistic perspective, we seek the MAP estimate of x: p(x|y, k) ∝ p(y|x, k)p(x), the first term being a Gaussian likelihood and the second being the hyper-Laplacian image prior. [sent-85, score-0.149]

37 Maximizing p(x|y, k) is equivalent to minimizing the cost − log p(x|y, k): $\min_x \sum_{i=1}^{N} \big( \frac{\lambda}{2}(x \oplus k - y)_i^2 + \sum_{j=1}^{J} |(x \oplus f_j)_i|^{\alpha} \big)$ (1), where i is the pixel index and ⊕ is the 2-dimensional convolution operator. [sent-86, score-0.101]
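
A small numpy sketch that evaluates the Eqn. 1 objective under circular boundary conditions, reusing the psf2otf helper from the earlier sketch; taking the f_j to be first-order difference filters is our illustrative assumption:

```python
import numpy as np

def cost_eqn1(x, y, k, lam, alpha):
    """Evaluate (lam/2)||k (*) x - y||^2 + sum_j sum_i |(f_j (*) x)_i|^alpha
    with circular convolution; psf2otf is the helper sketched earlier."""
    K = psf2otf(k, x.shape)
    resid = np.real(np.fft.ifft2(K * np.fft.fft2(x))) - y
    gx = x - np.roll(x, 1, axis=1)  # horizontal first difference (circular)
    gy = x - np.roll(x, 1, axis=0)  # vertical first difference (circular)
    return (0.5 * lam * np.sum(resid ** 2)
            + np.sum(np.abs(gx) ** alpha) + np.sum(np.abs(gy) ** alpha))
```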

38 Using the half-quadratic penalty method [5, 6, 22], we now introduce auxiliary variables $w_i^1$ and $w_i^2$ (together denoted as $w$) at each pixel, which allow us to move the $F_i^j x$ terms outside the $|\cdot|^{\alpha}$ expression. [sent-93, score-0.088]

39 This gives a new cost function: $\min_{x,w} \sum_i \frac{\lambda}{2}(x \oplus k - y)_i^2 + \frac{\beta}{2}\big((F_i^1 x - w_i^1)^2 + (F_i^2 x - w_i^2)^2\big) + |w_i^1|^{\alpha} + |w_i^2|^{\alpha}$ (2), where β is a weight that we will vary during the optimization, as described in Section 2. [sent-94, score-0.115]

40 Minimizing Eqn. 2 for a fixed β can be performed by alternating between two steps: one where we solve for x given values of w, and vice-versa. [sent-100, score-0.125]

41 Eqn. 5 only depends on two variables, β and v, and hence can easily be tabulated off-line to form a lookup table. [sent-117, score-0.094]
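
A minimal sketch of the tabulation suggested in sentence 41, assuming the solve_w_half routine above. The paper tabulates over both β and v; fixing β per table is our simplification, which is reasonable because the β schedule visits only a few values:

```python
import numpy as np

def build_lut(solve, beta, v_max=100.0, n=10001):
    """Tabulate w*(v) on a uniform grid of v for one fixed beta."""
    vs = np.linspace(-v_max, v_max, n)
    return vs, np.array([solve(v, beta) for v in vs])

def lut_w(v, vs, table):
    """Nearest-grid-point lookup of w*(v); v is clipped to the table range."""
    step = vs[1] - vs[0]
    idx = np.clip(np.rint((v - vs[0]) / step).astype(int), 0, len(vs) - 1)
    return table[idx]
```

For example, `vs, tab = build_lut(solve_w_half, beta)` followed by `lut_w(grad_image, vs, tab)` applies the non-convex step to a whole gradient image at once.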

42 For some special cases of 1 < α < 2, there exist analytic solutions [26]. [sent-130, score-0.171]

43 Eqn. 9 appears to be two different cubic equations owing to the ±1/(4β) term; however, we need only consider one of these, as v is fixed and w∗ must lie between 0 and v. [sent-137, score-0.139]

44 The solution to Eqn. 5 is either 0 or a root of the cubic polynomial in Eqn. 10 for α = 1/2. [sent-141, score-0.32]

45 Equivalently, for α = 2/3 it is a root of the quartic polynomial in Eqn. 11. [sent-142, score-0.355]

46 Finding the roots of the cubic and quartic polynomials: Analytic formulae exist for the roots of cubic and quartic polynomials [23, 24] and they form the basis of our approach, as detailed in Algorithms 2 and 3. [sent-145, score-1.37]

47 In both the cubic and quartic cases, the computational bottleneck is the cube root operation. [sent-146, score-0.435]

48 An alternative way of finding the roots of the polynomials Eqn. [sent-147, score-0.379]

49 In our experiments, we found Newton-Raphson to be slower and less accurate than either the analytic method or the LUT approach (see [8] for further details). [sent-150, score-0.157]

50 Selecting the correct roots: Given the roots of the polynomial, we need to determine which one corresponds to the global minimum of Eqn. [sent-151, score-0.332]

51 When α = 1/2, the resulting cubic equation can have: (a) 3 imaginary roots; (b) 2 imaginary roots and 1 real root; or (c) 3 real roots. [sent-153, score-0.743]

52 In case (a), Eqn. 5 has a positive derivative around 0, and the lack of real roots implies the derivative never becomes negative; thus w∗ = 0. [sent-155, score-0.429]

53 For (b), we need to compare the costs of the single real root and w = 0, an operation that can be efficiently performed using Eqn. [sent-156, score-0.217]

54 From Eqn. 8, we see that the squaring operation introduces a spurious root above v when v > 0, and below v when v < 0. [sent-161, score-0.18]

55 This root can be ignored, since w∗ must lie between 0 and v. [sent-162, score-0.122]

56 Finally, we need to compare the cost of this root with that of w = 0 using Eqn. [sent-166, score-0.149]

57 Here we can potentially have: (a) 4 imaginary roots, (b) 2 imaginary and 2 real roots, or (c) 4 real roots. [sent-169, score-0.272]

58 For (b), we pick the larger of the 2 real roots and compare its cost with that of w = 0 using Eqn. 13. [sent-171, score-0.43]

59 This is similar to the case of 3 real roots for the cubic. [sent-172, score-0.396]

60 Case (c) never occurs, as can be seen from the final quartic polynomial, Eqn. 11. [sent-173, score-0.233]

61 Eqn. 11 was derived with a cubing operation from the analytic derivative. [sent-174, score-0.202]

62 This introduces 2 spurious roots into the final solution, both of which are imaginary; thus only cases (a) and (b) are possible. [sent-175, score-0.359]

63 In both the cubic and quartic cases, we need an efficient way to pick between w = 0 and a real root that is between 0 and v. [sent-176, score-0.533]

64 Comparing the cost at a nonzero root r with the cost at w = 0 gives the test $|r|^{\alpha} + \frac{\beta}{2}(r - v)^2 \gtrless \frac{\beta v^2}{2}$, which simplifies to checking $\operatorname{sign}(r)|r|^{\alpha-1} + \frac{\beta}{2}(r - 2v) < 0$ when r > 0 (with the inequality reversed when r < 0) (12). Since we are only considering roots of the polynomial, we can use Eqn. [sent-183, score-0.332]
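
The selection logic of sentences 50-64 can also be carried out by direct cost comparison, which is equivalent to the sign test of Eqn. 12; a sketch (the roots are assumed to come from the cubic/quartic posed for |v|, as in the solver above):

```python
import numpy as np

def pick_root(roots, v, beta, alpha):
    """Among real roots r with 0 < r < |v| and the candidate w = 0,
    return the per-pixel cost minimizer (sign of v restored at the end)."""
    s, a = np.sign(v), abs(v)
    cands = [r.real for r in np.atleast_1d(roots)
             if abs(r.imag) < 1e-9 and 0.0 < r.real < a]
    cands.append(0.0)
    cost = lambda w: w ** alpha + 0.5 * beta * (w - a) ** 2
    return s * min(cands, key=cost)
```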

65 Overall, the analytic approach is slower than the LUT, but it gives an exact solution to the w sub-problem. [sent-192, score-0.196]

66 Algorithm 1 (Fast image deconvolution using hyper-Laplacian priors). Require: blurred image y, kernel k, regularization weight λ, exponent α (> 0). Require: β regime parameters β0, βInc, βMax. Require: number of inner iterations T. [sent-199, score-0.747]

67 (Tail of Algorithm 1: solve Eqn. 4 to give x; end for; β = βInc · β; end while; return deconvolved image x.) As with any non-convex optimization problem, it is difficult to derive any guarantees regarding the convergence of Algorithm 1. [sent-204, score-0.149]
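
Putting the pieces together, a sketch of the outer loop of Algorithm 1 for α = 1/2, reusing solve_w_half and solve_x from the earlier sketches. The default schedule values (β from 1 to 256, multiplied by 2√2) are our assumption about a typical regime, not the paper's tuned settings:

```python
import numpy as np

def deconvolve(y, k, lam, beta0=1.0, beta_inc=2.0 * np.sqrt(2.0),
               beta_max=256.0, T=1):
    """Alternating minimization of Eqn. 2 with an increasing beta schedule.
    w-step: per-pixel non-convex solve; x-step: quadratic FFT solve."""
    x, beta = y.copy(), beta0
    grad = lambda img, ax: img - np.roll(img, 1, axis=ax)  # circular diffs
    w_step = np.vectorize(solve_w_half)
    while beta < beta_max:
        for _ in range(T):
            w1 = w_step(grad(x, 1), beta)         # horizontal auxiliaries
            w2 = w_step(grad(x, 0), beta)         # vertical auxiliaries
            x = solve_x(y, k, w1, w2, lam, beta)  # frequency-domain update
        beta *= beta_inc
    return x
```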

68 Like other methods that use this form of alternating minimization [5, 6, 22], there is little theoretical guidance for setting the β schedule. [sent-206, score-0.099]

69 3 Experiments We evaluate the deconvolution performance of our algorithm on images, comparing it to numerous other methods: (i) ℓ2 (Gaussian) prior on image gradients; (ii) Lucy-Richardson [15]; (iii) the algorithm of Wang et al. [sent-211, score-0.494]

70 In our approach, we use α = 1/2 and α = 2/3, and compare the performance of the LUT and analytic methods as well. [sent-219, score-0.171]

71 7 in-focus grayscale real-world images were downloaded from the web. [sent-222, score-0.099]

72 They were then blurred by real-world camera shake kernels from [12]. [sent-223, score-0.188]

73 In any practical deconvolution setting the blur kernel is never perfectly known. [sent-225, score-0.472]

74 Therefore, the kernel passed to the algorithms was a minor perturbation of the true kernel, to mimic kernel estimation errors. [sent-226, score-0.11]

75 See Fig. 2 for an example of a kernel from [12] and its perturbed version. [sent-229, score-0.079]

76 Our evaluation metric was the SNR between the original image $\hat{x}$ and the deconvolved output $x$, defined as $10 \log_{10} \frac{\|\hat{x} - \mu(\hat{x})\|^2}{\|\hat{x} - x\|^2}$, $\mu(\hat{x})$ being the mean of $\hat{x}$. [sent-230, score-0.237]

77 In Table 1 we compare the algorithms on 7 different images, all blurred with the same 19×19 kernel. [sent-231, score-0.115]
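
The SNR metric of sentence 76 in code form (a direct transcription of the definition; x_hat is the original image, x the deconvolved output):

```python
import numpy as np

def snr_db(x_hat, x):
    """10 log10 of ||x_hat - mean(x_hat)||^2 over ||x_hat - x||^2."""
    num = np.sum((x_hat - x_hat.mean()) ** 2)
    return 10.0 * np.log10(num / np.sum((x_hat - x) ** 2))
```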

78 In Table 3 we evaluate the algorithms with the same 512×512 image blurred by 8 different kernels (from [12]) of varying size. [sent-233, score-0.264]

79 Figure 2 shows a larger 27×27 blur being deconvolved from two example images, comparing the output of different methods. [sent-236, score-0.185]

80 However, our algorithm is around 70 to 350 times faster than IRLS, depending on whether the analytic or LUT method is used. [sent-238, score-0.171]

81 This speedup factor is independent of image size, as shown by Table 2. [sent-239, score-0.173]

82 The SNR results for our method are almost the same whether we use the LUT or the analytic approach. [sent-241, score-0.171]

83 Hence, in practice, the LUT method is preferred, since it is approximately 5 times faster than the analytic method and can be used for any value of α. [sent-242, score-0.171]

84 Table 1: Comparison of SNRs and running times of 9 different methods for the deconvolution of 7 576×864 images, blurred with the same 19×19 kernel. [sent-332, score-0.435]

85 Table 2: Run-times of different methods for a range of image sizes, using a 13×13 kernel. [sent-355, score-0.42]

86 4 Discussion We have described an image deconvolution scheme that is fast, conceptually simple and yields high-quality results. [sent-357, score-0.555]

87 Figure 2: Crops from two images (#1 & #5) being deconvolved by 4 different algorithms, including ours, using a 27×27 kernel (#7). [sent-376, score-0.214]

88 In the bottom left inset, we show the original kernel from [12] (lower) and the perturbed version provided to the algorithms (upper), to make the problem more realistic. [sent-377, score-0.079]

89 Table 3: Comparison of SNRs and running times of 9 different methods for the deconvolution of a 512×512 image blurred by 7 different kernels. [sent-478, score-0.584]

90 Our algorithm beats all other methods in terms of quality, with the exception of IRLS on the largest kernel size. [sent-480, score-0.112]

91 Using a LUT to solve this sub-problem allows for an orders-of-magnitude speedup in the solution over existing methods. [sent-483, score-0.108]

92 A potential drawback to our method, common to the TV and ℓ1 approaches of [22], is its use of frequency-domain operations that assume circular boundary conditions, something not present in real images. [sent-488, score-0.115]

93 Although we focus on deconvolution, our scheme can be adapted to a range of other problems which rely on natural image statistics. [sent-491, score-0.204]

94 The speed offered by our algorithm makes it practical to perform these operations on the multi-megapixel images from modern cameras. [sent-493, score-0.095]

95 (Side-by-side headers of Algorithms 2 and 3 for solving Eqn. 5, the latter for α = 2/3: each requires a target value v and weight β, sets ǫ = 10−6, then computes intermediary terms m, t1, t2, t3, ...) [sent-496, score-0.078]

96 Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data. [sent-503, score-0.071]

97 User assisted separation of reflections from a single image using a sparsity prior. [sent-563, score-0.149]

98 Image denoising using a scale mixture of Gaussians in the wavelet domain. [sent-586, score-0.107]

99 Exploiting the sparse derivative prior for super-resolution and image demosaicing. [sent-623, score-0.207]

100 A new alternating minimization algorithm for total variation image reconstruction. [sent-635, score-0.248]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('irls', 0.465), ('roots', 0.332), ('deconvolution', 0.32), ('snr', 0.273), ('lut', 0.252), ('quartic', 0.174), ('analytic', 0.171), ('image', 0.149), ('sign', 0.145), ('cubic', 0.139), ('deblurring', 0.133), ('root', 0.122), ('abs', 0.116), ('blurred', 0.115), ('tv', 0.097), ('blur', 0.097), ('lookup', 0.094), ('deconvolved', 0.088), ('denoising', 0.083), ('cg', 0.083), ('levin', 0.079), ('inc', 0.077), ('imaginary', 0.072), ('images', 0.071), ('laplacian', 0.069), ('alternating', 0.069), ('blurry', 0.066), ('real', 0.064), ('polynomial', 0.059), ('solve', 0.056), ('scheme', 0.055), ('kernel', 0.055), ('priors', 0.05), ('polynomials', 0.047), ('fergus', 0.044), ('deconvolve', 0.044), ('dilip', 0.044), ('hyperlaplacian', 0.044), ('krishnan', 0.044), ('lucy', 0.044), ('snrs', 0.044), ('tog', 0.044), ('uncorrupted', 0.044), ('wi', 0.044), ('geman', 0.043), ('compressive', 0.04), ('separable', 0.039), ('convolution', 0.039), ('durand', 0.039), ('chartrand', 0.039), ('intermediary', 0.039), ('luts', 0.039), ('shake', 0.039), ('lters', 0.037), ('removal', 0.036), ('fft', 0.036), ('transparency', 0.036), ('imag', 0.036), ('joshi', 0.036), ('gradients', 0.035), ('fj', 0.035), ('pami', 0.035), ('solved', 0.034), ('camera', 0.034), ('pick', 0.034), ('derivative', 0.033), ('beats', 0.033), ('formulae', 0.033), ('obey', 0.033), ('regularize', 0.033), ('nonconvex', 0.031), ('topographic', 0.031), ('quality', 0.031), ('operation', 0.031), ('minimization', 0.03), ('fast', 0.03), ('splitting', 0.03), ('coded', 0.03), ('courant', 0.029), ('table', 0.029), ('orders', 0.028), ('grayscale', 0.028), ('siggraph', 0.028), ('cost', 0.027), ('spurious', 0.027), ('reweighted', 0.027), ('shrinkage', 0.026), ('welling', 0.026), ('boundary', 0.026), ('circular', 0.025), ('slower', 0.025), ('prior', 0.025), ('exception', 0.024), ('perturbed', 0.024), ('speedup', 0.024), ('wavelet', 0.024), ('yielding', 0.024), ('blind', 0.024), ('exponent', 0.024), ('speed', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 93 nips-2009-Fast Image Deconvolution using Hyper-Laplacian Priors

Author: Dilip Krishnan, Rob Fergus

Abstract: The heavy-tailed distributions of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. These distributions are well modeled by a hyper-Laplacian $p(x) \propto e^{-k|x|^{\alpha}}$, typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem non-convex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme where one of the two phases is a non-convex problem that is separable over pixels. This per-pixel sub-problem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1/2 and 2/3, an analytic solution can be found by finding the roots of a cubic and quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1 megapixel image in less than ∼3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take ∼20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems, beyond the deconvolution application demonstrated.

2 0.11322125 137 nips-2009-Learning transport operators for image manifolds

Author: Benjamin Culpepper, Bruno A. Olshausen

Abstract: We describe an unsupervised manifold learning algorithm that represents a surface through a compact description of operators that traverse it. The operators are based on matrix exponentials, which are the solution to a system of first-order linear differential equations. The matrix exponents are represented by a basis that is adapted to the statistics of the data so that the infinitesimal generator for a trajectory along the underlying manifold can be produced by linearly composing a few elements. The method is applied to recover topological structure from low dimensional synthetic data, and to model local structure in how natural images change over time and scale.

3 0.091620259 228 nips-2009-Speeding up Magnetic Resonance Image Acquisition by Bayesian Multi-Slice Adaptive Compressed Sensing

Author: Matthias Seeger

Abstract: We show how to sequentially optimize magnetic resonance imaging measurement designs over stacks of neighbouring image slices, by performing convex variational inference on a large scale non-Gaussian linear dynamical system, tracking dominating directions of posterior covariance without imposing any factorization constraints. Our approach can be scaled up to high-resolution images by reductions to numerical mathematics primitives and parallelization on several levels. In a first study, designs are found that improve significantly on others chosen independently for each slice or drawn at random.

4 0.085828543 211 nips-2009-Segmenting Scenes by Matching Image Composites

Author: Bryan Russell, Alyosha Efros, Josef Sivic, Bill Freeman, Andrew Zisserman

Abstract: In this paper, we investigate how, given an image, similar images sharing the same global description can help with unsupervised scene segmentation. In contrast to recent work in semantic alignment of scenes, we allow an input image to be explained by partial matches of similar scenes. This allows for a better explanation of the input scenes. We perform MRF-based segmentation that optimizes over matches, while respecting boundary information. The recovered segments are then used to re-query a large database of images to retrieve better matches for the target regions. We show improved performance in detecting the principal occluding and contact boundaries for the scene over previous methods on data gathered from the LabelMe database.

5 0.073045604 185 nips-2009-Orthogonal Matching Pursuit From Noisy Random Measurements: A New Analysis

Author: Sundeep Rangan, Alyson K. Fletcher

Abstract: A well-known analysis of Tropp and Gilbert shows that orthogonal matching pursuit (OMP) can recover a k-sparse n-dimensional real vector from m = 4k log(n) noise-free linear measurements obtained through a random Gaussian measurement matrix with a probability that approaches one as n → ∞. This work strengthens this result by showing that a lower number of measurements, m = 2k log(n − k), is in fact sufficient for asymptotic recovery. More generally, when the sparsity level satisfies kmin ≤ k ≤ kmax but is unknown, m = 2kmax log(n − kmin ) measurements is sufficient. Furthermore, this number of measurements is also sufficient for detection of the sparsity pattern (support) of the vector with measurement errors provided the signal-to-noise ratio (SNR) scales to infinity. The scaling m = 2k log(n − k) exactly matches the number of measurements required by the more complex lasso method for signal recovery in a similar SNR scaling.

6 0.065613821 167 nips-2009-Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

7 0.062883995 212 nips-2009-Semi-Supervised Learning in Gigantic Image Collections

8 0.062391229 241 nips-2009-The 'tree-dependent components' of natural scenes are edge filters

9 0.061264511 83 nips-2009-Estimating image bases for visual image reconstruction from human brain activity

10 0.058886372 133 nips-2009-Learning models of object structure

11 0.056484845 6 nips-2009-A Biologically Plausible Model for Rapid Natural Scene Identification

12 0.056445021 104 nips-2009-Group Sparse Coding

13 0.05590342 214 nips-2009-Semi-supervised Regression using Hessian energy with an application to semi-supervised dimensionality reduction

14 0.05553215 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation

15 0.05437997 222 nips-2009-Sparse Estimation Using General Likelihoods and Non-Factorial Priors

16 0.053865291 128 nips-2009-Learning Non-Linear Combinations of Kernels

17 0.051528249 77 nips-2009-Efficient Match Kernel between Sets of Features for Visual Recognition

18 0.051138882 225 nips-2009-Sparsistent Learning of Varying-coefficient Models with Structural Changes

19 0.050978635 165 nips-2009-Noise Characterization, Modeling, and Reduction for In Vivo Neural Recording

20 0.05092968 1 nips-2009-$L 1$-Penalized Robust Estimation for a Class of Inverse Problems Arising in Multiview Geometry


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.156), (1, -0.028), (2, -0.057), (3, 0.048), (4, -0.033), (5, 0.058), (6, 0.061), (7, 0.038), (8, 0.124), (9, -0.002), (10, -0.026), (11, 0.025), (12, -0.005), (13, 0.067), (14, -0.056), (15, 0.026), (16, -0.08), (17, 0.055), (18, 0.018), (19, -0.006), (20, -0.003), (21, 0.015), (22, 0.048), (23, 0.057), (24, 0.026), (25, 0.029), (26, -0.077), (27, 0.025), (28, -0.003), (29, 0.018), (30, -0.095), (31, -0.069), (32, -0.041), (33, -0.037), (34, 0.014), (35, 0.027), (36, 0.013), (37, -0.028), (38, -0.016), (39, 0.022), (40, 0.006), (41, 0.006), (42, 0.048), (43, 0.035), (44, -0.111), (45, -0.04), (46, -0.071), (47, -0.06), (48, 0.028), (49, -0.047)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92973244 93 nips-2009-Fast Image Deconvolution using Hyper-Laplacian Priors

Author: Dilip Krishnan, Rob Fergus

Abstract: The heavy-tailed distribution of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. α These distributions are well modeled by a hyper-Laplacian p(x) ∝ e−k|x| , typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem non-convex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme where one of the two phases is a non-convex problem that is separable over pixels. This per-pixel sub-problem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1/2 and 2/3 an analytic solution can be found, by finding the roots of a cubic and quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1 megapixel image in less than ∼3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take ∼20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems, beyond the deconvolution application demonstrated. 1

2 0.69360709 137 nips-2009-Learning transport operators for image manifolds

Author: Benjamin Culpepper, Bruno A. Olshausen

Abstract: We describe an unsupervised manifold learning algorithm that represents a surface through a compact description of operators that traverse it. The operators are based on matrix exponentials, which are the solution to a system of first-order linear differential equations. The matrix exponents are represented by a basis that is adapted to the statistics of the data so that the infinitesimal generator for a trajectory along the underlying manifold can be produced by linearly composing a few elements. The method is applied to recover topological structure from low dimensional synthetic data, and to model local structure in how natural images change over time and scale.

3 0.65676671 138 nips-2009-Learning with Compressible Priors

Author: Volkan Cevher

Abstract: We describe a set of probability distributions, dubbed compressible priors, whose independent and identically distributed (iid) realizations result in p-compressible signals. A signal x ∈ R^N is called p-compressible with magnitude R if its sorted coefficients exhibit a power-law decay as $|x|_{(i)} \lesssim R \cdot i^{-d}$, where the decay rate d is equal to 1/p. p-compressible signals live close to K-sparse signals (K ≪ N) in the ℓr-norm (r > p) since their best K-sparse approximation error decreases with $O(R \cdot K^{1/r - 1/p})$. We show that the membership of generalized Pareto, Student's t, log-normal, Fréchet, and log-logistic distributions to the set of compressible priors depends only on the distribution parameters and is independent of N. In contrast, we demonstrate that the membership of the generalized Gaussian distribution (GGD) depends both on the signal dimension and the GGD parameters: the expected decay rate of N-sample iid realizations from the GGD with the shape parameter q is given by 1/[q log(N/q)]. As stylized examples, we show via experiments that the wavelet coefficients of natural images are 1.67-compressible whereas their pixel gradients are 0.95 log(N/0.95)-compressible, on the average. We also leverage the connections between compressible priors and sparse signals to develop new iterative re-weighted sparse signal recovery algorithms that outperform the standard ℓ1-norm minimization. Finally, we describe how to learn the hyperparameters of compressible priors in underdetermined regression problems by exploiting the geometry of their order statistics during signal recovery.

4 0.64606583 228 nips-2009-Speeding up Magnetic Resonance Image Acquisition by Bayesian Multi-Slice Adaptive Compressed Sensing

Author: Matthias Seeger

Abstract: We show how to sequentially optimize magnetic resonance imaging measurement designs over stacks of neighbouring image slices, by performing convex variational inference on a large scale non-Gaussian linear dynamical system, tracking dominating directions of posterior covariance without imposing any factorization constraints. Our approach can be scaled up to high-resolution images by reductions to numerical mathematics primitives and parallelization on several levels. In a first study, designs are found that improve significantly on others chosen independently for each slice or drawn at random.

5 0.58656937 258 nips-2009-Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise

Author: Jacob Whitehill, Ting-fan Wu, Jacob Bergsma, Javier R. Movellan, Paul L. Ruvolo

Abstract: Modern machine learning-based approaches to computer vision require very large databases of hand labeled images. Some contemporary vision systems already require on the order of millions of images for training (e.g., Omron face detector [9]). New Internet-based services allow for a large number of labelers to collaborate around the world at very low cost. However, using these services brings interesting theoretical and practical challenges: (1) The labelers may have wide ranging levels of expertise which are unknown a priori, and in some cases may be adversarial; (2) images may vary in their level of difficulty; and (3) multiple labels for the same image must be combined to provide an estimate of the actual label of the image. Probabilistic approaches provide a principled way to approach these problems. In this paper we present a probabilistic model and use it to simultaneously infer the label of each image, the expertise of each labeler, and the difficulty of each image. On both simulated and real data, we demonstrate that the model outperforms the commonly used “Majority Vote” heuristic for inferring image labels, and is robust to both noisy and adversarial labelers.

6 0.58618051 6 nips-2009-A Biologically Plausible Model for Rapid Natural Scene Identification

7 0.55052119 58 nips-2009-Constructing Topological Maps using Markov Random Fields and Loop-Closure Detection

8 0.54896796 167 nips-2009-Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

9 0.53546405 211 nips-2009-Segmenting Scenes by Matching Image Composites

10 0.50080174 185 nips-2009-Orthogonal Matching Pursuit From Noisy Random Measurements: A New Analysis

11 0.49534234 175 nips-2009-Occlusive Components Analysis

12 0.49423814 104 nips-2009-Group Sparse Coding

13 0.48920929 231 nips-2009-Statistical Models of Linear and Nonlinear Contextual Interactions in Early Visual Processing

14 0.48692223 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation

15 0.48039863 83 nips-2009-Estimating image bases for visual image reconstruction from human brain activity

16 0.46945521 157 nips-2009-Multi-Label Prediction via Compressed Sensing

17 0.46411228 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis

18 0.45785359 212 nips-2009-Semi-Supervised Learning in Gigantic Image Collections

19 0.45395213 222 nips-2009-Sparse Estimation Using General Likelihoods and Non-Factorial Priors

20 0.44330567 251 nips-2009-Unsupervised Detection of Regions of Interest Using Iterative Link Analysis


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.015), (20, 0.35), (21, 0.013), (24, 0.027), (25, 0.057), (35, 0.058), (36, 0.103), (39, 0.043), (58, 0.067), (61, 0.016), (71, 0.025), (81, 0.019), (86, 0.093), (91, 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.74922282 93 nips-2009-Fast Image Deconvolution using Hyper-Laplacian Priors

Author: Dilip Krishnan, Rob Fergus

Abstract: The heavy-tailed distributions of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. These distributions are well modeled by a hyper-Laplacian $p(x) \propto e^{-k|x|^{\alpha}}$, typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem non-convex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme where one of the two phases is a non-convex problem that is separable over pixels. This per-pixel sub-problem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1/2 and 2/3, an analytic solution can be found by finding the roots of a cubic and quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1 megapixel image in less than ∼3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take ∼20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems, beyond the deconvolution application demonstrated.

2 0.45622116 104 nips-2009-Group Sparse Coding

Author: Samy Bengio, Fernando Pereira, Yoram Singer, Dennis Strelow

Abstract: Bag-of-words document representations are often used in text, image and video processing. While it is relatively easy to determine a suitable word dictionary for text documents, there is no simple mapping from raw images or videos to dictionary terms. The classical approach builds a dictionary using vector quantization over a large set of useful visual descriptors extracted from a training set, and uses a nearest-neighbor algorithm to count the number of occurrences of each dictionary word in documents to be encoded. More robust approaches have been proposed recently that represent each visual descriptor as a sparse weighted combination of dictionary words. While favoring a sparse representation at the level of visual descriptors, those methods, however, do not ensure that images have sparse representation. In this work, we use mixed-norm regularization to achieve sparsity at the image level as well as a small overall dictionary. This approach can also be used to encourage using the same dictionary words for all the images in a class, providing a discriminative signal in the construction of image representations. Experimental results on a benchmark image classification dataset show that when compact image or dictionary representations are needed for computational efficiency, the proposed approach yields better mean average precision in classification.

3 0.45595112 137 nips-2009-Learning transport operators for image manifolds

Author: Benjamin Culpepper, Bruno A. Olshausen

Abstract: We describe an unsupervised manifold learning algorithm that represents a surface through a compact description of operators that traverse it. The operators are based on matrix exponentials, which are the solution to a system of first-order linear differential equations. The matrix exponents are represented by a basis that is adapted to the statistics of the data so that the infinitesimal generator for a trajectory along the underlying manifold can be produced by linearly composing a few elements. The method is applied to recover topological structure from low dimensional synthetic data, and to model local structure in how natural images change over time and scale.

4 0.45071247 169 nips-2009-Nonlinear Learning using Local Coordinate Coding

Author: Kai Yu, Tong Zhang, Yihong Gong

Abstract: This paper introduces a new method for semi-supervised learning on high dimensional nonlinear manifolds, which includes a phase of unsupervised basis learning and a phase of supervised function learning. The learned bases provide a set of anchor points to form a local coordinate system, such that each data point x on the manifold can be locally approximated by a linear combination of its nearby anchor points, and the linear weights become its local coordinate coding. We show that a high dimensional nonlinear function can be approximated by a global linear function with respect to this coding scheme, and the approximation quality is ensured by the locality of such coding. The method turns a difficult nonlinear learning problem into a simple global linear learning problem, which overcomes some drawbacks of traditional local learning methods.

5 0.45028058 167 nips-2009-Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

Author: Mingyuan Zhou, Haojun Chen, Lu Ren, Guillermo Sapiro, Lawrence Carin, John W. Paisley

Abstract: Non-parametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also considered to exploit structure within an image. The proposed method can learn a sparse dictionary in situ; training images may be exploited if available, but they are not required. Further, the noise variance need not be known, and can be nonstationary. Another virtue of the proposed method is that sequential inference can be readily employed, thereby allowing scaling to large images. Several example results are presented, using both Gibbs and variational Bayesian inference, with comparisons to other state-of-the-art approaches.

6 0.45004755 119 nips-2009-Kernel Methods for Deep Learning

7 0.45003015 210 nips-2009-STDP enables spiking neurons to detect hidden causes of their inputs

8 0.44910622 17 nips-2009-A Sparse Non-Parametric Approach for Single Channel Separation of Known Sounds

9 0.44884041 70 nips-2009-Discriminative Network Models of Schizophrenia

10 0.44714275 30 nips-2009-An Integer Projected Fixed Point Method for Graph Matching and MAP Inference

11 0.44640183 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

12 0.44583413 191 nips-2009-Positive Semidefinite Metric Learning with Boosting

13 0.4452495 80 nips-2009-Efficient and Accurate Lp-Norm Multiple Kernel Learning

14 0.44503626 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

15 0.44460899 132 nips-2009-Learning in Markov Random Fields using Tempered Transitions

16 0.44458485 32 nips-2009-An Online Algorithm for Large Scale Image Similarity Learning

17 0.44421604 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

18 0.44389385 203 nips-2009-Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks

19 0.44357622 83 nips-2009-Estimating image bases for visual image reconstruction from human brain activity

20 0.44350365 3 nips-2009-AUC optimization and the two-sample problem