cvpr cvpr2013 cvpr2013-449 knowledge-graph by maker-knowledge-mining

449 cvpr-2013-Unnatural L0 Sparse Representation for Natural Image Deblurring


Source: pdf

Author: Li Xu, Shicheng Zheng, Jiaya Jia

Abstract: We show in this paper that the success of previous maximum a posterior (MAP) based blur removal methods partly stems from their respective intermediate steps, which implicitly or explicitly create an unnatural representation containing salient image structures. We propose a generalized and mathematically sound L0 sparse expression, together with a new effective method, for motion deblurring. Our system does not require extra filtering during optimization and demonstrates fast energy decreasing, making a small number of iterations enough for convergence. It also provides a unified framework for both uniform and non-uniform motion deblurring. We extensively validate our method and show comparison with other approaches with respect to convergence speed, running time, and result quality.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 We propose a generalized and mathematically sound L0 sparse expression, together with a new effective method, for motion deblurring. [sent-6, score-0.211]

2 Our system does not require extra filtering during optimization and demonstrates fast energy decreasing, making a small number of iterations enough for convergence. [sent-7, score-0.232]

3 It also provides a unified framework for both uniform and non-uniform motion deblurring. [sent-8, score-0.155]

4 Blind deconvolution has been extensively studied in recent years and has achieved great success with a few milestone solutions. [sent-14, score-0.142]

5 Because naive maximum a posterior (MAP) inference could fail on natural images, state-of-the-art methods either maximize marginalized distributions [5, 17, 18, 6] or propose novel techniques in MAP to effectively avoid trivial delta kernel estimates [11, 20, 3, 25, 4, 10]. [sent-15, score-0.146]

6 The particularly useful techniques include adaptation of the energy function during optimization [20], explicit sharp edge pursuit [11, 13, 19, 3, 25, 4, 10], edge selection [25], and employment of a normalized sparsity measure [16]. [sent-18, score-0.29]

7 Intermediate unnatural image representation exists in many state-of-the-art approaches. [sent-25, score-0.205]

8 These include methods [11, 13, 19, 3, 25, 9, 23], which are referred to as semi-blind schemes, and those [20, 16] that implicitly incorporate special regularization to remove detrimental structures in early stages and gradually enrich image details over iterations. [sent-26, score-0.11]

9 These maps are vital to making motion deblurring feasible in different MAP-variant frameworks. [sent-28, score-0.581]

10 This method, in the first few iterations, uses a large regularization weight to suppress insignificant structures and preserve strong ones, creating crisp-edge image results, as exemplified in Fig. [sent-31, score-0.121]

11 This scheme is useful for removing harmful subtle image structures, so that kernel estimation generally follows correct directions over iterations. [sent-33, score-0.214]

12 Explicit Filter and Selection In [19, 3], shock filter is introduced to create a sharpened reference map for kernel estimation. [sent-41, score-0.483]

13 Cho and Lee [3] performed bilateral filtering and edge thresholding in each iteration to remove small- and medium-amplitude structures (illustrated in Fig. [sent-42, score-0.239]

14 These two schemes have been extensively validated in motion deblurring. [sent-46, score-0.136]

15 Unnatural Representation The above techniques enable several successful MAP frameworks in motion deblurring. [sent-47, score-0.1]

16 All of them have their intermediate image results or edge maps different from a natural one, as shown in Fig. [sent-48, score-0.123]

17 We generally call them unnatural representation, which is the key to robust kernel estimation in motion deblurring. [sent-50, score-0.483]

18 Our Contribution Based on the step-edge properties in unnatural representation, we in this paper propose a new sparse L0 approximation scheme to generalize these frameworks. [sent-53, score-0.28]

19 Compared to local shock filter, our edges are not explicitly created by filtering in extra steps. [sent-54, score-0.333]

20 Instead, we incorporate a new regularization term consisting of a family of loss functions to approximate the L0 cost into the objective, which, during optimization, leads to consistent energy minimization and accordingly fast convergence. [sent-55, score-0.327]

21 Besides this new sparse image representation, we also contribute a unified framework for both uniform and nonuniform deblurring, which no longer relies on ad-hoc edge selection, spatial filtering, or edge re-weighting. [sent-58, score-0.289]

22 This framework does not sacrifice the competency in solving the challenging deblurring problem. [sent-59, score-0.528]

23 Given a family of loss functions, based on which graduated non-convexity can be applied, significant edges quickly improve kernel estimates in only a few iterations. [sent-60, score-0.314]

24 This is most beneficial for non-uniform deblurring where intensive computation is needed in each iteration. [sent-61, score-0.481]

25 Other Related Work In non-uniform deblurring, considering 3D camera rotation, Shan et al. [sent-64, score-0.1]

26 [22] solved non-blind deconvolution with a general projective motion model. [sent-69, score-0.31]

27 [12] advocated a hardware solution that physically captures camera rotation. [sent-71, score-0.1]

28 For acceleration, locally uniform approximation was adopted by Harmeling et al. [sent-75, score-0.091]

29 [9], which combines the patch-based model and a global 3D camera motion one. [sent-77, score-0.2]

30 Shock filter is applied to generate sharp edge prediction. [sent-80, score-0.189]

31 In dealing with depth variation, a tree structure was proposed in [26] to hierarchically estimate blur kernels with scale change. [sent-81, score-0.333]

32 Framework We denote by x the latent image and y the blurred observation. [sent-83, score-0.12]

33 The discrete blur model for camera shake can be expressed as y = Σm km Hm x + n, where n denotes noise. [sent-85, score-0.457]

34 Hm is an N × N transformation matrix, which corresponds to either camera rotation or translation for pose m. [sent-90, score-0.231]

35 km denotes the time that camera pose m lasts and is a weight in this function. [sent-91, score-0.1]

36 Eq. (1) models the blurred image as a weighted sum of unblurred images from all camera poses, which approximates the continuous integral of light received at each pixel. [sent-93, score-0.238]
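
As an illustrative sketch of this forward model (not the paper's code), the snippet below sums warped copies of a latent image weighted by km; in-plane translations stand in for the general transformations Hm, and the function name and the use of scipy.ndimage.shift are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import shift

def synthesize_blur(x, poses, k, noise_std=0.0):
    """Discrete blur model sketch: y = sum_m k_m * H_m(x) + noise.

    x     : 2D latent image (float array)
    poses : list of (dy, dx) in-plane translations standing in for H_m
    k     : per-pose weights (the time each pose lasts), summing to 1
    """
    y = np.zeros_like(x, dtype=float)
    for (dy, dx), km in zip(poses, k):
        # H_m x: the latent image seen from pose m (here a pure translation)
        y += km * shift(x, (dy, dx), order=1, mode='nearest')
    if noise_std > 0:
        y += np.random.normal(scale=noise_std, size=x.shape)
    return y

# Example: a 5-pixel horizontal motion blur with uniform pose weights
# poses = [(0, dx) for dx in range(5)]
# k = np.full(5, 1.0 / 5)
# y = synthesize_blur(x, poses, k, noise_std=0.01)
```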

37 Because H can be camera rotation R or translation M, we discuss these two cases. [sent-94, score-0.231]

38 Camera rotation with 3 degrees of freedom (DoF) is generally sufficient to model nonuniform deblurring [14]. [sent-95, score-0.59]

39 Each Rm thus corresponds to a camera rotation pose, sampled in 3D. [sent-96, score-0.164]
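
One common way to realize such sampled rotation poses as image warps, sketched below as an assumption rather than the paper's exact construction, is the homography K R K⁻¹ induced by a small 3-DoF rotation and the camera intrinsics K; all names and the sampling grid are illustrative.

```python
import numpy as np

def rotation_homography(theta, K):
    """Homography induced by a small 3-DoF camera rotation.

    theta : (theta_x, theta_y, theta_z) rotation angles in radians
    K     : 3x3 camera intrinsic matrix
    Returns the 3x3 homography K @ R @ inv(K).
    """
    tx, ty, tz = theta
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(tx), -np.sin(tx)],
                   [0, np.sin(tx),  np.cos(tx)]])
    Ry = np.array([[ np.cos(ty), 0, np.sin(ty)],
                   [0, 1, 0],
                   [-np.sin(ty), 0, np.cos(ty)]])
    Rz = np.array([[np.cos(tz), -np.sin(tz), 0],
                   [np.sin(tz),  np.cos(tz), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx
    return K @ R @ np.linalg.inv(K)

# Example: sample a 3D grid of small rotations around the identity
# K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
# angles = np.deg2rad(np.linspace(-0.5, 0.5, 5))
# poses = [rotation_homography((ax, ay, az), K)
#          for ax in angles for ay in angles for az in angles]
```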

40 Blur with in-plane translation M is referred to as uniform blur. [sent-108, score-0.122]

41 Each Mm is thus a sample in 2D, and the total number of samples is called the kernel size. [sent-109, score-0.146]

42 In camera translation, we similarly substitute Mm for Hm in Eq. [sent-110, score-0.1]

43 where BM and AM are block Toeplitz with Toeplitz blocks (BTTB) matrices, since camera translation is linear translation-invariant (LTI). [sent-113, score-0.234]

44 New Sparsity Function Our framework contains a sparse φ0(·) loss function, which can effectively approximate L0 sparsity during iterative optimization. [sent-114, score-0.167]

45 A few other sparsity-pursuit functions used in deblurring [20, 16] are also plotted in this figure. [sent-127, score-0.521]
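
A minimal sketch of one loss family with this behaviour, assuming the truncated-quadratic form often used to approximate the L0 cost; the specific ε values and the plotting code are illustrative only, not the paper's figure.

```python
import numpy as np
import matplotlib.pyplot as plt

def phi(v, eps):
    """Truncated quadratic approximation of the L0 cost:
    (v/eps)^2 inside [-eps, eps], and 1 outside.
    As eps -> 0 this approaches the indicator |v|_0."""
    return np.where(np.abs(v) < eps, (v / eps) ** 2, 1.0)

v = np.linspace(-1, 1, 1001)
for eps in [1.0, 0.5, 0.25, 0.125]:          # an assumed decreasing family of 4 losses
    plt.plot(v, phi(v, eps), label=f'eps = {eps}')
plt.plot(v, (np.abs(v) > 0).astype(float), 'k--', label='L0')
plt.legend(); plt.xlabel('gradient value'); plt.ylabel('cost')
plt.show()
```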

46 Final Objective φ0 (·) is incorporated in our method to regularize optimization, which seeks an intermediate sparse representation ˜x containing only necessary edges. [sent-129, score-0.119]

47 Our objective to estimate the blur kernel from the input image is min_{x̃,k} { ||Σm km Hm x̃ − y||² + λ φ0(∂∗ x̃) + γ ||k||² }. [sent-130, score-0.411]

48 Hm could be either Rm or Mm, depending on whether the uniform or the non-uniform deblurring problem is solved, as shown in Eqs. [sent-138, score-0.536]

49 φ0(∂∗ x̃) is the new regularization term, which is instrumental in our method for guiding kernel estimation. [sent-153, score-0.22]

50 The unnatural L0 representation computed by our method is the image x̃ produced in iterations to solve Eq. [sent-166, score-0.276]

51 Compared to employing a shock filter [19, 3] as an extra step that cannot fit into the overall function for consistent energy minimization, φ(·) and x̃ are elegantly incorporated into one objective, whose optimization monotonically decreases the energy. [sent-170, score-0.428]

52 Meanwhile, x̃ is not produced by local filtering, which guarantees that it contains only necessary strong edges, regardless of the blur kernels. [sent-171, score-0.265]

53 This subproblem (Eq. (8)) is solved in each iteration t + 1, where the information of kt and x̃t is embedded in the blur matrices Bt and At respectively. [sent-183, score-0.349]

54 By convention, blur kernels are estimated in a coarse-to-fine manner in an image pyramid. [sent-184, score-0.333]
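
The coarse-to-fine convention itself can be sketched generically as below; `estimate_kernel_at_level` is a hypothetical placeholder for one image level of the alternating x̃ / k optimization described in the text, and the pyramid scale and level count are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

def coarse_to_fine_kernel(y, estimate_kernel_at_level, n_levels=5, scale=0.75):
    """Generic coarse-to-fine skeleton for blur kernel estimation.

    estimate_kernel_at_level(y_level, k_init) stands in for one image level
    of the alternating sharp-edge / kernel optimization.
    """
    # Build an image pyramid, coarsest level first.
    pyramid = [zoom(y, scale ** l, order=1) for l in range(n_levels - 1, -1, -1)]
    k = np.zeros((3, 3)); k[1, 1] = 1.0           # start from a delta kernel
    for i, y_level in enumerate(pyramid):
        k = estimate_kernel_at_level(y_level, k)  # per-level solver (user supplied)
        if i + 1 < len(pyramid):
            k = zoom(k, 1.0 / scale, order=1)     # upsample kernel for the finer level
            k = np.clip(k, 0, None)
            k /= k.sum() + 1e-12
    return k
```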

55 We employ a similar scheme that minimizes a family of loss functions. [sent-196, score-0.129]

56 (10) We alternate between computing x̃ and updating lhi and lvi in iterations for each loss function controlled by ε. [sent-233, score-0.366]

57 Update l: Solving for lhi in the function |lhi|0 + (1/ε²)(lhi − ∂h x̃i)² yields a closed-form hard-thresholding solution. [sent-235, score-0.176]
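
Under that form, the minimizer is a hard threshold of the gradient at ε, as the short sketch below illustrates; the forward-difference gradient operators in the usage comment are an arbitrary choice.

```python
import numpy as np

def update_l(grad, eps):
    """Closed-form solution of  min_l |l|_0 + (1/eps^2) * (l - grad)^2 .

    Choosing l = grad costs 1; choosing l = 0 costs (grad/eps)^2,
    so the minimizer is a hard threshold at |grad| = eps.
    """
    return np.where(np.abs(grad) > eps, grad, 0.0)

# Usage on the horizontal / vertical gradients of the current estimate x~:
# gx = np.diff(x_tilde, axis=1, append=x_tilde[:, -1:])   # forward differences
# gy = np.diff(x_tilde, axis=0, append=x_tilde[-1:, :])
# lh, lv = update_l(gx, eps), update_l(gy, eps)
```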

58 It is efficient when dealing with in-plane translational camera motion M, as the matrix-vector multiplication with the BTTB matrix can be performed using FFTs. [sent-246, score-0.234]

59 x̃ = F⁻¹( ( conj(F(B)) F(y) + λ ( conj(F(∂h)) F(lh) + conj(F(∂v)) F(lv) ) ) / ( conj(F(B)) F(B) + λ ( conj(F(∂h)) F(∂h) + conj(F(∂v)) F(∂v) ) ) ), (12) where F(·) is the FFT operator, which takes an image vector or a BTTB kernel matrix as input, and conj(·) denotes the complex conjugate. [sent-249, score-0.146]

60 lh and lv are vectors concatenating all lhi and lvi respectively. [sent-253, score-0.368]
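
A sketch of such an FFT-based x̃ update for the uniform case, assuming the subproblem is the quadratic ||k ⊛ x̃ − y||² + lam·(||∂h x̃ − lh||² + ||∂v x̃ − lv||²) with circular boundary conditions; the single weight `lam` and the small psf2otf helper are simplifications, not the paper's exact formulation.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a PSF to `shape`, circularly centre it, and take its FFT."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    # Shift so the PSF centre sits at the origin (circular convolution).
    otf = np.roll(otf, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(otf)

def update_x(y, k, lh, lv, lam):
    """Closed-form x~ update in the Fourier domain (uniform blur)."""
    K  = psf2otf(k, y.shape)
    Dh = psf2otf(np.array([[1.0, -1.0]]), y.shape)    # horizontal difference filter
    Dv = psf2otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical difference filter
    num = (np.conj(K) * np.fft.fft2(y)
           + lam * (np.conj(Dh) * np.fft.fft2(lh) + np.conj(Dv) * np.fft.fft2(lv)))
    den = np.abs(K) ** 2 + lam * (np.abs(Dh) ** 2 + np.abs(Dv) ** 2) + 1e-12
    return np.real(np.fft.ifft2(num / den))
```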

61 When considering non-uniform blur caused by camera rotation, which is spatially variant, the blur matrix BR is no longer block Toeplitz with Toeplitz blocks (BTTB). [sent-255, score-0.63]

62 In this approximation, each patch has one blur kernel basis ARδ, generated by applying rotation to a special patch containing all black pixels except for a white point at the center. [sent-258, score-0.595]

63 A blur kernel for each patch is then formed as ARδk, i.e., by applying the kernel basis to the pose weight vector k. [sent-259, score-0.471]

64 Denote by Cp(x̃) and Ck(ARδk) the p-th patch from the latent image and its corresponding kernel ARδk respectively. [sent-265, score-0.242]

65 A blurred patch Cp(y) is generated by convolving Ck (ARδk) and Cp( x˜). [sent-266, score-0.144]

66 This model enables a closed-form approximation of x̃ by deconvolving each patch separately and blending the results, written as x̃ = (1/W) · Σp (windowed deconvolution of patch p). [sent-278, score-0.096]

67 1/W is a weight to suppress visual artifacts caused by the window functions [9]. [sent-282, score-0.087]
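
A sketch of the patch-blending step under these assumptions: each patch has already been deconvolved by some per-patch solver, overlapping patches are weighted by a window (a Hann window is an arbitrary choice here), and the result is normalized by the accumulated window weight W.

```python
import numpy as np

def blend_patches(shape, patches, positions, window=None):
    """Blend per-patch deconvolution results into one image.

    patches   : list of 2D arrays (already-deconvolved patches)
    positions : list of (top, left) coordinates of each patch
    window    : optional 2D weighting window; defaults to a Hann window
                of each patch's size.
    Returns (1/W) * sum_p window * patch_p.
    """
    acc = np.zeros(shape)
    W   = np.zeros(shape)
    for patch, (top, left) in zip(patches, positions):
        h, w = patch.shape
        win = window if window is not None else np.outer(np.hanning(h), np.hanning(w))
        acc[top:top + h, left:left + w] += win * patch
        W[top:top + h, left:left + w]   += win
    return acc / np.maximum(W, 1e-8)
```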

68 On the contrary, when the blur is uniform, Eqs. [sent-285, score-0.265]

69 In implementation, we use a family of 4 loss functions with decreasing values of ε. [sent-287, score-0.169]

70 We achieve it by setting the iteration numbers for different loss functions inversely proportional to ε. [sent-299, score-0.111]

71 With the duality of the blur kernel and latent image in convolution, the AM matrix for translational camera motion is BTTB, allowing blur kernel estimation to also find a closed-form solution in the frequency domain [25]. [sent-312, score-1.124]

72 To further reduce iterations, we introduce a parameter α controlling the “step size” [1], so that the update takes the multiplicative form k(n+1) = k(n) · (a correction term modulated by α). [sent-322, score-0.084]
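
The exact update rule is cut off above; purely as an illustration, one accelerated multiplicative update for a non-negative kernel raises the correction ratio to the power α as a "step size". The function below is an assumption in that spirit, not the paper's formula, and the matrix A is only described loosely.

```python
import numpy as np

def kernel_update(k, A, y, alpha=1.5, n_iter=10):
    """Illustrative multiplicative non-negative update for k in ||A k - y||^2,
    with an exponent alpha acting as a 'step size' to speed convergence.

    A : matrix mapping the kernel to the blurred observation (built from the
        current sharp estimate x~), y : observed data (flattened).
    """
    eps = 1e-12
    Aty = A.T @ y
    for _ in range(n_iter):
        ratio = Aty / (A.T @ (A @ k) + eps)
        k = k * np.power(np.maximum(ratio, 0), alpha)   # accelerated multiplicative step
        k = np.maximum(k, 0)
        k /= k.sum() + eps
    return k
```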

73 “Energy” and “Kernel Similarity” plots denote the resulting energy and the kernel similarity to the ground truth over iterations. [sent-327, score-0.23]

74 Our method only needs to perform it 5 times (according to the number t in Algorithm 1) in one image level, compared to the tens or even hundreds of iterations involved in other approaches. [sent-331, score-0.104]

75 In the final step, we restore the natural image by non-blind deconvolution given the final kernel estimate. [sent-335, score-0.324]

76 Image restoration for both the uniform and non-uniform blur is accelerated by FFTs. [sent-338, score-0.398]

77 Discussion Difference to Shock Filter Compared to edge prediction using shock filter and edge thresholding [3, 25], our approach employs Eqs. [sent-340, score-0.534]

78 Eq. (11) achieves theoretically sound gradient thresholding without extra ad-hoc operations. [sent-343, score-0.167]

79 Consequently, our method does not have the edge-location problem inherent in the shock filter when blur kernels are highly non-Gaussian or when the saddle points used by the shock filter do not correspond to latent image edges. [sent-344, score-1.118]

80 Eq. (12) can naturally produce a sparse representation faithful to the input, vastly benefiting motion deblurring. [sent-346, score-0.139]

81 In practice, 5-pass kernel estimation in one image level is enough, compared to hundreds of iterations by variational Bayesian inference [5], and tens of iterations in the methods of [20, 16]. [sent-352, score-0.282]

82 We measure the similarity between the estimated kernels and the ground truth using the maximum correlation, accounting for kernel shift. [sent-354, score-0.214]
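
One way to implement such a shift-tolerant measure, sketched here as an assumption about the details: normalize both kernels and take the maximum normalized cross-correlation over all relative shifts. The exact normalization used in the paper may differ.

```python
import numpy as np
from scipy.signal import correlate2d

def kernel_similarity(k_est, k_gt):
    """Maximum normalized cross-correlation between two kernels,
    searched over all relative shifts (a shifted copy still scores ~1)."""
    a = k_est / (np.linalg.norm(k_est) + 1e-12)
    b = k_gt / (np.linalg.norm(k_gt) + 1e-12)
    return float(correlate2d(a, b, mode='full').max())
```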

83 The plots in Fig. 3 manifest that the rapid convergence does not sacrifice the quality of kernel estimates. [sent-372, score-0.193]

84 The finding is that using the image-space constraint for updating x̃ and the gradient-domain energy for updating the kernel k (middle of Table 1) is better than the other alternatives. [sent-375, score-0.316]

85 We compare the running time of several representative deblurring methods whose implementations are available. [sent-377, score-0.481]

86 The set of [17] contains 32 images of size 255 × 255, blurred with 8 different kernels. [sent-382, score-0.084]

87 We use the provided script and non-blind deconvolution function to generate the results, for fairness. [sent-387, score-0.178]

88 All the 32 kernel estimates from the proposed method are shown in (b). [sent-396, score-0.146]

89 This dataset consists of 4 images, each blurred with 12 kernels, including several large ones. [sent-401, score-0.084]

90 Quantitative evaluation is conducted by comparing each deblurring result with 199 unblurred images captured along the camera motion trajectory and recording the largest PSNR. [sent-404, score-0.765]
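
In sketch form, this protocol reduces to taking the best PSNR over the reference frames; the helper below assumes intensities in [0, 1] and is an illustration of the evaluation, not the benchmark's own script.

```python
import numpy as np

def psnr(img, ref, peak=1.0):
    """Peak signal-to-noise ratio between two images with the given peak value."""
    mse = np.mean((img - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / (mse + 1e-12))

def best_psnr(deblurred, reference_frames):
    """Largest PSNR of the deblurred result against the sharp frames
    captured along the camera motion trajectory."""
    return max(psnr(deblurred, ref) for ref in reference_frames)
```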

91 Note that all top ranking methods [25, 3, 23] use shock filter except ours. [sent-407, score-0.337]

92 Non-uniform Deblurring Our framework is fully applicable to non-uniform deblurring with the model depicted in the paper. [sent-412, score-0.481]

93 Concluding Remarks We have presented a new framework for both uniform and non-uniform motion deblurring, leveraging an unnatural L0 sparse representation to greatly benefit kernel estimation and large-scale optimization. [sent-422, score-0.577]

94 We proposed a unified model, which seeks gradient sparsity close to L0 to remove pernicious small-amplitude structures. [sent-423, score-0.164]

95 The method not only provides a principled understanding of effective motion deblurring strategies, but also notably augments performance based on the new optimization process. [sent-424, score-0.581]

96 Space-variant single-image blind deconvolution for removing camera shake. [sent-492, score-0.284]

97 Recording and playback of camera shake: benchmarking blind deconvolution with a real-world database. [sent-536, score-0.384]

98 Total variation minimizing blind deconvolution with shock filter reference. [sent-570, score-0.621]

99 Rotational motion deblurring of a rigid object from a single image. [sent-584, score-0.581]

100 Richardson-lucy deblurring for scenes under a projective motion path. [sent-592, score-0.613]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('deblurring', 0.481), ('blur', 0.265), ('shock', 0.256), ('unnatural', 0.205), ('deconvolution', 0.178), ('lhi', 0.176), ('bttb', 0.165), ('kernel', 0.146), ('toeplitz', 0.132), ('whyte', 0.117), ('hirsch', 0.109), ('zi', 0.106), ('blind', 0.106), ('ar', 0.104), ('camera', 0.1), ('motion', 0.1), ('harmeling', 0.09), ('cp', 0.086), ('blurred', 0.084), ('kt', 0.084), ('lvi', 0.082), ('filter', 0.081), ('xu', 0.076), ('hm', 0.076), ('edge', 0.075), ('regularization', 0.074), ('krishnan', 0.073), ('joshi', 0.073), ('shan', 0.071), ('iterations', 0.071), ('loss', 0.071), ('lh', 0.07), ('kernels', 0.068), ('translation', 0.067), ('kmhm', 0.066), ('kmrmx', 0.066), ('psnrs', 0.066), ('rotation', 0.064), ('patch', 0.06), ('cho', 0.059), ('colm', 0.059), ('ffts', 0.059), ('family', 0.058), ('sparsity', 0.057), ('ck', 0.056), ('shake', 0.056), ('uniform', 0.055), ('unblurred', 0.054), ('energy', 0.05), ('shaken', 0.049), ('jia', 0.049), ('intermediate', 0.048), ('updating', 0.048), ('thresholding', 0.047), ('sacrifice', 0.047), ('suppress', 0.047), ('pages', 0.046), ('restoration', 0.046), ('nonuniform', 0.045), ('levin', 0.044), ('extra', 0.041), ('salient', 0.041), ('sound', 0.04), ('sch', 0.04), ('functions', 0.04), ('lv', 0.04), ('gradient', 0.039), ('graduate', 0.039), ('sparse', 0.039), ('fidelity', 0.037), ('expressed', 0.036), ('remove', 0.036), ('latent', 0.036), ('extensively', 0.036), ('filtering', 0.036), ('approximation', 0.036), ('bm', 0.036), ('mm', 0.035), ('multiplication', 0.035), ('tai', 0.035), ('translational', 0.034), ('zitnick', 0.034), ('plots', 0.034), ('br', 0.034), ('fast', 0.034), ('acceleration', 0.033), ('tens', 0.033), ('sharp', 0.033), ('update', 0.033), ('projective', 0.032), ('estimation', 0.032), ('mathematically', 0.032), ('seeks', 0.032), ('accelerated', 0.032), ('project', 0.031), ('durand', 0.031), ('hong', 0.031), ('kong', 0.031), ('bt', 0.031), ('recording', 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 449 cvpr-2013-Unnatural L0 Sparse Representation for Natural Image Deblurring

Author: Li Xu, Shicheng Zheng, Jiaya Jia

Abstract: We show in this paper that the success of previous maximum a posterior (MAP) based blur removal methods partly stems from their respective intermediate steps, which implicitly or explicitly create an unnatural representation containing salient image structures. We propose a generalized and mathematically sound L0 sparse expression, together with a new effective method, for motion deblurring. Our system does not require extra filtering during optimization and demonstrates fast energy decreasing, making a small number of iterations enough for convergence. It also provides a unified framework for both uniform and non-uniform motion deblurring. We extensively validate our method and show comparison with other approaches with respect to convergence speed, running time, and result quality.

2 0.49294853 131 cvpr-2013-Discriminative Non-blind Deblurring

Author: Uwe Schmidt, Carsten Rother, Sebastian Nowozin, Jeremy Jancsary, Stefan Roth

Abstract: Non-blind deblurring is an integral component of blind approaches for removing image blur due to camera shake. Even though learning-based deblurring methods exist, they have been limited to the generative case and are computationally expensive. To this date, manually-defined models are thus most widely used, though limiting the attained restoration quality. We address this gap by proposing a discriminative approach for non-blind deblurring. One key challenge is that the blur kernel in use at test time is not known in advance. To address this, we analyze existing approaches that use half-quadratic regularization. From this analysis, we derive a discriminative model cascade for image deblurring. Our cascade model consists of a Gaussian CRF at each stage, based on the recently introduced regression tree fields. We train our model by loss minimization and use synthetically generated blur kernels to generate training data. Our experiments show that the proposed approach is efficient and yields state-of-the-art restoration quality on images corrupted with synthetic and real blur.

3 0.48806381 265 cvpr-2013-Learning to Estimate and Remove Non-uniform Image Blur

Author: Florent Couzinié-Devy, Jian Sun, Karteek Alahari, Jean Ponce

Abstract: This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multilabel energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa ’s method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti et al. [5].

4 0.43363908 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera

Author: Hee Seok Lee, Kuoung Mu Lee

Abstract: Motion blur frequently occurs in dense 3D reconstruction using a single moving camera, and it degrades the quality of the 3D reconstruction. To handle motion blur caused by rapid camera shakes, we propose a blur-aware depth reconstruction method, which utilizes a pixel correspondence that is obtained by considering the effect of motion blur. Motion blur is dependent on 3D geometry, thus parameterizing blurred appearance of images with scene depth given camera motion is possible and a depth map can be accurately estimated from the blur-considered pixel correspondence. The estimated depth is then converted intopixel-wise blur kernels, and non-uniform motion blur is easily removed with low computational cost. The obtained blur kernel is depth-dependent, thus it effectively addresses scene-depth variation, which is a challenging problem in conventional non-uniform deblurring methods.

5 0.41477334 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior

Author: Haichao Zhang, David Wipf, Yanning Zhang

Abstract: This paper presents a robust algorithm for estimating a single latent sharp image given multiple blurry and/or noisy observations. The underlying multi-image blind deconvolution problem is solved by linking all of the observations together via a Bayesian-inspired penalty function which couples the unknown latent image, blur kernels, and noise levels together in a unique way. This coupled penalty function enjoys a number of desirable properties, including a mechanism whereby the relative-concavity or shape is adapted as a function of the intrinsic quality of each blurry observation. In this way, higher quality observations may automatically contribute more to the final estimate than heavily degraded ones. The resulting algorithm, which requires no essential tuning parameters, can recover a high quality image from a set of observations containing potentially both blurry and noisy examples, without knowing a priorithe degradation type of each observation. Experimental results on both synthetic and real-world test images clearly demonstrate the efficacy of the proposed method.

6 0.36939919 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters

7 0.34464976 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

8 0.2750352 412 cvpr-2013-Stochastic Deconvolution

9 0.20540941 68 cvpr-2013-Blur Processing Using Double Discrete Wavelet Transform

10 0.16145857 17 cvpr-2013-A Machine Learning Approach for Non-blind Image Deconvolution

11 0.1308137 65 cvpr-2013-Blind Deconvolution of Widefield Fluorescence Microscopic Data by Regularization of the Optical Transfer Function (OTF)

12 0.098513685 397 cvpr-2013-Simultaneous Super-Resolution of Depth and Images Using a Single Camera

13 0.092490941 419 cvpr-2013-Subspace Interpolation via Dictionary Learning for Unsupervised Domain Adaptation

14 0.086081661 312 cvpr-2013-On a Link Between Kernel Mean Maps and Fraunhofer Diffraction, with an Application to Super-Resolution Beyond the Diffraction Limit

15 0.081630111 124 cvpr-2013-Determining Motion Directly from Normal Flows Upon the Use of a Spherical Eye Platform

16 0.080904536 345 cvpr-2013-Real-Time Model-Based Rigid Object Pose Estimation and Tracking Combining Dense and Sparse Visual Cues

17 0.080516569 427 cvpr-2013-Texture Enhanced Image Denoising via Gradient Histogram Preservation

18 0.080113985 244 cvpr-2013-Large Displacement Optical Flow from Nearest Neighbor Fields

19 0.07693965 421 cvpr-2013-Supervised Kernel Descriptors for Visual Recognition

20 0.072440729 41 cvpr-2013-An Iterated L1 Algorithm for Non-smooth Non-convex Optimization in Computer Vision


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.209), (1, 0.206), (2, -0.049), (3, 0.177), (4, -0.162), (5, 0.54), (6, 0.093), (7, -0.057), (8, 0.028), (9, 0.012), (10, -0.003), (11, 0.009), (12, -0.009), (13, -0.015), (14, -0.011), (15, -0.004), (16, 0.091), (17, 0.002), (18, -0.003), (19, 0.009), (20, 0.001), (21, 0.017), (22, -0.006), (23, 0.046), (24, -0.013), (25, -0.011), (26, 0.005), (27, 0.015), (28, 0.006), (29, 0.023), (30, 0.026), (31, 0.016), (32, -0.034), (33, -0.005), (34, 0.012), (35, -0.014), (36, 0.023), (37, 0.009), (38, 0.013), (39, -0.017), (40, 0.023), (41, 0.027), (42, -0.001), (43, 0.005), (44, 0.032), (45, 0.006), (46, 0.016), (47, -0.012), (48, -0.013), (49, 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.97250777 265 cvpr-2013-Learning to Estimate and Remove Non-uniform Image Blur

Author: Florent Couzinié-Devy, Jian Sun, Karteek Alahari, Jean Ponce

Abstract: This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multilabel energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa ’s method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti et al. [5].

2 0.9553588 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior

Author: Haichao Zhang, David Wipf, Yanning Zhang

Abstract: This paper presents a robust algorithm for estimating a single latent sharp image given multiple blurry and/or noisy observations. The underlying multi-image blind deconvolution problem is solved by linking all of the observations together via a Bayesian-inspired penalty function which couples the unknown latent image, blur kernels, and noise levels together in a unique way. This coupled penalty function enjoys a number of desirable properties, including a mechanism whereby the relative-concavity or shape is adapted as a function of the intrinsic quality of each blurry observation. In this way, higher quality observations may automatically contribute more to the final estimate than heavily degraded ones. The resulting algorithm, which requires no essential tuning parameters, can recover a high quality image from a set of observations containing potentially both blurry and noisy examples, without knowing a priorithe degradation type of each observation. Experimental results on both synthetic and real-world test images clearly demonstrate the efficacy of the proposed method.

3 0.93803614 68 cvpr-2013-Blur Processing Using Double Discrete Wavelet Transform

Author: Yi Zhang, Keigo Hirakawa

Abstract: We propose a notion of double discrete wavelet transform (DDWT) that is designed to sparsify the blurred image and the blur kernel simultaneously. DDWT greatly enhances our ability to analyze, detect, and process blur kernels and blurry images—the proposed framework handles both global and spatially varying blur kernels seamlessly, and unifies the treatment of blur caused by object motion, optical defocus, and camera shake. To illustrate the potential of DDWT in computer vision and image processing, we develop example applications in blur kernel estimation, deblurring, and near-blur-invariant image feature extraction.

4 0.93166023 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters

Author: Lin Zhong, Sunghyun Cho, Dimitris Metaxas, Sylvain Paris, Jue Wang

Abstract: State-of-the-art single image deblurring techniques are sensitive to image noise. Even a small amount of noise, which is inevitable in low-light conditions, can degrade the quality of blur kernel estimation dramatically. The recent approach of Tai and Lin [17] tries to iteratively denoise and deblur a blurry and noisy image. However, as we show in this work, directly applying image denoising methods often partially damages the blur information that is extracted from the input image, leading to biased kernel estimation. We propose a new method for handling noise in blind image deconvolution based on new theoretical and practical insights. Our key observation is that applying a directional low-pass filter to the input image greatly reduces the noise level, while preserving the blur information in the orthogonal direction to the filter. Based on this observation, our method applies a series of directional filters at different orientations to the input image, and estimates an accurate Radon transform of the blur kernel from each filtered image. Finally, we reconstruct the blur kernel using inverse Radon transform. Experimental results on synthetic and real data show that our algorithm achieves higher quality results than previous approaches on blurry and noisy images. 1

same-paper 5 0.9316563 449 cvpr-2013-Unnatural L0 Sparse Representation for Natural Image Deblurring

Author: Li Xu, Shicheng Zheng, Jiaya Jia

Abstract: We show in this paper that the success of previous maximum a posterior (MAP) based blur removal methods partly stems from their respective intermediate steps, which implicitly or explicitly create an unnatural representation containing salient image structures. We propose a generalized and mathematically sound L0 sparse expression, together with a new effective method, for motion deblurring. Our system does not require extra filtering during optimization and demonstrates fast energy decreasing, making a small number of iterations enough for convergence. It also provides a unified framework for both uniform and non-uniform motion deblurring. We extensively validate our method and show comparison with other approaches with respect to convergence speed, running time, and result quality.

6 0.91943222 131 cvpr-2013-Discriminative Non-blind Deblurring

7 0.82341123 412 cvpr-2013-Stochastic Deconvolution

8 0.79497051 17 cvpr-2013-A Machine Learning Approach for Non-blind Image Deconvolution

9 0.78560889 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

10 0.77171481 65 cvpr-2013-Blind Deconvolution of Widefield Fluorescence Microscopic Data by Regularization of the Optical Transfer Function (OTF)

11 0.70714045 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera

12 0.44367442 312 cvpr-2013-On a Link Between Kernel Mean Maps and Fraunhofer Diffraction, with an Application to Super-Resolution Beyond the Diffraction Limit

13 0.40357319 427 cvpr-2013-Texture Enhanced Image Denoising via Gradient Histogram Preservation

14 0.38816118 266 cvpr-2013-Learning without Human Scores for Blind Image Quality Assessment

15 0.35776368 195 cvpr-2013-HDR Deghosting: How to Deal with Saturation?

16 0.34591076 346 cvpr-2013-Real-Time No-Reference Image Quality Assessment Based on Filter Learning

17 0.33236507 35 cvpr-2013-Adaptive Compressed Tomography Sensing

18 0.3232514 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

19 0.32164136 176 cvpr-2013-Five Shades of Grey for Fast and Reliable Camera Pose Estimation

20 0.30298677 41 cvpr-2013-An Iterated L1 Algorithm for Non-smooth Non-convex Optimization in Computer Vision


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.24), (16, 0.043), (26, 0.026), (28, 0.011), (33, 0.263), (67, 0.058), (69, 0.037), (87, 0.072), (93, 0.182)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.90372658 208 cvpr-2013-Hyperbolic Harmonic Mapping for Constrained Brain Surface Registration

Author: Rui Shi, Wei Zeng, Zhengyu Su, Hanna Damasio, Zhonglin Lu, Yalin Wang, Shing-Tung Yau, Xianfeng Gu

Abstract: Automatic computation of surface correspondence via harmonic map is an active research field in computer vision, computer graphics and computational geometry. It may help document and understand physical and biological phenomena and also has broad applications in biometrics, medical imaging and motion capture. Although numerous studies have been devoted to harmonic map research, limited progress has been made to compute a diffeomorphic harmonic map on general topology surfaces with landmark constraints. This work conquer this problem by changing the Riemannian metric on the target surface to a hyperbolic metric, so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints. The computational algorithms are based on the Ricci flow method and the method is general and robust. We apply our algorithm to study constrained human brain surface registration problem. Experimental results demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic, and achieve relative high performance when evaluated with some popular cortical surface registration evaluation standards.

2 0.88789839 154 cvpr-2013-Explicit Occlusion Modeling for 3D Object Class Representations

Author: M. Zeeshan Zia, Michael Stark, Konrad Schindler

Abstract: Despite the success of current state-of-the-art object class detectors, severe occlusion remains a major challenge. This is particularly true for more geometrically expressive 3D object class representations. While these representations have attracted renewed interest for precise object pose estimation, the focus has mostly been on rather clean datasets, where occlusion is not an issue. In this paper, we tackle the challenge of modeling occlusion in the context of a 3D geometric object class model that is capable of fine-grained, part-level 3D object reconstruction. Following the intuition that 3D modeling should facilitate occlusion reasoning, we design an explicit representation of likely geometric occlusion patterns. Robustness is achieved by pooling image evidence from of a set of fixed part detectors as well as a non-parametric representation of part configurations in the spirit of poselets. We confirm the potential of our method on cars in a newly collected data set of inner-city street scenes with varying levels of occlusion, and demonstrate superior performance in occlusion estimation and part localization, compared to baselines that are unaware of occlusions.

3 0.88697678 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello

Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we dai s .unive . it ence points in the scene with their projections on the image plane [5]. Unfortunately, no real camera behaves exactly like an ideal pinhole. In fact, in most cases, at least the distortion effects introduced by the lens should be accounted for [19]. Any pinhole-based model, regardless of its level of sophistication, is geometrically unable to properly describe cameras exhibiting a frustum angle that is near or above 180 degrees. For wide-angle cameras, several different para- metric models have been proposed. Some of them try to modify the captured image in order to follow the original propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasipinhole central cameras is supported by an extensive experimental validation.

same-paper 4 0.88684547 449 cvpr-2013-Unnatural L0 Sparse Representation for Natural Image Deblurring

Author: Li Xu, Shicheng Zheng, Jiaya Jia

Abstract: We show in this paper that the success of previous maximum a posterior (MAP) based blur removal methods partly stems from their respective intermediate steps, which implicitly or explicitly create an unnatural representation containing salient image structures. We propose a generalized and mathematically sound L0 sparse expression, together with a new effective method, for motion deblurring. Our system does not require extra filtering during optimization and demonstrates fast energy decreasing, making a small number of iterations enough for convergence. It also provides a unified framework for both uniform and non-uniform motion deblurring. We extensively validate our method and show comparison with other approaches with respect to convergence speed, running time, and result quality.

5 0.88640666 186 cvpr-2013-GeoF: Geodesic Forests for Learning Coupled Predictors

Author: Peter Kontschieder, Pushmeet Kohli, Jamie Shotton, Antonio Criminisi

Abstract: Conventional decision forest based methods for image labelling tasks like object segmentation make predictions for each variable (pixel) independently [3, 5, 8]. This prevents them from enforcing dependencies between variables and translates into locally inconsistent pixel labellings. Random field models, instead, encourage spatial consistency of labels at increased computational expense. This paper presents a new and efficient forest based model that achieves spatially consistent semantic image segmentation by encoding variable dependencies directly in the feature space the forests operate on. Such correlations are captured via new long-range, soft connectivity features, computed via generalized geodesic distance transforms. Our model can be thought of as a generalization of the successful Semantic Texton Forest, Auto-Context, and Entangled Forest models. A second contribution is to show the connection between the typical Conditional Random Field (CRF) energy and the forest training objective. This analysis yields a new objective for training decision forests that encourages more accurate structured prediction. Our GeoF model is validated quantitatively on the task of semantic image segmentation, on four challenging and very diverse image datasets. GeoF outperforms both stateof-the-art forest models and the conventional pairwise CRF.

6 0.88589102 90 cvpr-2013-Computing Diffeomorphic Paths for Large Motion Interpolation

7 0.88465333 386 cvpr-2013-Self-Paced Learning for Long-Term Tracking

8 0.8816151 458 cvpr-2013-Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds

9 0.88001138 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

10 0.87535071 3 cvpr-2013-3D R Transform on Spatio-temporal Interest Points for Action Recognition

11 0.87528181 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior

12 0.87484056 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters

13 0.87482345 462 cvpr-2013-Weakly Supervised Learning of Mid-Level Features with Beta-Bernoulli Process Restricted Boltzmann Machines

14 0.86894953 324 cvpr-2013-Part-Based Visual Tracking with Online Latent Structural Learning

15 0.86254716 314 cvpr-2013-Online Object Tracking: A Benchmark

16 0.86151904 193 cvpr-2013-Graph Transduction Learning with Connectivity Constraints with Application to Multiple Foreground Cosegmentation

17 0.85951471 414 cvpr-2013-Structure Preserving Object Tracking

18 0.85938692 131 cvpr-2013-Discriminative Non-blind Deblurring

19 0.85681987 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking

20 0.85401636 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems