cvpr cvpr2013 cvpr2013-295 knowledge-graph by maker-knowledge-mining

295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior


Source: pdf

Author: Haichao Zhang, David Wipf, Yanning Zhang

Abstract: This paper presents a robust algorithm for estimating a single latent sharp image given multiple blurry and/or noisy observations. The underlying multi-image blind deconvolution problem is solved by linking all of the observations together via a Bayesian-inspired penalty function which couples the unknown latent image, blur kernels, and noise levels together in a unique way. This coupled penalty function enjoys a number of desirable properties, including a mechanism whereby the relative-concavity or shape is adapted as a function of the intrinsic quality of each blurry observation. In this way, higher quality observations may automatically contribute more to the final estimate than heavily degraded ones. The resulting algorithm, which requires no essential tuning parameters, can recover a high quality image from a set of observations containing potentially both blurry and noisy examples, without knowing a priori the degradation type of each observation. Experimental results on both synthetic and real-world test images clearly demonstrate the efficacy of the proposed method.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract This paper presents a robust algorithm for estimating a single latent sharp image given multiple blurry and/or noisy observations. [sent-4, score-0.581]

2 The underlying multi-image blind deconvolution problem is solved by linking all of the observations together via a Bayesian-inspired penalty function which couples the unknown latent image, blur kernels, and noise levels together in a unique way. [sent-5, score-1.332]

3 This coupled penalty function enjoys a number of desirable properties, including a mechanism whereby the relative-concavity or shape is adapted as a function of the intrinsic quality of each blurry observation. [sent-6, score-0.713]

4 In this way, higher quality observations may automatically contribute more to the final estimate than heavily degraded ones. [sent-7, score-0.229]

5 The resulting algorithm, which requires no essential tuning parameters, can recover a high quality image from a set of observations containing potentially both blurry and noisy examples, without knowing a priori the degradation type of each observation. [sent-8, score-0.648]

6 A typical factor causing blur is the relative motion between camera and scene during the exposure period, which may arise from hand jitter [6, 14]. [sent-14, score-0.354]

7 Multi-image blind deconvolution algorithms are designed to jointly utilize all available observations to produce a single sharp estimate of the underlying scene. [sent-15, score-0.761]

8 Given 퐿 corrupted versions of a latent sharp image x, the uniform convolutional blur model [6] assumes the observation model [sent-16, score-0.513]

9 y푙 = k푙 ∗ x + n푙, 푙 ∈ {1, ⋅⋅⋅ , 퐿}, (1) where k푙 is a Point Spread Function (PSF) or blur kernel, ∗ denotes the convolution operator, and n푙 is a zero-mean Gaussian noise term with covariance 휆푙I. [sent-22, score-0.305]

10 Within this context, the ultimate goal of multi-image blind deblurring is to estimate the sharp and clean image x given only the blurry and noisy observations {y푙}_{푙=1}^{퐿}, without any prior knowledge regarding the unknown kernels k푙 or noise levels 휆푙 (see Figure 1). [sent-23, score-1.526]
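
The following is a minimal Python sketch (not the authors' code) of the observation model (1): each view y푙 is the latent image x convolved with its own kernel k푙 plus zero-mean Gaussian noise of variance 휆푙. The kernel shapes, image size, and noise levels below are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_observations(x, kernels, noise_vars, rng=None):
    """Generate y_l = k_l * x + n_l for each (k_l, lambda_l), mimicking model (1)."""
    rng = np.random.default_rng() if rng is None else rng
    observations = []
    for k, lam in zip(kernels, noise_vars):
        k = k / k.sum()                                    # PSFs normalized to sum to one
        y = fftconvolve(x, k, mode='same')                 # k_l * x (uniform convolutional blur)
        y += rng.normal(0.0, np.sqrt(lam), size=y.shape)   # n_l ~ N(0, lambda_l * I)
        observations.append(y)
    return observations

# Example: one heavily blurred view (box kernel) and one sharp but noisy view (delta kernel).
x = np.random.rand(64, 64)
box = np.ones((7, 7))
delta = np.zeros((7, 7)); delta[3, 3] = 1.0
ys = simulate_observations(x, [box, delta], noise_vars=[1e-4, 1e-2])
```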

11 By combining the complementary information from multiple images, it is often possible to generate higher quality estimates of the scene x than in the single-image blind deconvolution case [12]. [sent-24, score-0.633]

12 While a number of successful multi-image blind deconvolution methods exist, e. [sent-25, score-0.533]

13 In this context, we present a principled energy-minimization algorithm that can handle a flexible number of degraded observations without requiring that we know the nature (e.g., blurry or noisy) of each observation. [sent-28, score-0.209]

14 The underlying cost function relies on a coupled penalty function, which combines the latent sharp image estimate with a separate blur kernel and noise variance associated with each observed image. [sent-32, score-0.906]

15 Theoretical analysis reveals that this penalty provides a useful agency for balancing the effects of observations with varying quality, while at the same time avoiding suboptimal local minima. [sent-33, score-0.323]

16 Additionally, when only a single observation is present, the method reduces to a principled, single-image blind deconvolution algorithm with an image penalty that adaptively interpolates between the ℓ0 and ℓ1 norms. [sent-35, score-0.796]

17 Section 2 briefly reviews existing multi-image blind deconvolution algorithms; we then introduce our alternative algorithm in Section 3. [sent-38, score-0.533]

18 Theoretical properties and analysis related to the proposed coupled penalty function are presented in Section 4, followed by empirical comparisons in Section 5. [sent-39, score-0.284]

19 Related Work. Blind deblurring with a single image has been an active field, with many new methods based on different sparse image priors emerging recently [6, 14, 7, 9, 20, 2]. [sent-41, score-0.376]

20 In contrast, Rav-Acha and Peleg use two motion-blurred images with different blur directions and show that the restoration quality is superior to that obtained using only a single image [12]. [sent-42, score-0.491]

21 Since this work, many other multi-image blind deblurring algorithms have been developed [4, 3, 13, 22]. [sent-43, score-0.67]

22 Most of these assume that only two blurry observations {y1, y2} are present. [sent-44, score-0.449]

23 In addition to the standard regularizers common to single-image blind deconvolution algorithms, a ‘cross-blur’ penalty function given by 퐸(k1, k2) = ∥y1 ∗ k2 − y2 ∗ k1∥22 (2) is often included [4, 13]. [sent-45, score-0.601]

24 The rationale here is that, given the convolutional model from (1), 퐸(k1 , k2) should be nearly zero if the noise levels are low and the correct kernels have been estimated. [sent-46, score-0.28]

25 This penalty also implicitly relies on the coprimeness assumption, meaning that the blur kernels can only share a scalar constant [13]. [sent-47, score-0.675]
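
As a point of reference, here is a minimal sketch of evaluating the ‘cross-blur’ penalty (2) used by these two-image methods; it is small when the kernels are correct and the noise is low, up to the coprimeness ambiguity noted above. This is an illustrative evaluation only, not the implementation of [4, 13].

```python
import numpy as np
from scipy.signal import fftconvolve

def cross_blur_penalty(y1, y2, k1, k2):
    """E(k1, k2) = || y1 * k2 - y2 * k1 ||_2^2 from equation (2)."""
    r = fftconvolve(y1, k2, mode='same') - fftconvolve(y2, k1, mode='same')
    return float(np.sum(r ** 2))
```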

26 Once the unknown kernels are estimated, the sharp image x may be recovered using a separate non-blind step if necessary. [sent-48, score-0.306]

27 One reason is that if the noise level is relatively high, it can dominate the minimization of 퐸(k1, k2), leading to kernel estimates that are themselves blurry, which may then produce either ringing artifacts or a loss of detail in the deblurred image [22]. [sent-50, score-0.35]

28 To mitigate some of these problems, a sparse penalty on the blur kernel may be integrated into the estimation objective directly [4, 13] or applied via post-processing [22]. [sent-53, score-0.642]

29 modified (2) and incorporated a sparsity-promoting kernel prior based on a rectified ℓ1-norm [13]. [sent-57, score-0.154]

30 The blur kernels are first estimated using [13] with (2) incorporated. [sent-60, score-0.439]

31 For this purpose, the kernels {kˆ1, kˆ2} to be refined are treated as two blurry images whose sharp analogues are produced by minimizing 퐸(k1, k2, h) = ∑_{푙=1}^{2} ∥k푙 ∗ h − kˆ푙∥22 + 훼 ∑_{푙=1}^{2} ∥k푙∥푝푝 (4) over k1, k2, and h, with 푝 ≤ 1 producing a sparse ℓ푝 norm over the kernels. [sent-62, score-0.55]
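
A minimal sketch of evaluating the kernel-refinement objective (4) follows; the weights alpha and p are illustrative assumptions (the text only requires 푝 ≤ 1), and a full implementation would minimize this objective over k1, k2, and h rather than merely evaluate it.

```python
import numpy as np
from scipy.signal import fftconvolve

def refinement_objective(k_list, h, k_hat_list, alpha=1e-2, p=0.8):
    """Data term sum_l ||k_l * h - k_hat_l||_2^2 plus a sparse l_p regularizer on the kernels."""
    data = sum(np.sum((fftconvolve(k, h, mode='same') - k_hat) ** 2)
               for k, k_hat in zip(k_list, k_hat_list))
    sparsity = alpha * sum(np.sum(np.abs(k) ** p) for k in k_list)
    return data + sparsity
```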

32 Although these approaches are all effective to some extent, the sparsity level of the blur kernels may require tuning. [sent-64, score-0.499]

33 While using multiple images generally has the potential to outperform the single-image methods by fusing complementary information [12, 18, 4, 13], a principled approach that applies across a wide range of scenarios with little user-involvement or parameter tuning is still somewhat lacking. [sent-66, score-0.147]

34 Our algorithm, which applies to any number of noisy and/or blurry images without explicit trade-off parameters, is one attempt to fill this void. [sent-67, score-0.465]

35 Because convolution is a commutative operator, the blur kernels are unaltered. [sent-70, score-0.443]

36 In this regard, it is well-known that the gradients of sharp natural images tend to exhibit sparsity [10, 6, 9], meaning that many elements equal (or nearly equal) zero, while a few values remain large. [sent-75, score-0.285]
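
The gradient-sparsity observation is easy to verify numerically; the sketch below counts the fraction of near-zero derivative values in an image (the threshold is an arbitrary illustrative choice, not a value from the paper).

```python
import numpy as np

def gradient_sparsity(img, thresh=0.02):
    """Fraction of first-difference entries whose magnitude falls below a small threshold."""
    gx = np.diff(img, axis=1)   # horizontal differences
    gy = np.diff(img, axis=0)   # vertical differences
    g = np.concatenate([gx.ravel(), gy.ravel()])
    return float(np.mean(np.abs(g) < thresh))
```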

37 The basic idea, which naturally extends to the blind deconvolution problem, is to first integrate out x, and then optimize over k, 휸, as well as the noise level 휆. [sent-85, score-0.586]

38 The final latent sharp image can then be recovered using the estimated kernel and noise level with standard non-blind deblurring algorithms. [sent-86, score-0.775]

39 This leads to the simplified penalty function 픤(x, k, 휆) = min_{휸≥0} ∑푖 [푥푖2/훾푖 + log(휆 + 훾푖∥k¯∥22)], (8) where ∥k¯∥22 ≜ ∑푗 푘푗2 퐼¯푗푖 and I¯ is an indicator matrix whose 푗-th row records the (column) positions where the 푗-th element of k appears in H. [sent-91, score-0.218]
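
The sketch below evaluates the coupled penalty (8) by brute force: the inner minimization over each 훾푖 is carried out on a log-spaced grid, purely to illustrate the penalty's value and shape. The paper itself uses variational upper-bound updates rather than grid search, so treat this as a conceptual aid only.

```python
import numpy as np

def coupled_penalty(x, k_bar_sq, lam, gammas=np.logspace(-8, 4, 2000)):
    """Approximate g(x, k, lambda) = sum_i min_{gamma_i >= 0} [ x_i^2 / gamma_i
    + log(lambda + gamma_i * ||k_bar||^2) ] via a grid search over gamma."""
    x = np.ravel(x)
    obj = (x[:, None] ** 2) / gammas[None, :] + np.log(lam + gammas[None, :] * k_bar_sq)
    return float(np.sum(obj.min(axis=1)))
```

Evaluating this function for decreasing noise levels (or larger kernel norms) shows the penalty becoming increasingly concave in |x|, which mirrors the shape-adaptation behavior analyzed in Section 4 below.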

40 Assuming that all observations y푙 are blurry and/or noisy measurements of the same underlying image x, we may justifiably postulate that 휸 is shared across all 푙. [sent-95, score-0.535]

41 This then leads to the revised, multi-image optimization problem min_{x, {k푙, 휆푙≥0}} ∑_{푙=1}^{퐿} (1/휆푙)∥y푙 − k푙 ∗ x∥22 + 픤(x, {k푙, 휆푙}), (9) where the multi-image penalty function is now naturally defined as 픤(x, {k푙, 휆푙}) ≜ min_{휸≥0} ∑_{푙=1}^{퐿} ∑_{푖=1}^{푚} [푥푖2/훾푖 + log(휆푙 + 훾푖∥k¯푙∥22)]. (10) [sent-96, score-0.195]

42 The input can be a set of blurry or noisy observations without specifying the degradation type of each example; the algorithm will automatically estimate the blur kernel and the noise level for each one. [sent-104, score-1.015]

43 We note that in the case of a single observation, the proposed method reduces to a robust single image blind deblurring model. [sent-105, score-0.67]

44 The penalty function 픤 couples the latent image, blur kernels, and noise levels in a principled way. [sent-106, score-0.632]

45 This leads to a number of interesting properties, including an inherent mechanism for scoring the relative quality of each observed image during the recovery process, and using this score to adaptively adjust the sparsity of the image regularizer. [sent-107, score-0.147]

46 Input: blurry images {y푙}_{푙=1}^{퐿}. Initialize: blur kernels {k푙}, noise levels {휆푙}. While the stopping criteria are not satisfied, do ∙ Update x: x ← [∑_{푙=1}^{퐿} H푙푇H푙/휆푙 + Γ−1]−1 ∑_{푙=1}^{퐿} H푙푇y푙/휆푙, where H푙 is the convolution matrix of k푙. [sent-111, score-0.432]
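
Below is a minimal 1D sketch (not the authors' implementation) of the x-update step shown above: x ← [∑푙 H푙푇H푙/휆푙 + Γ−1]−1 ∑푙 H푙푇y푙/휆푙. It builds dense convolution matrices for clarity; a practical implementation would instead use FFTs or conjugate gradients for large images.

```python
import numpy as np

def conv_matrix(k, n):
    """Dense 'same'-size convolution matrix H with H @ x == np.convolve(x, k, mode='same')."""
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        H[:, j] = np.convolve(e, k, mode='same')
    return H

def update_x(ys, kernels, lams, gamma):
    """Solve [sum_l H_l^T H_l / lam_l + Gamma^{-1}] x = sum_l H_l^T y_l / lam_l."""
    n = len(ys[0])
    A = np.diag(1.0 / np.maximum(gamma, 1e-12))   # Gamma^{-1}, with a small floor for stability
    b = np.zeros(n)
    for y, k, lam in zip(ys, kernels, lams):
        H = conv_matrix(k, n)
        A += H.T @ H / lam
        b += H.T @ y / lam
    return np.linalg.solve(A, b)
```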

47 Penalty Function Analysis. This section will examine theoretical properties of the penalty function (10). [sent-114, score-0.259]

48 Then, by noting the separability across pixels, (10) can be re-expressed as 픤(x, {k푙, 휆푙}) = ∑_{푖=1}^{푚} ℎ(푥푖, 흆) + 푚 ∑_{푙=1}^{퐿} log∥k¯푙∥22, (12) which partitions the image and kernel penalties into a more familiar form. [sent-121, score-0.192]

49 The second term in (12) is similar to many common kernel penalties in the literature, and we will not consider it further here. [sent-122, score-0.192]

50 However, the image penalty ℎ(푥, 흆) is unique, and we evaluate some of its relevant properties via two theorems below, followed by further discussion and analysis. [sent-123, score-0.225]

51 Theorem 1 (Concavity) The penalty function ℎ(푥, 흆) is a concave, non-decreasing function of ∣푥∣. [sent-125, score-0.231]

52 Theorem 1 explicitly stipulates that a strong, sparsity-promoting penalty on x is produced by our framework, since concavity with respect to coefficient magnitudes is a well-known, signature property of sparse penalties [16]. [sent-127, score-0.529]

53 For this purpose we must look deeper and examine how 흆 modulates the effective penalty on x. [sent-131, score-0.24]

54 Theorem 2 (Relative Sparsity) The penalty function ℎ(푥, 흆) is such that: 1. [sent-138, score-0.195]

55 In both cases, the penalty function shape becomes nearly convex, which is highly desirable because it avoids suboptimal local minima. (Footnote 3: for a given value of ∑푖 푘푙푖, a delta kernel maximizes ∥k¯푙∥22, while a kernel with equal-valued elements provides the minimum.) [sent-150, score-0.553]
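
The footnote's claim is easy to check numerically: among nonnegative kernels with the same sum, placing all mass on a single element maximizes the squared norm, while equal-valued elements minimize it (for interior pixels ∥k¯푙∥22 essentially reduces to ∥k푙∥22, which is what is compared here).

```python
import numpy as np

delta = np.zeros(25); delta[12] = 1.0        # all mass on one element, sum = 1
uniform = np.full(25, 1.0 / 25)              # equal-valued elements, sum = 1
print(np.sum(delta ** 2), np.sum(uniform ** 2))   # 1.0 vs 0.04
```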

56 In contrast, for cases where at least one image has a small 휌푙 value, the effective penalty on x magnitudes becomes highly concave (sparsity favoring), even approaching a scaled (approximate) version of the ℓ0 norm (in the sense described in [17]). [sent-152, score-0.284]

57 This is because the existence of a single good kernel/noise estimation pair (meaning the associated 휌푙 is small) implies that, in all likelihood, a good overall solution is nearby (even if the blur kernel/noise pairs associated with other observations are poor). [sent-153, score-0.398]

58 Fortunately, the log(훾 + 휌푙) term associated with the 푙-th image will dominate the variational formulation of ℎ(푥, 흆), and a highly sparse, concave penalty function on x will ensue, allowing fine-grained kernel structures to be resolved. [sent-154, score-0.4]

59 The shape-adaptiveness of the coupled penalty function is the key factor behind the algorithm's success. [sent-156, score-0.278]

60 Both the noise and blur dependencies allow the algorithm to naturally possess a ‘coarse-to-fine’ estimation strategy, recovering large-scale structures using a less aggressive (more convex) sparse penalty in the beginning, while later increasing its aggressiveness for recovering the small details. [sent-157, score-0.656]

61 In so doing, it can largely avoid being trapped in local minima while recovering the blur kernel progressively. [sent-158, score-0.454]

62 Please see [17, 19] for more details regarding why additional image sparsity is sometimes needed, as well as more information about how the proposed penalty function operates, including why it sometimes favors sparse kernels. [sent-159, score-0.352]

63 Interestingly, one auxiliary benefit of this procedure is that, given a set of corrupted image observations, and provided that at least one of them is reasonably good, the existence of other more highly degraded observations should not in theory present a significant disruption to the algorithm. [sent-160, score-0.203]

64 Recovered image and blur kernels of Šroubek et al. [sent-171, score-0.463]

65 's method [13] and ours, for the first image and kernels 1–4 from Levin et al. [sent-174, score-0.138]

66 [22] for blurry observations as well as Yuan et al. [sent-179, score-0.501]

67 [8] for evaluation, which consists of 4 images of size 255 × 255 and 8 different blur kernels, giving a total of 32 blurry images. [sent-185, score-0.624]

68 The blurry images, the ground-truth images, and the ground-truth kernels are also provided. [sent-188, score-0.489]

69 Following the experimental settings in [13], we construct a multi-observation test set with 퐿 = 4 blurry images by dividing the whole kernel set into two halves: 푏1 = {1 ⋅⋅⋅ 4} and 푏2 = {5 ⋅⋅⋅ 8}. [sent-189, score-0.479]

70 We then perform blind deblurring using different algorithms on each set. [sent-191, score-0.33]

71 Results are shown in Figure 2, where the proposed method generates deblurring results. [sent-195, score-0.33]

72 The recovered image and blur kernels from both methods for the first test set are shown in Figure 3. [sent-207, score-0.476]

73 Here we observe that the kernels recovered by Šroubek et al. [sent-208, score-0.255]

74 are overly blurry, thus leading to inferior image restoration quality (e. [sent-209, score-0.247]

75 In contrast, our approach can recover the blur kernels with high quality without using any explicit sparse prior over the kernel. [sent-212, score-0.571]

76 Overall, the more refined kernel estimates obtained via the proposed approach translate into more details recovered in the latent images. [sent-214, score-0.302]

77 Evaluation on Real-World Images. Blind restoration using multiple observations is a ubiquitous problem, with many potential applications. [sent-217, score-0.217]

78 , using two motion-blurred observations for joint blind deblurring [12, 3, 4, 13, 22]; and blurry/noisy pair restoration, i. [sent-220, score-0.795]

79 , using a short-exposure noisy and long-exposure blurry image pair for joint restoration [18, 15]. [sent-222, score-0.556]

80 , and our method are also shown in Figure 4, with the estimated blur kernels displayed in the top-right corner of each image. [sent-232, score-0.439]

81 We observe that the kernel estimates produced by Šroubek et al. [sent-233, score-0.245]

82 may be overly diffuse, at least not precise enough for generating a crisp deblurring result with limited ringing. [sent-234, score-0.375]

83 While we do not have access to the ground-truth kernel for real-world images, the relatively compact support of our kernel appears to be reasonable given the high quality of the estimated sharp image. [sent-237, score-0.469]

84 [22] attempts to refine the estimated blur kernels from Šroubek et al. [sent-244, score-0.491]

85 One potential reason for this is that the kernel refining step of Zhu et al. [sent-253, score-0.207]

86 relies purely on the kernels estimated via Šroubek et al. [sent-254, score-0.218]

87 Therefore, although the estimated blur kernels do become less diffuse, they are not necessarily consistent with the observed data, as any error generated in the original kernel estimation step will be inevitably transferred during the kernel refining process. [sent-256, score-0.722]

88 In contrast, our approach can implicitly determine the proper kernel sparsity directly from the data without any secondary rectifications or an explicit sparse prior for the kernel; it therefore appears to be more reliable on these test images. [sent-257, score-0.34]

89 Although the existing dual-motion deblurring algorithms tested above are no longer directly applicable, alternative approaches have been specifically tailored to work only with a blurry and noisy pair [18, 15], and hence provide a benchmark for comparison. [sent-271, score-0.792]

90 can generate a restoration result that is of higher quality compared to the result obtained from a single blurry image and Cho et al. [sent-277, score-0.581]

91 Yet the image recovered via our approach is of relatively similar quality to that of Yuan et al. [sent-279, score-0.176]

92 It is also interesting to point out that the blur kernel estimated for the noisy image is a delta kernel as would be expected if the correct solution were to be found. [sent-281, score-0.644]

93 ’s non-uniform method does not produce a typical 2D kernel per the standard convolutional model (1), and hence no blur kernel is shown. [sent-287, score-0.565]

94 Again, we observe that our algorithm, without resorting to more complicated observation models or special tuning, performs competitively with algorithms specifically designed to work with a known blurry and noisy pair. [sent-288, score-0.41]

95 Moreover, it automatically adapts to the quality of each observed image, allowing higher quality images to dominate the estimation process when appropriate. [sent-291, score-0.159]

96 For future work, we would like to generalize our algorithm to areas such as video deblurring and non-uniform deblurring. [sent-293, score-0.33]

97 At least for single images, we have already found that our method performs well with non-uniform camera shake, provided appropriate basis functions are used [5]. [sent-294, score-0.38]

98 (c) Non-uniform deblurring result from Whyte et al. [sent-298, score-0.382]

99 Close the loop: Joint blind image restoration and recognition with sparse representation prior. [sent-458, score-0.505]

100 Deconvolving PSFs for a better motion deblurring using multiple images. [sent-470, score-0.37]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('sroubek', 0.379), ('blurry', 0.351), ('blind', 0.34), ('deblurring', 0.33), ('blur', 0.273), ('penalty', 0.195), ('deconvolution', 0.193), ('kernels', 0.138), ('kernel', 0.128), ('restoration', 0.119), ('wipf', 0.105), ('sharp', 0.103), ('observations', 0.098), ('whyte', 0.091), ('sparsity', 0.088), ('yuan', 0.081), ('degraded', 0.072), ('meaning', 0.069), ('singleimage', 0.068), ('zhu', 0.068), ('latent', 0.068), ('recovered', 0.065), ('cai', 0.065), ('penalties', 0.064), ('noisy', 0.059), ('quality', 0.059), ('coupled', 0.059), ('levin', 0.057), ('couples', 0.057), ('noise', 0.053), ('degradation', 0.053), ('diffuse', 0.053), ('et', 0.052), ('etal', 0.051), ('ults', 0.051), ('concavity', 0.049), ('desirable', 0.049), ('sparse', 0.046), ('cho', 0.046), ('modulates', 0.045), ('overly', 0.045), ('dominate', 0.041), ('estimates', 0.041), ('exposure', 0.041), ('motion', 0.04), ('principled', 0.039), ('theorem', 0.038), ('msra', 0.038), ('promoting', 0.036), ('concave', 0.036), ('convolutional', 0.036), ('log', 0.036), ('theoretical', 0.034), ('technically', 0.034), ('dual', 0.034), ('corrupted', 0.033), ('convolution', 0.032), ('ringing', 0.032), ('defocus', 0.031), ('ze', 0.031), ('deblurred', 0.031), ('siggraph', 0.03), ('properties', 0.03), ('recovering', 0.03), ('suboptimal', 0.03), ('possess', 0.029), ('explicit', 0.029), ('seamlessly', 0.029), ('shake', 0.029), ('derivative', 0.028), ('tuning', 0.028), ('delta', 0.028), ('levels', 0.028), ('estimated', 0.028), ('pair', 0.027), ('somewhat', 0.027), ('scenarios', 0.027), ('refining', 0.027), ('magnitudes', 0.027), ('underlying', 0.027), ('synthetic', 0.027), ('prior', 0.026), ('squared', 0.026), ('norm', 0.026), ('applies', 0.026), ('blurring', 0.025), ('convenient', 0.025), ('update', 0.025), ('tailored', 0.025), ('nearly', 0.025), ('coded', 0.024), ('variances', 0.024), ('durand', 0.024), ('leading', 0.024), ('extent', 0.024), ('produced', 0.024), ('appears', 0.023), ('minima', 0.023), ('ease', 0.023), ('favors', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior

Author: Haichao Zhang, David Wipf, Yanning Zhang

Abstract: This paper presents a robust algorithm for estimating a single latent sharp image given multiple blurry and/or noisy observations. The underlying multi-image blind deconvolution problem is solved by linking all of the observations together via a Bayesian-inspired penalty function which couples the unknown latent image, blur kernels, and noise levels together in a unique way. This coupled penalty function enjoys a number of desirable properties, including a mechanism whereby the relative-concavity or shape is adapted as a function of the intrinsic quality of each blurry observation. In this way, higher quality observations may automatically contribute more to the final estimate than heavily degraded ones. The resulting algorithm, which requires no essential tuning parameters, can recover a high quality image from a set of observations containing potentially both blurry and noisy examples, without knowing a priori the degradation type of each observation. Experimental results on both synthetic and real-world test images clearly demonstrate the efficacy of the proposed method.

2 0.55344868 265 cvpr-2013-Learning to Estimate and Remove Non-uniform Image Blur

Author: Florent Couzinié-Devy, Jian Sun, Karteek Alahari, Jean Ponce

Abstract: This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multilabel energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa ’s method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti et al. [5].

3 0.44433734 131 cvpr-2013-Discriminative Non-blind Deblurring

Author: Uwe Schmidt, Carsten Rother, Sebastian Nowozin, Jeremy Jancsary, Stefan Roth

Abstract: Non-blind deblurring is an integral component of blind approaches for removing image blur due to camera shake. Even though learning-based deblurring methods exist, they have been limited to the generative case and are computationally expensive. To this date, manually-defined models are thus most widely used, though limiting the attained restoration quality. We address this gap by proposing a discriminative approach for non-blind deblurring. One key challenge is that the blur kernel in use at test time is not known in advance. To address this, we analyze existing approaches that use half-quadratic regularization. From this analysis, we derive a discriminative model cascade for image deblurring. Our cascade model consists of a Gaussian CRF at each stage, based on the recently introduced regression tree fields. We train our model by loss minimization and use synthetically generated blur kernels to generate training data. Our experiments show that the proposed approach is efficient and yields state-of-the-art restoration quality on images corrupted with synthetic and real blur.

4 0.42629659 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters

Author: Lin Zhong, Sunghyun Cho, Dimitris Metaxas, Sylvain Paris, Jue Wang

Abstract: State-of-the-art single image deblurring techniques are sensitive to image noise. Even a small amount of noise, which is inevitable in low-light conditions, can degrade the quality of blur kernel estimation dramatically. The recent approach of Tai and Lin [17] tries to iteratively denoise and deblur a blurry and noisy image. However, as we show in this work, directly applying image denoising methods often partially damages the blur information that is extracted from the input image, leading to biased kernel estimation. We propose a new method for handling noise in blind image deconvolution based on new theoretical and practical insights. Our key observation is that applying a directional low-pass filter to the input image greatly reduces the noise level, while preserving the blur information in the orthogonal direction to the filter. Based on this observation, our method applies a series of directional filters at different orientations to the input image, and estimates an accurate Radon transform of the blur kernel from each filtered image. Finally, we reconstruct the blur kernel using inverse Radon transform. Experimental results on synthetic and real data show that our algorithm achieves higher quality results than previous approaches on blurry and noisy images. 1

5 0.41477334 449 cvpr-2013-Unnatural L0 Sparse Representation for Natural Image Deblurring

Author: Li Xu, Shicheng Zheng, Jiaya Jia

Abstract: We show in this paper that the success of previous maximum a posteriori (MAP) based blur removal methods partly stems from their respective intermediate steps, which implicitly or explicitly create an unnatural representation containing salient image structures. We propose a generalized and mathematically sound L0 sparse expression, together with a new effective method, for motion deblurring. Our system does not require extra filtering during optimization and demonstrates fast energy decreasing, making a small number of iterations enough for convergence. It also provides a unified framework for both uniform and non-uniform motion deblurring. We extensively validate our method and show comparison with other approaches with respect to convergence speed, running time, and result quality.

6 0.35120651 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera

7 0.32150841 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

8 0.25398514 412 cvpr-2013-Stochastic Deconvolution

9 0.21913378 17 cvpr-2013-A Machine Learning Approach for Non-blind Image Deconvolution

10 0.21121402 68 cvpr-2013-Blur Processing Using Double Discrete Wavelet Transform

11 0.14727588 65 cvpr-2013-Blind Deconvolution of Widefield Fluorescence Microscopic Data by Regularization of the Optical Transfer Function (OTF)

12 0.078428656 166 cvpr-2013-Fast Image Super-Resolution Based on In-Place Example Regression

13 0.075075343 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features

14 0.070753731 312 cvpr-2013-On a Link Between Kernel Mean Maps and Fraunhofer Diffraction, with an Application to Super-Resolution Beyond the Diffraction Limit

15 0.070061415 419 cvpr-2013-Subspace Interpolation via Dictionary Learning for Unsupervised Domain Adaptation

16 0.06999585 427 cvpr-2013-Texture Enhanced Image Denoising via Gradient Histogram Preservation

17 0.069417119 266 cvpr-2013-Learning without Human Scores for Blind Image Quality Assessment

18 0.068572558 164 cvpr-2013-Fast Convolutional Sparse Coding

19 0.065368749 346 cvpr-2013-Real-Time No-Reference Image Quality Assessment Based on Filter Learning

20 0.06387233 397 cvpr-2013-Simultaneous Super-Resolution of Depth and Images Using a Single Camera


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.172), (1, 0.184), (2, -0.064), (3, 0.177), (4, -0.151), (5, 0.56), (6, 0.07), (7, -0.041), (8, 0.043), (9, 0.012), (10, -0.01), (11, -0.035), (12, -0.045), (13, -0.023), (14, -0.032), (15, 0.011), (16, 0.1), (17, 0.002), (18, 0.018), (19, 0.021), (20, 0.007), (21, 0.023), (22, 0.02), (23, 0.057), (24, -0.009), (25, -0.022), (26, 0.029), (27, 0.006), (28, 0.006), (29, 0.008), (30, 0.038), (31, 0.004), (32, -0.022), (33, -0.02), (34, 0.023), (35, 0.01), (36, 0.003), (37, 0.011), (38, 0.032), (39, -0.036), (40, -0.01), (41, 0.009), (42, 0.019), (43, -0.019), (44, 0.025), (45, 0.026), (46, 0.005), (47, -0.019), (48, -0.024), (49, 0.004)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.95034587 265 cvpr-2013-Learning to Estimate and Remove Non-uniform Image Blur

Author: Florent Couzinié-Devy, Jian Sun, Karteek Alahari, Jean Ponce

Abstract: This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multilabel energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa ’s method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti et al. [5].

same-paper 2 0.95009929 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior

Author: Haichao Zhang, David Wipf, Yanning Zhang

Abstract: This paper presents a robust algorithm for estimating a single latent sharp image given multiple blurry and/or noisy observations. The underlying multi-image blind deconvolution problem is solved by linking all of the observations together via a Bayesian-inspired penalty function which couples the unknown latent image, blur kernels, and noise levels together in a unique way. This coupled penalty function enjoys a number of desirable properties, including a mechanism whereby the relative-concavity or shape is adapted as a function of the intrinsic quality of each blurry observation. In this way, higher quality observations may automatically contribute more to the final estimate than heavily degraded ones. The resulting algorithm, which requires no essential tuning parameters, can recover a high quality image from a set of observations containing potentially both blurry and noisy examples, without knowing a priori the degradation type of each observation. Experimental results on both synthetic and real-world test images clearly demonstrate the efficacy of the proposed method.

3 0.93293941 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters

Author: Lin Zhong, Sunghyun Cho, Dimitris Metaxas, Sylvain Paris, Jue Wang

Abstract: State-of-the-art single image deblurring techniques are sensitive to image noise. Even a small amount of noise, which is inevitable in low-light conditions, can degrade the quality of blur kernel estimation dramatically. The recent approach of Tai and Lin [17] tries to iteratively denoise and deblur a blurry and noisy image. However, as we show in this work, directly applying image denoising methods often partially damages the blur information that is extracted from the input image, leading to biased kernel estimation. We propose a new method for handling noise in blind image deconvolution based on new theoretical and practical insights. Our key observation is that applying a directional low-pass filter to the input image greatly reduces the noise level, while preserving the blur information in the orthogonal direction to the filter. Based on this observation, our method applies a series of directional filters at different orientations to the input image, and estimates an accurate Radon transform of the blur kernel from each filtered image. Finally, we reconstruct the blur kernel using inverse Radon transform. Experimental results on synthetic and real data show that our algorithm achieves higher quality results than previous approaches on blurry and noisy images. 1

4 0.91321468 68 cvpr-2013-Blur Processing Using Double Discrete Wavelet Transform

Author: Yi Zhang, Keigo Hirakawa

Abstract: We propose a notion of double discrete wavelet transform (DDWT) that is designed to sparsify the blurred image and the blur kernel simultaneously. DDWT greatly enhances our ability to analyze, detect, and process blur kernels and blurry images—the proposed framework handles both global and spatially varying blur kernels seamlessly, and unifies the treatment of blur caused by object motion, optical defocus, and camera shake. To illustrate the potential of DDWT in computer vision and image processing, we develop example applications in blur kernel estimation, deblurring, and near-blur-invariant image feature extraction.

5 0.9105832 131 cvpr-2013-Discriminative Non-blind Deblurring

Author: Uwe Schmidt, Carsten Rother, Sebastian Nowozin, Jeremy Jancsary, Stefan Roth

Abstract: Non-blind deblurring is an integral component of blind approaches for removing image blur due to camera shake. Even though learning-based deblurring methods exist, they have been limited to the generative case and are computationally expensive. To this date, manually-defined models are thus most widely used, though limiting the attained restoration quality. We address this gap by proposing a discriminative approach for non-blind deblurring. One key challenge is that the blur kernel in use at test time is not known in advance. To address this, we analyze existing approaches that use half-quadratic regularization. From this analysis, we derive a discriminative model cascade for image deblurring. Our cascade model consists of a Gaussian CRF at each stage, based on the recently introduced regression tree fields. We train our model by loss minimization and use synthetically generated blur kernels to generate training data. Our experiments show that the proposed approach is efficient and yields state-of-the-art restoration quality on images corrupted with synthetic and real blur.

6 0.88201046 449 cvpr-2013-Unnatural L0 Sparse Representation for Natural Image Deblurring

7 0.80867738 412 cvpr-2013-Stochastic Deconvolution

8 0.79921454 17 cvpr-2013-A Machine Learning Approach for Non-blind Image Deconvolution

9 0.75895756 65 cvpr-2013-Blind Deconvolution of Widefield Fluorescence Microscopic Data by Regularization of the Optical Transfer Function (OTF)

10 0.73095918 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

11 0.6296106 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera

12 0.41541159 312 cvpr-2013-On a Link Between Kernel Mean Maps and Fraunhofer Diffraction, with an Application to Super-Resolution Beyond the Diffraction Limit

13 0.40227085 427 cvpr-2013-Texture Enhanced Image Denoising via Gradient Histogram Preservation

14 0.34719518 266 cvpr-2013-Learning without Human Scores for Blind Image Quality Assessment

15 0.32401198 346 cvpr-2013-Real-Time No-Reference Image Quality Assessment Based on Filter Learning

16 0.29452839 195 cvpr-2013-HDR Deghosting: How to Deal with Saturation?

17 0.28176397 35 cvpr-2013-Adaptive Compressed Tomography Sensing

18 0.26156893 238 cvpr-2013-Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices

19 0.25959489 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

20 0.25757518 41 cvpr-2013-An Iterated L1 Algorithm for Non-smooth Non-convex Optimization in Computer Vision


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.608), (16, 0.015), (26, 0.028), (33, 0.173), (67, 0.028), (69, 0.028), (87, 0.043)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.89836246 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior

Author: Haichao Zhang, David Wipf, Yanning Zhang

Abstract: This paper presents a robust algorithm for estimating a single latent sharp image given multiple blurry and/or noisy observations. The underlying multi-image blind deconvolution problem is solved by linking all of the observations together via a Bayesian-inspired penalty function which couples the unknown latent image, blur kernels, and noise levels together in a unique way. This coupled penalty function enjoys a number of desirable properties, including a mechanism whereby the relative-concavity or shape is adapted as a function of the intrinsic quality of each blurry observation. In this way, higher quality observations may automatically contribute more to the final estimate than heavily degraded ones. The resulting algorithm, which requires no essential tuning parameters, can recover a high quality image from a set of observations containing potentially both blurry and noisy examples, without knowing a priori the degradation type of each observation. Experimental results on both synthetic and real-world test images clearly demonstrate the efficacy of the proposed method.

2 0.88290197 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

Author: Chandramouli Paramanand, Ambasamudram N. Rajagopalan

Abstract: We address the problem of estimating the latent image of a static bilayer scene (consisting of a foreground and a background at different depths) from motion blurred observations captured with a handheld camera. The camera motion is considered to be composed of in-plane rotations and translations. Since the blur at an image location depends both on camera motion and depth, deblurring becomes a difficult task. We initially propose a method to estimate the transformation spread function (TSF) corresponding to one of the depth layers. The estimated TSF (which reveals the camera motion during exposure) is used to segment the scene into the foreground and background layers and determine the relative depth value. The deblurred image of the scene is finally estimated within a regularization framework by accounting for blur variations due to camera motion as well as depth.

3 0.87515622 154 cvpr-2013-Explicit Occlusion Modeling for 3D Object Class Representations

Author: M. Zeeshan Zia, Michael Stark, Konrad Schindler

Abstract: Despite the success of current state-of-the-art object class detectors, severe occlusion remains a major challenge. This is particularly true for more geometrically expressive 3D object class representations. While these representations have attracted renewed interest for precise object pose estimation, the focus has mostly been on rather clean datasets, where occlusion is not an issue. In this paper, we tackle the challenge of modeling occlusion in the context of a 3D geometric object class model that is capable of fine-grained, part-level 3D object reconstruction. Following the intuition that 3D modeling should facilitate occlusion reasoning, we design an explicit representation of likely geometric occlusion patterns. Robustness is achieved by pooling image evidence from of a set of fixed part detectors as well as a non-parametric representation of part configurations in the spirit of poselets. We confirm the potential of our method on cars in a newly collected data set of inner-city street scenes with varying levels of occlusion, and demonstrate superior performance in occlusion estimation and part localization, compared to baselines that are unaware of occlusions.

4 0.86928213 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello

Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasi-pinhole central cameras is supported by an extensive experimental validation.

5 0.86291599 90 cvpr-2013-Computing Diffeomorphic Paths for Large Motion Interpolation

Author: Dohyung Seo, Jeffrey Ho, Baba C. Vemuri

Abstract: In this paper, we introduce a novel framework for computing a path of diffeomorphisms between a pair of input diffeomorphisms. Direct computation of a geodesic path on the space of diffeomorphisms Diff(Ω) is difficult, and it can be attributed mainly to the infinite dimensionality of Diff(Ω). Our proposed framework, to some degree, bypasses this difficulty using the quotient map of Diff(Ω) to the quotient space Diff(M)/Diff(M)μ obtained by quotienting out the subgroup of volume-preserving diffeomorphisms Diff(M)μ. This quotient space was recently identified as the unit sphere in a Hilbert space in mathematics literature, a space with well-known geometric properties. Our framework leverages this recent result by computing the diffeomorphic path in two stages. First, we project the given diffeomorphism pair onto this sphere and then compute the geodesic path between these projected points. Second, we lift the geodesic on the sphere back to the space of diffeomorphisms, by solving a quadratic programming problem with bilinear constraints using the augmented Lagrangian technique with penalty terms. In this way, we can estimate the path of diffeomorphisms, first, staying in the space of diffeomorphisms, and second, preserving shapes/volumes in the deformed images along the path as much as possible. We have applied our framework to interpolate intermediate frames of frame-sub-sampled video sequences. In the reported experiments, our approach compares favorably with the popular Large Deformation Diffeomorphic Metric Mapping framework (LDDMM).

6 0.84518749 386 cvpr-2013-Self-Paced Learning for Long-Term Tracking

7 0.82518929 3 cvpr-2013-3D R Transform on Spatio-temporal Interest Points for Action Recognition

8 0.82262212 186 cvpr-2013-GeoF: Geodesic Forests for Learning Coupled Predictors

9 0.78285831 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters

10 0.78051418 458 cvpr-2013-Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds

11 0.75484937 462 cvpr-2013-Weakly Supervised Learning of Mid-Level Features with Beta-Bernoulli Process Restricted Boltzmann Machines

12 0.71796489 324 cvpr-2013-Part-Based Visual Tracking with Online Latent Structural Learning

13 0.71154523 193 cvpr-2013-Graph Transduction Learning with Connectivity Constraints with Application to Multiple Foreground Cosegmentation

14 0.69457871 131 cvpr-2013-Discriminative Non-blind Deblurring

15 0.69252551 314 cvpr-2013-Online Object Tracking: A Benchmark

16 0.67430764 414 cvpr-2013-Structure Preserving Object Tracking

17 0.67165655 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking

18 0.66233331 312 cvpr-2013-On a Link Between Kernel Mean Maps and Fraunhofer Diffraction, with an Application to Super-Resolution Beyond the Diffraction Limit

19 0.66142005 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

20 0.66119313 360 cvpr-2013-Robust Estimation of Nonrigid Transformation for Point Set Registration