cvpr cvpr2013 cvpr2013-68 knowledge-graph by maker-knowledge-mining

68 cvpr-2013-Blur Processing Using Double Discrete Wavelet Transform


Source: pdf

Author: Yi Zhang, Keigo Hirakawa

Abstract: We propose a notion of double discrete wavelet transform (DDWT) that is designed to sparsify the blurred image and the blur kernel simultaneously. DDWT greatly enhances our ability to analyze, detect, and process blur kernels and blurry images—the proposed framework handles both global and spatially varying blur kernels seamlessly, and unifies the treatment of blur caused by object motion, optical defocus, and camera shake. To illustrate the potential of DDWT in computer vision and image processing, we develop example applications in blur kernel estimation, deblurring, and near-blur-invariant image feature extraction.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 We propose a notion of double discrete wavelet transform (DDWT) that is designed to sparsify the blurred image and the blur kernel simultaneously. [sent-3, score-0.766]

2 DDWT greatly enhances our ability to analyze, detect, and process blur kernels and blurry images—the proposed framework handles both global and spatially varying blur kernels seamlessly, and unifies the treatment of blur caused by object motion, optical defocus, and camera shake. [sent-4, score-1.353]

3 To illustrate the potential of DDWT in computer vision and image processing, we develop example applications in blur kernel estimation, deblurring, and near-blur-invariant image feature extraction. [sent-5, score-0.417]

4 Introduction. Image blur is caused by a pixel recording light from multiple sources. [sent-7, score-0.412]

5 Illustrated in Figure 1 are three common types of blur: defocus, camera shake and object motion. [sent-8, score-0.102]

6 Defocus blur is caused by a wide aperture that prevents light rays originating from the same point from converging. [sent-9, score-0.424]

7 Camera motion during the exposure produces global motion blur where the same point on the scene is observed by multiple moving pixel sensors. [sent-10, score-0.567]

8 Object motion causes each pixel to observe multiple points on the scene, which produces spatially varying motion blur. [sent-11, score-0.145]

9 Assuming Lambertian surfaces, blur is typically represented by the implied blur kernel that acts on the unobserved sharp in-focus image. [sent-12, score-0.817]

10 On one hand, long exposure is needed to overcome poor lighting conditions, but it increases the risk of camera shake and object motion blur, which severely deteriorates the sharpness of the image. [sent-14, score-0.209]

11 On the other hand, professional photographers use well-controlled blur to enhance the aesthetics of a photograph. [sent-18, score-0.401]

12 Thus the ability to manipulate blur in postprocessing would offer greater flexibility in consumer and professional photography. [sent-19, score-0.401]

13 Figure 1. Three common types of blur: (a) defocus, (b) camera shake, (c) object motion. [sent-23, score-0.065]

14 We may learn the temporal state of the camera and the scene from blur caused by camera shake or object motion, respectively. [sent-28, score-0.536]

15 The defocus blur kernel varies with the object distance/depth, which can be useful for three-dimensional scene retrieval from a single camera [23]. [sent-29, score-0.548]

16 Blur also interferes with recognition tasks, as feature extraction from a blurry image is a real challenge. [sent-30, score-0.071]

17 In this paper, we propose a novel framework to address the analysis, detection, and processing of blur kernels and blurry images. [sent-31, score-0.472]

18 Central to this work is the notion of double discrete wavelet transform (DDWT) that is designed to sparsify the blurred image and the blur kernel simultaneously. [sent-32, score-0.766]

19 We contrast DDWT with the work of [3], which regularizes the image and blur kernel in terms of their sparsity in a linear transform domain. [sent-33, score-0.465]

20 The major disadvantage of the regularization approach is that the image/blur coefficients are not directly observed, hence it requires a computationally taxing search to minimize some “cost” function. [sent-34, score-0.088]

21 On the other hand, our DDWT provides a way to observe the wavelet coefficients of the image and the blur kernel directly. [sent-35, score-0.62]

22 This gives DDWT coefficients a very intuitive interpretation, and simplifies the task of decoupling the blur from the signal, regardless of why the blur occurred (e.g. [sent-36, score-0.844]

23 object motion, defocus, and camera shake) or the type of blur (e.g., global or spatially varying). [sent-38, score-0.415]

24 Although the primary goal of this article is to develop the DDWT as an analytical tool, we also show example applications in blur kernel estimation, deblurring, and near-blur-invariant image feature extraction to illustrate the potential of DDWT. [sent-42, score-0.417]

25 Figure 2. (a) The direct application of DDWT (di, dj) to the observed blurry image y; (b) the interpretation we give to the DDWT coefficients. [sent-46, score-0.073]

26 Related Work. Recent advances in blind and non-blind deblurring have enabled us to handle complex uniform blur kernels. [sent-48, score-0.553]

27 By comparison, progress in blind and non-blind deblurring for spatially varying blur kernels (e.g. [sent-51, score-0.619]

28 [16, 17, 4, 6]) has been slow, since there is limited data available to support a localized blur kernel. [sent-53, score-0.378]

29 Approaches to computational solutions include supervised [16] or unsupervised [17, 10] foreground/background segmentation, statistical modeling [4], homography-based blur kernel modeling methods [24, 15], and partial differential equation (PDE) methods [6]. [sent-55, score-0.417]

30 In particular, sparsifying transforms have played key roles in the detection of blur kernels: the gradient operator [4, 6, 10, 9] and wavelet/framelet/curvelet transforms [3, 11] have been used for this purpose. [sent-56, score-0.075]

31 However, existing works have shortcomings, such as problems with ringing artifacts in deblurring [20, 5] or an inability to handle spatially varying blur [19, 12, 3, 2, 21, 22, 9]. [sent-57, score-0.565]

32 It is also common for deblurring algorithms to require iteration [3, 20, 5, 12, 24, 15], which is highly undesirable for many real-time applications. [sent-58, score-0.139]

33 Besides PDE, the authors are unaware of any existing framework that unifies the analysis, detection, and processing of camera shake, object motion, defocus, global, and spatially varying blurs. [sent-59, score-0.101]

34 Definitions. We begin by defining the single discrete wavelet transform (DWT) and the proposed double discrete wavelet transform (DDWT). [sent-63, score-0.47]

35 Denote by dj : Z2 → R a wavelet analysis filter of the jth subband, j ∈ Z. [sent-70, score-0.11]

36 wj(n) := {dj ∗ y}(n) is the jth-subband, nth-location over-complete single discrete wavelet transform coefficient of an image y(n), where ∗ denotes convolution. [sent-72, score-0.236]

37 The over-complete double discrete wavelet transform is defined by the relation vij(n) := {di ∗ wj}(n). [sent-75, score-0.296]

38 Here vij(n) is the transform coefficient of the image y(n) in the (i, j)th subband at location n. [sent-76, score-0.291]

39 In the special case that dj (n) is a 1D horizontal wavelet analysis filter and di (n) is a vertical one, then vij (n) is an ordinary separable 2D wavelet transform. [sent-77, score-0.579]

40 In our work, however, we allow dj(n) and di(n) to be arbitrarily defined. [sent-78, score-0.15]

41 Technically speaking, the DWT/DDWT definitions above may apply to nonwavelet transforms dj and di, so long as they are invertible. [sent-81, score-0.136]
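
As a concrete sketch of these definitions, the over-complete (undecimated) DWT and DDWT reduce to plain 2D convolutions. The snippet below is our own illustration using 1D Haar analysis filters for di and dj and scipy for the convolution; it is not the authors' implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def dwt(y, d):
    """Over-complete single DWT: wj(n) = {dj * y}(n), no decimation."""
    return convolve2d(y, d, mode="same", boundary="symm")

def ddwt(y, d_i, d_j):
    """Double DWT: vij(n) = {di * wj}(n) = {di * dj * y}(n)."""
    return dwt(dwt(y, d_j), d_i)

# Haar analysis filters: d_j differences horizontally, d_i vertically.
d_j = np.array([[-1.0, 1.0]])
d_i = np.array([[-1.0], [1.0]])

y = np.random.rand(64, 64)   # stand-in for an observed (blurry) image
w = dwt(y, d_j)              # single DWT subband wj
v = ddwt(y, d_i, d_j)        # DDWT subband vij
```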

42 DDWT Analysis of Blurred Image. Assuming Lambertian reflectance, let x : Z2 → R be the latent sharp image, y : Z2 → R the observed blurry image, and n ∈ Z2 the pixel location index. [sent-84, score-0.094]

43 However, hn may take a parametric form in the case of motion blur (by object speed and direction) or defocus blur (by aperture radius or depth). [sent-90, score-1.036]

44 In order for the convolution model of (1) to hold, the Lambertian reflectance assumption is necessary, since objects may be observed from a different angle. [sent-91, score-0.071]

45 When DDWT is applied to the observation y, the corresponding DDWT coefficients vij are related to the latent sharp image x and the blur kernel h by vij(n) = {di ∗ dj ∗ y}(n) = {qi ∗ uj}(n) + ηij(n). [sent-96, score-0.708]

46 Figure 3. Example of DWT and DDWT coefficients using real camera sensor data. [sent-106, score-0.146]

47 The motion blur manifests itself as a double edge in DDWT, where the distance between the double edges (yellow arrows in (c)) corresponds to the speed of the object. [sent-107, score-0.663]

48 Average velocity of the moving pixels in (d) is 38 pixels, and the direction of motion is 40 degrees above the horizontal. [sent-108, score-0.105]

49 Here uj := dj ∗ x and qi := di ∗ h are the DWT decompositions of x and h, respectively; and ηij := di ∗ dj ∗ ε is the DDWT of the sensor noise ε. [sent-111, score-0.054]

50 By the commutativity and associativity of convolution, the processes in 2(a) and 2(b) are equivalent: 2(a) is the direct result of applying DDWT to the observed blurry image, but 2(b) is the interpretation we give to the DDWT coefficients (though 2(b) is not directly computable). [sent-115, score-0.175]

51 Then the DDWT coefficients vij are the result of applying a “sparse filter” qi to a “sparse signal” uj. [sent-117, score-0.568]
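
This factorization follows directly from the commutativity and associativity of convolution and is easy to check numerically. A minimal 1D sanity check (our own construction, noise-free):

```python
import numpy as np

x = np.random.rand(256)        # latent sharp signal
h = np.ones(9) / 9.0           # box blur kernel
d = np.array([-1.0, 1.0])      # Haar analysis filter, used for both di and dj

y = np.convolve(x, h)                         # blurry observation
v_direct = np.convolve(np.convolve(y, d), d)  # di * dj * y
u = np.convolve(x, d)                         # uj = dj * x
q = np.convolve(h, d)                         # qi = di * h
v_factored = np.convolve(u, q)                # qi * uj

assert np.allclose(v_direct, v_factored)      # the two interpretations agree
```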

52 When K is small, vij is nothing more than a sum of K DWT coefficients uj. [sent-126, score-0.448]

53 Thanks to sparsity, many of the uj are already zero, and so vij is actually a sum of only a few (far fewer than K) DWT coefficients uj. [sent-127, score-0.627]

54 In this paper, we call a DDWT coefficient aliased if vij is a sum of more than one “active” uj coefficient. [sent-128, score-0.393]

55 One can reduce the risk of aliasing by choosing dj and di so that uj and qi are as sparse as possible. [sent-129, score-0.465]

56 By symmetry, one may also interpret the DDWT coefficients as vij = {qj ∗ ui}(n) + ηij(n). [sent-130, score-0.269]

57 But in practice, the “confusion” between (qi, uj) and (qj, ui) does not seem to be a concern for algorithm development when qi is more sparse than qj. [sent-132, score-0.155]

58 Recovery of uj from vij leads to image deblurring, while reconstructing qi is the blur kernel detection problem. [sent-133, score-0.897]

59 Clearly, it is easy to decouple uj and qi if vij is unaliased, and reasonably uncomplicated when uj and qi are sufficiently sparse. [sent-134, score-0.794]

60 One can circumvent this problem by using the demosaicking method in [13], which is designed to recover the DWT coefficients wj from color filter array data directly. [sent-136, score-0.155]

61 We demonstrate the power of DDWT by analyzing specific blur types and designing example applications. [sent-137, score-0.378]

62 Owing to the page limit, we describe object motion blur processing at length. [sent-138, score-0.459]

63 The DDWT treatment of other blur types is brief, but the details follow the object motion processing examples closely. [sent-139, score-0.475]

64 DDWT Analysis. Consider for the moment the horizontal motion blur kernel. [sent-144, score-0.461]

65 Assuming constant velocity of the object during exposure, the blur kernel can be modeled as h(n) = (1/k){step(n + k/2) − step(n − k/2)}, a box filter of length k along the motion direction. [sent-145, score-0.439]

66 Letting di denote a Haar wavelet transform [−1, 1], the DWT coefficient qi is just a difference of two impulse functions: q(n) = (1/k){δ(n + k/2) − δ(n − k/2)}. [sent-150, score-0.399]
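
The impulse-pair structure of q is easy to verify numerically. This small sketch is our own (with the box kernel indexed from 0 rather than centered):

```python
import numpy as np

k = 8                          # blur length in pixels
h = np.ones(k) / k             # horizontal motion blur (box) kernel
d_i = np.array([-1.0, 1.0])    # Haar analysis filter

q = np.convolve(h, d_i)        # sparse blur kernel qi = di * h
print(np.nonzero(q)[0])        # -> [0 8]: two impulses exactly k apart
print(q[0], q[k])              # -> -1/k and +1/k (up to sign convention)
```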

67 Recall that DWT coefficients u typically capture directional image features (say, vertical edges). [sent-163, score-0.088]

68 By (7), we intuitively expect DDWT of moving objects to yield “double edges” where the distance between the two edges corresponds exactly to the speed of the moving object. [sent-164, score-0.058]

69 Indeed, detection of local object speed simplifies to the task of detecting these double edges (red lines denote positive DDWT coefficients while blue lines negative). [sent-165, score-0.088]

70 Figure 4. (a) DWT (or DDWT with no blur); (b) DDWT with object motion blur; (c) DDWT with defocus blur. [sent-166, score-0.952]

72 Deblurring is equally straightforward: the double edges in the DDWT coefficients v are copies of u. [sent-172, score-0.194]

73 For computational object motion detection, our ability to detect complex motion is limited primarily by the capabilities of the computer vision algorithms to extract local image features from DDWT, and to find similar features located an unknown (k-pixel) distance away. [sent-176, score-0.13]

74 A more advanced image feature extraction strategy founded on computer vision principles would likely improve the overall performance of the object motion detection; we leave this as a future research direction. [sent-178, score-0.093]

75 On the other hand, in the presence of motion blur, Rv(τ) develops secondary (negative) peaks at offsets ±k. [sent-200, score-0.065]

76 The candidate which produces the smallest secondary autocorrelation yields the estimation of the blur kernel length: k̂ = argmin_τ Rv(τ). [sent-221, score-0.417]
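
A rough sketch of this length estimator, assuming horizontal motion and a global kernel (our own construction; the paper's localized, regularized version is more involved):

```python
import numpy as np

def estimate_blur_length(v, k_max=60):
    """Estimate motion blur length k from the autocorrelation of DDWT
    coefficients v along the (assumed horizontal) motion direction."""
    v = v - v.mean()
    R = np.empty(k_max + 1)
    for tau in range(k_max + 1):   # row-wise autocorrelation, averaged
        R[tau] = np.mean(v[:, :v.shape[1] - tau] * v[:, tau:])
    # Double edges of opposite sign produce a negative peak at tau = k.
    return int(np.argmin(R[1:])) + 1

# usage: k_hat = estimate_blur_length(v)  # v = DDWT of the blurry image
```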

77 The autocorrelation function of (9) is essentially the indicator function for the double edges evidenced in Figure 3(c). [sent-227, score-0.179]

78 To estimate the local object motion, the autocorrelation also needs to be localized. [sent-228, score-0.073]

79 We add a regularization term that favors piecewise-constant object motion speed over more complex ones. [sent-279, score-0.087]

80 For non-horizontal/vertical motion, we use image shearing to skew the input image y by angle φ ∈ [0, π] (compared to image rotation, shearing avoids interpolation error). [sent-280, score-0.102]

81 Denoting by Rv(φ, τ) the autocorrelation function of v(n) in the sheared direction φ, we detect the local blur angle θ and length k by (θ̂n, k̂n) = argmin_(φ,τ) Rv(φ, τ). [sent-282, score-0.471]

82 Figure 3(d) shows the result of (12), estimating the angle θ and the length k of the blur at every pixel location, corresponding to the input image in Figure 3(a). [sent-285, score-0.413]
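
A coarse sketch of this joint angle/length search, using integer per-row shifts as the shear and a global (non-localized) autocorrelation; all simplifications here are our own:

```python
import numpy as np

def shear(img, phi):
    """Skew rows horizontally by tan(phi) pixels per row (integer shifts)."""
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        out[r] = np.roll(img[r], int(round(r * np.tan(phi))))
    return out

def estimate_angle_length(v, angles, k_max=60):
    best = (None, None, np.inf)
    for phi in angles:                      # e.g. np.linspace(0, np.pi/3, 30)
        vs = shear(v, phi)
        vs = vs - vs.mean()
        for tau in range(1, k_max + 1):
            R = np.mean(vs[:, :vs.shape[1] - tau] * vs[:, tau:])
            if R < best[2]:                 # deepest negative peak wins
                best = (phi, tau, R)
    theta_hat, k_hat, _ = best
    return theta_hat, k_hat
```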

83 Object Motion Deblurring. Complementary to the detection of qi from vij is the notion of recovering uj from vij. [sent-288, score-0.679]

84 In the discussion below, we assume that the motion blur angle θ and length k are already known via (12). [sent-292, score-0.463]

85 We first note that noise ηij can be accounted for by applying one of many standard wavelet shrinkage operators to DDWT coefficients vij [14]. [sent-295, score-0.384]
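
For instance, a soft-thresholding (shrinkage) step could look like the following; the MAD noise estimate and the universal threshold are our own choices, not prescribed by the paper:

```python
import numpy as np

def shrink(v, sigma=None):
    """Soft-threshold DDWT coefficients to suppress the noise term eta_ij."""
    if sigma is None:
        # Robust noise estimate via the median absolute deviation.
        sigma = np.median(np.abs(v)) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(v.size))   # universal threshold
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```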

86 Figure 5. Result of object motion deblurring using real camera sensor data with (top row) global and (bottom row) spatially varying blurs. [sent-301, score-0.31]

87 The bottom row was rendered with the average velocity of moving pixels for [5, 20, 22], and using Figure 3(d) for the proposed deblurring method. [sent-303, score-0.179]

88 Such a procedure effectively removes the noise ηij from vij. [sent-304, score-0.181]

89 Recall that the DWT coefficients u of a natural image x are indeed sparse. [sent-345, score-0.088]

90 Since this deblurring scheme (13) improves if P[u(n) = 0] = ρ ≈ 1 (i.e., u is more sparse), the choice of the sparsifying transform dj is the determining factor for the effectiveness of the proposed DDWT-based blur processing. [sent-402, score-0.139]

93 We highlight a few notable features of the proposed deblurring scheme. [sent-406, score-0.139]

94 First, the recovery of uj in (13) is simple, and works regardless of whether the blur kernel is global or spatially varying (simply replace k with kn). [sent-407, score-0.644]

95 Second, the deblurring technique in (13) is a single-pass method. [sent-408, score-0.139]

96 Finally, one can easily incorporate any wavelet domain denoising scheme into the design of the deblurring algorithm. [sent-417, score-0.254]
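
To make the single-pass recovery concrete: with the Haar filter, k·v is (up to noise and aliasing) a difference of two copies of u displaced by k, so u follows from a running sum with stride k along the motion direction. A 1D toy version under those assumptions (sign and boundary conventions are ours; this is not the paper's full estimator (13), which also exploits the sparsity prior on u):

```python
import numpy as np

def recover_u(v, k):
    """Invert v(n) = (u(n) - u(n - k)) / k by a running sum with stride k."""
    u = k * np.asarray(v, dtype=float)
    for n in range(k, len(u)):
        u[n] += u[n - k]
    return u

# Toy check: a sparse u produces double edges k apart, and is recovered exactly.
u_true = np.zeros(64)
u_true[[10, 30]] = [1.0, -2.0]
k = 7
v = (u_true - np.roll(u_true, k)) / k   # double edges
v[:k] = u_true[:k] / k                  # fix the wrap-around at the boundary
assert np.allclose(recover_u(v, k), u_true)
```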

97 Reconstructions using real camera sensor data in Figure 5 show the superiority of the proposed DDWT approach. [sent-418, score-0.058]

98 Optical Defocus Blur. In this section, we extend DDWT analysis to optical defocus blur. [sent-420, score-0.131]

99 The support of the defocus blur kernel takes the shape of the aperture opening, which is a circular disk in most typical cameras (supp{h} = {n : ‖n‖ ≤ r}). [sent-421, score-0.575]

100 Letting di denote a Haar wavelet transform [−1, 1], the corresponding sparse blur kernel qi is (see Figure 6(a)): qi(n) ≈ ±(πr²)⁻¹ if n lies on the left or right boundary of the disk, and 0 otherwise. [sent-428, score-0.234]
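
A sketch of the disk kernel and its Haar-filtered sparse counterpart (our own construction, mirroring the horizontal-motion case above):

```python
import numpy as np
from scipy.signal import convolve2d

def disk_kernel(r):
    """Defocus blur kernel: uniform disk, supp{h} = {n : ||n|| <= r}."""
    ax = np.arange(-r, r + 1)
    X, Y = np.meshgrid(ax, ax)
    h = (X**2 + Y**2 <= r**2).astype(float)
    return h / h.sum()                 # area is approximately pi * r^2

r = 5
h = disk_kernel(r)
d_i = np.array([[-1.0, 1.0]])          # horizontal Haar filter
q = convolve2d(h, d_i)                 # qi = di * h
# Interior values cancel; q is nonzero only near the left/right disk boundary.
print(np.count_nonzero(q), "of", q.size)
```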


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ddwt', 0.727), ('blur', 0.378), ('dwt', 0.281), ('vij', 0.181), ('uj', 0.179), ('deblurring', 0.139), ('defocus', 0.131), ('qi', 0.12), ('wavelet', 0.115), ('dj', 0.096), ('rv', 0.096), ('double', 0.092), ('coefficients', 0.088), ('autocorrelation', 0.073), ('motion', 0.065), ('shake', 0.065), ('ru', 0.063), ('blurry', 0.057), ('di', 0.054), ('transform', 0.048), ('kernel', 0.039), ('camera', 0.037), ('qj', 0.035), ('hn', 0.035), ('wj', 0.034), ('demosaicking', 0.033), ('inputproposedlucy', 0.033), ('coefficient', 0.033), ('reflectance', 0.03), ('sparsify', 0.029), ('subband', 0.029), ('spatially', 0.027), ('aperture', 0.027), ('exposure', 0.026), ('transforms', 0.026), ('discrete', 0.026), ('shearing', 0.026), ('ij', 0.025), ('lambertian', 0.025), ('pde', 0.024), ('sparsifying', 0.023), ('professional', 0.023), ('velocity', 0.022), ('sharp', 0.022), ('haar', 0.022), ('speed', 0.022), ('sensor', 0.021), ('kernels', 0.021), ('convolution', 0.021), ('blurred', 0.021), ('nk', 0.021), ('varying', 0.021), ('angle', 0.02), ('owing', 0.019), ('caused', 0.019), ('letting', 0.018), ('moving', 0.018), ('horizontal', 0.018), ('notion', 0.018), ('shan', 0.018), ('chan', 0.017), ('ioft', 0.017), ('understood', 0.017), ('omitted', 0.016), ('treatment', 0.016), ('interpretation', 0.016), ('processing', 0.016), ('bilateral', 0.016), ('risk', 0.016), ('claim', 0.016), ('depths', 0.016), ('pixel', 0.015), ('blind', 0.015), ('thnect', 0.015), ('struc', 0.015), ('arse', 0.015), ('hirakawa', 0.015), ('ield', 0.015), ('mtheer', 0.015), ('recon', 0.015), ('sku', 0.015), ('supp', 0.015), ('tahteio', 0.015), ('tainodn', 0.015), ('tfr', 0.015), ('toher', 0.015), ('ttoe', 0.015), ('uncomplicated', 0.015), ('wrot', 0.015), ('relation', 0.015), ('definitions', 0.014), ('principles', 0.014), ('edges', 0.014), ('jth', 0.014), ('interferes', 0.014), ('founded', 0.014), ('commutativity', 0.014), ('scripts', 0.014), ('tera', 0.014), ('thu', 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999997 68 cvpr-2013-Blur Processing Using Double Discrete Wavelet Transform

Author: Yi Zhang, Keigo Hirakawa

Abstract: We propose a notion of double discrete wavelet transform (DDWT) that is designed to sparsify the blurred image and the blur kernel simultaneously. DDWT greatly enhances our ability to analyze, detect, and process blur kernels and blurry images—the proposed framework handles both global and spatially varying blur kernels seamlessly, and unifies the treatment of blur caused by object motion, optical defocus, and camera shake. To illustrate the potential of DDWT in computer vision and image processing, we develop example applications in blur kernel estimation, deblurring, and near-blur-invariant image feature extraction.

2 0.39086765 265 cvpr-2013-Learning to Estimate and Remove Non-uniform Image Blur

Author: Florent Couzinié-Devy, Jian Sun, Karteek Alahari, Jean Ponce

Abstract: This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multilabel energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa ’s method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti et al. [5].

3 0.32453093 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera

Author: Hee Seok Lee, Kuoung Mu Lee

Abstract: Motion blur frequently occurs in dense 3D reconstruction using a single moving camera, and it degrades the quality of the 3D reconstruction. To handle motion blur caused by rapid camera shakes, we propose a blur-aware depth reconstruction method, which utilizes a pixel correspondence that is obtained by considering the effect of motion blur. Motion blur is dependent on 3D geometry, thus parameterizing blurred appearance of images with scene depth given camera motion is possible and a depth map can be accurately estimated from the blur-considered pixel correspondence. The estimated depth is then converted intopixel-wise blur kernels, and non-uniform motion blur is easily removed with low computational cost. The obtained blur kernel is depth-dependent, thus it effectively addresses scene-depth variation, which is a challenging problem in conventional non-uniform deblurring methods.

4 0.25940481 131 cvpr-2013-Discriminative Non-blind Deblurring

Author: Uwe Schmidt, Carsten Rother, Sebastian Nowozin, Jeremy Jancsary, Stefan Roth

Abstract: Non-blind deblurring is an integral component of blind approaches for removing image blur due to camera shake. Even though learning-based deblurring methods exist, they have been limited to the generative case and are computationally expensive. To this date, manually-defined models are thus most widely used, though limiting the attained restoration quality. We address this gap by proposing a discriminative approach for non-blind deblurring. One key challenge is that the blur kernel in use at test time is not known in advance. To address this, we analyze existing approaches that use half-quadratic regularization. From this analysis, we derive a discriminative model cascade for image deblurring. Our cascade model consists of a Gaussian CRF at each stage, based on the recently introduced regression tree fields. We train our model by loss minimization and use synthetically generated blur kernels to generate training data. Our experiments show that the proposed approach is efficient and yields state-of-the-art restoration quality on images corrupted with synthetic and real blur.

5 0.23516805 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

Author: Chandramouli Paramanand, Ambasamudram N. Rajagopalan

Abstract: We address the problem of estimating the latent image of a static bilayer scene (consisting of a foreground and a background at different depths) from motion blurred observations captured with a handheld camera. The camera motion is considered to be composed of in-plane rotations and translations. Since the blur at an image location depends both on camera motion and depth, deblurring becomes a difficult task. We initially propose a method to estimate the transformation spread function (TSF) corresponding to one of the depth layers. The estimated TSF (which reveals the camera motion during exposure) is used to segment the scene into the foreground and background layers and determine the relative depth value. The deblurred image of the scene is finally estimated within a regularization framework by accounting for blur variations due to camera motion as well as depth.

6 0.21830252 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters

7 0.21121402 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior

8 0.20540941 449 cvpr-2013-Unnatural L0 Sparse Representation for Natural Image Deblurring

9 0.11682513 17 cvpr-2013-A Machine Learning Approach for Non-blind Image Deconvolution

10 0.082522601 88 cvpr-2013-Compressible Motion Fields

11 0.07997261 297 cvpr-2013-Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems

12 0.071914755 412 cvpr-2013-Stochastic Deconvolution

13 0.063092738 92 cvpr-2013-Constrained Clustering and Its Application to Face Clustering in Videos

14 0.060154859 86 cvpr-2013-Composite Statistical Inference for Semantic Segmentation

15 0.057781987 419 cvpr-2013-Subspace Interpolation via Dictionary Learning for Unsupervised Domain Adaptation

16 0.051114481 397 cvpr-2013-Simultaneous Super-Resolution of Depth and Images Using a Single Camera

17 0.050799329 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures

18 0.049615398 56 cvpr-2013-Bayesian Depth-from-Defocus with Shading Constraints

19 0.046529476 369 cvpr-2013-Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination

20 0.044995237 346 cvpr-2013-Real-Time No-Reference Image Quality Assessment Based on Filter Learning


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.1), (1, 0.145), (2, -0.036), (3, 0.107), (4, -0.124), (5, 0.359), (6, 0.053), (7, -0.026), (8, 0.032), (9, 0.015), (10, -0.0), (11, -0.017), (12, -0.001), (13, 0.005), (14, 0.012), (15, -0.012), (16, 0.099), (17, 0.008), (18, -0.035), (19, 0.001), (20, -0.009), (21, -0.027), (22, 0.025), (23, 0.056), (24, -0.012), (25, -0.023), (26, 0.016), (27, 0.007), (28, -0.02), (29, -0.012), (30, 0.02), (31, 0.027), (32, 0.02), (33, 0.01), (34, 0.006), (35, 0.037), (36, -0.011), (37, 0.0), (38, 0.046), (39, -0.033), (40, -0.003), (41, -0.027), (42, 0.021), (43, 0.035), (44, 0.012), (45, -0.038), (46, -0.024), (47, 0.043), (48, -0.022), (49, -0.032)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95710045 68 cvpr-2013-Blur Processing Using Double Discrete Wavelet Transform

Author: Yi Zhang, Keigo Hirakawa

Abstract: We propose a notion of double discrete wavelet transform (DDWT) that is designed to sparsify the blurred image and the blur kernel simultaneously. DDWT greatly enhances our ability to analyze, detect, and process blur kernels and blurry images—the proposed framework handles both global and spatially varying blur kernels seamlessly, and unifies the treatment of blur caused by object motion, optical defocus, and camera shake. To illustrate the potential of DDWT in computer vision and image processing, we develop example applications in blur kernel estimation, deblurring, and near-blur-invariant image feature extraction.

2 0.93092316 265 cvpr-2013-Learning to Estimate and Remove Non-uniform Image Blur

Author: Florent Couzinié-Devy, Jian Sun, Karteek Alahari, Jean Ponce

Abstract: This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multilabel energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa ’s method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti et al. [5].

3 0.86928087 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior

Author: Haichao Zhang, David Wipf, Yanning Zhang

Abstract: This paper presents a robust algorithm for estimating a single latent sharp image given multiple blurry and/or noisy observations. The underlying multi-image blind deconvolution problem is solved by linking all of the observations together via a Bayesian-inspired penalty function which couples the unknown latent image, blur kernels, and noise levels together in a unique way. This coupled penalty function enjoys a number of desirable properties, including a mechanism whereby the relative-concavity or shape is adapted as a function of the intrinsic quality of each blurry observation. In this way, higher quality observations may automatically contribute more to the final estimate than heavily degraded ones. The resulting algorithm, which requires no essential tuning parameters, can recover a high quality image from a set of observations containing potentially both blurry and noisy examples, without knowing a priorithe degradation type of each observation. Experimental results on both synthetic and real-world test images clearly demonstrate the efficacy of the proposed method.

4 0.84619534 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters

Author: Lin Zhong, Sunghyun Cho, Dimitris Metaxas, Sylvain Paris, Jue Wang

Abstract: State-of-the-art single image deblurring techniques are sensitive to image noise. Even a small amount of noise, which is inevitable in low-light conditions, can degrade the quality of blur kernel estimation dramatically. The recent approach of Tai and Lin [17] tries to iteratively denoise and deblur a blurry and noisy image. However, as we show in this work, directly applying image denoising methods often partially damages the blur information that is extracted from the input image, leading to biased kernel estimation. We propose a new method for handling noise in blind image deconvolution based on new theoretical and practical insights. Our key observation is that applying a directional low-pass filter to the input image greatly reduces the noise level, while preserving the blur information in the orthogonal direction to the filter. Based on this observation, our method applies a series of directional filters at different orientations to the input image, and estimates an accurate Radon transform of the blur kernel from each filtered image. Finally, we reconstruct the blur kernel using inverse Radon transform. Experimental results on synthetic and real data show that our algorithm achieves higher quality results than previous approaches on blurry and noisy images. 1

5 0.83247501 131 cvpr-2013-Discriminative Non-blind Deblurring

Author: Uwe Schmidt, Carsten Rother, Sebastian Nowozin, Jeremy Jancsary, Stefan Roth

Abstract: Non-blind deblurring is an integral component of blind approaches for removing image blur due to camera shake. Even though learning-based deblurring methods exist, they have been limited to the generative case and are computationally expensive. To this date, manually-defined models are thus most widely used, though limiting the attained restoration quality. We address this gap by proposing a discriminative approach for non-blind deblurring. One key challenge is that the blur kernel in use at test time is not known in advance. To address this, we analyze existing approaches that use half-quadratic regularization. From this analysis, we derive a discriminative model cascade for image deblurring. Our cascade model consists of a Gaussian CRF at each stage, based on the recently introduced regression tree fields. We train our model by loss minimization and use synthetically generated blur kernels to generate training data. Our experiments show that the proposed approach is efficient and yields state-of-the-art restoration quality on images corrupted with synthetic and real blur.

6 0.82111853 449 cvpr-2013-Unnatural L0 Sparse Representation for Natural Image Deblurring

7 0.75848132 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

8 0.69490331 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera

9 0.69001436 17 cvpr-2013-A Machine Learning Approach for Non-blind Image Deconvolution

10 0.67221129 412 cvpr-2013-Stochastic Deconvolution

11 0.61324471 65 cvpr-2013-Blind Deconvolution of Widefield Fluorescence Microscopic Data by Regularization of the Optical Transfer Function (OTF)

12 0.28673774 427 cvpr-2013-Texture Enhanced Image Denoising via Gradient Histogram Preservation

13 0.27457392 312 cvpr-2013-On a Link Between Kernel Mean Maps and Fraunhofer Diffraction, with an Application to Super-Resolution Beyond the Diffraction Limit

14 0.26854524 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

15 0.26163128 88 cvpr-2013-Compressible Motion Fields

16 0.25141615 35 cvpr-2013-Adaptive Compressed Tomography Sensing

17 0.23992577 266 cvpr-2013-Learning without Human Scores for Blind Image Quality Assessment

18 0.2373427 176 cvpr-2013-Five Shades of Grey for Fast and Reliable Camera Pose Estimation

19 0.23530754 124 cvpr-2013-Determining Motion Directly from Normal Flows Upon the Use of a Spherical Eye Platform

20 0.23505148 84 cvpr-2013-Cloud Motion as a Calibration Cue


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.179), (16, 0.042), (26, 0.063), (33, 0.209), (56, 0.209), (67, 0.029), (69, 0.048), (77, 0.011), (87, 0.075), (96, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.83120233 68 cvpr-2013-Blur Processing Using Double Discrete Wavelet Transform

Author: Yi Zhang, Keigo Hirakawa

Abstract: We propose a notion of double discrete wavelet transform (DDWT) that is designed to sparsify the blurred image and the blur kernel simultaneously. DDWT greatly enhances our ability to analyze, detect, and process blur kernels and blurry images—the proposed framework handles both global and spatially varying blur kernels seamlessly, and unifies the treatment of blur caused by object motion, optical defocus, and camera shake. To illustrate the potential of DDWT in computer vision and image processing, we develop example applications in blur kernel estimation, deblurring, and near-blur-invariant image feature extraction.

2 0.78957945 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello

Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we dai s .unive . it ence points in the scene with their projections on the image plane [5]. Unfortunately, no real camera behaves exactly like an ideal pinhole. In fact, in most cases, at least the distortion effects introduced by the lens should be accounted for [19]. Any pinhole-based model, regardless of its level of sophistication, is geometrically unable to properly describe cameras exhibiting a frustum angle that is near or above 180 degrees. For wide-angle cameras, several different para- metric models have been proposed. Some of them try to modify the captured image in order to follow the original propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasipinhole central cameras is supported by an extensive experimental validation.

3 0.78836113 154 cvpr-2013-Explicit Occlusion Modeling for 3D Object Class Representations

Author: M. Zeeshan Zia, Michael Stark, Konrad Schindler

Abstract: Despite the success of current state-of-the-art object class detectors, severe occlusion remains a major challenge. This is particularly true for more geometrically expressive 3D object class representations. While these representations have attracted renewed interest for precise object pose estimation, the focus has mostly been on rather clean datasets, where occlusion is not an issue. In this paper, we tackle the challenge of modeling occlusion in the context of a 3D geometric object class model that is capable of fine-grained, part-level 3D object reconstruction. Following the intuition that 3D modeling should facilitate occlusion reasoning, we design an explicit representation of likely geometric occlusion patterns. Robustness is achieved by pooling image evidence from of a set of fixed part detectors as well as a non-parametric representation of part configurations in the spirit of poselets. We confirm the potential of our method on cars in a newly collected data set of inner-city street scenes with varying levels of occlusion, and demonstrate superior performance in occlusion estimation and part localization, compared to baselines that are unaware of occlusions.

4 0.78746665 186 cvpr-2013-GeoF: Geodesic Forests for Learning Coupled Predictors

Author: Peter Kontschieder, Pushmeet Kohli, Jamie Shotton, Antonio Criminisi

Abstract: Conventional decision forest based methods for image labelling tasks like object segmentation make predictions for each variable (pixel) independently [3, 5, 8]. This prevents them from enforcing dependencies between variables and translates into locally inconsistent pixel labellings. Random field models, instead, encourage spatial consistency of labels at increased computational expense. This paper presents a new and efficient forest based model that achieves spatially consistent semantic image segmentation by encoding variable dependencies directly in the feature space the forests operate on. Such correlations are captured via new long-range, soft connectivity features, computed via generalized geodesic distance transforms. Our model can be thought of as a generalization of the successful Semantic Texton Forest, Auto-Context, and Entangled Forest models. A second contribution is to show the connection between the typical Conditional Random Field (CRF) energy and the forest training objective. This analysis yields a new objective for training decision forests that encourages more accurate structured prediction. Our GeoF model is validated quantitatively on the task of semantic image segmentation, on four challenging and very diverse image datasets. GeoF outperforms both stateof-the-art forest models and the conventional pairwise CRF.

5 0.78654253 458 cvpr-2013-Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds

Author: Jeremie Papon, Alexey Abramov, Markus Schoeler, Florentin Wörgötter

Abstract: Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as superpixels, is a widely used preprocessing step in segmentation algorithms. Superpixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that superpixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider three dimensional geometric relationships between observed data points which can be used to prevent superpixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations which are fully consistent with the spatial geometry of the scene in three dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries which might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large data set of human annotated RGB+D images demonstrate a significant reduction in occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.

6 0.78578824 90 cvpr-2013-Computing Diffeomorphic Paths for Large Motion Interpolation

7 0.78511953 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters

8 0.77984911 386 cvpr-2013-Self-Paced Learning for Long-Term Tracking

9 0.77834433 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

10 0.77803558 324 cvpr-2013-Part-Based Visual Tracking with Online Latent Structural Learning

11 0.77674711 462 cvpr-2013-Weakly Supervised Learning of Mid-Level Features with Beta-Bernoulli Process Restricted Boltzmann Machines

12 0.77617246 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking

13 0.77606112 414 cvpr-2013-Structure Preserving Object Tracking

14 0.77604836 360 cvpr-2013-Robust Estimation of Nonrigid Transformation for Point Set Registration

15 0.77577198 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases

16 0.77538073 248 cvpr-2013-Learning Collections of Part Models for Object Recognition

17 0.77473688 312 cvpr-2013-On a Link Between Kernel Mean Maps and Fraunhofer Diffraction, with an Application to Super-Resolution Beyond the Diffraction Limit

18 0.77465254 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

19 0.77326429 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior

20 0.77264714 3 cvpr-2013-3D R Transform on Spatio-temporal Interest Points for Action Recognition