cvpr cvpr2013 cvpr2013-307 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Chandramouli Paramanand, Ambasamudram N. Rajagopalan
Abstract: We address the problem of estimating the latent image of a static bilayer scene (consisting of a foreground and a background at different depths) from motion blurred observations captured with a handheld camera. The camera motion is considered to be composed of in-plane rotations and translations. Since the blur at an image location depends both on camera motion and depth, deblurring becomes a difficult task. We initially propose a method to estimate the transformation spread function (TSF) corresponding to one of the depth layers. The estimated TSF (which reveals the camera motion during exposure) is used to segment the scene into the foreground and background layers and determine the relative depth value. The deblurred image of the scene is finally estimated within a regularization framework by accounting for blur variations due to camera motion as well as depth.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We address the problem of estimating the latent image of a static bilayer scene (consisting of a foreground and a background at different depths) from motion blurred observations captured with a handheld camera. [sent-5, score-0.583]
2 Since the blur at an image location depends both on camera motion and depth, deblurring becomes a difficult task. [sent-7, score-0.868]
3 The estimated TSF (which reveals the camera motion during exposure) is used to segment the scene into the foreground and background layers and determine the relative depth value. [sent-9, score-0.58]
4 The deblurred image of the scene is finally estimated within a regularization framework by accounting for blur variations due to camera motion as well as depth. [sent-10, score-0.762]
5 Introduction. A common problem faced while capturing photographs is the occurrence of motion blur due to camera shake. [sent-12, score-0.628]
6 The extent of blurring at a point in an image varies according to the camera motion as well as the depth value, resulting in space-variant blur. [sent-13, score-0.534]
7 Traditionally, motion blur due to camera shake is modeled as the convolution of the latent image with a blur kernel. [sent-16, score-1.203]
8 A lot of approaches for deblurring space-invariant blur exist in the literature [7, 16, 24]. [sent-17, score-0.691]
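As a point of reference, the space-invariant model amounts to a single global 2D convolution. The sketch below (Python with NumPy/SciPy, a placeholder image and a hypothetical normalized box kernel) illustrates the model shared by [7, 16, 24], not any particular method from those works.

```python
# Space-invariant blur model: g = f * h for one global kernel h.
# The latent image and the 5x5 box kernel are illustrative placeholders.
import numpy as np
from scipy.signal import convolve2d

f = np.random.rand(64, 64)                           # latent image (placeholder)
h = np.ones((5, 5)) / 25.0                           # normalized blur kernel
g = convolve2d(f, h, mode="same", boundary="symm")   # blurred observation
```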
9 Recent techniques model the non-uniformly motion blurred image by considering the transformations undergone by the image plane rather than using a point spread function (PSF) [21, 23]. [sent-19, score-0.474]
10 Tai et al. have proposed a deblurring scheme for the projective blur model based on Richardson-Lucy deconvolution in [21]. [sent-21, score-0.739]
11 Whyte et al. in [23] propose an image restoration technique for motion blur arising from non-uniform camera rotations. [sent-27, score-0.662]
12 The motion blurred image is modeled by considering the camera motion to be comprised of 2D translations and in-plane rotation. [sent-32, score-0.57]
13 It must be mentioned that none of the above methods can deal with changes in blur when there are depth variations. [sent-35, score-0.666]
14 For the case of pure in-plane translations alone, [17] and [25] estimate the depth map and the restored image using two blurred observations. [sent-36, score-0.6]
15 In this paper, we develop a method to estimate the latent image of a bilayer scene using two motion blurred observations. [sent-37, score-0.509]
16 Such an approach has been followed for the scenario of non-uniform blur as well [3]. [sent-41, score-0.451]
17 We relate each of the two blurred observations with the original image of the scene through a TSF (which denotes its corresponding camera motion) and the depth of the scene. [sent-43, score-0.641]
18 We treat the camera motion to consist of 2D translations as well as in-plane rotations since this motion is more typical of camera shakes [8, 14]. [sent-46, score-0.512]
19 Also, camera translation along the optical axis is usually ignored because only large translations cause noticeable blur [8, 12]. [sent-48, score-0.725]
20 When objects in the scene are near to the camera, the parallax effect can cause significant variations in blur [17]. [sent-49, score-0.602]
21 Deblurring cannot be achieved unless the variation of blur due to depth is accounted for [12, 20]. [sent-50, score-0.69]
22 From the two blurred observations, we initially propose to determine the TSFs using blur kernels that are estimated at randomly chosen points across the image. [sent-53, score-0.916]
23 We follow the multichannel blind deconvolution technique in [18] which can accurately determine blur kernels from two image patches. [sent-54, score-0.79]
24 These blur kernels can be from either of the two depth layers. [sent-55, score-0.855]
25 We derive a constraint to check whether any two given blur kernels of an observation are from the same depth layer. [sent-56, score-0.872]
26 Based on this, we obtain a set of blur kernels corresponding to a depth layer. [sent-57, score-0.855]
27 From the blurred observations and their reference TSFs, we estimate the depth map, which enables us to segment the scene into two layers and estimate the relative depth of the other layer with respect to that of the reference layer. [sent-62, score-1.15]
28 Contributions: This work presents significant contributions over recent works in motion deblurring: i) The performance of the state-of-the-art non-uniform deblurring works [23, 8, 10] is far superior to that of methods that employ a convolution model for motion blur. [sent-67, score-0.431]
29 ii) Techniques that do consider the effect of scene depth [17, 25] restrict the camera motion to only in-plane translations, whereas our model allows for in-plane rotations in addition to camera translations. [sent-70, score-0.677]
30 Our approach of using blur kernels for TSF estimation enables us to address parallax effects for a bilayer scene. [sent-72, score-0.857]
31 From a set of PSFs estimated at random points across the image, we develop a method to automatically select the blur kernels from a single depth layer. [sent-73, score-0.884]
32 We arrive at the TSFs of the two observations that explain the local blur kernels of one layer. [sent-74, score-0.737]
33 Motion blur model. Initially, we consider the scene to be of constant depth and relate the blurred observation to the latent image in terms of a TSF. [sent-76, score-0.989]
34 For a bilayer scene, we subsequently modify the model to take the parallax effect into account and relate the blurred image with the depth map and the TSF. [sent-77, score-0.712]
35 g(x) = ∫ f(x − u) h(x − u, u) du (2), where h(x, u) denotes the blur kernel at the image point x as a function of the independent variable u. [sent-91, score-0.507]
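A direct discretization of Eqn. (2) makes the space-variant model concrete: each latent pixel y = x − u spreads its intensity according to its own kernel h(y, ·). In the sketch below, kernel_at(y, x) is a hypothetical per-pixel kernel lookup (in the paper it would be derived from the TSF and the depth); the loops are written for clarity, not speed.

```python
# Discrete analogue of Eqn. (2): g(x) = sum_u f(x - u) h(x - u, u).
# Implemented as a scatter: every latent pixel spreads its intensity
# through its own kernel, so the blur may vary across the image.
import numpy as np

def space_variant_blur(f, kernel_at, r):
    """kernel_at(y, x) returns a (2r+1, 2r+1) kernel for latent pixel (y, x)."""
    H, W = f.shape
    g = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            h = kernel_at(y, x)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        g[yy, xx] += f[y, x] * h[dy + r, dx + r]
    return g
```

If kernel_at returns the same kernel everywhere, this reduces to the ordinary convolution model above.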
36 Bilayer scenes: the parallax effect. When there are depth variations in the scene, the transformation undergone by an image point depends not only on the camera motion but also on the depth. [sent-108, score-0.653]
37 We express the transformation undergone by a point (i, j) (having depth d(i, j)) in terms of the relative depth k(i, j) and the parameters of the transformation λ. [sent-126, score-0.656]
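Concretely, in-plane rotation moves all points identically, while the image shift induced by camera translation scales with the relative depth k(i, j); this is the parallax effect. A minimal sketch of one such depth-dependent point transformation follows. The exact parameterization (rotation about the origin, translation scaled linearly by k) is an assumption for illustration only.

```python
# Hedged sketch: map an image point p under a camera pose
# lambda = (theta, tx, ty). The rotation is depth-independent, while the
# translation-induced shift is scaled by the relative depth k (parallax).
import numpy as np

def transform_point(p, theta, tx, ty, k=1.0):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])                  # in-plane rotation
    return R @ np.asarray(p, dtype=float) + k * np.array([tx, ty])
```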
38 The blurred image follows the space-variant model (Eqn. (2)), wherein the PSF h depends on the camera motion (denoted by the TSF ωo) and the depth map d. [sent-135, score-0.447]
39 Let h(i, j; ·) denote the discrete blur kernel at a pixel p = (i, j). [sent-140, score-0.49]
40 Image deblurring. Consider two blurred observations g1 and g2 of a bilayer scene, which are related to the original image f through the TSFs ω1o and ω2o, respectively, and the depth map d. [sent-158, score-0.918]
41 We devise a method which uses blur kernels estimated at different points in the image in order to determine the TSFs. [sent-160, score-0.697]
42 Based on the estimated TSFs ω1o and ω2o, and the blurred observations, we determine the relative depth map k. [sent-161, score-0.525]
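Once the TSFs and the depth map are in hand, the latent image is recovered within a regularization framework (as stated in the abstract). The sketch below is a heavily simplified stand-in: it replaces the space-variant TSF/depth blur operator with one uniform kernel per observation and uses a smoothed total-variation prior; both simplifications are assumptions made for brevity.

```python
# Simplified regularized deblurring from two observations: gradient descent
# on ||h1*f - g1||^2 + ||h2*f - g2||^2 + lam * TV(f), with uniform kernels
# standing in for the space-variant blur operator (an assumption).
import numpy as np
from scipy.signal import convolve2d

def deblur(g1, g2, h1, h2, lam=1e-3, step=0.5, iters=200, eps=1e-3):
    conv = lambda a, h: convolve2d(a, h, mode="same", boundary="symm")
    flip = lambda h: h[::-1, ::-1]                   # adjoint of convolution
    f = 0.5 * (g1 + g2)                              # simple initialization
    for _ in range(iters):
        grad = (conv(conv(f, h1) - g1, flip(h1)) +
                conv(conv(f, h2) - g2, flip(h2)))    # data-term gradient
        gy = np.diff(f, axis=0, append=f[-1:, :])    # forward differences
        gx = np.diff(f, axis=1, append=f[:, -1:])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)  # smoothed |grad f|
        div = (np.diff(gy / mag, axis=0, prepend=(gy / mag)[:1, :]) +
               np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1]))
        f = f - step * (grad - lam * div)            # TV gradient is -div
    return f
```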
43 4.1 Reference TSF estimation. Our initial step is to estimate blur kernels at different image points. [sent-166, score-0.663]
44 Following [24], we determine a subset of image points which are suitable for blur kernel estimation (regions with wide edges). [sent-167, score-0.518]
45 Within each pair of patches (g1i and g2i), we assume the blur to be space-invariant and use the blind deconvolution technique in [18] (which uses two blurred observations) to get the blur kernels h1i and h2i. [sent-171, score-1.432]
46 We found in our experiments that the estimates of blur kernels from the method in [18] are quite accurate. [sent-172, score-0.659]
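The cross-convolution identity underlying such multichannel blind estimation is worth stating: if g1 = h1 ∗ f and g2 = h2 ∗ f share the same latent patch f, then h2 ∗ g1 = h1 ∗ g2, so candidate kernel pairs can be scored without ever recovering f. The residual below illustrates only this constraint; it is not the full algorithm of [18].

```python
# Score a candidate kernel pair via the identity h2*g1 = h1*g2, which holds
# when both patches are blurred versions of the same latent patch.
import numpy as np
from scipy.signal import convolve2d

def cross_residual(h1, h2, g1, g2):
    a = convolve2d(g1, h2, mode="valid")
    b = convolve2d(g2, h1, mode="valid")
    return np.linalg.norm(a - b)
```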
47 Out of the Ns kernels of an observation, given any two PSFs, we compare the possible transformations of the kernels and determine whether the two kernels belong to the same depth layer. [sent-183, score-0.894]
48 We refer to a set of transformations Λb1 as the support of the blur kernel hb1 . [sent-187, score-0.574]
49 In other words, Λb1 contains all possible transformations from T that shift the point p1 to a position at which the blur kernel hb1 has a positive entry. [sent-195, score-0.616]
50 However, the cardinality of Λb1 is limited because a motion blur kernel is sparse. [sent-198, score-0.573]
51 Given two blur kernels hb1 and hb2, corresponding to locations p1 and p2 respectively, we determine the supports Λb1 and Λb2 of the blur kernels. [sent-199, score-1.139]
52 If p1 and p2 were at the same depth layer, the blur kernels hb1 and hb2 would be related to a single TSF, and there would be a common set of transformations between Λb1 and Λb2 that would include the support of the TSF. [sent-200, score-0.939]
53 If hb1 has positive entries at locations other than those in ĥb1, or hb2 has positive entries at locations other than those in ĥb2, then we can conclude that there are no common transformations between Λb1 and Λb2 that can generate both the blur kernels. [sent-204, score-0.535]
54 It must be noted that only when the effect of parallax is significant would the transformations in the supports of the two blur kernels differ. [sent-206, score-0.86]
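A sketch of this same-layer test follows, under simplifying assumptions: each kernel is stored as a small array centred on its anchor point, T is a coarse grid of (theta, tx, ty) triples, and rotation is taken about the image origin. Two kernels pass only if the transformations common to both supports already cover every positive entry of each kernel.

```python
# Hedged sketch of the same-layer test via transformation supports.
# p is an (x, y) point; h is a (2r+1, 2r+1) kernel centred on p;
# T is a list of candidate transformations (theta, tx, ty).
import numpy as np

def _landing(p, theta, tx, ty):
    c, s = np.cos(theta), np.sin(theta)
    q = np.array([c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty])
    return np.rint(q - np.asarray(p)).astype(int)    # displacement from p

def support(h, p, T, r):
    """Indices of transformations that land p on a positive kernel entry."""
    supp = set()
    for idx, lam in enumerate(T):
        du = _landing(p, *lam)
        if abs(du[0]) <= r and abs(du[1]) <= r and h[du[1] + r, du[0] + r] > 0:
            supp.add(idx)
    return supp

def same_layer(h1, p1, h2, p2, T, r):
    common = support(h1, p1, T, r) & support(h2, p2, T, r)
    def covered(h, p):
        hit = np.zeros_like(h, dtype=bool)
        for idx in common:
            du = _landing(p, *T[idx])
            if abs(du[0]) <= r and abs(du[1]) <= r:
                hit[du[1] + r, du[0] + r] = True
        return bool(np.all(hit[h > 0]))              # all positive entries covered
    return covered(h1, p1) and covered(h2, p2)
```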
55 4.2 TSF from PSFs. Using our method, out of the Ns blur kernels, we select Np kernels that are at the same depth. [sent-209, score-0.64]
56 Our objective is to estimate the TSF ωbo that concurs with the Np observed blur kernels. [sent-210, score-0.474]
57 From Eqn. (6), we see that at a pixel (i, j), each component of the blur kernel h(i, j; m, n) is a weighted sum of the components of the TSF ω. [sent-218, score-0.49]
58 Consequently, the blur kernel hbi can be expressed as hbi = Mbi ωbo for i = 1, . . . , Np. [sent-219, score-0.568]
59 Here, Mbi is a matrix whose entries are determined by the location of the blur kernel and the bilinear interpolation coefficients. [sent-222, score-0.49]
60 If the number of elements in the blur kernel is Nh, then the size of the matrix Mbi will be Nh × NT. [sent-223, score-0.49]
61 By stacking all the Np blur kernels into a vector hb, and suitably concatenating the matrices Mbi for i = 1, . . . , Np, [sent-224, score-0.64]
62 the blur kernels can be related to the TSF as hb = Mb ωbo (8). The matrix Mb is of size NpNh × NT. [sent-227, score-0.694]
63 To get an estimate of the TSF that is consistent with the observed blur kernels and is sparse as well, we minimize the following cost (separately for b = 1 and 2) using the toolbox available at [15]. [sent-228, score-0.682]
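In code, Eqn. (8) is an overdetermined linear system. The sketch below stacks the Np selected kernels and their interpolation matrices and solves for a nonnegative TSF; nonnegative least squares stands in here for the l1-regularized solver used via the toolbox [15], and the construction of each Mbi (kernel location plus bilinear interpolation weights) is assumed to be supplied.

```python
# Hedged sketch of TSF estimation from a set of same-layer kernels
# (Eqn. (8)): solve hb = Mb * omega with omega nonnegative (and sparse).
import numpy as np
from scipy.optimize import nnls

def estimate_tsf(kernels, Mbis):
    """kernels: Np flattened blur kernels; Mbis: matching (Nh, NT) matrices."""
    hb = np.concatenate([k.ravel() for k in kernels])    # (Np*Nh,)
    Mb = np.vstack(Mbis)                                 # (Np*Nh, NT)
    omega, _ = nnls(Mb, hb)                              # nonnegative fit
    return omega / max(omega.sum(), 1e-12)               # normalize mass
```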
64 There can be incidental shifts of small magnitude in the estimated blur kernels with respect to the ‘true’ blur kernels (which are induced at a point as a result of blurring the latent image with the true TSF). [sent-256, score-1.56]
65 Hence, we need to determine the shifts of blur kernels corresponding to only one of the two observations. [sent-259, score-0.707]
66 The TSF ω1o estimated from the aligned blur kernels should have a low value of the residual ‖h1 − M1 ω1o‖. [sent-260, score-0.669]
67 We need to determine two translation parameters for each of the other blur kernels. [sent-272, score-0.5]
68 For all possible combinations of the translations, we shift the blur kernels h12, h13, . . . , and evaluate the solution of the TSF estimation cost. [sent-273, score-0.665]
69 Since the magnitudes of the shifts are generally small, and the number of blur kernels used is typically low (around 4), finding the optimum shifts (those that minimize the cost) is computationally feasible. [sent-275, score-0.718]
70 The alignment step provides one possible solution to the shifts of the blur kernels that leads to a TSF. [sent-293, score-0.679]
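Because the search space is tiny (a handful of kernels, shifts of a few pixels each), the alignment can be done exhaustively. The sketch below assumes a hypothetical helper Mbi_for_shift(i, (dy, dx)) that returns the interpolation matrix for kernel i displaced by (dy, dx); the first kernel stays fixed, and the residual of the nonnegative fit selects the best shifts.

```python
# Exhaustive alignment sketch: try all small shifts of all kernels but the
# first, refit the TSF each time, keep the shifts with the lowest residual.
import itertools
import numpy as np
from scipy.optimize import nnls

def align_kernels(kernels, Mbi_for_shift, max_shift=2):
    hb = np.concatenate([k.ravel() for k in kernels])
    shifts = [(dy, dx) for dy in range(-max_shift, max_shift + 1)
                       for dx in range(-max_shift, max_shift + 1)]
    best_res, best_combo = np.inf, None
    for combo in itertools.product(shifts, repeat=len(kernels) - 1):
        Mb = np.vstack([Mbi_for_shift(0, (0, 0))] +
                       [Mbi_for_shift(i + 1, s) for i, s in enumerate(combo)])
        _, res = nnls(Mb, hb)                            # fit residual
        if res < best_res:
            best_res, best_combo = res, combo
    return best_combo
```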
71 While verifying the condition of Section 4.1, the shift in the estimated blur kernels needs to be accounted for. [sent-296, score-0.718]
72 Given two blur kernels hb1 and hb2, we verify the condition discussed in Section 4.1 [sent-297, score-0.49]
73 for all possible shifted versions of one of the blur kernels. [sent-299, score-0.451]
74 Depth estimation and segmentation. We determine the relative depth k(i, j) at every pixel from the blurred observations g1 and g2 and the TSFs ω1o and ω2o, through a MAP-MRF framework. [sent-302, score-0.534]
75 The blur kernel hb(i, j; ·) at a point (i, j) is given by the weighted sum of TSF components in Eqn. (6), with the transformations scaled by the relative depth. [sent-305, score-0.561]
76 Consider the blur kernel hc(i, j; ·), which is defined as the convolution of h1(i, j; ·) and h2(i, j; ·). [sent-314, score-0.542]
77 Let hbk(i, j; m, n) denote the blur kernel generated from the reference TSF ωbo by assuming the relative depth at (i, j) to be k. [sent-332, score-0.765]
78 If the assumed relative depth is incorrect, the cost in Eqn. (12) would be significantly large due to incorrect scaling of the blur kernels h1k(i, j; ·) and h2k(i, j; ·). [sent-351, score-0.64]
79 A similar cost has been used to get an initial estimate of the depth map from blurred images in [17] for the case of translational blur. [sent-352, score-0.491]
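A sketch of the per-pixel data cost follows: for the correct relative depth k, h2k ∗ g1 and h1k ∗ g2 agree (both equal h1k ∗ h2k ∗ f locally, since convolution commutes), so their residual over a small window is small only at the right label. The generator kernel_from_tsf(tsf, i, j, k) is a hypothetical stand-in for rendering a kernel from a depth-scaled TSF, and the window is assumed larger than the kernels.

```python
# Per-pixel depth data cost: residual of h2k*g1 vs. h1k*g2 on a local
# window; the MAP-MRF step picks the label k minimizing this plus a
# smoothness penalty on neighbouring labels.
import numpy as np
from scipy.signal import convolve2d

def depth_cost(g1, g2, tsf1, tsf2, i, j, k, kernel_from_tsf, win=15):
    h1k = kernel_from_tsf(tsf1, i, j, k)      # hypothetical kernel renderer
    h2k = kernel_from_tsf(tsf2, i, j, k)
    r = win // 2
    p1 = g1[max(i - r, 0): i + r + 1, max(j - r, 0): j + r + 1]
    p2 = g2[max(i - r, 0): i + r + 1, max(j - r, 0): j + r + 1]
    a = convolve2d(p1, h2k, mode="same")
    b = convolve2d(p2, h1k, mode="same")
    return float(np.sum((a - b) ** 2))
```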
80 We further partition the segmented depth map, which contains Nd different depth layers, into two segments using k-means clustering. [sent-360, score-0.516]
81 Thus, we arrive at a relative depth map with two distinct depth values. [sent-361, score-0.508]
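The two-layer split itself is a one-dimensional clustering problem. A minimal k-means (k = 2) over the estimated relative-depth labels, written out for clarity, might look as follows; each pixel then receives the mean relative depth of its cluster.

```python
# 1-D k-means with k = 2 over a map of relative-depth labels.
import numpy as np

def two_layer_depth(kmap, iters=50):
    vals = kmap.ravel().astype(float)
    c = np.array([vals.min(), vals.max()])               # initial centroids
    for _ in range(iters):
        assign = np.abs(vals[:, None] - c[None, :]).argmin(axis=1)
        for m in range(2):
            if np.any(assign == m):
                c[m] = vals[assign == m].mean()          # update centroid
    return c[assign].reshape(kmap.shape)                 # two-valued map
```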
82 Hence, we segment the depth map into two layers so that within each depth layer, the TSF model can be used. [sent-396, score-0.516]
83 5 metres so that the variation of blur with respect to depth due to camera translations is perceivable. [sent-400, score-0.85]
84 The observations were generated using the space-variant blur model of Eqn. (2). [sent-411, score-0.523]
85 For deblurring, blur kernels were estimated using image patches that were randomly selected across the image (marked in Figs. [sent-414, score-0.708]
86 Out of the six blur kernels of an observation, we found that two of the blur kernels were from one of the layers and the remaining four kernels were from the other layer (please refer to the supplementary material). [sent-416, score-1.577]
87 The layer with the four blur kernels was chosen as the reference. [sent-417, score-0.677]
88 The TSFs of the two observations were determined from the corresponding blur kernels after the alignment step. [sent-418, score-0.712]
89 To verify our estimate of the TSFs, we generated blur kernels at the image points from where the patches were cropped and found that they were close to the true blur kernels. [sent-419, score-1.153]
90 We show the blur kernels of the patches in the supplementary material. [sent-420, score-0.698]
91 In the fourth row, the last patch shows the result of deblurring when depth variations were ignored. [sent-434, score-0.524]
92 The result of deblurring when parallax was ignored is shown in the fourth patch, wherein we see that deblurring is far from perfect. [sent-446, score-0.638]
93 The blur kernels and the depth maps of the real experiments are shown in the supplementary material. [sent-449, score-0.874]
94 Conclusions. We addressed the problem of restoring a bilayered scene when the captured observations are space-variantly motion blurred due to incidental camera shake. [sent-458, score-0.552]
95 For estimating the TSF corresponding to either the foreground or background, we proposed to use local blur kernels. [sent-460, score-0.476]
96 We developed a method to automatically group the blur kernels corresponding to their depth layers. [sent-461, score-0.855]
97 This enables us to estimate a sparse TSF that is consistent with the observed blur kernels of a depth layer. [sent-462, score-0.878]
98 Fourth row, last image: result of deblurring when depth changes were ignored. [sent-471, score-0.455]
99 In experiments on different synthetic and real data, our results reveal that the proposed non-uniform motion deblurring scheme is quite effective in accounting for parallax effects in bilayered scenes. [sent-473, score-0.507]
100 Non-uniform motion deblurring for camera shakes using image registration. [sent-493, score-0.44]
wordName wordTfidf (topN-words)
[('tsf', 0.607), ('blur', 0.451), ('deblurring', 0.24), ('depth', 0.215), ('tsfs', 0.207), ('blurred', 0.2), ('kernels', 0.189), ('blurring', 0.125), ('bilayer', 0.122), ('parallax', 0.095), ('camera', 0.094), ('translations', 0.09), ('transformations', 0.084), ('motion', 0.083), ('psfs', 0.076), ('observations', 0.072), ('psf', 0.066), ('undergone', 0.066), ('transformation', 0.062), ('bo', 0.06), ('blind', 0.054), ('hb', 0.054), ('mbi', 0.053), ('layers', 0.052), ('deconvolution', 0.048), ('exposure', 0.047), ('latent', 0.046), ('rotations', 0.045), ('bilayered', 0.044), ('vhb', 0.044), ('deblurred', 0.044), ('fourth', 0.042), ('reference', 0.041), ('hbi', 0.039), ('kernel', 0.039), ('shifts', 0.039), ('patches', 0.039), ('restored', 0.038), ('layer', 0.037), ('scene', 0.035), ('restoration', 0.034), ('map', 0.034), ('ty', 0.034), ('shake', 0.033), ('adjoint', 0.033), ('il', 0.033), ('tx', 0.03), ('gbi', 0.03), ('np', 0.029), ('estimated', 0.029), ('determine', 0.028), ('hc', 0.027), ('patch', 0.027), ('paramanand', 0.026), ('rnt', 0.026), ('burst', 0.026), ('accounting', 0.026), ('gb', 0.026), ('arrive', 0.025), ('relate', 0.025), ('convolution', 0.025), ('shift', 0.025), ('foreground', 0.025), ('spread', 0.024), ('rows', 0.024), ('accounted', 0.024), ('ns', 0.024), ('incidental', 0.024), ('displaced', 0.024), ('vh', 0.024), ('mb', 0.024), ('tai', 0.024), ('deno', 0.023), ('shakes', 0.023), ('estimate', 0.023), ('kq', 0.022), ('stabilization', 0.022), ('translation', 0.021), ('whyte', 0.021), ('wherein', 0.021), ('effect', 0.021), ('arrived', 0.02), ('multichannel', 0.02), ('supports', 0.02), ('cho', 0.02), ('modeled', 0.02), ('relative', 0.019), ('functional', 0.019), ('supplementary', 0.019), ('canon', 0.019), ('initially', 0.019), ('quite', 0.019), ('cost', 0.019), ('ij', 0.018), ('nt', 0.018), ('consequently', 0.018), ('operation', 0.017), ('observation', 0.017), ('point', 0.017), ('kp', 0.017)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000004 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes
Author: Chandramouli Paramanand, Ambasamudram N. Rajagopalan
Abstract: We address the problem of estimating the latent image of a static bilayer scene (consisting of a foreground and a background at different depths) from motion blurred observations captured with a handheld camera. The camera motion is considered to be composed of in-plane rotations and translations. Since the blur at an image location depends both on camera motion and depth, deblurring becomes a difficult task. We initially propose a method to estimate the transformation spread function (TSF) corresponding to one of the depth layers. The estimated TSF (which reveals the camera motion during exposure) is used to segment the scene into the foreground and background layers and determine the relative depth value. The deblurred image of the scene is finally estimated within a regularization framework by accounting for blur variations due to camera motion as well as depth.
2 0.53051031 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera
Author: Hee Seok Lee, Kuoung Mu Lee
Abstract: Motion blur frequently occurs in dense 3D reconstruction using a single moving camera, and it degrades the quality of the 3D reconstruction. To handle motion blur caused by rapid camera shakes, we propose a blur-aware depth reconstruction method, which utilizes a pixel correspondence that is obtained by considering the effect of motion blur. Motion blur is dependent on 3D geometry, thus parameterizing blurred appearance of images with scene depth given camera motion is possible and a depth map can be accurately estimated from the blur-considered pixel correspondence. The estimated depth is then converted intopixel-wise blur kernels, and non-uniform motion blur is easily removed with low computational cost. The obtained blur kernel is depth-dependent, thus it effectively addresses scene-depth variation, which is a challenging problem in conventional non-uniform deblurring methods.
3 0.49068743 265 cvpr-2013-Learning to Estimate and Remove Non-uniform Image Blur
Author: Florent Couzinié-Devy, Jian Sun, Karteek Alahari, Jean Ponce
Abstract: This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multilabel energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa ’s method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti et al. [5].
4 0.42456988 131 cvpr-2013-Discriminative Non-blind Deblurring
Author: Uwe Schmidt, Carsten Rother, Sebastian Nowozin, Jeremy Jancsary, Stefan Roth
Abstract: Non-blind deblurring is an integral component of blind approaches for removing image blur due to camera shake. Even though learning-based deblurring methods exist, they have been limited to the generative case and are computationally expensive. To this date, manually-defined models are thus most widely used, though limiting the attained restoration quality. We address this gap by proposing a discriminative approach for non-blind deblurring. One key challenge is that the blur kernel in use at test time is not known in advance. To address this, we analyze existing approaches that use half-quadratic regularization. From this analysis, we derive a discriminative model cascade for image deblurring. Our cascade model consists of a Gaussian CRF at each stage, based on the recently introduced regression tree fields. We train our model by loss minimization and use synthetically generated blur kernels to generate training data. Our experiments show that the proposed approach is efficient and yields state-of-the-art restoration quality on images corrupted with synthetic and real blur.
5 0.34464976 449 cvpr-2013-Unnatural L0 Sparse Representation for Natural Image Deblurring
Author: Li Xu, Shicheng Zheng, Jiaya Jia
Abstract: We show in this paper that the success of previous maximum a posterior (MAP) based blur removal methods partly stems from their respective intermediate steps, which implicitly or explicitly create an unnatural representation containing salient image structures. We propose a generalized and mathematically sound L0 sparse expression, together with a new effective method, for motion deblurring. Our system does not require extra filtering during optimization and demonstrates fast energy decreasing, making a small number of iterations enough for convergence. It also provides a unifiedframeworkfor both uniform andnon-uniform motion deblurring. We extensively validate our method and show comparison with other approaches with respect to convergence speed, running time, and result quality.
6 0.32150841 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior
7 0.2942872 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters
8 0.23516805 68 cvpr-2013-Blur Processing Using Double Discrete Wavelet Transform
9 0.19849938 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras
10 0.16619553 397 cvpr-2013-Simultaneous Super-Resolution of Depth and Images Using a Single Camera
11 0.1609146 17 cvpr-2013-A Machine Learning Approach for Non-blind Image Deconvolution
12 0.14637218 412 cvpr-2013-Stochastic Deconvolution
13 0.11587848 117 cvpr-2013-Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-Mounted Camera
14 0.10959827 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures
15 0.097639538 115 cvpr-2013-Depth Super Resolution by Rigid Body Self-Similarity in 3D
16 0.096750326 111 cvpr-2013-Dense Reconstruction Using 3D Object Shape Priors
17 0.095040433 232 cvpr-2013-Joint Geodesic Upsampling of Depth Images
18 0.088057742 56 cvpr-2013-Bayesian Depth-from-Defocus with Shading Constraints
19 0.087927796 227 cvpr-2013-Intrinsic Scene Properties from a Single RGB-D Image
20 0.085164845 357 cvpr-2013-Revisiting Depth Layers from Occlusions
topicId topicWeight
[(0, 0.163), (1, 0.279), (2, -0.017), (3, 0.156), (4, -0.193), (5, 0.49), (6, 0.049), (7, 0.027), (8, 0.055), (9, 0.001), (10, -0.027), (11, -0.029), (12, 0.017), (13, 0.08), (14, 0.024), (15, -0.039), (16, -0.023), (17, 0.046), (18, -0.091), (19, -0.029), (20, 0.01), (21, -0.046), (22, -0.001), (23, -0.003), (24, 0.022), (25, -0.002), (26, 0.02), (27, 0.036), (28, 0.002), (29, 0.02), (30, -0.003), (31, 0.054), (32, 0.036), (33, 0.008), (34, 0.037), (35, -0.046), (36, -0.02), (37, 0.025), (38, 0.02), (39, -0.005), (40, 0.006), (41, 0.003), (42, -0.002), (43, 0.069), (44, -0.009), (45, -0.04), (46, -0.029), (47, -0.006), (48, 0.006), (49, 0.016)]
simIndex simValue paperId paperTitle
same-paper 1 0.9452647 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes
Author: Chandramouli Paramanand, Ambasamudram N. Rajagopalan
Abstract: We address the problem of estimating the latent image of a static bilayer scene (consisting of a foreground and a background at different depths) from motion blurred observations captured with a handheld camera. The camera motion is considered to be composed of in-plane rotations and translations. Since the blur at an image location depends both on camera motion and depth, deblurring becomes a difficult task. We initially propose a method to estimate the transformation spread function (TSF) corresponding to one of the depth layers. The estimated TSF (which reveals the camera motion during exposure) is used to segment the scene into the foreground and background layers and determine the relative depth value. The deblurred image of the scene is finally estimated within a regularization framework by accounting for blur variations due to camera motion as well as depth.
2 0.91033161 265 cvpr-2013-Learning to Estimate and Remove Non-uniform Image Blur
Author: Florent Couzinié-Devy, Jian Sun, Karteek Alahari, Jean Ponce
Abstract: This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multilabel energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa ’s method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti et al. [5].
3 0.90416807 68 cvpr-2013-Blur Processing Using Double Discrete Wavelet Transform
Author: Yi Zhang, Keigo Hirakawa
Abstract: We propose a notion of double discrete wavelet transform (DDWT) that is designed to sparsify the blurred image and the blur kernel simultaneously. DDWT greatly enhances our ability to analyze, detect, and process blur kernels and blurry images—the proposed framework handles both global and spatially varying blur kernels seamlessly, and unifies the treatment of blur caused by object motion, optical defocus, and camera shake. To illustrate the potential of DDWT in computer vision and image processing, we develop example applications in blur kernel estimation, deblurring, and near-blur-invariant image feature extraction.
4 0.87736893 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera
Author: Hee Seok Lee, Kuoung Mu Lee
Abstract: Motion blur frequently occurs in dense 3D reconstruction using a single moving camera, and it degrades the quality of the 3D reconstruction. To handle motion blur caused by rapid camera shakes, we propose a blur-aware depth reconstruction method, which utilizes a pixel correspondence that is obtained by considering the effect of motion blur. Motion blur is dependent on 3D geometry, thus parameterizing blurred appearance of images with scene depth given camera motion is possible and a depth map can be accurately estimated from the blur-considered pixel correspondence. The estimated depth is then converted intopixel-wise blur kernels, and non-uniform motion blur is easily removed with low computational cost. The obtained blur kernel is depth-dependent, thus it effectively addresses scene-depth variation, which is a challenging problem in conventional non-uniform deblurring methods.
5 0.8372556 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior
Author: Haichao Zhang, David Wipf, Yanning Zhang
Abstract: This paper presents a robust algorithm for estimating a single latent sharp image given multiple blurry and/or noisy observations. The underlying multi-image blind deconvolution problem is solved by linking all of the observations together via a Bayesian-inspired penalty function which couples the unknown latent image, blur kernels, and noise levels together in a unique way. This coupled penalty function enjoys a number of desirable properties, including a mechanism whereby the relative-concavity or shape is adapted as a function of the intrinsic quality of each blurry observation. In this way, higher quality observations may automatically contribute more to the final estimate than heavily degraded ones. The resulting algorithm, which requires no essential tuning parameters, can recover a high quality image from a set of observations containing potentially both blurry and noisy examples, without knowing a priorithe degradation type of each observation. Experimental results on both synthetic and real-world test images clearly demonstrate the efficacy of the proposed method.
6 0.8248719 449 cvpr-2013-Unnatural L0 Sparse Representation for Natural Image Deblurring
7 0.81665468 131 cvpr-2013-Discriminative Non-blind Deblurring
8 0.80950713 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters
9 0.66213268 17 cvpr-2013-A Machine Learning Approach for Non-blind Image Deconvolution
10 0.65915906 412 cvpr-2013-Stochastic Deconvolution
12 0.49123374 397 cvpr-2013-Simultaneous Super-Resolution of Depth and Images Using a Single Camera
13 0.42442399 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras
14 0.4097259 232 cvpr-2013-Joint Geodesic Upsampling of Depth Images
15 0.40339142 56 cvpr-2013-Bayesian Depth-from-Defocus with Shading Constraints
16 0.40069184 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns
17 0.39468724 117 cvpr-2013-Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-Mounted Camera
18 0.38453454 115 cvpr-2013-Depth Super Resolution by Rigid Body Self-Similarity in 3D
19 0.35749194 407 cvpr-2013-Spatio-temporal Depth Cuboid Similarity Feature for Activity Recognition Using Depth Camera
20 0.35562703 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures
topicId topicWeight
[(10, 0.564), (16, 0.014), (26, 0.021), (33, 0.192), (67, 0.025), (69, 0.019), (87, 0.049)]
simIndex simValue paperId paperTitle
1 0.9195078 295 cvpr-2013-Multi-image Blind Deblurring Using a Coupled Adaptive Sparse Prior
Author: Haichao Zhang, David Wipf, Yanning Zhang
Abstract: This paper presents a robust algorithm for estimating a single latent sharp image given multiple blurry and/or noisy observations. The underlying multi-image blind deconvolution problem is solved by linking all of the observations together via a Bayesian-inspired penalty function which couples the unknown latent image, blur kernels, and noise levels together in a unique way. This coupled penalty function enjoys a number of desirable properties, including a mechanism whereby the relative-concavity or shape is adapted as a function of the intrinsic quality of each blurry observation. In this way, higher quality observations may automatically contribute more to the final estimate than heavily degraded ones. The resulting algorithm, which requires no essential tuning parameters, can recover a high quality image from a set of observations containing potentially both blurry and noisy examples, without knowing a priorithe degradation type of each observation. Experimental results on both synthetic and real-world test images clearly demonstrate the efficacy of the proposed method.
same-paper 2 0.90596741 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes
Author: Chandramouli Paramanand, Ambasamudram N. Rajagopalan
Abstract: We address the problem of estimating the latent image of a static bilayer scene (consisting of a foreground and a background at different depths) from motion blurred observations captured with a handheld camera. The camera motion is considered to be composed of in-plane rotations and translations. Since the blur at an image location depends both on camera motion and depth, deblurring becomes a difficult task. We initially propose a method to estimate the transformation spread function (TSF) corresponding to one of the depth layers. The estimated TSF (which reveals the camera motion during exposure) is used to segment the scene into the foreground and background layers and determine the relative depth value. The deblurred image of the scene is finally estimated within a regularization framework by accounting for blur variations due to camera motion as well as depth.
3 0.89767158 154 cvpr-2013-Explicit Occlusion Modeling for 3D Object Class Representations
Author: M. Zeeshan Zia, Michael Stark, Konrad Schindler
Abstract: Despite the success of current state-of-the-art object class detectors, severe occlusion remains a major challenge. This is particularly true for more geometrically expressive 3D object class representations. While these representations have attracted renewed interest for precise object pose estimation, the focus has mostly been on rather clean datasets, where occlusion is not an issue. In this paper, we tackle the challenge of modeling occlusion in the context of a 3D geometric object class model that is capable of fine-grained, part-level 3D object reconstruction. Following the intuition that 3D modeling should facilitate occlusion reasoning, we design an explicit representation of likely geometric occlusion patterns. Robustness is achieved by pooling image evidence from of a set of fixed part detectors as well as a non-parametric representation of part configurations in the spirit of poselets. We confirm the potential of our method on cars in a newly collected data set of inner-city street scenes with varying levels of occlusion, and demonstrate superior performance in occlusion estimation and part localization, compared to baselines that are unaware of occlusions.
4 0.89320076 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?
Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello
Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we dai s .unive . it ence points in the scene with their projections on the image plane [5]. Unfortunately, no real camera behaves exactly like an ideal pinhole. In fact, in most cases, at least the distortion effects introduced by the lens should be accounted for [19]. Any pinhole-based model, regardless of its level of sophistication, is geometrically unable to properly describe cameras exhibiting a frustum angle that is near or above 180 degrees. For wide-angle cameras, several different para- metric models have been proposed. Some of them try to modify the captured image in order to follow the original propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasipinhole central cameras is supported by an extensive experimental validation.
5 0.88701844 90 cvpr-2013-Computing Diffeomorphic Paths for Large Motion Interpolation
Author: Dohyung Seo, Jeffrey Ho, Baba C. Vemuri
Abstract: In this paper, we introduce a novel framework for computing a path of diffeomorphisms between a pair of input diffeomorphisms. Direct computation of a geodesic path on the space of diffeomorphisms Diff(Ω) is difficult, and it can be attributed mainly to the infinite dimensionality of Diff(Ω). Our proposed framework, to some degree, bypasses this difficulty using the quotient map of Diff(Ω) to the quotient space Diff(M)/Diff(M)μ obtained by quotienting out the subgroup of volume-preserving diffeomorphisms Diff(M)μ. This quotient space was recently identified as the unit sphere in a Hilbert space in mathematics literature, a space with well-known geometric properties. Our framework leverages this recent result by computing the diffeomorphic path in two stages. First, we project the given diffeomorphism pair onto this sphere and then compute the geodesic path between these projected points. Sec- ond, we lift the geodesic on the sphere back to the space of diffeomerphisms, by solving a quadratic programming problem with bilinear constraints using the augmented Lagrangian technique with penalty terms. In this way, we can estimate the path of diffeomorphisms, first, staying in the space of diffeomorphisms, and second, preserving shapes/volumes in the deformed images along the path as much as possible. We have applied our framework to interpolate intermediate frames of frame-sub-sampled video sequences. In the reported experiments, our approach compares favorably with the popular Large Deformation Diffeomorphic Metric Mapping framework (LDDMM).
6 0.87081945 386 cvpr-2013-Self-Paced Learning for Long-Term Tracking
7 0.85211998 3 cvpr-2013-3D R Transform on Spatio-temporal Interest Points for Action Recognition
8 0.85020781 186 cvpr-2013-GeoF: Geodesic Forests for Learning Coupled Predictors
9 0.81279939 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters
10 0.81073356 458 cvpr-2013-Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds
11 0.78646636 462 cvpr-2013-Weakly Supervised Learning of Mid-Level Features with Beta-Bernoulli Process Restricted Boltzmann Machines
12 0.7499702 324 cvpr-2013-Part-Based Visual Tracking with Online Latent Structural Learning
13 0.74617922 193 cvpr-2013-Graph Transduction Learning with Connectivity Constraints with Application to Multiple Foreground Cosegmentation
14 0.72916764 131 cvpr-2013-Discriminative Non-blind Deblurring
15 0.72578245 314 cvpr-2013-Online Object Tracking: A Benchmark
16 0.7079125 414 cvpr-2013-Structure Preserving Object Tracking
17 0.70585233 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking
18 0.69657183 360 cvpr-2013-Robust Estimation of Nonrigid Transformation for Point Set Registration
19 0.69620305 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems