iccv iccv2013 iccv2013-293 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Tomer Michaeli, Michal Irani
Abstract: Super resolution (SR) algorithms typically assume that the blur kernel is known (either the Point Spread Function ‘PSF’ of the camera, or some default low-pass filter, e.g. a Gaussian). However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. We propose a general framework for “blind” super resolution. In particular, we show that: (i) Unlike the common belief, the PSF of the camera is the wrong blur kernel to use in SR algorithms. (ii) We show how the correct SR blur kernel can be recovered directly from the low-resolution image. This is done by exploiting the inherent recurrence property of small natural image patches (either internally within the same image, or externally in a collection of other natural images). In particular, we show that recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR), can also be used for estimating the optimal blur kernel. This leads to significant improvement in SR results.
Reference: text
sentIndex sentText sentNum sentScore
1 Dept. of Computer Science and Applied Mathematics, Weizmann Institute of Science, Israel. Abstract: Super resolution (SR) algorithms typically assume that the blur kernel is known (either the Point Spread Function ‘PSF’ of the camera, or some default low-pass filter, e.g. a Gaussian). [sent-2, score-0.604]
2 However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. [sent-5, score-0.548]
3 In particular, we show that: (i) Unlike the common belief, the PSF of the camera is the wrong blur kernel to use in SR algorithms. [sent-7, score-0.563]
4 (ii) We show how the correct SR blur kernel can be recovered directly from the low-resolution image. [sent-8, score-0.62]
5 This is done by exploiting the inherent recurrence property of small natural image patches (either internally within the same image, or externally in a collection of other natural images). [sent-9, score-0.345]
6 In particular, we show that recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR), can also be used for estimating the optimal blur kernel. [sent-10, score-0.6]
7 When the PSF is unknown, the blur kernel is assumed to be some standard low-pass filter (LPF) like a Gaussian or a bicubic kernel. [sent-18, score-0.621]
8 Relying on the wrong blur kernel may lead to low-quality SR results, as demonstrated in Fig. [sent-20, score-0.532]
9 Moreover, we show that unlike the common belief, even if the PSF is known, it is the wrong blur kernel to use in SR algorithms! [sent-22, score-0.532]
10 We further show how to obtain the optimal SR blur kernel directly from the low-resolution image. [sent-23, score-0.544]
11 A very limited amount of work has been dedicated to “blind SR”, namely SR in which the blur kernel is not assumed known. [sent-24, score-0.584]
12 Most methods in this category assume some parametric model for the kernel (e. [sent-25, score-0.339]
13 A nonparametric kernel recovery method was presented in prior work. [Figure 1 panels: low-resolution image; default kernel; recovered kernel.] Figure 1: Blind SR on an old low-quality image (end of World War II) downloaded from the Internet. [sent-29, score-0.492]
14 The blur kernel was recovered directly from the low-res image (see Sec. [sent-30, score-0.593]
15 This method assumes that the kernel has a single peak, which is a restrictive assumption in the presence of motion blur. [sent-34, score-0.312]
16 In [18, 9], methods for jointly estimating the high-res image and a nonparametric kernel were developed. [sent-35, score-0.358]
17 Our method is based on the universal property that natural image patches tend to recur abundantly, both across scales of the same image, as well as in an external database of other natural images [22, 8, 7]. [sent-38, score-0.641]
18 First, we address the question: What is the optimal blur kernel relating the unknown high-res image to the input low-res image? [sent-40, score-0.662]
19 As mentioned above, we analytically show that, in contrast to the common belief, the optimal blur kernel k is not the PSF. [sent-41, score-0.544]
20 Our second contribution is the observation that k can be estimated by relying on patch recurrence across scales of the input low-res image, a property which has been previously used for (non-blind) single-image SR [8, 4]. [sent-43, score-0.323]
21 In particular, we show that the kernel that maximizes the similarity of recurring patches across scales of the low-res image is also the optimal SR kernel. [sent-44, score-0.696]
22 Many example-based SR algorithms rely on an external database of low-res and high-res pairs of patches extracted from a large collection of high-quality example images [20, 21, 7, 2]. [sent-46, score-0.504]
23 They too assume that the blur kernel k is known a-priori (and use it to generate the low-res versions of the high-res examples). [sent-47, score-0.525]
24 We show how our kernel estimation algorithm can be modified to work with an external database of images, recovering the optimal kernel relating the low-res image to the external high-res examples. [sent-48, score-1.335]
25 Our last contribution is a proof that our algorithm computes the MAP estimate of the kernel, as opposed to the joint MAP (over the kernel and high-res image) strategy of [19, 10, 18, 9]. [sent-49, score-0.312]
26 We show that plugging our estimated kernel into existing super-resolution algorithms results in improved reconstructions that are comparable to using the ground-truth kernel. [sent-51, score-0.36]
27 In fact this is the SR blur kernel we are interested in. [sent-79, score-0.505]
28 While k is often assumed to resemble the PSF, we show next that the optimal SR kernel is neither a simple discretization nor an approximation of the PSF. [sent-80, score-0.394]
29 In other words, bL is a linear combination of translated versions of bH, and the coefficients of this representation constitute the SR kernel k. [sent-87, score-0.352]
30 Counter-intuitively, in certain settings the optimal blur kernel k relating h and l does not share much resemblance to the PSF bL. [sent-91, score-0.618]
31 The physical interpretation of the kernel k can be intuitively understood from Fig. [sent-95, score-0.341]
32 If the PSF is an ideal low-pass filter (a sinc in the image domain; a rect in the Fourier domain), then the kernel kc indeed equals the PSF bL, because rect(ω)/rect(ω/α) = rect(ω). [sent-98, score-0.551]
33 Consequently, the division by BL(ω/α) amplifies the high frequencies in Kc(ω) with respect to BL(ω), implying that the optimal kernel is usually narrower than the PSF. [sent-100, score-0.412]
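To make the "narrower than the PSF" claim concrete, here is a minimal numerical sketch (not from the paper; the 1-D Gaussian PSF, grid size and tiny regularizer are assumptions). Dividing BL(ω) by BL(ω/α) for a Gaussian PSF of width σ yields a Gaussian of width σ·sqrt(1 − 1/α²), i.e., about 0.87σ for α = 2:

```python
import numpy as np

def gauss(x, sigma):
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

alpha, sigma = 2, 2.0
x = np.arange(-16, 17)                    # 1-D support, assumed wide enough
bL = gauss(x, sigma)                      # low-res PSF
bH = gauss(x, sigma / alpha)              # same PSF at alpha times the sampling density
BL = np.fft.fft(np.fft.ifftshift(bL))
BH = np.fft.fft(np.fft.ifftshift(bH))
k = np.fft.fftshift(np.real(np.fft.ifft(BL / (BH + 1e-8))))   # Kc = BL / BH

# For Gaussians the ratio is again a Gaussian, of width sigma*sqrt(1 - 1/alpha**2),
# which is narrower than the PSF bL itself.
expected = gauss(x, sigma * np.sqrt(1 - 1 / alpha ** 2))
print(np.abs(k - expected).max())         # small residual
```

The same spectral division, applied to the PSFs of two different cameras, gives the kernel used later for SR with external examples.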
34 (5) are known to be samples of a function of this type. [Figure 3 panels: naive discretization of the PSF bL(x); optimal blur k computed from (5).] Figure 3: The optimal blur kernel is not a simple discretization of the low-res PSF bL(x) (computed for α = 2). [sent-107, score-0.544]
35 Kernel estimation using internal examples. Recurrence of small image patches (e. [sent-111, score-0.278]
36 We next show how the recurrence of patches across scales can also be exploited to recover the correct blur kernel k relating the unknown high-res image h with the low-res image l. [sent-116, score-1.041]
37 In fact, we show that the kernel which maximizes similarity of recurring patches across scales of the low-res image l is also the optimal kernel k between images h and l. [sent-117, score-1.008]
38 The observation that small image patches recur across scales of an image, implies that small patterns recur in the continuous scene at multiple sizes. [sent-118, score-0.449]
39 These two continuous patterns are observed in the low-res image as two discrete patterns of different sizes, contained in patches q and r (see Fig. [sent-123, score-0.278]
40 We next show that the low-res patches q and r are related to each other by blur and subsampling with the (unknown) optimal blur kernel k derived in Eq. [sent-125, score-0.913]
41 This observation forms the basis for our kernel recovery algorithm. [sent-129, score-0.423]
42 (10) entails that these low-res patches are related by the unknown optimal SR kernel k: q = (r ∗ k) ↓α. [sent-147, score-0.571]
43 If the coarse image is generated with the kernel k, then rα = q. [sent-149, score-0.312]
44 Our claim is that q corresponds to a down-sampled version of r with the optimal SR kernel k derived in Sec. [sent-158, score-0.401]
45 (10) induces a linear constraint on the unknown coefficients of the kernel k. [sent-167, score-0.356]
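Concretely (a minimal sketch, not the authors' implementation; the function names, the valid-region convention and the unflipped correlation convention are assumptions), each matched pair of a child patch q and its parent r contributes one linear equation per pixel of q, since every pixel of q = (r ∗ k)↓α is an inner product between a window of r and the kernel coefficients:

```python
import numpy as np

def constraint_matrix(r, ksize, alpha):
    """Rows map the kernel coefficients k.ravel() to the pixels of the parent r
    blurred with k and subsampled by alpha, over the valid region.
    (Correlation convention: for a symmetric kernel the flip is immaterial.)"""
    kh, kw = ksize
    H, W = r.shape
    rows = []
    for i in range(0, H - kh + 1, alpha):
        for j in range(0, W - kw + 1, alpha):
            rows.append(r[i:i + kh, j:j + kw].ravel())
    return np.asarray(rows)

def solve_kernel(pairs, ksize, alpha):
    """Stack the constraints of all (child q, parent r) pairs and solve for the
    kernel in the least-squares sense."""
    A = np.vstack([constraint_matrix(r, ksize, alpha) for _, r in pairs])
    b = np.concatenate([q.ravel() for q, _ in pairs])
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k.reshape(ksize)
```

Here each q is assumed to be the valid-region down-sampled child of its parent r, so its pixel count matches the number of rows produced for r.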
46 (10) implies that the correct blur kernel k is also the one which maximizes similarity of NNs across scales in the low-res image l. [sent-176, score-0.632]
47 We use this property in our algorithm to obtain the optimal kernel k. [sent-177, score-0.376]
48 Next, for each small patch qi in l we find a few nearest neighbors (NNs) in lα and regard the large patches right above them as the candidate “parents” of qi. [sent-180, score-0.369]
49 Note that the least-squares step does not recover the initial kernel we use to down-sample the image, but rather a kernel that is closer to the true k. [sent-182, score-0.668]
50 For example, recovering a 7 × 7 discrete kernel k relating high-res h with low-res l (49 unknowns) may be done with as few as one good 7 × 7 patch recurring in scales l and lα (providing 49 equations). [sent-191, score-0.617]
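A quick synthetic check of this counting argument, using the sketch above (the 19 × 19 parent size is an assumption chosen so that its valid ×2-down-sampled child under a 7 × 7 kernel is exactly 7 × 7, i.e., 49 equations for 49 unknowns):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, ksize = 2, (7, 7)
k_true = rng.random(ksize); k_true /= k_true.sum()   # some ground-truth 7x7 kernel
r = rng.random((19, 19))                             # one "parent" patch
R = constraint_matrix(r, ksize, alpha)               # 49 equations ...
q = (R @ k_true.ravel()).reshape(7, 7)               # ... and its 7x7 child patch

k_est = solve_kernel([(q, r)], ksize, alpha)         # one good patch suffices
print(np.allclose(k_est, k_true))                    # True for a generic parent
```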
51 Figs. 1, 5 and 9 show examples of single-image SR with the method of [8], once using their default (bicubic) kernel, and once using our kernel recovered from the low-res image. [sent-198, score-0.499]
52 Kernel estimation using external examples. Many example-based SR algorithms rely on an external database of high-res patches extracted from a large collection of high-res images [20, 21, 7, 2]. [sent-200, score-0.765]
53 They too assume that the blur kernel k is known a-priori, and use it to downsample the images in the database in order to obtain pairs of low-res and high-res patches. [sent-201, score-0.55]
54 We first explain the physical interpretation of the optimal kernel k when using an external database of examples, and then show how to estimate this optimal k. [sent-203, score-0.725]
55 Let us assume, for simplicity, that all the high-res images in the external database were taken by a single camera with a single PSF. [sent-204, score-0.337]
56 Since the external images serve as examples of the high-res patches in SR, this implicitly means that the high-res PSF bH is the PSF of the external camera. [sent-205, score-0.698]
57 The external camera, however, is most likely not the same as the camera imaging the low-res input image l (the “internal” camera). [sent-206, score-0.292]
58 Namely the kernel k relating the high-res and low-res images is still given by Eqs. [sent-212, score-0.386]
59 (6), the intuitive understanding of the optimal kernel k when using external high-res examples is (in the Fourier domain): Kc(ω) = BL(ω)/BH(ω) = PSFInternal(ω)/PSFExternal(ω). [sent-215, score-0.612]
60 (11) Thus, in SR from external examples the high-res patches correspond to the PSF bH of the external camera, and the low-res patches generated from them by downsampling with k should correspond, by construction, to the low-res PSF bL of the internal camera. [sent-216, score-0.976]
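A small sanity check of this construction (a sketch with assumed 1-D Gaussian PSFs and widths, not data from the paper): when the internal camera blurs more than the external one, a Gaussian k of width sqrt(σint² − σext²) indeed maps the external PSF to the internal one, which is exactly the role the relation above assigns to k:

```python
import numpy as np

def gauss(x, s):
    g = np.exp(-0.5 * (x / s) ** 2)
    return g / g.sum()

x = np.arange(-12, 13)
s_int, s_ext = 2.0, 1.2                  # assumed internal / external PSF widths
b_int = gauss(x, s_int)                  # PSF of the camera behind the low-res input
b_ext = gauss(x, s_ext)                  # PSF of the camera behind the external examples
k = gauss(x, np.sqrt(s_int ** 2 - s_ext ** 2))
# b_ext convolved with k reproduces b_int (up to truncation error)
print(np.abs(np.convolve(b_ext, k, mode='same') - b_int).max())
```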
61 This k is generally unknown (and is assumed by external SR methods to be some default kernel, like a Gaussian or a bicubic kernel). [sent-217, score-0.52]
62 Determining the optimal kernel k for external SR can be done in the same manner as for internal SR (Sec. [sent-218, score-0.714]
63 1), with the only exception that the “parent” patches {ri} are now sought in an external database rather than within the input image. [sent-220, score-0.482]
64 As before, we start with an initial guess for the kernel k. [sent-221, score-0.369]
65 We down-sample the external patches {ri} with it to obtain their low-res versions {rαi}. [sent-222, score-0.457]
66 These “parent-child” pairs (q, r) are used to recover a more accurate kernel via a least-squares solution to a system of linear equations. [sent-224, score-0.335]
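Putting the steps of this paragraph together, a minimal iterative sketch (an illustrative structure, not the authors' code; it reuses constraint_matrix and solve_kernel from the earlier sketch, and the patch sizes, delta initialization and plain nearest-neighbor search are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_kernel_external(l, parents, alpha=2, ksize=(7, 7), iters=10):
    """l       : low-res input image (2-D array).
    parents : list of 15x15 high-res example patches {ri} from the database
              (15 so that each valid x2-down-sampled child is 5x5 under a 7x7 k).
    Each iteration down-samples the parents with the current kernel estimate,
    matches every query patch of l to its nearest down-sampled parent, and
    refits k by least squares over the matched (q, r) pairs."""
    p = (parents[0].shape[0] - ksize[0]) // alpha + 1            # child patch size (5)
    k = np.zeros(ksize); k[ksize[0] // 2, ksize[1] // 2] = 1.0   # initial guess: a delta
    qpos = [(i, j) for i in range(0, l.shape[0] - p + 1, p)
                   for j in range(0, l.shape[1] - p + 1, p)]
    qs = np.array([l[i:i + p, j:j + p].ravel() for i, j in qpos])
    Rs = [constraint_matrix(r, ksize, alpha) for r in parents]   # see earlier sketch
    for _ in range(iters):
        downs = np.array([R @ k.ravel() for R in Rs])            # down-sampled parents
        _, nn = cKDTree(downs).query(qs, k=1)                    # nearest parent per query
        pairs = [(qs[m].reshape(p, p), parents[j]) for m, j in enumerate(nn)]
        k = solve_kernel(pairs, ksize, alpha)                    # least-squares refit
        k = np.clip(k, 0, None); k /= k.sum() + 1e-12            # non-negative, sums to 1
    return k
```

The internal variant is the same loop with the parent patches taken from the input image itself (the large patches above the nearest neighbors found in lα).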
67 5 provides an example of external SR with the algorithm of [21], once using their default (bicubic) kernel k, and once using our kernel k estimated from their external examples. [sent-227, score-1.27]
68 Interpretation as MAP estimation. We next show that both our approaches to kernel estimation (internal and external) can be viewed as principled Maximum a Posteriori (MAP) estimation. [sent-229, score-0.312]
69 Some existing blind SR approaches attempt to simultaneously estimate the high-res image h and the kernel k [19, 10, 18, 9]. [sent-230, score-0.486]
70 In Sec. 4 we used a collection of patches {ri} from an external database as candidates for constituting “parents” to small patches from the input image l. [sent-238, score-0.68]
71 Common to both approaches, therefore, is the use of a set of patches {ri} which constitute a good nonparametric representation of the probability distribution of high-res patches acquired with the PSF bH. [sent-241, score-0.418]
72 Then, every patch qi in l can be expressed in terms of the corresponding high-res patch hi in h as qi = K hi + ni. [sent-247, score-0.445]
73 (16) on k we note that the term Krj can be equivalently written as Rjk, where k is the column-vector representation of the kernel and Rj is a matrix corresponding to convolution with rj and sub-sampling by α. [sent-289, score-0.374]
74 A kernel k achieving a good score is such that if it is used to down-sample the training patches in the database, then each of our query patches {qi} should find as many good nearest neighbors (NNs) as possible. [sent-306, score-0.685]
75 Minimizing (17) can be done in an iterative manner, whereby in each step we down-sample the training patches using the current estimate of k, find nearest neighbors to the query patches {qi}, and update the kernel estimate by solving a weighted least-squares problem. [sent-307, score-0.373]
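One such iteration can be sketched as a weighted least-squares update (an assumed form: σ, the soft nearest-neighbor weights and the pruning threshold are illustrative; each matrix in Rs plays the role of "convolve with rj and subsample by α" from the text):

```python
import numpy as np

def weighted_ls_step(qs, Rs, k, sigma=0.05):
    """qs : list of flattened low-res query patches q_i.
    Rs : list of matrices, one per training patch r_j, mapping the flattened
         kernel to the down-sampled version of r_j.
    k  : current kernel estimate, flattened. Returns the refitted kernel."""
    A_rows, b_rows = [], []
    for q in qs:
        preds = np.stack([R @ k for R in Rs])        # down-sampled r_j under current k
        d2 = ((preds - q) ** 2).sum(axis=1)          # distance of q to each candidate
        w = np.exp(-0.5 * d2 / sigma ** 2)
        w /= w.sum() + 1e-12                         # soft nearest-neighbor weights
        for wj, R in zip(w, Rs):
            if wj < 1e-6:                            # negligible weight: skip (cf. note below)
                continue
            A_rows.append(np.sqrt(wj) * R)
            b_rows.append(np.sqrt(wj) * q)
    A, b = np.vstack(A_rows), np.concatenate(b_rows)
    k_new, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k_new
```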
76 Note that in practice, only those patches in the database whose distance to q is small (not much larger than σ) are assigned non-negligible weights, so that the number of required NNs per low-res patch is typically small. [sent-311, score-0.282]
77 Experimental results. We validated the benefit of using our kernel estimation in SR algorithms both empirically (on low-res images generated with ground-truth data) and visually (on real images). [sent-315, score-0.312]
78 We use the method of [8] as a representative of SR methods that rely on internal patch recurrence, and the algorithm of [21] as representative of SR methods that train on an external database of examples. [sent-316, score-0.469]
79 For the external kernel recovery we used a database of 30 natural images downloaded from the Internet (most likely captured by different cameras). [sent-321, score-0.731]
80 To quantify the effect of our estimated kernel on SR algorithms, we use two measures. [sent-324, score-0.337]
81 Values close to 1 indicate that the estimated kernel kˆ is nearly as good as the ground-truth k. [sent-330, score-0.337]
82 The Error Ratio to Default (ERD) measure quantifies the benefit of using the estimated kernel kˆ over the default (bicubic) kernel kd, and is defined as the ratio between the SR error obtained with kˆ and the SR error obtained with kd (so ERD < 1 indicates an improvement over the default kernel). [sent-331, score-0.748]
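For concreteness, the two measures can be computed as simple error ratios (a sketch; using RMSE as the error metric is an assumption, and the argument names are illustrative):

```python
import numpy as np

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def ergt(sr_with_estimated_k, sr_with_gt_k, ground_truth):
    """Error Ratio to Ground Truth: values near 1 mean the estimated kernel
    performs about as well as the ground-truth kernel."""
    return rmse(sr_with_estimated_k, ground_truth) / rmse(sr_with_gt_k, ground_truth)

def erd(sr_with_estimated_k, sr_with_default_k, ground_truth):
    """Error Ratio to Default: values below 1 mean the estimated kernel
    improves over the default (bicubic) kernel."""
    return rmse(sr_with_estimated_k, ground_truth) / rmse(sr_with_default_k, ground_truth)
```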
83 5 shows that plugging our recovered kernel kˆ into the SR methods of [8] and [21] leads to substantial improvement in the resulting high-res image over using their assumed default (bicubic) kernel. [sent-338, score-0.565]
84 Indeed, [8] with internal kernel recovery achieves ERD = 0. [sent-341, score-0.505]
85 02 and [21] with external kernel recovery achieves ERD = 0. [sent-343, score-0.664]
86 Fig. 6 shows the convergence of the internal kernel estimation algorithm applied to the low-res input image of Fig. [sent-347, score-0.414]
87 This demonstrates that our algorithm indeed maximizes the similarity of recurring patches across scales, as expected. [sent-351, score-0.296]
88 This implies that SR with the estimated kernel converges to the performance obtained with the ground-truth kernel. [sent-354, score-0.337]
89 We ran 30 iterations of both the internal and the external [sent-358, score-0.774]
90 [Figure 5 layout: 9 × 9 kernels (shown magnified); columns: low-res input, ground-truth, [8] with default kernel, [8] with internal kernel recovery, [21] with default kernel, [21] with external kernel recovery. Caption: SR ×2 with default vs. recovered kernels.] [sent-360, score-1.77]
91 schemes, each time with a different initial kernel (green). [sent-367, score-0.333]
92 As can be seen, SR with our estimated kernel performs similarly to SR with the ground-truth kernel (ERGT ≈ 1) and better than with the default kernel (ERD < 1). [sent-382, score-1.06]
93 The improvement over the default kernel is more significant in the method of [21]. [sent-383, score-0.411]
94 Surprisingly, sometimes the estimated kernels produce better results in [21] than the ground-truth kernel (ERGT < 1). [sent-384, score-0.381]
95 When we introduced this consistency constraint (using back-projection), their recovery both with the estimated kernel and with the ground-truth kernel typically improved substantially, but more so with the ground-truth kernel (in which case the ERGT increases to approx. [sent-386, score-1.052]
96 In all cases, SR with our estimated kernel is visually superior to SR with the default kernel. [sent-393, score-0.436]
97 The recovered kernels suggest that the original low-res images suffered from slight motion blur and defocus. [sent-398, score-0.325]
98 Summary. We showed that, contrary to the common belief, the PSF of the camera is the wrong blur kernel k to use in SR algorithms. [sent-406, score-0.563]
99 Figure 8: Error distribution for SR with [8, 21] using internal and external kernel recovery (statistics collected on hundreds of images; see text). [sent-419, score-1.43]
...recurrence of small image patches (either at coarser scales of the same image, or in an external database of images). [sent-420, score-0.582]
wordName wordTfidf (topN-words)
[('sr', 0.411), ('psf', 0.341), ('kernel', 0.312), ('bl', 0.273), ('external', 0.261), ('bh', 0.208), ('blur', 0.193), ('nns', 0.188), ('ergt', 0.176), ('patches', 0.176), ('blind', 0.174), ('qi', 0.132), ('recurrence', 0.122), ('mapk', 0.118), ('kc', 0.102), ('internal', 0.102), ('default', 0.099), ('recovery', 0.091), ('recovered', 0.088), ('rect', 0.076), ('erd', 0.076), ('relating', 0.074), ('bicubic', 0.073), ('recurring', 0.069), ('recur', 0.064), ('patch', 0.061), ('narrower', 0.061), ('sinc', 0.061), ('parents', 0.06), ('bbhl', 0.059), ('deblurring', 0.055), ('coarser', 0.051), ('scales', 0.049), ('fourier', 0.048), ('nonparametric', 0.046), ('database', 0.045), ('unknown', 0.044), ('kernels', 0.044), ('assumed', 0.043), ('optimal', 0.039), ('argkmaxp', 0.039), ('eerrgdt', 0.039), ('mkaxp', 0.039), ('tank', 0.039), ('rj', 0.037), ('guess', 0.036), ('hi', 0.036), ('namely', 0.036), ('ri', 0.035), ('zmann', 0.035), ('zontak', 0.035), ('oscillatory', 0.035), ('deconvolution', 0.033), ('bicycles', 0.032), ('recovering', 0.031), ('camera', 0.031), ('stars', 0.03), ('maximizes', 0.03), ('super', 0.03), ('continuous', 0.029), ('interpretation', 0.029), ('dx', 0.028), ('belief', 0.028), ('version', 0.028), ('iphone', 0.028), ('wrong', 0.027), ('correct', 0.027), ('parametric', 0.027), ('tog', 0.026), ('patterns', 0.026), ('convolution', 0.025), ('israel', 0.025), ('property', 0.025), ('estimated', 0.025), ('heads', 0.024), ('superresolution', 0.024), ('plugging', 0.023), ('ni', 0.023), ('recover', 0.023), ('irani', 0.022), ('delta', 0.022), ('spacing', 0.022), ('corresponds', 0.022), ('downloaded', 0.022), ('collection', 0.022), ('marks', 0.021), ('discrete', 0.021), ('old', 0.021), ('rmse', 0.021), ('zoom', 0.021), ('depict', 0.021), ('initial', 0.021), ('across', 0.021), ('query', 0.021), ('versions', 0.02), ('levin', 0.02), ('observation', 0.02), ('orange', 0.02), ('constitute', 0.02), ('width', 0.02)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000004 293 iccv-2013-Nonparametric Blind Super-resolution
Author: Tomer Michaeli, Michal Irani
Abstract: Super resolution (SR) algorithms typically assume that the blur kernel is known (either the Point Spread Function ‘PSF’ of the camera, or some default low-pass filter, e.g. a Gaussian). However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. We propose a general framework for “blind” super resolution. In particular, we show that: (i) Unlike the common belief, the PSF of the camera is the wrong blur kernel to use in SR algorithms. (ii) We show how the correct SR blur kernel can be recovered directly from the low-resolution image. This is done by exploiting the inherent recurrence property of small natural image patches (either internally within the same image, or externally in a collection of other natural images). In particular, we show that recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR), can also be used for estimating the optimal blur kernel. This leads to significant improvement in SR results.
2 0.37003887 35 iccv-2013-Accurate Blur Models vs. Image Priors in Single Image Super-resolution
Author: Netalee Efrat, Daniel Glasner, Alexander Apartsin, Boaz Nadler, Anat Levin
Abstract: Over the past decade, single image Super-Resolution (SR) research has focused on developing sophisticated image priors, leading to significant advances. Estimating and incorporating the blur model, that relates the high-res and low-res images, has received much less attention, however. In particular, the reconstruction constraint, namely that the blurred and downsampled high-res output should approximately equal the low-res input image, has been either ignored or applied with default fixed blur models. In this work, we examine the relative importance of the image prior and the reconstruction constraint. First, we show that an accurate reconstruction constraint combined with a simple gradient regularization achieves SR results almost as good as those of state-of-the-art algorithms with sophisticated image priors. Second, we study both empirically and theoretically the sensitivity of SR algorithms to the blur model assumed in the reconstruction constraint. We find that an accurate blur model is more important than a sophisticated image prior. Finally, using real camera data, we demonstrate that the default blur models of various SR algorithms may differ from the camera blur, typically leading to over-smoothed results. Our findings highlight the importance of accurately estimating camera blur in reconstructing raw low-res images acquired by an actual camera.
3 0.24186793 129 iccv-2013-Dynamic Scene Deblurring
Author: Tae Hyun Kim, Byeongjoo Ahn, Kyoung Mu Lee
Abstract: Most conventional single image deblurring methods assume that the underlying scene is static and the blur is caused by only camera shake. In this paper, in contrast to this restrictive assumption, we address the deblurring problem of general dynamic scenes which contain multiple moving objects as well as camera shake. In case of dynamic scenes, moving objects and background have different blur motions, so the segmentation of the motion blur is required for deblurring each distinct blur motion accurately. Thus, we propose a novel energy model designed with the weighted sum of multiple blur data models, which estimates different motion blurs and their associated pixelwise weights, and resulting sharp image. In this framework, the local weights are determined adaptively and get high values when the corresponding data models have high data fidelity. And, the weight information is used for the segmentation of the motion blur. Non-local regularization of weights are also incorporated to produce more reliable segmentation results. A convex optimization-based method is used for the solution of the proposed energy model. Experimental results demonstrate that our method outperforms conventional approaches in deblurring both dynamic scenes and static scenes.
4 0.23106012 103 iccv-2013-Deblurring by Example Using Dense Correspondence
Author: Yoav Hacohen, Eli Shechtman, Dani Lischinski
Abstract: This paper presents a new method for deblurring photos using a sharp reference example that contains some shared content with the blurry photo. Most previous deblurring methods that exploit information from other photos require an accurately registered photo of the same static scene. In contrast, our method aims to exploit reference images where the shared content may have undergone substantial photometric and non-rigid geometric transformations, as these are the kind of reference images most likely to be found in personal photo albums. Our approach builds upon a recent method for example-based deblurring using non-rigid dense correspondence (NRDC) [11] and extends it in two ways. First, we suggest exploiting information from the reference image not only for blur kernel estimation, but also as a powerful local prior for the non-blind deconvolution step. Second, we introduce a simple yet robust technique for spatially varying blur estimation, rather than assuming spatially uniform blur. Unlike the above previous method, which has proven successful only with simple deblurring scenarios, we demonstrate that our method succeeds on a variety of real-world examples. We provide quantitative and qualitative evaluation of our method and show that it outperforms the state-of-the-art.
5 0.20592007 174 iccv-2013-Forward Motion Deblurring
Author: Shicheng Zheng, Li Xu, Jiaya Jia
Abstract: We handle a special type of motion blur considering that cameras move primarily forward or backward. Solving this type of blur is of unique practical importance since nearly all car, traffic and bike-mounted cameras follow out-of-plane translational motion. We start with the study of geometric models and analyze the difficulty of existing methods to deal with them. We also propose a solution accounting for depth variation. Homographies associated with different 3D planes are considered and solved for in an optimization framework. Our method is verified on several natural image examples that cannot be satisfyingly dealt with by previous methods.
6 0.19270654 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions
7 0.16716771 428 iccv-2013-Translating Video Content to Natural Language Descriptions
8 0.15397637 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties
9 0.13686912 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding
10 0.12397157 101 iccv-2013-DCSH - Matching Patches in RGBD Images
11 0.11276551 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition
12 0.10901572 257 iccv-2013-Log-Euclidean Kernels for Sparse Representation and Dictionary Learning
13 0.10372724 51 iccv-2013-Anchored Neighborhood Regression for Fast Example-Based Super-Resolution
14 0.093822852 81 iccv-2013-Combining the Right Features for Complex Event Recognition
15 0.08289285 440 iccv-2013-Video Event Understanding Using Natural Language Descriptions
16 0.079295963 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild
17 0.077894121 85 iccv-2013-Compositional Models for Video Event Detection: A Multiple Kernel Learning Latent Variable Approach
18 0.074587107 116 iccv-2013-Directed Acyclic Graph Kernels for Action Recognition
19 0.073652215 394 iccv-2013-Single-Patch Low-Rank Prior for Non-pointwise Impulse Noise Removal
20 0.070489012 408 iccv-2013-Super-resolution via Transform-Invariant Group-Sparse Regularization
topicId topicWeight
[(0, 0.144), (1, -0.027), (2, -0.036), (3, -0.005), (4, -0.08), (5, 0.058), (6, 0.011), (7, -0.096), (8, 0.023), (9, -0.139), (10, -0.048), (11, -0.241), (12, 0.142), (13, -0.238), (14, -0.087), (15, 0.092), (16, 0.055), (17, -0.134), (18, -0.062), (19, -0.132), (20, -0.067), (21, 0.098), (22, 0.074), (23, 0.008), (24, -0.013), (25, 0.069), (26, 0.011), (27, -0.039), (28, -0.222), (29, -0.042), (30, -0.067), (31, -0.044), (32, 0.072), (33, -0.033), (34, -0.003), (35, -0.004), (36, -0.014), (37, -0.002), (38, 0.013), (39, -0.036), (40, 0.005), (41, -0.017), (42, -0.028), (43, -0.029), (44, 0.003), (45, -0.037), (46, -0.053), (47, -0.052), (48, 0.0), (49, 0.0)]
simIndex simValue paperId paperTitle
same-paper 1 0.97563517 293 iccv-2013-Nonparametric Blind Super-resolution
Author: Tomer Michaeli, Michal Irani
Abstract: Super resolution (SR) algorithms typically assume that the blur kernel is known (either the Point Spread Function ‘PSF’ of the camera, or some default low-pass filter, e.g. a Gaussian). However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. We propose a general framework for “blind” super resolution. In particular, we show that: (i) Unlike the common belief, the PSF of the camera is the wrong blur kernel to use in SR algorithms. (ii) We show how the correct SR blur kernel can be recovered directly from the low-resolution image. This is done by exploiting the inherent recurrence property of small natural image patches (either internally within the same image, or externally in a collection of other natural images). In particular, we show that recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR), can also be used for estimating the optimal blur kernel. This leads to significant improvement in SR results.
2 0.94070524 35 iccv-2013-Accurate Blur Models vs. Image Priors in Single Image Super-resolution
Author: Netalee Efrat, Daniel Glasner, Alexander Apartsin, Boaz Nadler, Anat Levin
Abstract: Over the past decade, single image Super-Resolution (SR) research has focused on developing sophisticated image priors, leading to significant advances. Estimating and incorporating the blur model, that relates the high-res and low-res images, has received much less attention, however. In particular, the reconstruction constraint, namely that the blurred and downsampled high-res output should approximately equal the low-res input image, has been either ignored or applied with default fixed blur models. In this work, we examine the relative importance of the image prior and the reconstruction constraint. First, we show that an accurate reconstruction constraint combined with a simple gradient regularization achieves SR results almost as good as those of state-of-the-art algorithms with sophisticated image priors. Second, we study both empirically and theoretically the sensitivity of SR algorithms to the blur model assumed in the reconstruction constraint. We find that an accurate blur model is more important than a sophisticated image prior. Finally, using real camera data, we demonstrate that the default blur models of various SR algorithms may differ from the camera blur, typically leading to over-smoothed results. Our findings highlight the importance of accurately estimating camera blur in reconstructing raw low-res images acquired by an actual camera.
3 0.72174656 103 iccv-2013-Deblurring by Example Using Dense Correspondence
Author: Yoav Hacohen, Eli Shechtman, Dani Lischinski
Abstract: This paper presents a new method for deblurring photos using a sharp reference example that contains some shared content with the blurry photo. Most previous deblurring methods that exploit information from other photos require an accurately registered photo of the same static scene. In contrast, our method aims to exploit reference images where the shared content may have undergone substantial photometric and non-rigid geometric transformations, as these are the kind of reference images most likely to be found in personal photo albums. Our approach builds upon a recent method for example-based deblurring using non-rigid dense correspondence (NRDC) [11] and extends it in two ways. First, we suggest exploiting information from the reference image not only for blur kernel estimation, but also as a powerful local prior for the non-blind deconvolution step. Second, we introduce a simple yet robust technique for spatially varying blur estimation, rather than assuming spatially uniform blur. Unlike the above previous method, which has proven successful only with simple deblurring scenarios, we demonstrate that our method succeeds on a variety of real-world examples. We provide quantitative and qualitative evaluation of our method and show that it outperforms the state-of-the-art.
4 0.71620429 129 iccv-2013-Dynamic Scene Deblurring
Author: Tae Hyun Kim, Byeongjoo Ahn, Kyoung Mu Lee
Abstract: Most conventional single image deblurring methods assume that the underlying scene is static and the blur is caused by only camera shake. In this paper, in contrast to this restrictive assumption, we address the deblurring problem of general dynamic scenes which contain multiple moving objects as well as camera shake. In case of dynamic scenes, moving objects and background have different blur motions, so the segmentation of the motion blur is required for deblurring each distinct blur motion accurately. Thus, we propose a novel energy model designed with the weighted sum of multiple blur data models, which estimates different motion blurs and their associated pixelwise weights, and resulting sharp image. In this framework, the local weights are determined adaptively and get high values when the corresponding data models have high data fidelity. And, the weight information is used for the segmentation of the motion blur. Non-local regularization of weights are also incorporated to produce more reliable segmentation results. A convex optimization-based method is used for the solution of the proposed energy model. Experimental results demonstrate that our method outperforms conventional approaches in deblurring both dynamic scenes and static scenes.
5 0.67931074 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions
Author: Chih-Yuan Yang, Ming-Hsuan Yang
Abstract: The goal of single-image super-resolution is to generate a high-quality high-resolution image based on a given low-resolution input. It is an ill-posed problem which requires exemplars or priors to better reconstruct the missing high-resolution image details. In this paper, we propose to split the feature space into numerous subspaces and collect exemplars to learn priors for each subspace, thereby creating effective mapping functions. The use of split input space facilitates both feasibility of using simple functions for super-resolution, and efficiency of generating high-resolution results. High-quality high-resolution images are reconstructed based on the effective learned priors. Experimental results demonstrate that the proposed algorithm performs efficiently and effectively over state-of-the-art methods.
6 0.64390606 174 iccv-2013-Forward Motion Deblurring
7 0.52342492 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties
8 0.50224566 408 iccv-2013-Super-resolution via Transform-Invariant Group-Sparse Regularization
9 0.47567713 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild
10 0.47512883 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding
11 0.45644534 257 iccv-2013-Log-Euclidean Kernels for Sparse Representation and Dictionary Learning
12 0.42002934 428 iccv-2013-Translating Video Content to Natural Language Descriptions
13 0.41178215 117 iccv-2013-Discovering Details and Scene Structure with Hierarchical Iconoid Shift
14 0.4046219 227 iccv-2013-Large-Scale Image Annotation by Efficient and Robust Kernel Metric Learning
15 0.40450147 112 iccv-2013-Detecting Irregular Curvilinear Structures in Gray Scale and Color Imagery Using Multi-directional Oriented Flux
16 0.39462808 101 iccv-2013-DCSH - Matching Patches in RGBD Images
17 0.37289664 51 iccv-2013-Anchored Neighborhood Regression for Fast Example-Based Super-Resolution
18 0.3719174 394 iccv-2013-Single-Patch Low-Rank Prior for Non-pointwise Impulse Noise Removal
19 0.37099096 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition
20 0.36542666 98 iccv-2013-Cross-Field Joint Image Restoration via Scale Map
topicId topicWeight
[(2, 0.052), (7, 0.019), (26, 0.089), (31, 0.058), (42, 0.14), (54, 0.227), (64, 0.022), (73, 0.045), (89, 0.225)]
simIndex simValue paperId paperTitle
1 0.87884676 389 iccv-2013-Shortest Paths with Curvature and Torsion
Author: Petter Strandmark, Johannes Ulén, Fredrik Kahl, Leo Grady
Abstract: This paper describes a method of finding thin, elongated structures in images and volumes. We use shortest paths to minimize very general functionals of higher-order curve properties, such as curvature and torsion. Our globally optimal method uses line graphs and its runtime is polynomial in the size of the discretization, often in the order of seconds on a single computer. To our knowledge, we are the first to perform experiments in three dimensions with curvature and torsion regularization. The largest graphs we process have almost one hundred billion arcs. Experiments on medical images and in multi-view reconstruction show the significance and practical usefulness of regularization based on curvature while torsion is still only tractable for small-scale problems.
same-paper 2 0.86124015 293 iccv-2013-Nonparametric Blind Super-resolution
Author: Tomer Michaeli, Michal Irani
Abstract: Super resolution (SR) algorithms typically assume that the blur kernel is known (either the Point Spread Function ‘PSF’ of the camera, or some default low-pass filter, e.g. a Gaussian). However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. We propose a general framework for “blind” super resolution. In particular, we show that: (i) Unlike the common belief, the PSF of the camera is the wrong blur kernel to use in SR algorithms. (ii) We show how the correct SR blur kernel can be recovered directly from the low-resolution image. This is done by exploiting the inherent recurrence property of small natural image patches (either internally within the same image, or externally in a collection of other natural images). In particular, we show that recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR), can also be used for estimating the optimal blur kernel. This leads to significant improvement in SR results.
3 0.85508442 68 iccv-2013-Camera Alignment Using Trajectory Intersections in Unsynchronized Videos
Author: Thomas Kuo, Santhoshkumar Sunderrajan, B.S. Manjunath
Abstract: This paper addresses the novel and challenging problem of aligning camera views that are unsynchronized by low and/or variable frame rates using object trajectories. Unlike existing trajectory-based alignment methods, our method does not require frame-to-frame synchronization. Instead, we propose using the intersections of corresponding object trajectories to match views. To find these intersections, we introduce a novel trajectory matching algorithm based on matching Spatio-Temporal Context Graphs (STCGs). These graphs represent the distances between trajectories in time and space within a view, and are matched to an STCG from another view to find the corresponding trajectories. To the best of our knowledge, this is one of the first attempts to align views that are unsynchronized with variable frame rates. The results on simulated and real-world datasets show trajectory intersections are a viable feature for camera alignment, and that the trajectory matching method performs well in real-world scenarios.
4 0.84474707 407 iccv-2013-Subpixel Scanning Invariant to Indirect Lighting Using Quadratic Code Length
Author: Nicolas Martin, Vincent Couture, Sébastien Roy
Abstract: We present a scanning method that recovers dense subpixel camera-projector correspondence without requiring any photometric calibration nor preliminary knowledge of their relative geometry. Subpixel accuracy is achieved by considering several zero-crossings defined by the difference between pairs of unstructured patterns. We use gray-level band-pass white noise patterns that increase robustness to indirect lighting and scene discontinuities. Simulated and experimental results show that our method recovers scene geometry with high subpixel precision, and that it can handle many challenges of active reconstruction systems. We compare our results to state of the art methods such as micro phase shifting and modulated phase shifting.
5 0.80418217 147 iccv-2013-Event Recognition in Photo Collections with a Stopwatch HMM
Author: Lukas Bossard, Matthieu Guillaumin, Luc Van_Gool
Abstract: The task of recognizing events in photo collections is central for automatically organizing images. It is also very challenging, because of the ambiguity of photos across different event classes and because many photos do not convey enough relevant information. Unfortunately, the field still lacks standard evaluation data sets to allow comparison of different approaches. In this paper, we introduce and release a novel data set of personal photo collections containing more than 61,000 images in 807 collections, annotated with 14 diverse social event classes. Casting collections as sequential data, we build upon recent and state-of-the-art work in event recognition in videos to propose a latent sub-event approach for event recognition in photo collections. However, photos in collections are sparsely sampled over time and come in bursts from which transpires the importance of specific moments for the photographers. Thus, we adapt a discriminative hidden Markov model to allow the transitions between states to be a function of the time gap between consecutive images, which we coin as Stopwatch Hidden Markov model (SHMM). In our experiments, we show that our proposed model outperforms approaches based only on feature pooling or a classical hidden Markov model. With an average accuracy of 56%, we also highlight the difficulty of the data set and the need for future advances in event recognition in photo collections.
6 0.78276581 349 iccv-2013-Regionlets for Generic Object Detection
7 0.78249276 196 iccv-2013-Hierarchical Data-Driven Descent for Efficient Optimal Deformation Estimation
8 0.78196961 122 iccv-2013-Distributed Low-Rank Subspace Segmentation
9 0.7812469 66 iccv-2013-Building Part-Based Object Detectors via 3D Geometry
10 0.780577 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions
11 0.78053612 79 iccv-2013-Coherent Object Detection with 3D Geometric Context from a Single Image
12 0.78042221 35 iccv-2013-Accurate Blur Models vs. Image Priors in Single Image Super-resolution
13 0.77926338 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables
14 0.77914417 208 iccv-2013-Image Co-segmentation via Consistent Functional Maps
15 0.77904934 199 iccv-2013-High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination
16 0.77877516 339 iccv-2013-Rank Minimization across Appearance and Shape for AAM Ensemble Fitting
17 0.77870345 223 iccv-2013-Joint Noise Level Estimation from Personal Photo Collections
18 0.77816534 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering
19 0.77778888 327 iccv-2013-Predicting an Object Location Using a Global Image Representation
20 0.77750361 300 iccv-2013-Optical Flow via Locally Adaptive Fusion of Complementary Data Costs