cvpr cvpr2013 cvpr2013-431 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Bastian Goldluecke, Sven Wanner
Abstract: Unlike traditional images which do not offer information for different directions of incident light, a light field is defined on ray space, and implicitly encodes scene geometry data in a rich structure which becomes visible on its epipolar plane images. In this work, we analyze regularization of light fields in variational frameworks and show that their variational structure is induced by disparity, which is in this context best understood as a vector field on epipolar plane image space. We derive differential constraints on this vector field to enable consistent disparity map regularization. Furthermore, we show how the disparity field is related to the regularization of more general vector-valued functions on the 4D ray space of the light field. This way, we derive an efficient variational framework with convex priors, which can serve as a fundament for a large class of inverse problems on ray space.
Reference: text
sentIndex sentText sentNum sentScore
1 In this work, we analyze regularization of light fields in variational frameworks and show that their variational structure is induced by disparity, which is in this context best understood as a vector field on epipolar plane image space. [sent-2, score-1.236]
2 We derive differential constraints on this vector field to enable consistent disparity map regularization. [sent-3, score-0.868]
3 Furthermore, we show how the disparity field is related to the regularization of more general vector-valued functions on the 4D ray space of the light field. [sent-4, score-1.573]
4 This way, we derive an efficient variational framework with convex priors, which can serve as a fundament for a large class of inverse problems on ray space. [sent-5, score-0.605]
5 Introduction In 2006, Marc Levoy, computer graphics professor at Stanford University and one of the leading experts in computational imaging, predicted in a survey article that “in 25 years, most consumer photographic cameras will be light field cameras” [9]. [sent-7, score-0.574]
6 Whether or not he will ultimately be right, the first plenoptic cameras are now commercially available [12, 13], and several new technologies to capture light fields are under development. [sent-8, score-0.655]
7 Because of their structure, light fields are particularly well suited to variational methods; indeed, we believe that variational methods might be key to getting the most out of this kind of data. [sent-10, score-0.642]
8 The goal of this work is therefore to systematically develop theory and algorithms of a flexible and efficient variational framework which is built upon the concept of light fields instead of traditional 2D images. [sent-11, score-0.576]
9 Regularization which leverages the variational structure of the light field leads to superior results in inverse problems like denoising and inpainting as well as multi-label segmentation. [sent-13, score-1.072]
10 Light fields and computational imaging For the purpose of this paper, the 4D light field of a scene can be understood as a collection of views of a scene with densely sampled view points, see figure 2. [sent-14, score-0.919]
11 In image-based rendering, novel views of the scene are generated by means of a re-sampling of the light field [7, 10]. [sent-16, score-0.657]
12 The selling point of the first consumer plenoptic camera is refocusing of the light field to different depth planes [12]. [sent-17, score-0.816]
13 Since plenoptic cameras trade off sensor resolution for capturing multiple views, it is not surprising that superresolution techniques are a focus of research in light field analysis. [sent-19, score-0.725]
14 While the reconstruction can be based on traditional stereo matching techniques [2], recent works exploit the structure of light fields and epipolar plane images directly to estimate disparity [14]. [sent-23, score-1.369]
15 The 2D image in the plane of the cut is called an epipolar plane image (EPI). [sent-27, score-0.414]
16 The structure of light fields The fundamental difference between having a light field and just a simple collection of views of a scene available is that the view points in a light field lie much closer together and form a specific, usually simply rectangular pattern. [sent-28, score-1.771]
17 In fact, it becomes possible to assume that the space of view points forms a continuous space, and for example, take derivatives of the light field with respect to the viewing direction. [sent-29, score-0.698]
18 The proposed methods will further embrace these ideas, in that they make it possible to systematically leverage the differential light field structure in complex inverse problems. [sent-33, score-0.688]
19 Contributions From a technical point of view, we will construct convex priors for light fields which are designed in a way that they preserve the epipolar plane image structure, and solutions satisfy constraints related to object depth and occlusion ordering. [sent-34, score-0.931]
20 In this way, they enable the regularization of arbitrary vector-valued functions on ray space while respecting the light field geometry. [sent-35, score-1.024]
21 Furthermore, we contribute an optimization framework for inverse problems on ray space which makes use of these priors, and show a number of examples, in particular light field denoising, inpainting, and in a related work [16], ray space labeling. [sent-36, score-1.413]
22 As far as we are aware, this is the first time a systematic way to deal with inverse problems on ray space has been proposed, and we believe that it can serve as a solid foundation for the future development of light field analysis. [sent-37, score-1.039]
23 Disparity in a light field In a rectified stereo pair, disparity is the coordinate difference of the two projections of a 3D scene point. [sent-39, score-1.194]
24 In a light field, where a scene point is visible in many views, this would not make a very useful definition, since its value would depend on the pair of views chosen. [sent-40, score-0.53]
25 Furthermore, in a light field, the space of view points is a continuous space, so it makes more sense to think of disparity as a differential quantity: the infinitesimal shift of the projection under an infinitesimal shift of the view point. [sent-41, score-1.332]
26 In the following, we will systematically introduce what we call the disparity field. [sent-43, score-0.601]
27 Ray space A 4D light field or Lumigraph is defined on a ray space R, the set of rays passing through two planes Π and Ω in R3, where each ray can be uniquely identified by its two intersection points. [sent-44, score-1.413]
28 The parametrization for ray space we choose is slightly different from the standard one for a Lumigraph [7], and inspired by [3]. [sent-46, score-0.419]
29 This means that R[s, t, 0, 0] is the ray which passes through the focal point (s, t) and the center of projection in the image plane, i. [sent-49, score-0.418]
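The relative parametrization described above can be made concrete with a small sketch. The plane placement and constants below are illustrative assumptions (view point plane Π at z = 0, image plane at z = f, pixel coordinates measured relative to the view point); the paper follows the convention of [3], which may differ in details.

```python
import numpy as np

def ray_from_coords(x, y, s, t, f=1.0):
    """Map light field coordinates (x, y, s, t) to a 3D ray.

    Hypothetical relative two-plane parametrization: the view point
    plane sits at z = 0, the image plane at z = f, and (x, y) are
    measured relative to the view point (s, t). Then R[s, t, 0, 0]
    passes through (s, t, 0) along the optical axis, matching the
    description in the text.
    """
    origin = np.array([s, t, 0.0])
    through = np.array([s + x, t + y, f])
    direction = through - origin
    return origin, direction / np.linalg.norm(direction)

# The central ray of the view at (s, t) = (1.5, -0.5):
o, d = ray_from_coords(0.0, 0.0, 1.5, -0.5)
```

With this convention, every ray is uniquely identified by its two plane intersections, as stated in the text.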
30 Light fields and epipolar plane images A light field L can now simply be defined as a function on ray space, either scalar or vector-valued for gray scale or color, respectively. [sent-54, score-1.295]
31 Of particular interest are the images which emerge when ray space is restricted to a 2D plane. [sent-55, score-0.396]
32 They can be interpreted as horizontal or vertical cuts through a horizontal or vertical stack of the views in the light field, see figure 2, and have a rich structure which looks like it consists mainly of straight lines. [sent-59, score-0.488]
33 The rate of change ρ(P) := dx/ds of the projection coordinate with respect to the view point is independent of the choice of s1; we call it the disparity of P. [sent-67, score-0.57]
34 Note that (2) shows that disparity in our sense is indeed a derivative. [sent-71, score-0.601]
35 Disparity maps and their constraints For stereo pairs, it is customary to compute disparity maps, i. [sent-73, score-0.629]
36 In light fields, we need to assign a disparity to each ray. [sent-76, score-0.934]
37 In the case of opaque surfaces, each ray R[x, y, s, t] can be assigned a closest point P(x, y, s, t) where the ray intersects the scene. [sent-78, score-0.758]
38 We define the disparity map ρ as a function on ray space via ρ(x, y, s, t) := ρ(P(x, y, s, t)) . [sent-79, score-0.994]
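This definition can be sketched numerically. The relation ρ = f / Z below (focal length f, unit baseline between adjacent view points) is an assumed pinhole-model formula for illustration, not the paper's exact parametrization:

```python
import numpy as np

def disparity_from_depth(Z, f=1.0):
    """Differential disparity of scene points at depth Z.

    Assumed pinhole relation with focal length f and unit baseline
    between adjacent view points: a point at depth Z shifts by f / Z
    per unit view point shift. The constants are illustrative.
    """
    return f / np.asarray(Z, dtype=float)

# A toy disparity map rho(x, y, s, t) for a fronto-parallel plane at Z = 2:
Z = np.full((8, 8, 3, 3), 2.0)
rho = disparity_from_depth(Z, f=1.0)
```

Under this assumption the disparity map inherits the full 4D ray-space shape of the depth map, as in the definition above.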
39 Unlike the definition of disparity for a single (virtual) 3D point, the disparity map depends on the scene geometry. [sent-84, score-1.207]
40 As we will see later, disparity and regularization of maps on ray space are intimately linked to each other. [sent-85, score-1.032]
41 In particular, this shows that 3D scene reconstruction from light field data is actually a pre-requisite to correctly deal with inverse problems on ray space. [sent-86, score-1.025]
42 Like any map on ray space, the disparity map can be restricted to epipolar plane images. [sent-87, score-1.296]
43 This raises the question of which maps are valid disparity maps. [sent-90, score-0.57]
44 From its disparity ρ0 = ρ(x0, s0), one can compute the scene point P which projects onto (x0 , s0) by means of (2). [sent-95, score-0.659]
45 Since it occludes everything behind it, this means that at no point on this line (x0 + σρ0, s0 + σ), σ ∈ R, can the disparity be smaller than ρ0. [sent-97, score-0.598]
46 (4) Because in this way the disparity at a point restricts the disparity at arbitrarily distant locations, it is prohibitively expensive to enforce this constraint globally. [sent-99, score-1.215]
47 A valid disparity map satisfies the local constraints ∇±d ρ(x0, s0) ≥ 0 for all (x0, s0), (5) where the disparity vector field d is defined as a unit vector field in direction [ρ 1]T, and ∇±d are the directional derivatives in direction +d or −d, respectively. [sent-102, score-1.637]
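The local constraints (5) can be checked with one-sided finite differences. The sketch below assumes unit grid spacing and a scalar EPI disparity map ρ(x, s); boundary handling is simplified by ignoring the outermost rows and columns:

```python
import numpy as np

def directional_derivative(rho, sign=+1):
    """One-sided finite-difference approximation of the directional
    derivative of an EPI disparity map rho(x, s) along the disparity
    direction d = [rho, 1] / |[rho, 1]| (unit grid spacing assumed;
    sign=+1 uses forward, sign=-1 backward differences)."""
    gx = np.zeros_like(rho)
    gs = np.zeros_like(rho)
    if sign > 0:
        gx[:-1, :] = rho[1:, :] - rho[:-1, :]   # forward difference in x
        gs[:, :-1] = rho[:, 1:] - rho[:, :-1]   # forward difference in s
    else:
        gx[1:, :] = rho[1:, :] - rho[:-1, :]    # backward difference in x
        gs[:, 1:] = rho[:, 1:] - rho[:, :-1]    # backward difference in s
    norm = np.sqrt(rho**2 + 1.0)
    return sign * (rho * gx + gs) / norm

def is_valid_epi_disparity(rho, tol=1e-9):
    """Check the local occlusion-ordering constraints (5): both
    one-sided derivatives along d must be non-negative (interior only)."""
    interior = np.s_[1:-1, 1:-1]
    return bool((directional_derivative(rho, +1)[interior] >= -tol).all()
                and (directional_derivative(rho, -1)[interior] >= -tol).all())

# A constant disparity map satisfies (5) with equality:
rho_const = np.full((6, 6), 0.5)
ok = is_valid_epi_disparity(rho_const)
```

A constant map satisfies both constraints with equality, matching the intuition that disparity is constant along d away from occlusion boundaries.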
48 Labeling of rays When we compute a disparity map, we label rays with a quantity which is ultimately a property of scene points. [sent-114, score-0.732]
49 The above constraints will ensure that the labeling is consistent, in that all projections of the same point are labeled with the same disparity value. [sent-115, score-0.683]
50 For a start, a light field itself is an assignment of a color to each ray. [sent-117, score-0.541]
51 We assume that the vector-valued map U : R → Rn encodes a property of a scene surface which is independent of viewing direction, that all scene surfaces are opaque, and that the disparity map ρ on ray space is known. [sent-121, score-1.146]
52 Constraints and EPI regularization The disparity vector field d can be interpreted in a slightly different way as a transport field. [sent-122, score-0.813]
53 In particular, the function U should be constant in the direction of d, except at disparity discontinuities. [sent-124, score-0.597]
54 As in the previous section, we fix an epipolar plane image with coordinates (y∗, t∗), and define the regularizer for the restriction Uy∗,t∗. [sent-126, score-0.47]
55 The additional weight function g is optional, and can for example be used to decrease the penalty at disparity discontinuities. [sent-136, score-0.57]
56 The constant α controls the degree of anisotropy: small values imply that smoothing is focused into the direction of the disparity field d. [sent-138, score-0.795]
57 Since the ray space regularizer defined later already explicitly includes regularization in the spatial domain, we set α = 0 in all experiments. [sent-139, score-0.591]
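A plausible numerical form of such an anisotropic EPI regularizer is sketched below. The decomposition of the gradient of U into components along and orthogonal to d follows the description above, but the exact functional (including how the weight g and the constant α enter) is an assumption, since the paper's formula is not reproduced in this excerpt:

```python
import numpy as np

def epi_regularizer(U, rho, alpha=0.0, g=None):
    """Hypothetical anisotropic TV-like energy on an EPI restriction
    U(x, s): penalize the derivative of U along the disparity direction
    d = [rho, 1] / |[rho, 1]|, plus the orthogonal component scaled by
    alpha, optionally weighted by g (sketch only)."""
    Ux, Us = np.gradient(U)                  # derivatives in x and s
    norm = np.sqrt(rho**2 + 1.0)
    dd = (rho * Ux + Us) / norm              # component along d
    dp = (Ux - rho * Us) / norm              # component orthogonal to d
    integrand = np.sqrt(dd**2 + (alpha * dp)**2)
    if g is not None:
        integrand = g * integrand
    return integrand.sum()

# A function whose level sets have slope rho is constant along d,
# so for alpha = 0 it has zero energy:
x = np.arange(16.0)
s = np.arange(16.0)
X, S = np.meshgrid(x, s, indexing="ij")
rho0 = 0.5
U = X - rho0 * S
energy = epi_regularizer(U, np.full_like(U, rho0), alpha=0.0)
```

For α = 0, only variation along d is penalized, which is exactly the transport interpretation of d given above.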
58 The work [14] presented a consistent disparity regularization framework on EPIs based on labeling, but it requires discretization of disparity values and is prohibitively slow. [sent-141, score-1.228]
59 Regularization of vector-valued functions The final regularizer Jλμ(U) for a vector field U : R → Rn on ray space can now be written as the sum of contributions from the regularizers on all epipolar plane images as well as all the views, Jλμ(U) = μJxs(U) + μJyt(U) + λJst(U), with Jxs(U) = ∫ J(Uy∗,t∗) d(y∗,t∗), Jyt(U) = ∫ J(Ux∗,s∗) d(x∗,s∗), and [sent-142, score-1.069]
60 Jst(U) = ∫ JV(Us∗,t∗) d(s∗,t∗), where λ > 0 and μ > 0 are user-defined constants which adjust the amount of smoothing on the separate views and epipolar plane images, respectively. [sent-145, score-0.403]
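The decomposition into 2D contributions can be sketched directly. Plain 2D total variation stands in here for the paper's EPI and view regularizers, and the (μ, λ) weighting is an assumption consistent with the description above:

```python
import numpy as np

def total_variation_2d(I):
    """Isotropic total variation of a 2D slice (placeholder energy
    standing in for the paper's EPI and view regularizers)."""
    gx, gy = np.gradient(I)
    return np.sqrt(gx**2 + gy**2).sum()

def ray_space_regularizer(U, mu=1.0, lam=1.0):
    """Sketch of the decomposition in (8): the 4D regularizer splits
    into sums of 2D energies over horizontal EPIs (y*, t* fixed),
    vertical EPIs (x*, s* fixed), and the individual views (s*, t*
    fixed). The mu/lam weighting is an assumption."""
    X, Y, S, T = U.shape
    J_xs = sum(total_variation_2d(U[:, y, :, t]) for y in range(Y) for t in range(T))
    J_yt = sum(total_variation_2d(U[x, :, s, :]) for x in range(X) for s in range(S))
    J_st = sum(total_variation_2d(U[:, :, s, t]) for s in range(S) for t in range(T))
    return mu * (J_xs + J_yt) + lam * J_st

# A constant field has zero energy under any of the 2D contributions:
J_total = ray_space_regularizer(np.zeros((4, 4, 3, 3)))
```

This slice-wise structure is also what makes memory-efficient optimization possible, as discussed in the algorithmic remarks below.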
61 The ray space regularizer defined in (8) gives a means to tackle any inverse problem on ray space. [sent-148, score-0.947]
62 Note that the regularizer Jλμ for vectorial functions on ray space, given by (8), is convex and closed as the sum of convex and closed functionals, but not differentiable. [sent-152, score-0.735]
63 When choosing an algorithm, the main limitation which arises is that at reasonable resolutions, each field on ray space takes up a lot of memory. [sent-154, score-0.602]
64 Note that according to (8), the complete regularizer which is defined on 4D space decomposes into a sum of regularizers on 2D images: the epipolar plane images and the individual views, respectively. [sent-157, score-0.527]
65 We have also investigated ray space multi-labeling as a more involved application, which we explore in detail in a related paper [16]. [sent-174, score-0.396]
66 Light field denoising We first show how to perform denoising of light field data. [sent-176, score-0.846]
67 For this, let F be a vector-valued function on ray space which is degraded with Gaussian noise of standard deviation σ independently for each ray. [sent-178, score-0.396]
68 Denoising on ray space leads to significantly better quality than denoising of single views. [sent-183, score-0.46]
69 L2-denoising schemes which respect the light field structure are superior to single view denoising. [sent-195, score-0.645]
70 Top: light field recorded with a Raytrix plenoptic camera, bottom: synthetic light field rendered with Blender. [sent-197, score-1.233]
71 Figures (b,e) show the results for optimal parameter choice using spatial regularization only, and figures (c,f) the optimal results for a denoising scheme on ray space, as described in section 6. [sent-198, score-0.495]
72 Optimal parameters were determined using brute force search, the disparity map was estimated from the input light field using the algorithm in [14]. [sent-199, score-1.139]
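The benefit of ray-space denoising can be illustrated with a drastically simplified stand-in for the variational model: averaging an EPI along its (known) disparity direction, which suppresses noise that is independent per ray. The nearest-neighbour sampling and constant ρ below are simplifying assumptions, not the paper's method:

```python
import numpy as np

def shear_average_epi(F, rho):
    """Toy ray-space denoising on a single EPI F(x, s): a Lambertian
    scene point traces a line of slope rho on the EPI, so averaging
    along the disparity direction suppresses per-ray noise. This only
    illustrates why a ray-space prior can outperform per-view smoothing."""
    X, S = F.shape
    out = F.copy()
    s = np.arange(S)
    for x0 in range(X):
        # pixels on the EPI line through (x0, 0) with slope rho,
        # nearest-neighbour sampled and clipped to the image domain
        xs = np.clip(np.round(x0 + rho * s).astype(int), 0, X - 1)
        out[xs, s] = F[xs, s].mean()
    return out

# Sanity check: with rho = 0 each row is replaced by its own mean,
# so an EPI whose rows are already constant is left unchanged.
F = np.outer(np.arange(5.0), np.ones(4))
out = shear_average_epi(F, 0.0)
```

In the actual model, this averaging role is played by the EPI regularizer along d rather than by explicit resampling.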
73 Light field inpainting As a second example, we discuss inpainting on ray space. [sent-200, score-1.074]
74 Let Γ ⊂ R be a region in ray space where the input light field F is unknown. [sent-201, score-0.76]
75 Second, a new situation arises when disparity is also unknown in the inpainting domain. [sent-206, score-0.888]
76 In the latter case, the regularizer on the epipolar plane images is undefined. [sent-207, score-0.434]
77 We will discuss how to infer unknown disparity fields in the next section. [sent-208, score-0.677]
78 For the light field inpainting experiments presented in figures 7 and 8, we assumed disparity to already be reconstructed. [sent-209, score-1.377]
79 In general, light field inpainting leads to much improved results with visually sharper boundary transitions. [sent-210, score-0.836]
80 Note that figure 8 also demonstrates that light field inpainting can be considered as a novel method for view interpolation. [sent-211, score-0.887]
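A toy version of EPI inpainting with known disparity can be sketched as iterative averaging along the disparity direction, with known rays clamped. This is a crude stand-in for the convex inpainting model referenced in the excerpt; the helper below and its integer pixel shift are assumptions for illustration:

```python
import numpy as np

def inpaint_epi(F, mask, rho, n_iter=100):
    """Fill unknown EPI pixels (mask == True) by repeatedly averaging
    their two neighbours along the disparity direction d ~ [rho, 1]
    (rho rounded to an integer shift), clamping known pixels to F.
    Wrap-around boundaries via np.roll are a simplification."""
    k = int(round(rho))
    U = np.where(mask, 0.0, F)
    for _ in range(n_iter):
        fwd = np.roll(np.roll(U, -k, axis=0), -1, axis=1)  # value at (x+k, s+1)
        bwd = np.roll(np.roll(U,  k, axis=0),  1, axis=1)  # value at (x-k, s-1)
        U = np.where(mask, 0.5 * (fwd + bwd), F)
    return U

# Example: an EPI with lines of slope 1 and one missing view (s = 2);
# transporting along d recovers the missing column exactly.
X, S = 16, 5
x = np.arange(X)[:, None]
s = np.arange(S)[None, :]
F = ((x - s) % 4 == 0).astype(float)   # pattern translating with rho = 1
mask = np.zeros((X, S), dtype=bool)
mask[:, 2] = True                       # middle view unknown
U = inpaint_epi(F, mask, rho=1.0)
```

This is also the intuition behind using inpainting for view interpolation: the missing view lies on EPI lines whose slope is given by disparity.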
81 Disparity map regularization If disparity itself is the unknown field to be recovered, then the general model (9) is no longer convex, since the regularizer in turn depends on knowledge of the disparity field. [sent-212, score-1.626]
82 If possible, we start with a reasonable initialization; in the case of view interpolation this can for example be a disparity field obtained by linear interpolation. [sent-214, score-0.789]
83 Figure 7: (a) Damaged input, (b) spatial inpainting (TV), (c) light field inpainting, (d)–(g) results after 5, 10, 15, and 20 iterations. [sent-220, score-0.785]
84 Otherwise, we initialize with a disparity of zero or close to the expected median of disparity values. [sent-221, score-1.14]
85 We then iteratively solve (9) for a new disparity field, and update the regularizer with the new disparity field after every iteration. [sent-222, score-1.446]
86 Further improvements are possible if one includes the local constraints (5) on the disparity field. [sent-224, score-0.607]
87 We include this non-linear set of constraints in the energy minimization using Lagrange multipliers in order to optimize for a consistent disparity field. [sent-225, score-0.607]
88 Experiments show that this way, it is possible to obtain better estimates for disparity in e. [sent-226, score-0.57]
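The alternating scheme described above can be summarized abstractly. Here `solve_model` is a placeholder for one convex solve of (9) with the regularizer built from the current disparity field; the damped-update stand-in below only illustrates the fixed-point structure of the iteration:

```python
import numpy as np

def alternating_disparity_refinement(rho_init, solve_model, n_outer=5):
    """Alternating scheme sketch: since the regularizer depends on the
    disparity field itself, fix the current disparity, solve the (then
    convex) model for a new disparity field, and repeat."""
    rho = rho_init.copy()
    for _ in range(n_outer):
        rho = solve_model(rho)   # one convex solve with fixed regularizer
    return rho

# Toy stand-in solver: damped move towards some target disparity field.
target = np.full((4, 4), 0.8)
rho0 = np.zeros((4, 4))
rho_out = alternating_disparity_refinement(
    rho0, lambda r: r + 0.5 * (target - r), n_outer=10)
```

In the actual framework, the local constraints (5) would additionally be enforced inside each solve, e.g. via Lagrange multipliers as described above.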
89 Conclusion For solving inverse problems on ray space, priors are required which respect the directional structure on epipolar plane images induced by disparity. [sent-230, score-0.8]
90 In this work, we introduce a general variational framework for solving inverse problems on ray space with arbitrary differentiable and convex data terms, such as those for denoising, inpainting, and segmentation. [sent-232, score-0.902]
91 We demonstrate that these fundamental applications in image analysis can be solved more accurately on a light field structure than on individual images. [sent-233, score-0.565]
92 This way, the proposed method contributes a solid foundation for the future development of variational light field analysis. [sent-234, score-0.66]
93 The light field camera: Extended depth of field, aliasing, and superresolution. [sent-244, score-0.565]
94 Intermediate views in the upsampled light field were marked as unknown regions before solving the inpainting model (12). [sent-288, score-0.907]
95 standard light field rendering, a novel view generated by inpainting with interpolated disparity quantities, and finally a novel view generated by inpainting with disparity quantities also recovered via inpainting. [sent-291, score-2.395]
96 In the bottom row, one can compare disparity fields in the unknown regions generated with different methods. [sent-292, score-0.677]
97 In particular, we can see that the optimal way to infer disparity is via inpainting and additional observation of the local constraints (5). [sent-293, score-0.873]
98 Note: thesis led to commercial light field camera, see also www . [sent-318, score-0.565]
99 Spatial and angular variational super-resolution of 4D light fields. [sent-337, score-0.461]
100 Globally consistent multi-label assignment on the ray space of 4D light fields. [sent-344, score-0.76]
wordName wordTfidf (topN-words)
[('disparity', 0.57), ('ray', 0.365), ('light', 0.364), ('inpainting', 0.266), ('epipolar', 0.196), ('field', 0.177), ('plenoptic', 0.151), ('regularizer', 0.129), ('epi', 0.115), ('plane', 0.109), ('variational', 0.097), ('fields', 0.084), ('view', 0.08), ('views', 0.077), ('wanner', 0.077), ('vectorial', 0.071), ('damaged', 0.067), ('regularization', 0.066), ('denoising', 0.064), ('convex', 0.063), ('regularizers', 0.062), ('pinhole', 0.059), ('inverse', 0.057), ('epis', 0.043), ('gaus', 0.043), ('planes', 0.043), ('scene', 0.039), ('zf', 0.038), ('jst', 0.038), ('jxs', 0.038), ('jyt', 0.038), ('lumigraph', 0.038), ('interpolation', 0.038), ('constraints', 0.037), ('rays', 0.037), ('imaging', 0.037), ('subgradient', 0.036), ('restriction', 0.036), ('infinitesimal', 0.036), ('goldluecke', 0.036), ('differential', 0.035), ('oise', 0.033), ('cameras', 0.033), ('inpainted', 0.032), ('anisotropic', 0.032), ('ly', 0.031), ('systematically', 0.031), ('space', 0.031), ('sense', 0.031), ('jv', 0.031), ('ddt', 0.031), ('origins', 0.03), ('camera', 0.029), ('arises', 0.029), ('proximity', 0.029), ('sharper', 0.029), ('map', 0.028), ('opaque', 0.028), ('levoy', 0.028), ('point', 0.028), ('conference', 0.028), ('operator', 0.027), ('direction', 0.027), ('labeling', 0.026), ('descent', 0.026), ('slope', 0.026), ('uy', 0.026), ('priors', 0.026), ('projection', 0.025), ('enforce', 0.025), ('ux', 0.024), ('surfaces', 0.024), ('rendering', 0.024), ('depth', 0.024), ('derivatives', 0.024), ('www', 0.024), ('photography', 0.024), ('structure', 0.024), ('problems', 0.023), ('aperture', 0.023), ('stack', 0.023), ('unknown', 0.023), ('parametrization', 0.023), ('ultimately', 0.023), ('onto', 0.022), ('projections', 0.022), ('interpolated', 0.022), ('prohibitively', 0.022), ('viewing', 0.022), ('stereo', 0.022), ('understood', 0.022), ('development', 0.022), ('shift', 0.022), ('visible', 0.022), ('closed', 0.022), ('enable', 0.021), ('smoothing', 0.021), ('hardware', 0.021), 
('international', 0.021), ('closer', 0.021)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000004 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields
Author: Bastian Goldluecke, Sven Wanner
Abstract: Unlike traditional images which do not offer information for different directions of incident light, a light field is defined on ray space, and implicitly encodes scene geometry data in a rich structure which becomes visible on its epipolar plane images. In this work, we analyze regularization of light fields in variational frameworks and show that their variational structure is induced by disparity, which is in this context best understood as a vector field on epipolar plane image space. We derive differential constraints on this vector field to enable consistent disparity map regularization. Furthermore, we show how the disparity field is related to the regularization of more general vector-valued functions on the 4D ray space of the light field. This way, we derive an efficient variational framework with convex priors, which can serve as a fundament for a large class of inverse problems on ray space.
2 0.69602066 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields
Author: Sven Wanner, Christoph Straehle, Bastian Goldluecke
Abstract: Wepresent thefirst variationalframeworkfor multi-label segmentation on the ray space of 4D light fields. For traditional segmentation of single images, , features need to be extractedfrom the 2Dprojection ofa three-dimensional scene. The associated loss of geometry information can cause severe problems, for example if different objects have a very similar visual appearance. In this work, we show that using a light field instead of an image not only enables to train classifiers which can overcome many of these problems, but also provides an optimal data structure for label optimization by implicitly providing scene geometry information. It is thus possible to consistently optimize label assignment over all views simultaneously. As a further contribution, we make all light fields available online with complete depth and segmentation ground truth data where available, and thus establish the first benchmark data set for light field analysis to facilitate competitive further development of algorithms.
3 0.32160082 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures
Author: Yuichi Takeda, Shinsaku Hiura, Kosuke Sato
Abstract: In this paper we propose a novel depth measurement method by fusing depth from defocus (DFD) and stereo. One of the problems of passive stereo method is the difficulty of finding correct correspondence between images when an object has a repetitive pattern or edges parallel to the epipolar line. On the other hand, the accuracy of DFD method is inherently limited by the effective diameter of the lens. Therefore, we propose the fusion of stereo method and DFD by giving different focus distances for left and right cameras of a stereo camera with coded apertures. Two types of depth cues, defocus and disparity, are naturally integrated by the magnification and phase shift of a single point spread function (PSF) per camera. In this paper we give the proof of the proportional relationship between the diameter of defocus and disparity which makes the calibration easy. We also show the outstanding performance of our method which has both advantages of two depth cues through simulation and actual experiments.
4 0.27389959 147 cvpr-2013-Ensemble Learning for Confidence Measures in Stereo Vision
Author: Ralf Haeusler, Rahul Nair, Daniel Kondermann
Abstract: With the aim to improve accuracy of stereo confidence measures, we apply the random decision forest framework to a large set of diverse stereo confidence measures. Learning and testing sets were drawnfrom the recently introduced KITTI dataset, which currently poses higher challenges to stereo solvers than other benchmarks with ground truth for stereo evaluation. We experiment with semi global matching stereo (SGM) and a census dataterm, which is the best performing realtime capable stereo method known to date. On KITTI images, SGM still produces a significant amount of error. We obtain consistently improved area under curve values of sparsification measures in comparison to best performing single stereo confidence measures where numbers of stereo errors are large. More specifically, our method performs best in all but one out of 194 frames of the KITTI dataset.
5 0.25027436 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras
Author: Donald G. Dansereau, Oscar Pizarro, Stefan B. Williams
Abstract: Plenoptic cameras are gaining attention for their unique light gathering and post-capture processing capabilities. We describe a decoding, calibration and rectification procedurefor lenselet-basedplenoptic cameras appropriatefor a range of computer vision applications. We derive a novel physically based 4D intrinsic matrix relating each recorded pixel to its corresponding ray in 3D space. We further propose a radial distortion model and a practical objective function based on ray reprojection. Our 15-parameter camera model is of much lower dimensionality than camera array models, and more closely represents the physics of lenselet-based cameras. Results include calibration of a commercially available camera using three calibration grid sizes over five datasets. Typical RMS ray reprojection errors are 0.0628, 0.105 and 0.363 mm for 3.61, 7.22 and 35.1 mm calibration grids, respectively. Rectification examples include calibration targets and real-world imagery.
6 0.21189629 384 cvpr-2013-Segment-Tree Based Cost Aggregation for Stereo Matching
7 0.20934606 219 cvpr-2013-In Defense of 3D-Label Stereo
8 0.19313869 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?
9 0.19235854 155 cvpr-2013-Exploiting the Power of Stereo Confidences
10 0.18802066 362 cvpr-2013-Robust Monocular Epipolar Flow Estimation
11 0.1681502 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation
12 0.1660834 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation
13 0.13355289 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns
14 0.11677346 330 cvpr-2013-Photometric Ambient Occlusion
15 0.11418808 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display
17 0.098609127 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation
18 0.097363688 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition
19 0.094448082 117 cvpr-2013-Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-Mounted Camera
20 0.090953507 230 cvpr-2013-Joint 3D Scene Reconstruction and Class Segmentation
topicId topicWeight
[(0, 0.16), (1, 0.26), (2, 0.011), (3, 0.081), (4, -0.0), (5, -0.102), (6, -0.084), (7, 0.045), (8, 0.032), (9, 0.098), (10, -0.034), (11, 0.107), (12, 0.289), (13, -0.065), (14, -0.406), (15, 0.192), (16, -0.085), (17, -0.104), (18, 0.089), (19, 0.056), (20, -0.06), (21, 0.13), (22, 0.198), (23, 0.065), (24, -0.044), (25, 0.049), (26, 0.043), (27, -0.071), (28, 0.031), (29, -0.054), (30, 0.067), (31, -0.044), (32, -0.04), (33, -0.082), (34, 0.045), (35, -0.04), (36, -0.035), (37, 0.004), (38, 0.011), (39, -0.05), (40, -0.008), (41, 0.063), (42, 0.036), (43, -0.041), (44, 0.015), (45, -0.055), (46, -0.052), (47, -0.003), (48, 0.089), (49, 0.016)]
simIndex simValue paperId paperTitle
same-paper 1 0.97569835 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields
2 0.90790391 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields
3 0.71876329 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures
4 0.63983274 219 cvpr-2013-In Defense of 3D-Label Stereo
Author: Carl Olsson, Johannes Ulén, Yuri Boykov
Abstract: It is commonly believed that higher order smoothness should be modeled using higher order interactions. For example, 2nd order derivatives for deformable (active) contours are represented by triple cliques. Similarly, the 2nd order regularization methods in stereo predominantly use MRF models with scalar (1D) disparity labels and triple clique interactions. In this paper we advocate a largely overlooked alternative approach to stereo where 2nd order surface smoothness is represented by pairwise interactions with 3D-labels, e.g. tangent planes. This general paradigm has been criticized due to perceived computational complexity of optimization in higher-dimensional label space. Contrary to popular beliefs, we demonstrate that representing 2nd order surface smoothness with 3D labels leads to simpler optimization problems with (nearly) submodular pairwise interactions. Our theoretical and experimental re- sults demonstrate advantages over state-of-the-art methods for 2nd order smoothness stereo. 1
5 0.61806881 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras
Author: Donald G. Dansereau, Oscar Pizarro, Stefan B. Williams
Abstract: Plenoptic cameras are gaining attention for their unique light gathering and post-capture processing capabilities. We describe a decoding, calibration and rectification procedure for lenselet-based plenoptic cameras appropriate for a range of computer vision applications. We derive a novel physically based 4D intrinsic matrix relating each recorded pixel to its corresponding ray in 3D space. We further propose a radial distortion model and a practical objective function based on ray reprojection. Our 15-parameter camera model is of much lower dimensionality than camera array models, and more closely represents the physics of lenselet-based cameras. Results include calibration of a commercially available camera using three calibration grid sizes over five datasets. Typical RMS ray reprojection errors are 0.0628, 0.105 and 0.363 mm for 3.61, 7.22 and 35.1 mm calibration grids, respectively. Rectification examples include calibration targets and real-world imagery.
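A 4D intrinsic matrix of this kind can be pictured as a homogeneous 5×5 linear map from a plenoptic pixel index to a ray in a two-plane parameterization. The matrix entries below are made-up scales and offsets purely for illustration; they are not a real calibration and not the paper's estimated values.

```python
import numpy as np

# Hypothetical 4D intrinsic matrix H: maps a plenoptic pixel index
# n = [i, j, k, l, 1]^T (lenslet i, j and sub-aperture k, l) to a ray
# r = [s, t, u, v, 1]^T in a two-plane parameterization.
H = np.array([
    [2e-4, 0.0,  0.0,  0.0,  -0.01],  # s depends mainly on lenslet column
    [0.0,  2e-4, 0.0,  0.0,  -0.01],  # t on lenslet row
    [1e-5, 0.0,  3e-3, 0.0,  -0.05],  # u mixes lenslet and sub-aperture index
    [0.0,  1e-5, 0.0,  3e-3, -0.05],
    [0.0,  0.0,  0.0,  0.0,   1.0],
])

def pixel_to_ray(i, j, k, l):
    """Map an integer pixel index to its ray coordinates (s, t, u, v)."""
    r = H @ np.array([i, j, k, l, 1.0])
    return r[:4]
```

The appeal of this form is that decoding, calibration and ray reprojection all reduce to linear algebra on H plus a low-parameter distortion model.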
6 0.61483634 147 cvpr-2013-Ensemble Learning for Confidence Measures in Stereo Vision
7 0.60367405 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display
8 0.59637541 155 cvpr-2013-Exploiting the Power of Stereo Confidences
9 0.58742392 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation
10 0.53824574 384 cvpr-2013-Segment-Tree Based Cost Aggregation for Stereo Matching
11 0.52676064 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?
12 0.52431935 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition
13 0.50556666 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation
14 0.50361669 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation
15 0.47864047 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging
16 0.46822509 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas
17 0.44085827 344 cvpr-2013-Radial Distortion Self-Calibration
18 0.4237082 362 cvpr-2013-Robust Monocular Epipolar Flow Estimation
20 0.39969268 330 cvpr-2013-Photometric Ambient Occlusion
topicId topicWeight
[(10, 0.151), (16, 0.019), (22, 0.032), (26, 0.041), (33, 0.223), (67, 0.036), (69, 0.026), (87, 0.133), (96, 0.237)]
simIndex simValue paperId paperTitle
Author: Stefan Harmeling, Michael Hirsch, Bernhard Schölkopf
Abstract: We establish a link between Fourier optics and a recent construction from the machine learning community termed the kernel mean map. Using the Fraunhofer approximation, it identifies the kernel with the squared Fourier transform of the aperture. This allows us to use results about the invertibility of the kernel mean map to provide a statement about the invertibility of Fraunhofer diffraction, showing that imaging processes with arbitrarily small apertures can in principle be invertible, i.e., do not lose information, provided the objects to be imaged satisfy a generic condition. A real-world experiment shows that we can super-resolve beyond the Rayleigh limit.
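The "kernel equals squared Fourier transform of the aperture" identification can be sketched numerically: under the Fraunhofer approximation, the incoherent PSF is (up to scale) the squared modulus of the aperture's Fourier transform. The grid size and circular pupil below are arbitrary illustration choices.

```python
import numpy as np

# Fraunhofer sketch: PSF = |FT(aperture)|^2 up to scale.
n = 256
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (xx**2 + yy**2 <= 20**2).astype(float)  # circular pupil, radius 20 px

# ifftshift centers the pupil on the FFT origin; fftshift puts DC back
# in the middle of the output for viewing.
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture))))**2
psf /= psf.sum()  # normalize to unit energy

# In the kernel-mean-map reading, this psf is the convolution kernel;
# a smaller aperture yields a wider, more diffracted psf.
```

Shrinking the pupil radius and recomputing shows the diffraction-limited widening that the invertibility statement concerns.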
2 0.84634107 228 cvpr-2013-Is There a Procedural Logic to Architecture?
Author: Julien Weissenberg, Hayko Riemenschneider, Mukta Prasad, Luc Van_Gool
Abstract: Urban models are key to navigation, architecture and entertainment. Apart from visualizing façades, a number of tedious tasks remain largely manual (e.g. compression, generating new façade designs and structurally comparing façades for classification, retrieval and clustering). We propose a novel procedural modelling method to automatically learn a grammar from a set of façades, generate new façade instances and compare façades. To deal with the difficulty of grammatical inference, we reformulate the problem. Instead of inferring a compromising, one-size-fits-all, single grammar for all tasks, we infer a model whose successive refinements are production rules tailored for each task. We demonstrate our automatic rule inference on datasets of two different architectural styles. Our method supersedes manual expert work and cuts the time required to build a procedural model of a façade from several days to a few milliseconds.
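The flavour of such a procedural model can be conveyed with a toy split-grammar: a façade splits vertically into floors, each floor horizontally into wall and window tiles. The rules below are invented for illustration only; they are not the grammar the paper infers.

```python
# Toy façade split-grammar (illustrative, not the paper's learned rules):
# non-terminals map to lists of productions; unknown symbols are terminals.
rules = {
    "Facade": [["Floor", "Floor", "Floor"]],
    "Floor":  [["Wall", "Window", "Wall", "Window", "Wall"]],
    "Window": [["window"]],
    "Wall":   [["wall"]],
}

def expand(symbol):
    """Expand a symbol into a flat list of terminals (first production only)."""
    if symbol not in rules:  # terminal tile
        return [symbol]
    out = []
    for child in rules[symbol][0]:
        out.extend(expand(child))
    return out
```

Generating a new façade instance then amounts to choosing among productions; comparing façades amounts to comparing the rule sets that generate them.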
same-paper 3 0.83664191 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields
Author: Bastian Goldluecke, Sven Wanner
Abstract: Unlike traditional images which do not offer information for different directions of incident light, a light field is defined on ray space, and implicitly encodes scene geometry data in a rich structure which becomes visible on its epipolar plane images. In this work, we analyze regularization of light fields in variational frameworks and show that their variational structure is induced by disparity, which is in this context best understood as a vector field on epipolar plane image space. We derive differential constraints on this vector field to enable consistent disparity map regularization. Furthermore, we show how the disparity field is related to the regularization of more general vector-valued functions on the 4D ray space of the light field. This way, we derive an efficient variational framework with convex priors, which can serve as a fundament for a large class of inverse problems on ray space.
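The "disparity as a vector field on epipolar plane image space" idea rests on a standard geometric fact: a Lambertian scene point traces a straight line across the EPI whose slope is its disparity. The sketch below uses the common relation d = focal·baseline/depth with illustrative default values, not the paper's calibrated quantities.

```python
def epi_position(x0, depth, s, focal=1.0, baseline=1.0):
    """Horizontal position of a scene point in view s of an epipolar plane image.

    A point at distance `depth` appears on the EPI along a straight line
    whose slope is the disparity d = focal * baseline / depth.
    `focal` and `baseline` are illustrative defaults.
    """
    d = focal * baseline / depth  # disparity: shift per unit view index
    return x0 + d * s

# Closer points (small depth) have larger disparity, i.e. steeper EPI lines;
# estimating this slope per EPI pixel yields the disparity vector field
# whose differential structure the paper constrains and regularizes.
```

This is why EPI orientation estimation and disparity regularization are two views of the same field.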
4 0.83321536 218 cvpr-2013-Improving the Visual Comprehension of Point Sets
Author: Sagi Katz, Ayellet Tal
Abstract: Point sets are the standard output of many 3D scanning systems and depth cameras. Presenting the set of points as is might “hide” the prominent features of the object from which the points are sampled. Our goal is to reduce the number of points in a point set, for improving the visual comprehension from a given viewpoint. This is done by controlling the density of the reduced point set, so as to create bright regions (low density) and dark regions (high density), producing an effect of shading. This data reduction is achieved by leveraging a limitation of a solution to the classical problem of determining visibility from a viewpoint. In addition, we introduce a new dual problem, for determining visibility of a point from infinity, and show how a limitation of its solution can be leveraged in a similar way.
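The classical visibility-from-a-viewpoint solution the abstract alludes to is commonly implemented by spherical flipping: each point (in viewpoint-centred coordinates) is reflected outward about a sphere of radius R, and the convex hull of the flipped set then exposes the visible points. The sketch below shows only the flip step, under the assumption of viewpoint-centred coordinates and an R larger than all point norms.

```python
import numpy as np

def spherical_flip(points, radius):
    """Reflect points outward about a sphere of the given radius.

    points: (N, 3) array in viewpoint-centred coordinates (no point at the
    origin); radius: sphere radius, assumed larger than every point norm.
    A point at distance r from the viewpoint lands at distance 2*radius - r
    along the same direction.
    """
    norms = np.linalg.norm(points, axis=1, keepdims=True)
    return points + 2.0 * (radius - norms) * points / norms
```

Taking the convex hull of the flipped points together with the viewpoint (e.g. via scipy.spatial.ConvexHull) marks the visible points; the paper leverages where this construction fails in order to control point density.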
5 0.78620398 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields
Author: Sven Wanner, Christoph Straehle, Bastian Goldluecke
Abstract: We present the first variational framework for multi-label segmentation on the ray space of 4D light fields. For traditional segmentation of single images, features need to be extracted from the 2D projection of a three-dimensional scene. The associated loss of geometry information can cause severe problems, for example if different objects have a very similar visual appearance. In this work, we show that using a light field instead of an image not only enables training classifiers which can overcome many of these problems, but also provides an optimal data structure for label optimization by implicitly providing scene geometry information. It is thus possible to consistently optimize label assignment over all views simultaneously. As a further contribution, we make all light fields available online with complete depth and segmentation ground truth data where available, and thus establish the first benchmark data set for light field analysis to facilitate competitive further development of algorithms.
6 0.76343518 298 cvpr-2013-Multi-scale Curve Detection on Surfaces
7 0.76275885 209 cvpr-2013-Hypergraphs for Joint Multi-view Reconstruction and Multi-object Tracking
8 0.76124907 39 cvpr-2013-Alternating Decision Forests
10 0.75907797 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities
11 0.75702208 239 cvpr-2013-Kernel Null Space Methods for Novelty Detection
12 0.75628543 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display
13 0.75476611 71 cvpr-2013-Boundary Cues for 3D Object Shape Recovery
14 0.75451249 393 cvpr-2013-Separating Signal from Noise Using Patch Recurrence across Scales
15 0.75444055 143 cvpr-2013-Efficient Large-Scale Structured Learning
16 0.75435299 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems
17 0.75339031 248 cvpr-2013-Learning Collections of Part Models for Object Recognition
18 0.75274104 107 cvpr-2013-Deformable Spatial Pyramid Matching for Fast Dense Correspondences
19 0.75113624 19 cvpr-2013-A Minimum Error Vanishing Point Detection Approach for Uncalibrated Monocular Images of Man-Made Environments
20 0.75108486 98 cvpr-2013-Cross-View Action Recognition via a Continuous Virtual Path