cvpr cvpr2013 cvpr2013-354 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Eno Töppe, Claudia Nieuwenhuis, Daniel Cremers
Abstract: We introduce the concept of relative volume constraints in order to account for insufficient information in the reconstruction of 3D objects from a single image. The key idea is to formulate a variational reconstruction approach with shape priors in form of relative depth profiles or volume ratios relating object parts. Such shape priors can easily be derived either from a user sketch or from the object’s shading profile in the image. They can handle textured or shadowed object regions by propagating information. We propose a convex relaxation of the constrained optimization problem which can be solved optimally in a few seconds on graphics hardware. In contrast to existing single view reconstruction algorithms, the proposed algorithm provides substantially more flexibility to recover shape details such as self-occlusions, dents and holes, which are not visible in the object silhouette.
Reference: text
sentIndex sentText sentNum sentScore
1 The key idea is to formulate a variational reconstruction approach with shape priors in form of relative depth profiles or volume ratios relating object parts. [sent-2, score-1.396]
2 Such shape priors can easily be derived either from a user sketch or from the object’s shading profile in the image. [sent-3, score-0.845]
3 In contrast to existing single view reconstruction algorithms, the proposed algorithm provides substantially more flexibility to recover shape details such as self-occlusions, dents and holes, which are not visible in the object silhouette. [sent-6, score-0.396]
4 symmetry assumptions [4], topological constraints [12], planarity [5], minimal surfaces with volume constraints [14], learned shape priors [3] and others. [sent-16, score-0.59]
5 3D reconstruction result from the single car image on the left based on relative volume constraints. [sent-20, score-0.458]
6 Given a 2D image we infer the object geometry based on shape profiles and volume ratio constraints. [sent-21, score-0.959]
7 These are either imposed by the user or estimated from shading information. [sent-22, score-0.489]
8 We formulate a graph based optimization approach, which automatically computes object shape profiles from the image. [sent-28, score-0.722]
9 By estimating shape profiles from shading information we directly infer shape knowledge instead of computing dense normal maps. [sent-29, score-1.015]
10 Firstly, the computation of shape profiles is simpler than the computation of dense normal maps and thus less error prone. [sent-31, score-0.748]
11 Since the user only indicates profile lines in untextured regions without shadows, reasonable profile estimates can be computed and then propagated to textured and shadowed regions. Figure 2 panels: a) Original, b) Volume constraint, c) Depth profile, d) Partial result, e) Volume ratio, f) Reconstruction. [sent-35, score-1.072]
12 3D reconstruction of the watering can using absolute and relative volume constraints, see text for explanation. [sent-36, score-0.707]
13 For this task, flexible scalable shape profiles are much better suited than point-wise absolute normal information. [sent-38, score-0.769]
14 Finally, the volume and silhouette constraints restrict the reconstruction to valid, closed objects, which is not necessarily true for shape from shading approaches. [sent-39, score-0.819]
15 To give a clear idea of the reconstruction process based on the different volumetric constraints, we show an example of the reconstruction of the watering can in Figure 2. [sent-41, score-0.616]
16 If we look for a minimal surface that is consistent with the object silhouette and impose a constraint on the object volume we obtain the ball shaped reconstruction with flat handle in Figure 2 b). [sent-43, score-0.747]
17 To improve the result we introduce a depth profile constraint, which defines the rough shape of the object along a cross section. [sent-44, score-0.792]
18 In the example above, the profile in Figure 2 c) is imposed along the vertical cross section of the can indicated in red in Figure 2 a). [sent-45, score-0.624]
19 It can either be given by the user or estimated from shading information. [sent-46, score-0.37]
20 By imposing this profile we obtain the result with handle in Figure 2 d). [sent-47, score-0.414]
21 To further improve the reconstruction we apply a volume ratio constraint. [sent-50, score-0.436]
22 ’Volume ratio’ means that we restrict the object volume within the indicated pink region to a specific ratio of the full object volume. [sent-51, score-0.449]
23 Note that the imposed profile constraints define relative instead of absolute depth values, i.e. [sent-55, score-0.887]
24 the depth of one pixel is proportional to the depth of a reference pixel within the profile. [sent-57, score-0.398]
25 Since the depth values are relative, the profiles scale with the imposed volume. Figure 3 panels: a) Input profile, b) 20% volume, c) 40% volume. [sent-58, score-1.621]
26 Application of the shape profile in a) to a spherical 2D shape with b) 20% and c) 40% volume. [sent-59, score-0.58]
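A minimal numerical sketch of this scaling behaviour (the function name, the circular silhouette and the triangular profile are illustrative assumptions, not the authors' variational solver): the profile only fixes depth ratios, and a single global factor rescales it to whatever volume is imposed.

```python
import numpy as np

def apply_relative_profile(widths, profile, volume):
    """Scale a relative depth profile so that the enclosed 'volume'
    (here a 2D analogue: sum of depth * silhouette width) matches the
    requested value.  The profile only fixes depth ratios; the global
    scale comes entirely from the volume constraint."""
    profile = np.asarray(profile, dtype=float)
    widths = np.asarray(widths, dtype=float)
    scale = volume / float((profile * widths).sum())
    return profile * scale            # absolute depths per cross-section

# the same triangular profile applied at 20% and 40% of a reference volume,
# mimicking Figure 3 b) and c) on a spherical (circular) 2D silhouette
x = np.linspace(-1.0, 1.0, 101)
widths = np.sqrt(np.maximum(1.0 - x ** 2, 0.0))   # circular silhouette widths
profile = 1.0 - np.abs(x)                          # sketched relative depths
v_ref = widths.sum()                               # reference volume
depth_20 = apply_relative_profile(widths, profile, 0.20 * v_ref)
depth_40 = apply_relative_profile(widths, profile, 0.40 * v_ref)
```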
27 Only very few works compute exact reconstructions [5], but they can only do so by assuming piecewise planarity of the reconstruction and with the help of user interaction. [sent-69, score-0.495]
28 Instead we compute closed objects using semantic information such as the object silhouette, shape profiles and object volume. [sent-74, score-0.756]
29 Our approach differs from other existing shape-from-shading methods in the following points: Firstly, we do not seek dense reconstructions but use shading information only to extract semantic shape profiles. [sent-76, score-0.39]
30 Contributions We propose a 3D reconstruction approach from a single image, which comes with the following advantages: • We impose characteristic object shape by means of relative depth profiles and partial volume ratios. [sent-84, score-1.101]
31 • We propose a method to automatically infer depth profiles from the shading information in the image. [sent-85, score-0.401]
32 For the locations of the depth profiles we require homogeneous material with constant albedo. [sent-86, score-0.768]
33 In [14], for example, the object volume V defined by the user is introduced as a hard or soft constraint Vol(S) = V. [sent-118, score-0.435]
34 It enables the user to interactively control the volume of the inflated object. [sent-124, score-0.375]
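A toy 2D analogue of this volume-constrained inflation can be written with an off-the-shelf convex solver. Everything below (cvxpy, the disc silhouette, the total-variation term as a surface-area surrogate, the weight lam) is an assumption for illustration and not the volumetric GPU formulation of [14]:

```python
import numpy as np
import cvxpy as cp

# depth map u over a disc silhouette; minimise a smoothness (area) surrogate
H, W, V = 32, 32, 150.0
yy, xx = np.mgrid[0:H, 0:W]
mask = ((yy - H / 2.0) ** 2 + (xx - W / 2.0) ** 2) < (H / 3.0) ** 2
outside = (~mask).astype(float)

u = cp.Variable((H, W), nonneg=True)
area = cp.tv(u)                                    # surface-area surrogate

# hard volume constraint: Vol(S) = V
hard = cp.Problem(cp.Minimize(area),
                  [cp.multiply(outside, u) == 0, cp.sum(u) == V])
hard.solve()

# soft variant: penalise deviations from V instead of enforcing them exactly
lam = 1e-2
soft = cp.Problem(cp.Minimize(area + lam * cp.square(cp.sum(u) - V)),
                  [cp.multiply(outside, u) == 0])
soft.solve()
```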
35 In the following sections we show how two types of additional depth constraints on object parts can be imposed to allow for diverse object shapes, which can be interactively determined by the user or derived automatically from shading information in the image. [sent-126, score-0.868]
36 Relative Depth Profiles Relative depth profiles indicate the shape of the object along a given cross section. [sent-131, score-0.962]
37 Such a profile consists of two ingredients: 1) the line which marks the location of the profile in the image plane (see the red line in Figure 2 a) ), 2) the desired qualitative (not absolute) depth values along the line (see the pink sketch in Figure 2 c) ). [sent-132, score-1.277]
38 The depth profile can either be sketched by the user or computed from shading information. [sent-133, score-0.949]
39 Let C ⊆ Σ denote the profile line across the object within the image plane, which indicates the desired location of the shape profile. [sent-134, score-0.534]
40 Let the depth ratio ry ∈ R0+ indicate the depth of the object at pixel y with respect to that of a reference pixel, which can be picked arbitrarily from those within the profile C. [sent-136, score-0.855]
41 The relative depth constraints are linear and convex and can be introduced into the original energy (3): ED(u) = EV(u) s.t. the depth at each profile pixel y ∈ C equals ry times the depth at the reference pixel. (4) [sent-138, score-0.405]
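Since the constraints in (4) are plain linear equalities, they can be appended to the constraint set of a generic convex solver. The sketch below reuses the toy setup from above (all names are assumptions) and pins the depths along one row of the silhouette to a triangular ratio profile relative to an arbitrary reference pixel:

```python
import numpy as np
import cvxpy as cp

H, W, V = 32, 32, 150.0
yy, xx = np.mgrid[0:H, 0:W]
mask = ((yy - H / 2.0) ** 2 + (xx - W / 2.0) ** 2) < (H / 3.0) ** 2
outside = (~mask).astype(float)

u = cp.Variable((H, W), nonneg=True)
cons = [cp.multiply(outside, u) == 0, cp.sum(u) == V]    # silhouette + volume

# profile line C: the middle row of the silhouette, with triangular ratios ry
row = H // 2
cols = np.where(mask[row])[0]
mid = len(cols) // 2                                     # arbitrary reference pixel
r = 1.0 - 0.9 * np.abs(np.linspace(-1.0, 1.0, len(cols)))
r = r / r[mid]                                           # ratio of 1 at the reference
for c, ry in zip(cols, r):
    cons.append(u[row, int(c)] == float(ry) * u[row, int(cols[mid])])  # Eq. (4): linear, convex

prob = cp.Problem(cp.Minimize(cp.tv(u)), cons)
prob.solve()
```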
42 The different steps for extracting profiles from an input image using shading information: a) The user provides color samples of the reflection function by marking corresponding scribbles in the input image and on a sphere. [sent-144, score-0.98]
43 c) The user marks horizontal lines in the input image for which the height profiles will be estimated. [sent-146, score-0.813]
44 d) For estimating a single profile a shortest path is computed on the graph indicated. [sent-147, score-0.475]
45 e) Each shortest path corresponds to a depth profile which together determine the shape of the watering can. [sent-148, score-0.985]
46 User Drawn Profiles For simple object shapes, rough profile sketches can easily be outlined by the user. [sent-149, score-0.447]
47 We propose to apply the given depth profile along each object cross section parallel to the reference cross section. [sent-152, score-0.773]
48 When multiple relative depth profiles are indicated, we blend them linearly in between: to apply different profile constraints at different cross-sections we compute their linear combination. [sent-157, score-1.33]
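A possible reading of this linear blending, with assumed function and argument names rather than the authors' exact scheme: profiles given at a few cross-sections are interpolated row by row, and rows outside the outermost profiles reuse the nearest one.

```python
import numpy as np

def blend_profiles(profile_rows, profiles, n_rows):
    """Linearly blend relative depth profiles given at a few cross-sections
    (rows) into one profile per silhouette row.  Assumes the given profiles
    are resampled to a common length and profile_rows is sorted."""
    profile_rows = list(profile_rows)
    profiles = [np.asarray(p, dtype=float) for p in profiles]
    out = np.zeros((n_rows, profiles[0].size))
    for r in range(n_rows):
        if r <= profile_rows[0]:
            out[r] = profiles[0]                    # before the first profile
        elif r >= profile_rows[-1]:
            out[r] = profiles[-1]                   # after the last profile
        else:
            k = int(np.searchsorted(profile_rows, r)) - 1   # bracketing pair
            t = (r - profile_rows[k]) / float(profile_rows[k + 1] - profile_rows[k])
            out[r] = (1.0 - t) * profiles[k] + t * profiles[k + 1]
    return out

# e.g. two user profiles at rows 10 and 40 of a 50-row silhouette
rows = blend_profiles([10, 40], [np.linspace(0, 1, 64), np.ones(64)], 50)
```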
49 Shading Based Profiles Rather than drawing the depth profiles by hand, which can be tedious, we propose to estimate them directly from the input image. [sent-158, score-0.789]
50 If no profile information can be estimated due to texture or shadow, shape information is propagated from neighboring profiles during surface reconstruction (see previous paragraph). [sent-164, score-1.332]
51 The proposed interactive approach for estimating the profile consists of two steps. [sent-165, score-0.409]
52 In the second step the user defines which profiles should be estimated by marking their respective locations in the input image. [sent-167, score-0.79]
53 Finally, relative depth along the profiles is computed automatically by finding the shortest path in a graph. [sent-168, score-0.979]
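For the first step, the scribble correspondences between the input image and the rendered sphere (Figure 4 a) provide pairs of surface normal and colour. A minimal stand-in for the resulting reflectance lookup, assuming nearest-neighbour interpolation over the scribbled normals (the text does not prescribe a particular interpolation scheme):

```python
import numpy as np

def reflectance_from_scribbles(scribble_normals, scribble_colors):
    """Turn user scribbles into a reflectance lookup R(n): every scribbled
    pixel on the sphere contributes a (unit normal, RGB colour) pair, and a
    query normal returns the colour of its nearest sampled normal."""
    N = np.asarray(scribble_normals, dtype=float)     # (K, 3)
    C = np.asarray(scribble_colors, dtype=float)      # (K, 3)
    N = N / np.linalg.norm(N, axis=1, keepdims=True)
    def R(n):
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        return C[int(np.argmax(N @ n))]               # nearest-neighbour colour
    return R
```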
54 In the second step the user marks the profile lines in the input image for which relative depth will be estimated (Figure 4 c). [sent-181, score-0.881]
55 The lines are arbitrary as long as they start and end at contour points and the corresponding profiles do not contradict. [sent-182, score-0.654]
56 For each of the profile lines, we estimate the corresponding depth profile by computing a shortest path in a graph, which is described in the following. [sent-183, score-1.05]
57 Consider the set D = {n1, . . . , nN} of uniformly sampled normal directions ni ∈ R3 and the color sequence c1, c2, . . . , cM along the profile line C. [sent-187, score-0.469]
58 The graph consists of a set of M connected domes (half spheres), one dome for each pixel in the profile line C (see Figure 4 d). [sent-191, score-0.506]
59 Thus, the node vij in the graph represents the j-th sampled normal direction in dome i for profile pixel i. [sent-193, score-0.61]
60 A path through the graph selects one node per dome (i.e. one normal direction for each pixel in the profile), representing one possible sequence of surface normals from the start to the end point of the profile line. [sent-197, score-0.472]
61 The start and end normals are known, since the start and end points of the profile line lie on the object contour. [sent-198, score-0.62]
62 We assume that the most likely path connecting the start and end normal is the one with minimal color difference between reflectance value and image color for each node and minimal surface curvature in the sequence. [sent-200, score-0.472]
63 In the case of symmetric profiles we can increase the stability and accuracy by requiring that each normal agrees with the mirrored version of its counterpart in the other half. [sent-215, score-0.67]
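One compact way to realise this shortest path is a Viterbi-style dynamic program over the domes: the unary cost compares the reflectance of each candidate normal with the observed colour, the pairwise cost penalises the angle between consecutive normals as a curvature proxy, and the first and last dome are pinned to the known contour normals. The dome sampling, the weighting beta and the final slope integration into a relative depth profile are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def sample_dome(n_theta=6, n_phi=16):
    """Candidate normals on the camera-facing hemisphere (one 'dome')."""
    normals = [(0.0, 0.0, 1.0)]
    for t in np.linspace(0.1, np.pi / 2 - 0.05, n_theta):
        for p in np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False):
            normals.append((np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)))
    return np.asarray(normals)                                   # (N, 3)

def shading_profile(colors, reflectance, start_n, end_n, beta=1.0):
    """Viterbi shortest path over one dome per profile pixel, then slope
    integration of the chosen normals into a relative depth profile."""
    normals = sample_dome()
    M, N = len(colors), len(normals)
    unary = np.array([[np.linalg.norm(reflectance(n) - c) for n in normals]
                      for c in colors])                          # colour term
    for i, pin in ((0, start_n), (M - 1, end_n)):                # contour normals known
        j = int(np.argmax(normals @ np.asarray(pin, dtype=float)))
        unary[i] += 1e6
        unary[i, j] -= 1e6
    pair = beta * np.arccos(np.clip(normals @ normals.T, -1.0, 1.0))  # curvature term
    cost, back = unary[0].copy(), np.zeros((M, N), dtype=int)
    for i in range(1, M):                                        # forward pass
        total = cost[:, None] + pair + unary[i][None, :]
        back[i] = np.argmin(total, axis=0)
        cost = total[back[i], np.arange(N)]
    seq = [int(np.argmin(cost))]                                 # backtracking
    for i in range(M - 1, 0, -1):
        seq.append(int(back[i, seq[-1]]))
    path = normals[np.asarray(seq[::-1])]
    dz = -path[:-1, 0] / np.maximum(path[:-1, 2], 1e-3)          # dz/dx = -nx/nz
    z = np.concatenate([[0.0], np.cumsum(dz)])
    z -= z.min()
    return z / max(z.max(), 1e-9)                                # relative depths
```

In use, colors would hold the image colours sampled along one marked line, reflectance the lookup built from the sphere scribbles, and start_n, end_n the outward normals at the two contour endpoints of that line.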
64 A volume ratio constraint defines a fixed volume ratio for an object part with respect to the whole object. [sent-220, score-0.611]
65 Then the user specifies a volume ratio rp relative to the overall object volume V. [sent-224, score-0.565]
66 Each voxel in the reconstruction volume is then projected onto the viewing plane of the camera. [sent-225, score-0.408]
67 We introduce this constraint into the depth profile energy ED: ER(u) = ED(u) s.t. the volume of the marked part equals rp times the overall object volume V. (6) [sent-227, score-0.652]
68 Constraints on volume ratios can either be imposed as an additional constraint from the beginning or as a subsequent optimization problem after convergence of the original problem. [sent-232, score-0.413]
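Both sides of the ratio constraint in (6) are linear in the unknowns, so it can simply be appended to the toy problems sketched earlier, either from the start or in a second solve after the original problem has converged. The marked region, the 25% ratio and the solver are assumptions for illustration:

```python
import numpy as np
import cvxpy as cp

H, W, V, r_p = 32, 32, 150.0, 0.25
yy, xx = np.mgrid[0:H, 0:W]
mask = ((yy - H / 2.0) ** 2 + (xx - W / 2.0) ** 2) < (H / 3.0) ** 2
part = (mask & (xx > W / 2 + 4)).astype(float)      # user-marked part (e.g. a wing)
outside = (~mask).astype(float)

u = cp.Variable((H, W), nonneg=True)
cons = [cp.multiply(outside, u) == 0,
        cp.sum(u) == V,                              # overall volume
        cp.sum(cp.multiply(part, u)) == r_p * V]     # part holds 25% of it, Eq. (6)
cp.Problem(cp.Minimize(cp.tv(u)), cons).solve()
```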
69 The constraints for the global volume, the depth profiles and the volume ratios are linear and convex. [sent-247, score-0.869]
70 Experiments In this section, we show 3D reconstruction results with imposed relative volume constraints, i.e. relative depth profiles and volume ratios. [sent-269, score-0.558]
71 The relative depth profiles are hand drawn or estimated from shading information. [sent-272, score-0.839]
72 Figure 5 panels: a) Image, b) Toeppe et al. [14], c) Reconstructions with depth profile constraints.
73 3D reconstruction result b) without additional constraints [14] and c) with relative depth profile constraints. [sent-274, score-0.919]
74 The profile locations in the 2D image plane are marked in red, the corresponding depth curves in pink. [sent-275, score-0.615]
75 3D Reconstruction with Volume Constraints If no shape constraints such as depth profiles or volume ratios are applied, the reconstruction fails in many situations. [sent-280, score-1.364]
76 The reconstructions fail due to self-occlusions such as the handle of the watering can or the tires of the car. [sent-283, score-0.456]
77 To improve on these failed reconstructions, in the following we will impose depth profiles and volume ratio constraints. [sent-285, score-1.081]
78 1 Relative Depth Profiles User Drawn Profiles Relative depth profiles determine the basic shape of the object along an arbitrary cross section. [sent-288, score-0.962]
79 Figure 5 shows several reconstruction results based on user drawn depth profiles. [sent-289, score-0.551]
80 Since the profiles scale with the volume, it suffices to indicate the profile line on the image plane (here in red) together with a rough sketch of the corresponding depth (here in pink). [sent-290, score-1.285]
81 The profile of the shoe, for example, indicates that the shoe is wider at the front and back and narrow in the middle. [sent-291, score-0.422]
82 The profile imposed on the vase makes it slimmer and a little more bulgy at the top. [sent-292, score-0.509]
83 To model the pyramid’s triangular shape we imposed the shape profile indicating a linear depth increase from the top to the bottom. [sent-295, score-0.85]
84 For the watering can we first imposed a user drawn vertical profile as shown in Figure 2. [sent-296, score-0.95]
85 We attenuated the depth profile constraint with increasing distance from the reference profile. [sent-297, score-0.652]
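One simple way to realise such an attenuation is a per-row weight on the profile constraint; the Gaussian falloff below is our assumption, since the text only states that the constraint weakens with distance from the reference cross-section.

```python
import numpy as np

def attenuation_weights(n_rows, ref_row, sigma=20.0):
    """Weight of the depth profile constraint per cross-section: 1 at the
    reference row, decaying smoothly with distance (Gaussian falloff is an
    assumed choice)."""
    rows = np.arange(n_rows)
    return np.exp(-((rows - ref_row) ** 2) / (2.0 * sigma ** 2))
```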
86 Shading Based Profiles Figure 6 shows reconstruction examples based on depth profiles which were estimated from shading information in the input image. [sent-298, score-1.154]
87 Note that we can estimate the depth profile equally well on shiny (mug) and diffuse (watering can) materials since we estimate the reflectance function of the target object prior to the shape. [sent-302, score-0.737]
88 The estimated depth profiles for the watering can are shown in Figure 4 e). [sent-303, score-1.04]
89 Figure 7 shows the reconstruction of a tuba with a zero volume ratio constraint for modeling the opening and a 30% volume constraint for 111888222 Figure6. [sent-307, score-0.73]
90 For the airplane example, without relative volume constraints [14], after reducing the weight g along the wings, we obtain the result in the left image with rectangular wings, since the self-occlusions of the wings cannot be modeled. [sent-310, score-0.618]
91 By adding volume ratio constraints for the wings requiring the side wings to contain 25% and the tail wing 5% of the object volume we obtain the results with self-occluding wings on the right. [sent-311, score-0.928]
92 In the car example the reconstruction without additional volume constraints yields two very long tires instead of four normal ones, since the empty space between the parallel tires cannot be inferred without prior knowledge. [sent-312, score-0.699]
93 For the watering can we increased the thickness of the spout by adding a 4% volume ratio constraint. [sent-314, score-0.511]
94 Conclusion In this paper we proposed to introduce relative volume constraints into 3D reconstruction from a single image. [sent-328, score-0.54]
95 Two types of such constraints, relative depth profiles and volume ratios, allow us to impose shape on the object. [sent-329, score-1.162]
96 We showed that shape profiles can be automatically derived from the shading information in the image. [sent-330, score-0.877]
97 Shape profiles along cross sections as well as protuberances, dents, occlusions and holes can be easily introduced by means of a linearly constrained variational approach with runtimes of several seconds only. [sent-331, score-0.735]
98 Figure 7: Reconstructions with volume ratio constraints, compared to [14].
99 3D reconstruction results a) without further constraints [14] and b) with application of volume ratio constraints. [sent-379, score-0.537]
100 Relative depth profiles are marked in red (location) and pink (depth function). [sent-380, score-0.852]
wordName wordTfidf (topN-words)
[('profiles', 0.583), ('profile', 0.39), ('watering', 0.247), ('volume', 0.196), ('shading', 0.189), ('depth', 0.185), ('reconstruction', 0.172), ('user', 0.156), ('toeppe', 0.144), ('reconstructions', 0.123), ('imposed', 0.119), ('wings', 0.111), ('constraints', 0.101), ('reflectance', 0.092), ('normal', 0.087), ('dome', 0.084), ('pink', 0.084), ('protuberances', 0.082), ('shape', 0.078), ('dents', 0.073), ('relative', 0.071), ('ratio', 0.068), ('normals', 0.066), ('tires', 0.062), ('prasad', 0.056), ('silhouette', 0.056), ('cross', 0.054), ('surface', 0.054), ('constraint', 0.049), ('impose', 0.049), ('ratios', 0.049), ('shadowed', 0.048), ('bv', 0.046), ('shortest', 0.045), ('planarity', 0.044), ('holes', 0.042), ('domes', 0.041), ('nieuwenhuis', 0.041), ('oswald', 0.041), ('rrefu', 0.041), ('ryu', 0.041), ('cremers', 0.041), ('plane', 0.04), ('path', 0.04), ('view', 0.039), ('drawn', 0.038), ('minimal', 0.037), ('shiny', 0.036), ('textured', 0.036), ('terzopoulos', 0.034), ('object', 0.034), ('spherical', 0.034), ('cy', 0.033), ('indicated', 0.033), ('curved', 0.033), ('assumptions', 0.033), ('line', 0.032), ('shoe', 0.032), ('sketch', 0.032), ('marks', 0.032), ('conform', 0.03), ('propagated', 0.03), ('sketched', 0.029), ('along', 0.028), ('variational', 0.028), ('reference', 0.028), ('energy', 0.028), ('closed', 0.027), ('harmonics', 0.027), ('div', 0.027), ('automatically', 0.027), ('marking', 0.026), ('color', 0.026), ('start', 0.025), ('indicator', 0.025), ('vij', 0.025), ('volumetric', 0.025), ('estimated', 0.025), ('handle', 0.024), ('node', 0.024), ('end', 0.024), ('barron', 0.024), ('interactively', 0.023), ('rough', 0.023), ('shaped', 0.023), ('lines', 0.022), ('pyramid', 0.022), ('du', 0.022), ('drawing', 0.021), ('absolute', 0.021), ('voxels', 0.02), ('interface', 0.02), ('multipliers', 0.02), ('oxford', 0.02), ('convex', 0.02), ('height', 0.02), ('car', 0.019), ('sequence', 0.019), ('ball', 0.019), ('interactive', 0.019)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000007 354 cvpr-2013-Relative Volume Constraints for Single View 3D Reconstruction
Author: Eno Töppe, Claudia Nieuwenhuis, Daniel Cremers
Abstract: We introduce the concept of relative volume constraints in order to account for insufficient information in the reconstruction of 3D objects from a single image. The key idea is to formulate a variational reconstruction approach with shape priors in form of relative depth profiles or volume ratios relating object parts. Such shape priors can easily be derived either from a user sketch or from the object’s shading profile in the image. They can handle textured or shadowed object regions by propagating information. We propose a convex relaxation of the constrained optimization problem which can be solved optimally in a few seconds on graphics hardware. In contrast to existing single view reconstruction algorithms, the proposed algorithm provides substantially more flexibility to recover shape details such as self-occlusions, dents and holes, which are not visible in the object silhouette.
2 0.30497423 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances
Author: Feng Lu, Yasuyuki Matsushita, Imari Sato, Takahiro Okabe, Yoichi Sato
Abstract: We propose an uncalibrated photometric stereo method that works with general and unknown isotropic reflectances. Our method uses a pixel intensity profile, which is a sequence of radiance intensities recorded at a pixel across multi-illuminance images. We show that for general isotropic materials, the geodesic distance between intensity profiles is linearly related to the angular difference of their surface normals, and that the intensity distribution of an intensity profile conveys information about the reflectance properties, when the intensity profile is obtained under uniformly distributed directional lightings. Based on these observations, we show that surface normals can be estimated up to a convex/concave ambiguity. A solution method based on matrix decomposition with missing data is developed for a reliable estimation. Quantitative and qualitative evaluations of our method are performed using both synthetic and real-world scenes.
3 0.19047734 71 cvpr-2013-Boundary Cues for 3D Object Shape Recovery
Author: Kevin Karsch, Zicheng Liao, Jason Rock, Jonathan T. Barron, Derek Hoiem
Abstract: Early work in computer vision considered a host of geometric cues for both shape reconstruction [11] and recognition [14]. However, since then, the vision community has focused heavily on shading cues for reconstruction [1], and moved towards data-driven approaches for recognition [6]. In this paper, we reconsider these perhaps overlooked “boundary” cues (such as self occlusions and folds in a surface), as well as many other established constraints for shape reconstruction. In a variety of user studies and quantitative tasks, we evaluate how well these cues inform shape reconstruction (relative to each other) in terms of both shape quality and shape recognition. Our findings suggest many new directions for future research in shape reconstruction, such as automatic boundary cue detection and relaxing assumptions in shape from shading (e.g. orthographic projection, Lambertian surfaces).
4 0.18328615 227 cvpr-2013-Intrinsic Scene Properties from a Single RGB-D Image
Author: Jonathan T. Barron, Jitendra Malik
Abstract: In this paper we extend the “shape, illumination and reflectance from shading ” (SIRFS) model [3, 4], which recovers intrinsic scene properties from a single image. Though SIRFS performs well on images of segmented objects, it performs poorly on images of natural scenes, which contain occlusion and spatially-varying illumination. We therefore present Scene-SIRFS, a generalization of SIRFS in which we have a mixture of shapes and a mixture of illuminations, and those mixture components are embedded in a “soft” segmentation of the input image. We additionally use the noisy depth maps provided by RGB-D sensors (in this case, the Kinect) to improve shape estimation. Our model takes as input a single RGB-D image and produces as output an improved depth map, a set of surface normals, a reflectance image, a shading image, and a spatially varying model of illumination. The output of our model can be used for graphics applications, or for any application involving RGB-D images.
5 0.17347442 394 cvpr-2013-Shading-Based Shape Refinement of RGB-D Images
Author: Lap-Fai Yu, Sai-Kit Yeung, Yu-Wing Tai, Stephen Lin
Abstract: We present a shading-based shape refinement algorithm which uses a noisy, incomplete depth map from Kinect to help resolve ambiguities in shape-from-shading. In our framework, the partial depth information is used to overcome bas-relief ambiguity in normals estimation, as well as to assist in recovering relative albedos, which are needed to reliably estimate the lighting environment and to separate shading from albedo. This refinement of surface normals using a noisy depth map leads to high-quality 3D surfaces. The effectiveness of our algorithm is demonstrated through several challenging real-world examples.
6 0.16385348 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras
7 0.16138205 305 cvpr-2013-Non-parametric Filtering for Geometric Detail Extraction and Material Representation
8 0.13954248 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials
9 0.13602291 56 cvpr-2013-Bayesian Depth-from-Defocus with Shading Constraints
10 0.12496656 397 cvpr-2013-Simultaneous Super-Resolution of Depth and Images Using a Single Camera
11 0.12141155 423 cvpr-2013-Template-Based Isometric Deformable 3D Reconstruction with Sampling-Based Focal Length Self-Calibration
12 0.11586473 111 cvpr-2013-Dense Reconstruction Using 3D Object Shape Priors
13 0.11258654 230 cvpr-2013-Joint 3D Scene Reconstruction and Class Segmentation
14 0.10938377 117 cvpr-2013-Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-Mounted Camera
15 0.10602254 196 cvpr-2013-HON4D: Histogram of Oriented 4D Normals for Activity Recognition from Depth Sequences
16 0.099024139 465 cvpr-2013-What Object Motion Reveals about Shape with Unknown BRDF and Lighting
17 0.097485952 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera
18 0.096052773 232 cvpr-2013-Joint Geodesic Upsampling of Depth Images
19 0.088519014 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation
20 0.084684663 21 cvpr-2013-A New Perspective on Uncalibrated Photometric Stereo
topicId topicWeight
[(0, 0.16), (1, 0.216), (2, 0.022), (3, 0.071), (4, -0.015), (5, -0.103), (6, -0.096), (7, 0.103), (8, 0.042), (9, -0.016), (10, -0.063), (11, -0.122), (12, -0.092), (13, 0.05), (14, 0.059), (15, 0.008), (16, -0.019), (17, -0.007), (18, -0.044), (19, -0.059), (20, -0.023), (21, -0.01), (22, -0.055), (23, 0.025), (24, 0.09), (25, 0.044), (26, 0.047), (27, 0.0), (28, -0.006), (29, 0.006), (30, 0.066), (31, -0.061), (32, 0.018), (33, 0.062), (34, 0.032), (35, 0.063), (36, 0.021), (37, -0.02), (38, -0.034), (39, -0.081), (40, 0.015), (41, -0.032), (42, -0.051), (43, -0.002), (44, 0.041), (45, -0.034), (46, 0.056), (47, -0.008), (48, -0.058), (49, -0.111)]
simIndex simValue paperId paperTitle
same-paper 1 0.92844605 354 cvpr-2013-Relative Volume Constraints for Single View 3D Reconstruction
Author: Eno Töppe, Claudia Nieuwenhuis, Daniel Cremers
Abstract: We introduce the concept of relative volume constraints in order to account for insufficient information in the reconstruction of 3D objects from a single image. The key idea is to formulate a variational reconstruction approach with shape priors in form of relative depth profiles or volume ratios relating object parts. Such shape priors can easily be derived either from a user sketch or from the object’s shading profile in the image. They can handle textured or shadowed object regions by propagating information. We propose a convex relaxation of the constrained optimization problem which can be solved optimally in a few seconds on graphics hardware. In contrast to existing single view reconstruction algorithms, the proposed algorithm provides substantially more flexibility to recover shape details such as self-occlusions, dents and holes, which are not visible in the object silhouette.
2 0.84563732 56 cvpr-2013-Bayesian Depth-from-Defocus with Shading Constraints
Author: Chen Li, Shuochen Su, Yasuyuki Matsushita, Kun Zhou, Stephen Lin
Abstract: We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information. DFD suffers from important limitations namely coarse shape reconstruction and poor accuracy on textureless surfaces that can be overcome with the help of shading. We integrate both forms of data within a Bayesian framework that capitalizes on their relative strengths. Shading data, however, is challenging to recover accurately from surfaces that contain texture. To address this issue, we propose an iterative technique that utilizes depth information to improve shading estimation, which in turn is used to elevate depth estimation in the presence of textures. With this approach, we demonstrate improvements over existing DFD techniques, as well as effective shape reconstruction of textureless surfaces. – –
3 0.8280021 394 cvpr-2013-Shading-Based Shape Refinement of RGB-D Images
Author: Lap-Fai Yu, Sai-Kit Yeung, Yu-Wing Tai, Stephen Lin
Abstract: We present a shading-based shape refinement algorithm which uses a noisy, incomplete depth map from Kinect to help resolve ambiguities in shape-from-shading. In our framework, the partial depth information is used to overcome bas-relief ambiguity in normals estimation, as well as to assist in recovering relative albedos, which are needed to reliably estimate the lighting environment and to separate shading from albedo. This refinement of surface normals using a noisy depth map leads to high-quality 3D surfaces. The effectiveness of our algorithm is demonstrated through several challenging real-world examples.
4 0.80741203 71 cvpr-2013-Boundary Cues for 3D Object Shape Recovery
Author: Kevin Karsch, Zicheng Liao, Jason Rock, Jonathan T. Barron, Derek Hoiem
Abstract: Early work in computer vision considered a host of geometric cues for both shape reconstruction [11] and recognition [14]. However, since then, the vision community has focused heavily on shading cues for reconstruction [1], and moved towards data-driven approaches for recognition [6]. In this paper, we reconsider these perhaps overlooked “boundary” cues (such as self occlusions and folds in a surface), as well as many other established constraints for shape reconstruction. In a variety of user studies and quantitative tasks, we evaluate how well these cues inform shape reconstruction (relative to each other) in terms of both shape quality and shape recognition. Our findings suggest many new directions for future research in shape reconstruction, such as automatic boundary cue detection and relaxing assumptions in shape from shading (e.g. orthographic projection, Lambertian surfaces).
5 0.79423904 227 cvpr-2013-Intrinsic Scene Properties from a Single RGB-D Image
Author: Jonathan T. Barron, Jitendra Malik
Abstract: In this paper we extend the “shape, illumination and reflectance from shading ” (SIRFS) model [3, 4], which recovers intrinsic scene properties from a single image. Though SIRFS performs well on images of segmented objects, it performs poorly on images of natural scenes, which contain occlusion and spatially-varying illumination. We therefore present Scene-SIRFS, a generalization of SIRFS in which we have a mixture of shapes and a mixture of illuminations, and those mixture components are embedded in a “soft” segmentation of the input image. We additionally use the noisy depth maps provided by RGB-D sensors (in this case, the Kinect) to improve shape estimation. Our model takes as input a single RGB-D image and produces as output an improved depth map, a set of surface normals, a reflectance image, a shading image, and a spatially varying model of illumination. The output of our model can be used for graphics applications, or for any application involving RGB-D images.
6 0.74705076 21 cvpr-2013-A New Perspective on Uncalibrated Photometric Stereo
7 0.69418103 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances
8 0.68108296 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials
9 0.66055357 305 cvpr-2013-Non-parametric Filtering for Geometric Detail Extraction and Material Representation
10 0.65191853 423 cvpr-2013-Template-Based Isometric Deformable 3D Reconstruction with Sampling-Based Focal Length Self-Calibration
11 0.64179862 466 cvpr-2013-Whitened Expectation Propagation: Non-Lambertian Shape from Shading and Shadow
12 0.61573976 435 cvpr-2013-Towards Contactless, Low-Cost and Accurate 3D Fingerprint Identification
13 0.5969559 230 cvpr-2013-Joint 3D Scene Reconstruction and Class Segmentation
14 0.57865727 465 cvpr-2013-What Object Motion Reveals about Shape with Unknown BRDF and Lighting
15 0.55746758 428 cvpr-2013-The Episolar Constraint: Monocular Shape from Shadow Correspondence
16 0.55419004 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation
17 0.55026293 286 cvpr-2013-Mirror Surface Reconstruction from a Single Image
18 0.54962611 397 cvpr-2013-Simultaneous Super-Resolution of Depth and Images Using a Single Camera
19 0.54956752 196 cvpr-2013-HON4D: Histogram of Oriented 4D Normals for Activity Recognition from Depth Sequences
20 0.5362305 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns
topicId topicWeight
[(10, 0.084), (16, 0.017), (26, 0.034), (33, 0.198), (67, 0.015), (69, 0.025), (87, 0.532)]
simIndex simValue paperId paperTitle
1 0.8663891 274 cvpr-2013-Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization
Author: Marcus A. Brubaker, Andreas Geiger, Raquel Urtasun
Abstract: In this paper we propose an affordable solution to selflocalization, which utilizes visual odometry and road maps as the only inputs. To this end, we present a probabilistic model as well as an efficient approximate inference algorithm, which is able to utilize distributed computation to meet the real-time requirements of autonomous systems. Because of the probabilistic nature of the model we are able to cope with uncertainty due to noisy visual odometry and inherent ambiguities in the map (e.g., in a Manhattan world). By exploiting freely available, community developed maps and visual odometry measurements, we are able to localize a vehicle up to 3m after only a few seconds of driving on maps which contain more than 2,150km of drivable roads.
2 0.85814297 230 cvpr-2013-Joint 3D Scene Reconstruction and Class Segmentation
Author: Christian Häne, Christopher Zach, Andrea Cohen, Roland Angst, Marc Pollefeys
Abstract: Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being ’too noisy’. Unfortunately, these priors generally yield overly smooth reconstructions and/or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other’s task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.
same-paper 3 0.8537159 354 cvpr-2013-Relative Volume Constraints for Single View 3D Reconstruction
Author: Eno Töppe, Claudia Nieuwenhuis, Daniel Cremers
Abstract: We introduce the concept of relative volume constraints in order to account for insufficient information in the reconstruction of 3D objects from a single image. The key idea is to formulate a variational reconstruction approach with shape priors in form of relative depth profiles or volume ratios relating object parts. Such shape priors can easily be derived either from a user sketch or from the object’s shading profile in the image. They can handle textured or shadowed object regions by propagating information. We propose a convex relaxation of the constrained optimization problem which can be solved optimally in a few seconds on graphics hardware. In contrast to existing single view reconstruction algorithms, the proposed algorithm provides substantially more flexibility to recover shape details such as self-occlusions, dents and holes, which are not visible in the object silhouette.
4 0.81685531 209 cvpr-2013-Hypergraphs for Joint Multi-view Reconstruction and Multi-object Tracking
Author: Martin Hofmann, Daniel Wolf, Gerhard Rigoll
Abstract: We generalize the network flow formulation for multiobject tracking to multi-camera setups. In the past, reconstruction of multi-camera data was done as a separate extension. In this work, we present a combined maximum a posteriori (MAP) formulation, which jointly models multicamera reconstruction as well as global temporal data association. A flow graph is constructed, which tracks objects in 3D world space. The multi-camera reconstruction can be efficiently incorporated as additional constraints on the flow graph without making the graph unnecessarily large. The final graph is efficiently solved using binary linear programming. On the PETS 2009 dataset we achieve results that significantly exceed the current state of the art.
5 0.80271733 125 cvpr-2013-Dictionary Learning from Ambiguously Labeled Data
Author: Yi-Chen Chen, Vishal M. Patel, Jaishanker K. Pillai, Rama Chellappa, P. Jonathon Phillips
Abstract: We propose a novel dictionary-based learning method for ambiguously labeled multiclass classification, where each training sample has multiple labels and only one of them is the correct label. The dictionary learning problem is solved using an iterative alternating algorithm. At each iteration of the algorithm, two alternating steps are performed: a confidence update and a dictionary update. The confidence of each sample is defined as the probability distribution on its ambiguous labels. The dictionaries are updated using either soft (EM-based) or hard decision rules. Extensive evaluations on existing datasets demonstrate that the proposed method performs significantly better than state-of-the-art ambiguously labeled learning approaches.
6 0.79249746 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display
7 0.78672683 39 cvpr-2013-Alternating Decision Forests
8 0.78158557 107 cvpr-2013-Deformable Spatial Pyramid Matching for Fast Dense Correspondences
10 0.7173382 396 cvpr-2013-Simultaneous Active Learning of Classifiers & Attributes via Relative Feedback
11 0.70579815 298 cvpr-2013-Multi-scale Curve Detection on Surfaces
12 0.65991902 71 cvpr-2013-Boundary Cues for 3D Object Shape Recovery
13 0.6500982 155 cvpr-2013-Exploiting the Power of Stereo Confidences
14 0.63356584 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities
15 0.62851572 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging
16 0.6264354 147 cvpr-2013-Ensemble Learning for Confidence Measures in Stereo Vision
17 0.62175769 467 cvpr-2013-Wide-Baseline Hair Capture Using Strand-Based Refinement
18 0.62101239 289 cvpr-2013-Monocular Template-Based 3D Reconstruction of Extensible Surfaces with Local Linear Elasticity
19 0.6207127 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances
20 0.61958218 373 cvpr-2013-SWIGS: A Swift Guided Sampling Method