cvpr cvpr2013 cvpr2013-56 knowledge-graph by maker-knowledge-mining

56 cvpr-2013-Bayesian Depth-from-Defocus with Shading Constraints


Source: pdf

Author: Chen Li, Shuochen Su, Yasuyuki Matsushita, Kun Zhou, Stephen Lin

Abstract: We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information. DFD suffers from important limitations, namely coarse shape reconstruction and poor accuracy on textureless surfaces, which can be overcome with the help of shading. We integrate both forms of data within a Bayesian framework that capitalizes on their relative strengths. Shading data, however, is challenging to recover accurately from surfaces that contain texture. To address this issue, we propose an iterative technique that utilizes depth information to improve shading estimation, which in turn is used to elevate depth estimation in the presence of textures. With this approach, we demonstrate improvements over existing DFD techniques, as well as effective shape reconstruction of textureless surfaces.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 DFD suffers from important limitations, namely coarse shape reconstruction and poor accuracy on textureless surfaces, which can be overcome with the help of shading. [sent-2, score-0.127]

2 To address this issue, we propose an iterative technique that utilizes depth information to improve shading estimation, which in turn is used to elevate depth estimation in the presence of textures. [sent-5, score-0.797]

3 Introduction Depth-from-defocus (DFD) is a widely-used technique that utilizes the relationship between depth, focus setting, and image blur to passively estimate a range map. [sent-8, score-0.101]

4 A pair of images is typically acquired with different focus settings, and the differences between their local blur levels are then used to infer the depth of each scene point. [sent-9, score-0.271]

5 With the rising popularity of large format lenses for high resolution imaging, DFD may increase in application due to the shallow depth of field of such lenses. [sent-12, score-0.209]

6 Among these is the limited size of lens apertures, which leads to coarse depth resolution. [sent-14, score-0.213]

7 In addition to this, depth estimates can be severely degraded in areas with insufficient scene texture for measuring local blur levels. [sent-15, score-0.331]

8 We present in this paper a technique that aims to mitigate the aforementioned drawbacks of DFD through the use of shading information. [sent-16, score-0.352]

9 We therefore seek to capitalize on shading data to refine and correct the coarse depth maps obtained from DFD. [sent-18, score-0.565]

10 The utilization of shading in conjunction with DFD, however, poses a significant challenge in that the scene texture generally needed for DFD interferes with the operation of shape-from-shading, which requires surfaces to be free of albedo variations. [sent-19, score-0.503]

11 Moreover, DFD and SFS may produce incongruous depth estimates that need to be reconciled. [sent-20, score-0.215]

12 To address these problems, we first propose a Bayesian formulation of DFD that incorporates shading constraints in a manner that locally emphasizes shading cues in areas where there are ambiguities in DFD. [sent-21, score-0.759]

13 To enable the use of shading constraints in textured scenes, this Bayesian DFD is combined in an iterative framework with a depth-guided intrinsic image decomposition that aims to separate shading from texture. [sent-22, score-0.881]

14 These two components mutually benefit each other in the iterative framework, as better depth estimates lead to improvements in depth-guided decomposition, while more accurate shading/texture decomposition amends the shading constraints and thus results in better depth estimates. [sent-23, score-0.876]

15 In this work, the object surface is assumed to be Lambertian, and the illumination environment is captured by imaging a sphere with a known reflectance. [sent-24, score-0.136]

16 Our experiments demonstrate that the performance of Bayesian DFD with shading constraints surpasses that of existing DFD techniques over both coarse and fine scales. [sent-25, score-0.401]

17 In addition, the use of shading information allows our Bayesian DFD to work effectively even for the case of untextured surfaces. [sent-26, score-0.352]

18 Since the in-focus intensity profile of these edges is known, their depth can be determined from the edge blur. [sent-29, score-0.208]

19 Later methods have instead assumed that object surfaces can be locally approximated by a plane parallel to the sensor [33, 26, 30], such that local depth variations can be disregarded in the estimation. [sent-30, score-0.273]

20 Some techniques utilize structured illumination to deal with textureless surfaces and improve blur estimation [15, 14, 29]. [sent-31, score-0.203]

21 Defocus has also been modeled as a diffusion process that does not require recovery of the in-focus image when estimating depth [6]. [sent-33, score-0.283]

22 Shape-from-shading traditionally rests on restrictive assumptions (e.g., Lambertian surfaces, uniform albedo, directional lighting, orthographic projection), and several works have aimed to broaden its applicability, such as to address perspective projection [19], non-Lambertian reflectance [16], and natural illumination [10, 8]. [sent-38, score-0.097]

23 Non-uniform albedo has been particularly challenging to overcome, and has been approached using smoothness and entropy priors on reflectance [3]. [sent-39, score-0.165]

24 Our work instead takes advantage of defocus information to improve estimation for textured surfaces. [sent-40, score-0.146]

25 Shape-from-shading has also been used to refine the depth data of uniform-albedo objects obtained by multi-view stereo [32]. [sent-41, score-0.206]

26 Intrinsic images: Intrinsic image decomposition aims to separate an image into its reflectance and shading components. [sent-43, score-0.461]

27 Despite the existence of these different decomposition cues, the performance of intrinsic image algorithms has in general been rather limited [7]. [sent-46, score-0.101]

28 Inspired by this work, we also utilize depth information to aid intrinsic image decomposition. [sent-48, score-0.257]

29 However, our setting is considerably more challenging, since the depth information we start with is very rough, due to the coarse depth estimates of DFD and the problems of SFS when textures are present. [sent-49, score-0.428]

30 Approach: In this section, we present our method for Bayesian DFD with shading constraints. [sent-51, score-0.352]

31 The effects of these focus settings on defocus blur will be described in terms of the quantities shown in Fig. [sent-58, score-0.202]

32 The light rays radiated from P to the camera are focused by the lens to a point Q according to the thin lens equation: 1/d + 1/vd = 1/F, (1) where vd is the distance of Q from the lens, and F is the focal length. [sent-61, score-0.155]

33 From this equation, there is a direct relationship between depth d and blur radius b for a given set of camera parameters. [sent-79, score-0.293]
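
To make the depth-blur relationship concrete, here is a minimal numeric sketch under a thin-lens model; the focal length, aperture, and focus distance below are made-up illustrative values, not parameters from the paper:

```python
import numpy as np

def blur_radius(d, F, A, v):
    """Blur-circle radius for a point at depth d under a thin-lens model.

    A point at depth d focuses at distance v_d with 1/d + 1/v_d = 1/F
    (Eq. 1); with the sensor at distance v instead, rays through an
    aperture of diameter A spread over a circle of radius (A/2)*|v/v_d - 1|.
    """
    v_d = 1.0 / (1.0 / F - 1.0 / d)
    return 0.5 * A * abs(v / v_d - 1.0)

# Assumed setup: 50 mm lens, f/2 aperture, sensor placed to focus 1 m.
F, A = 0.050, 0.050 / 2.0
v = 1.0 / (1.0 / F - 1.0)  # sensor distance that brings d = 1 m into focus
for d in (0.5, 1.0, 2.0, 4.0):
    print(f"depth {d:.1f} m -> blur radius {blur_radius(d, F, A, v) * 1e3:.3f} mm")
```

Points farther from the focal plane map to larger blur radii on either side of it, which is the cue that DFD inverts.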

34 The light intensity of P within the blur circle can be expressed as a distribution function known as the point spread function (PSF), which we denote by h. [sent-80, score-0.104]

35 In the preceding equations, it can be seen that the defocus difference, Δσ, is determined by the depth d and the two known focal settings v1 and v2, so Eq. [sent-86, score-0.238]

36 (6) can be represented as I2(x, y) = I1(x, y) ∗ h(x, y, d), (7) where d is the depth of pixel Px,y. [sent-87, score-0.192]

37 (7), most DFD algorithms solve for depth by minimizing the following energy function or some variant of it: arg min_d (I1(x,y) ∗ h(x,y,d) − I2(x,y))^2. (8) [sent-89, score-0.192]
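
As a concrete illustration of this energy minimization, here is a hedged brute-force sketch: a Gaussian PSF stands in for h, and delta_sigma_of is an assumed depth-to-blur mapping (e.g., derived from Eq. (1)); the paper's actual solver is the Bayesian MRF formulation described below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dfd_depth(I1, I2, depths, delta_sigma_of):
    """Brute-force version of Eq. (8): per pixel, pick the depth label
    whose predicted relative blur best explains the second image.

    A Gaussian PSF stands in for the paper's h; delta_sigma_of(d) is an
    assumed mapping from depth to relative blur std.
    """
    best_err = np.full(I1.shape, np.inf)
    best_d = np.zeros(I1.shape)
    for d in depths:
        pred = gaussian_filter(I1, delta_sigma_of(d))  # I1 * h(x, y, d)
        err = (pred - I2) ** 2                         # per-pixel energy
        better = err < best_err
        best_err[better], best_d[better] = err[better], d
    return best_d

# Example usage with an assumed linear depth-to-blur mapping:
# d_map = dfd_depth(I1, I2, np.linspace(0.1, 1.0, 50), lambda d: 3.0 * d)
```

Each pixel independently takes the label with the lowest matching error; the Bayesian formulation below adds a prior to regularize exactly the textureless regions where this per-pixel error is flat.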

38 Let D be the depth map, and let I(1) = (I1(1), …, IN(1)) and I(2) = (I1(2), …, IN(2)) be the observations at the pixels. [sent-107, score-0.192]

39 where P(d) is the prior distribution of the depth map d, P(I(1), I(2) | d) is the likelihood of the observations, and L is the log likelihood. [sent-110, score-0.216]

40 (8), and the prior term as depth smoothness along the links [22]: L(I(1), I(2) | d) = … [sent-117, score-0.254]
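
The combined negative log-posterior then takes the familiar data-plus-smoothness form. A hedged sketch with a generic 4-connected smoothness prior and an assumed weight lam; this is not the paper's exact formulation, whose smoothness prior is replaced by the shading constraint introduced next:

```python
import numpy as np

def mrf_energy(labels, data_cost, lam=0.1):
    """Negative log-posterior sketch: data term from Eq. (8) plus a
    smoothness prior over 4-connected pixel links [22].

    data_cost: (K, H, W) matching error for each of K depth labels;
    labels: (H, W) integer label map; lam: assumed smoothness weight.
    """
    h, w = labels.shape
    e_data = data_cost[labels, np.arange(h)[:, None], np.arange(w)].sum()
    e_smooth = (np.abs(np.diff(labels, axis=0)).sum()
                + np.abs(np.diff(labels, axis=1)).sum())
    return e_data + lam * e_smooth
```

Minimizing this energy over label maps gives the MAP estimate; graph-cut or belief-propagation solvers are the usual choice for MRFs of this form.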

41 We propose to use a more informative prior based on the shading observed in the DFD image pair, which is helpful both for reconstructing surfaces with little texture content and for incorporating the fine-scale shape details that shading exhibits. [sent-130, score-0.838]

42 In this section, we consider the case of uniform-albedo surfaces, for which shading can be easily measured. [sent-131, score-0.352]

43 Lambertian shading can be modeled as a quadratic function of the surface normal [23, 10]: s(n) = nT M n, (12) where nT = (nx, ny, nz, 1) for surface normal n, and M is a symmetric 4×4 matrix that depends on the lighting environment. [sent-135, score-0.525]
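
Evaluating this quadratic shading model is a one-liner once M is known; a small sketch (the spherical-harmonic construction of M from [23, 10] is not reproduced here):

```python
import numpy as np

def shading(normals, M):
    """Eq. (12): s(n) = n^T M n with homogeneous normal (nx, ny, nz, 1).

    normals: (N, 3) array of unit surface normals.
    M: symmetric 4x4 lighting matrix for the captured environment.
    """
    n = np.concatenate([normals, np.ones((len(normals), 1))], axis=1)
    return np.einsum('ni,ij,nj->n', n, M, n)
```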

44 We also obtain the 3D coordinates for each point by re-projecting each pixel into the scene according to its image coordinates (x, y), depth value d from DFD, and the perspective projection model. [sent-137, score-0.192]
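
The paper's exact re-projection expression is elided in this summary; a standard pinhole back-projection with assumed intrinsics (focal length f, principal point (cx, cy)) would look like:

```python
import numpy as np

def backproject(x, y, d, f, cx, cy):
    """Re-project pixel coordinates (x, y) with depth d into 3D under a
    pinhole model; f, cx, cy are assumed intrinsics, not from the paper."""
    X = (x - cx) * d / f
    Y = (y - cy) * d / f
    Z = d * np.ones_like(X)
    return np.stack([X, Y, Z], axis=-1)
```

Surface normals for the shading prior can then be taken from cross products of neighboring back-projected points.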

45 (a) Original image (synthesized so that ground truth depth is available). [sent-147, score-0.192]

46 (b/c) Close-up of depth estimates for the red/green box in (a). [sent-148, score-0.215]

47 DFD with this shading-based prior in place of the smoothness prior will be referred to as DFD with shading constraints. [sent-157, score-0.425]

48 In the practical application of this shading constraint, we have a pair of differently focused images from which to obtain the shading data. [sent-158, score-0.704]

49 This image is then used for surface normal estimation, with the lighting environment measured using a sphere placed in the scene as done in [10]. [sent-160, score-0.153]

50 2, the incorporation of shading constraints leads to improvements in DFD, especially in areas with little intensity variation. [sent-162, score-0.454]

51 Such areas have significant depth ambiguity in DFD, because the likelihood energy in Eq. [sent-163, score-0.219]

52 This problem arises because the brightness variations from shading and texture are intertwined in the image intensities. [sent-173, score-0.419]

53 To separate shading from texture, methods for intrinsic image decomposition solve the following equation for each pixel p: ip = sp + rp, (14) where i, s and r are respectively the logarithms of the image intensity, shading value, and reflectance value. [sent-174, score-0.861]
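
In the log domain the decomposition is literally a subtraction once one component is known. A minimal sketch, assuming a shading estimate is available (e.g., rendered from the current depth via Eq. (12)); the paper instead solves the regularized minimization described below:

```python
import numpy as np

def split_reflectance(image, shading_est, eps=1e-6):
    """Eq. (14) rearranged: r = i - s in the log domain.

    image, shading_est: positive arrays of the same shape; eps guards
    against log(0). Returns the linear-domain reflectance estimate.
    """
    i = np.log(image + eps)        # log image intensity
    s = np.log(shading_est + eps)  # log shading
    return np.exp(i - s)           # reflectance
```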

54 In this paper, we decompose an image into its shading and reflectance components with the help of shape information provided by DFD. [sent-175, score-0.434]

55 The method we employ is based on the work in [12], which uses streams of video and depth maps captured by a moving Kinect camera. [sent-176, score-0.207]

56 Also, we are working with depth data that is often of much lower quality. [sent-178, score-0.192]

57 The decomposition utilizes the conventional Retinex model with additional constraints on non-local reflectance [24] and on similar shading among points that have the same surface normal direction. [sent-179, score-0.588]

58 Then the shading component of the image is computed through the following minimization: arg min_s … [sent-181, score-0.352]

59 (16) (17) where ĉ denotes chromaticity, n̂ denotes the surface normal, and ωr, ωnlr, ωs and ωnls are coefficients that balance the local and non-local reflectance constraints, and the local and non-local shading constraints, respectively; the non-local shading term applies to pixel pairs (p, q) whose normals nearly coincide (1 − n̂pT n̂q below a threshold). [sent-199, score-0.449]
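
A hedged sketch of that non-local shading cue: pairs of pixels whose estimated normals nearly coincide are encouraged to share a shading value. The threshold tau and the exhaustive pairing below are illustrative choices, not the paper's exact weights:

```python
import numpy as np

def nonlocal_shading_pairs(normals, tau=0.99):
    """Pairs (p, q) whose unit normals nearly coincide (n_p . n_q > tau);
    each pair would contribute a 'same shading' term to the minimization.
    Exhaustive pairing is O(N^2), so this is only for small point sets."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    p, q = np.triu_indices(len(n), 1)
    keep = (n[p] * n[q]).sum(axis=1) > tau
    return list(zip(p[keep], q[keep]))
```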

60 (b-d) Estimated shading (top) and depth (bottom) for (b) first iteration, (c) middle iteration, (d) final iteration. [sent-206, score-0.544]

61 Iterative algorithm: The performance of depth-guided intrinsic image decomposition depends on the accuracy of the input depth. [sent-210, score-0.101]

62 Likewise, the utility of shading constraints in DFD rests on how well shading is extracted from the image. [sent-211, score-0.732]

63 Since DFD and intrinsic image decomposition facilitate each other, we use them in alternation within an iterative framework. [sent-212, score-0.124]

64 This cycle is repeated until the average change in depth within each local region (the image is partitioned into a 10×10 grid in our implementation) lies below a threshold. [sent-214, score-0.192]
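
A small sketch of this stopping test; the threshold value is assumed, not taken from the paper:

```python
import numpy as np

def converged(d_prev, d_curr, thresh=1e-3, grid=10):
    """True when the mean absolute depth change in every cell of a
    grid x grid partition of the image falls below thresh.
    thresh is an assumed value; the image is assumed larger than the grid."""
    h, w = d_curr.shape
    for gy in range(grid):
        for gx in range(grid):
            ys = slice(gy * h // grid, (gy + 1) * h // grid)
            xs = slice(gx * w // grid, (gx + 1) * w // grid)
            if np.abs(d_curr[ys, xs] - d_prev[ys, xs]).mean() >= thresh:
                return False
    return True
```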

65 We solved the MRF using a multi-scale refinement with 200 depth labels per depth range, reducing the range by 15% with each iteration. [sent-215, score-0.384]

66 We used 15 iterations, which equivalently gives about 2000 depth labels in total. [sent-216, score-0.192]
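
The label schedule can be pictured as follows. Here solve_mrf is a hypothetical stand-in for the Bayesian-DFD solve, and recentring the shrunken range on the median depth is an assumption; the paper does not spell out how the reduced range is placed:

```python
import numpy as np

def solve_mrf(labels):
    """Hypothetical stand-in for the Bayesian DFD/MRF solve over the
    given depth labels; here it just returns a constant toy depth map."""
    return np.full((32, 32), labels[len(labels) // 2])

lo, hi = 0.0, 1.0                      # normalized depth range
for it in range(15):                   # 15 iterations, ~2000 labels total
    labels = np.linspace(lo, hi, 200)  # 200 labels per depth range
    d = solve_mrf(labels)
    c = float(np.median(d))            # recentre on the current estimate
    half = 0.5 * (hi - lo) * 0.85      # shrink the range by 15%
    lo, hi = max(0.0, c - half), min(1.0, c + half)
```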

67 Since the estimated shading and depth are less accurate in earlier iterations, the parameters in DFD and decomposition are set in each iteration to account for this. [sent-217, score-0.596]

68 (13) for DFD and the non-local shading coefficient ωnls in Eq. [sent-219, score-0.352]

69 3, the iterations bring improvements to both the estimated depth and shading. [sent-226, score-0.206]

70 The depth estimates of our method are compared to those of three previous tech- [sent-230, score-0.215]

71 niques: standard MRF-based DFD with smoothness constraints, DFD via diffusion [6], and the single-image SIRFS method [3]. [sent-244, score-0.094]

72 In these experiments, a foreground mask is used to discard the background, and depth maps are scaled to the range of [0,1] for visualization. [sent-245, score-0.192]

73 The defocus pair is rendered with blur according to Eq. [sent-250, score-0.17]

74 The two focal settings are chosen such that their focal planes bound the ground truth depth map, and random Gaussian noise with a standard deviation of 1. [sent-268, score-0.266]

75 The benefits of utilizing shading information with DFD are illustrated in Fig. [sent-270, score-0.352]

76 Here, the normal maps are constructed from gradients in the estimated depth maps. [sent-272, score-0.229]

77 The uncertainty of DFD in areas with little brightness variation is shown to be resolved by the shading constraints. [sent-273, score-0.42]

78 Our depth estimation results are exhibited together with those of the comparison techniques in Fig. [sent-275, score-0.208]

79 With the information in a defocus pair, our method can obtain results more reliable than those of the single-image SIRFS technique. [sent-279, score-0.105]

80 DFD by diffusion does not work as well as standard DFD on our objects because its preconditioning is less effective when the intensity variations are not large. [sent-280, score-0.104]

81 As with the synthetic data, the comparison methods are SIRFS [3], DFD via diffusion [6], and standard DFD. [sent-285, score-0.095]

82 In order to use our shading constraints, we first calibrate the natural illumination using a white Lambertian sphere, and then use the known surface normals of the sphere to solve the shading matrix in Eq. (12). [sent-288, score-0.819]
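
Since M in Eq. (12) is symmetric 4×4, each (normal, intensity) observation on the sphere gives one linear equation in its ten unique entries. A hedged least-squares sketch; the paper's exact solve is not specified in this summary:

```python
import numpy as np

def solve_shading_matrix(normals, intensities):
    """Least-squares fit of the symmetric 4x4 matrix M in Eq. (12).

    Each sphere normal n (with homogeneous coordinate 1) and observed
    shading s gives one linear equation n^T M n = s in M's 10 unique
    entries: sum_a M_aa n_a^2 + 2 * sum_{a<b} M_ab n_a n_b = s.
    """
    n = np.concatenate([normals, np.ones((len(normals), 1))], axis=1)
    idx = [(a, b) for a in range(4) for b in range(a, 4)]
    A = np.stack([(1.0 if a == b else 2.0) * n[:, a] * n[:, b]
                  for a, b in idx], axis=1)
    m, *_ = np.linalg.lstsq(A, np.asarray(intensities), rcond=None)
    M = np.zeros((4, 4))
    for (a, b), v in zip(idx, m):
        M[a, b] = M[b, a] = v
    return M
```

In practice one would sample many sphere pixels so the system is well overdetermined, as the least-squares solve above assumes.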

83 Because the albedo of the sphere may differ from those of our target objects, we estimate the relative albedo between target objects and the sphere simply by comparing the brightness of manually identified local areas that have a similar normal orientation. [sent-290, score-0.328]

84 For objects with surface texture, the albedo of the local area used in this comparison is used as the reference albedo for the object. [sent-291, score-0.206]

85 With the SIRFS method, the depth variations on the body correctly follow the object shape, but the head is shown to be closer than it actually is. [sent-299, score-0.231]

86 The depth estimates of DFD via diffusion and standard DFD are both generally accurate for the head and body. [sent-300, score-0.304]

87 Our result conforms most closely to the actual object, with shading information to provide shape details and help resolve DFD uncertainties. [sent-302, score-0.377]

88 The general depth trends shown with SIRFS are accurate, but the albedo change and shape details are missed. [sent-305, score-0.3]

89 For the turtle in the third row, the depth estimates of our method show greater accuracy. [sent-309, score-0.215]

90 It also shows the shell and neck at the same depth, and a smooth depth transition from the head to the shell. [sent-311, score-0.212]

91 DFD via diffusion does not exhibit the gradual changes of depth over the object, while standard DFD displays incorrect depth variations in areas with little texture. [sent-312, score-0.553]

92 The depth of the closer arm, however, is off, and the left foot is not shown to be closer. [sent-315, score-0.192]

93 Both this result and the one of DFD via diffusion exhibit less shape detail than our depth estimate. [sent-316, score-0.33]

94 Conclusion In this paper, we presented a method to enhance depthfrom-defocus by incorporating shading constraints. [sent-319, score-0.352]

95 To effectively utilize the shading information on objects with varying albedo, we proposed an iterative technique that uses DFD and shading estimation in a manner in which they facilitate each other. [sent-320, score-0.759]

96 Our experiments demonstrate that the use of shading constraints brings greater accuracy and detail to DFD, especially in areas without clear DFD solutions. [sent-321, score-0.433]

97 In future work, we plan to investigate ways to increase the accuracy of our depth estimates. [sent-322, score-0.192]

98 Optimal selection of camera parameters for recovery of depth from defocused images. [sent-483, score-0.278]

99 An MRF model-based approach to simultaneous recovery of depth and restoration from defocused images. [sent-489, score-0.299]

100 High-quality shape from multi-view stereo and shading under general illumination. [sent-566, score-0.566]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('dfd', 0.837), ('shading', 0.352), ('depth', 0.192), ('sirfs', 0.127), ('defocus', 0.105), ('sfs', 0.084), ('albedo', 0.083), ('diffusion', 0.069), ('blur', 0.065), ('reflectance', 0.057), ('decomposition', 0.052), ('intrinsic', 0.049), ('bayesian', 0.046), ('surfaces', 0.044), ('defocused', 0.042), ('surface', 0.04), ('nls', 0.038), ('textureless', 0.037), ('normal', 0.037), ('sphere', 0.037), ('sq', 0.036), ('rajagopalan', 0.035), ('watanabe', 0.032), ('mrf', 0.03), ('lambertian', 0.029), ('argdmaxp', 0.029), ('argdmin', 0.029), ('nlr', 0.029), ('subbarao', 0.029), ('focal', 0.028), ('constraints', 0.028), ('vd', 0.028), ('lens', 0.027), ('jun', 0.027), ('areas', 0.027), ('detail', 0.026), ('synthetic', 0.026), ('smoothness', 0.025), ('shuochen', 0.025), ('sp', 0.025), ('shape', 0.025), ('textured', 0.025), ('illumination', 0.025), ('pages', 0.025), ('brightness', 0.024), ('texture', 0.024), ('psf', 0.024), ('prior', 0.024), ('iterative', 0.023), ('ip', 0.023), ('estimates', 0.023), ('light', 0.023), ('recovery', 0.022), ('utilizes', 0.022), ('camera', 0.022), ('coarse', 0.021), ('retinex', 0.021), ('environment', 0.02), ('head', 0.02), ('irradiance', 0.019), ('displays', 0.019), ('lighting', 0.019), ('variations', 0.019), ('sensor', 0.018), ('exhibit', 0.018), ('settings', 0.018), ('lenses', 0.017), ('markov', 0.017), ('little', 0.017), ('barron', 0.017), ('adelson', 0.017), ('inclusion', 0.017), ('tong', 0.017), ('iq', 0.016), ('utilize', 0.016), ('intensity', 0.016), ('nayar', 0.016), ('estimation', 0.016), ('pinhole', 0.016), ('tappen', 0.015), ('orthographic', 0.015), ('gr', 0.015), ('streams', 0.015), ('composite', 0.015), ('gs', 0.015), ('stereo', 0.014), ('radius', 0.014), ('improvements', 0.014), ('imaging', 0.014), ('focus', 0.014), ('june', 0.014), ('restoration', 0.013), ('links', 0.013), ('normals', 0.013), ('arm', 0.013), ('sixete', 0.013), ('kimmel', 0.013), ('ldik', 0.013), ('eucalyptus', 0.013), ('grove', 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 56 cvpr-2013-Bayesian Depth-from-Defocus with Shading Constraints

Author: Chen Li, Shuochen Su, Yasuyuki Matsushita, Kun Zhou, Stephen Lin

Abstract: We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information. DFD suffers from important limitations, namely coarse shape reconstruction and poor accuracy on textureless surfaces, which can be overcome with the help of shading. We integrate both forms of data within a Bayesian framework that capitalizes on their relative strengths. Shading data, however, is challenging to recover accurately from surfaces that contain texture. To address this issue, we propose an iterative technique that utilizes depth information to improve shading estimation, which in turn is used to elevate depth estimation in the presence of textures. With this approach, we demonstrate improvements over existing DFD techniques, as well as effective shape reconstruction of textureless surfaces.

2 0.34905493 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures

Author: Yuichi Takeda, Shinsaku Hiura, Kosuke Sato

Abstract: In this paper we propose a novel depth measurement method by fusing depth from defocus (DFD) and stereo. One of the problems of passive stereo method is the difficulty of finding correct correspondence between images when an object has a repetitive pattern or edges parallel to the epipolar line. On the other hand, the accuracy of DFD method is inherently limited by the effective diameter of the lens. Therefore, we propose the fusion of stereo method and DFD by giving different focus distances for left and right cameras of a stereo camera with coded apertures. Two types of depth cues, defocus and disparity, are naturally integrated by the magnification and phase shift of a single point spread function (PSF) per camera. In this paper we give the proof of the proportional relationship between the diameter of defocus and disparity which makes the calibration easy. We also show the outstanding performance of our method which has both advantages of two depth cues through simulation and actual experiments.

3 0.31362039 305 cvpr-2013-Non-parametric Filtering for Geometric Detail Extraction and Material Representation

Author: Zicheng Liao, Jason Rock, Yang Wang, David Forsyth

Abstract: Geometric detail is a universal phenomenon in real world objects. It is an important component in object modeling, but not accounted for in current intrinsic image works. In this work, we explore using a non-parametric method to separate geometric detail from intrinsic image components. We further decompose an image as albedo ∗ (coarse-scale shading + shading detail). Our decomposition offers quantitative improvement in albedo recovery and material classification. Our method also enables interesting image editing activities, including bump removal, geometric detail smoothing/enhancement and material transfer.

4 0.27462256 227 cvpr-2013-Intrinsic Scene Properties from a Single RGB-D Image

Author: Jonathan T. Barron, Jitendra Malik

Abstract: In this paper we extend the “shape, illumination and reflectance from shading ” (SIRFS) model [3, 4], which recovers intrinsic scene properties from a single image. Though SIRFS performs well on images of segmented objects, it performs poorly on images of natural scenes, which contain occlusion and spatially-varying illumination. We therefore present Scene-SIRFS, a generalization of SIRFS in which we have a mixture of shapes and a mixture of illuminations, and those mixture components are embedded in a “soft” segmentation of the input image. We additionally use the noisy depth maps provided by RGB-D sensors (in this case, the Kinect) to improve shape estimation. Our model takes as input a single RGB-D image and produces as output an improved depth map, a set of surface normals, a reflectance image, a shading image, and a spatially varying model of illumination. The output of our model can be used for graphics applications, or for any application involving RGB-D images.

5 0.19928163 394 cvpr-2013-Shading-Based Shape Refinement of RGB-D Images

Author: Lap-Fai Yu, Sai-Kit Yeung, Yu-Wing Tai, Stephen Lin

Abstract: We present a shading-based shape refinement algorithm which uses a noisy, incomplete depth map from Kinect to help resolve ambiguities in shape-from-shading. In our framework, the partial depth information is used to overcome bas-relief ambiguity in normals estimation, as well as to assist in recovering relative albedos, which are needed to reliably estimate the lighting environment and to separate shading from albedo. This refinement of surface normals using a noisy depth map leads to high-quality 3D surfaces. The effectiveness of our algorithm is demonstrated through several challenging real-world examples.

6 0.17633736 71 cvpr-2013-Boundary Cues for 3D Object Shape Recovery

7 0.16148889 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras

8 0.13602291 354 cvpr-2013-Relative Volume Constraints for Single View 3D Reconstruction

9 0.12051308 466 cvpr-2013-Whitened Expectation Propagation: Non-Lambertian Shape from Shading and Shadow

10 0.12026031 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera

11 0.10113775 397 cvpr-2013-Simultaneous Super-Resolution of Depth and Images Using a Single Camera

12 0.091879614 117 cvpr-2013-Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-Mounted Camera

13 0.088057742 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

14 0.087274089 423 cvpr-2013-Template-Based Isometric Deformable 3D Reconstruction with Sampling-Based Focal Length Self-Calibration

15 0.086670458 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials

16 0.084828764 111 cvpr-2013-Dense Reconstruction Using 3D Object Shape Priors

17 0.083477907 232 cvpr-2013-Joint Geodesic Upsampling of Depth Images

18 0.082766034 465 cvpr-2013-What Object Motion Reveals about Shape with Unknown BRDF and Lighting

19 0.082258388 330 cvpr-2013-Photometric Ambient Occlusion

20 0.081801005 21 cvpr-2013-A New Perspective on Uncalibrated Photometric Stereo


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.118), (1, 0.216), (2, 0.017), (3, 0.083), (4, -0.049), (5, -0.018), (6, -0.086), (7, 0.113), (8, 0.046), (9, -0.01), (10, -0.071), (11, -0.127), (12, -0.027), (13, 0.108), (14, 0.021), (15, 0.036), (16, -0.111), (17, -0.046), (18, -0.048), (19, -0.095), (20, -0.013), (21, 0.011), (22, 0.008), (23, 0.031), (24, 0.087), (25, 0.048), (26, 0.041), (27, -0.021), (28, 0.015), (29, 0.043), (30, 0.174), (31, -0.104), (32, -0.02), (33, 0.154), (34, -0.04), (35, 0.073), (36, -0.059), (37, -0.033), (38, -0.071), (39, 0.003), (40, -0.1), (41, -0.068), (42, -0.006), (43, -0.092), (44, 0.056), (45, -0.049), (46, 0.08), (47, 0.008), (48, 0.026), (49, -0.147)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.90156662 56 cvpr-2013-Bayesian Depth-from-Defocus with Shading Constraints

Author: Chen Li, Shuochen Su, Yasuyuki Matsushita, Kun Zhou, Stephen Lin

Abstract: We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information. DFD suffers from important limitations, namely coarse shape reconstruction and poor accuracy on textureless surfaces, which can be overcome with the help of shading. We integrate both forms of data within a Bayesian framework that capitalizes on their relative strengths. Shading data, however, is challenging to recover accurately from surfaces that contain texture. To address this issue, we propose an iterative technique that utilizes depth information to improve shading estimation, which in turn is used to elevate depth estimation in the presence of textures. With this approach, we demonstrate improvements over existing DFD techniques, as well as effective shape reconstruction of textureless surfaces.

2 0.79577821 305 cvpr-2013-Non-parametric Filtering for Geometric Detail Extraction and Material Representation

Author: Zicheng Liao, Jason Rock, Yang Wang, David Forsyth

Abstract: Geometric detail is a universal phenomenon in real world objects. It is an important component in object modeling, but not accounted for in current intrinsic image works. In this work, we explore using a non-parametric method to separate geometric detail from intrinsic image components. We further decompose an image as albedo ∗ (coarse-scale shading + shading detail). Our decomposition offers quantitative improvement in albedo recovery and material classification. Our method also enables interesting image editing activities, including bump removal, geometric detail smoothing/enhancement and material transfer.

3 0.76609749 227 cvpr-2013-Intrinsic Scene Properties from a Single RGB-D Image

Author: Jonathan T. Barron, Jitendra Malik

Abstract: In this paper we extend the “shape, illumination and reflectance from shading ” (SIRFS) model [3, 4], which recovers intrinsic scene properties from a single image. Though SIRFS performs well on images of segmented objects, it performs poorly on images of natural scenes, which contain occlusion and spatially-varying illumination. We therefore present Scene-SIRFS, a generalization of SIRFS in which we have a mixture of shapes and a mixture of illuminations, and those mixture components are embedded in a “soft” segmentation of the input image. We additionally use the noisy depth maps provided by RGB-D sensors (in this case, the Kinect) to improve shape estimation. Our model takes as input a single RGB-D image and produces as output an improved depth map, a set of surface normals, a reflectance image, a shading image, and a spatially varying model of illumination. The output of our model can be used for graphics applications, or for any application involving RGB-D images.

4 0.7660507 394 cvpr-2013-Shading-Based Shape Refinement of RGB-D Images

Author: Lap-Fai Yu, Sai-Kit Yeung, Yu-Wing Tai, Stephen Lin

Abstract: We present a shading-based shape refinement algorithm which uses a noisy, incomplete depth map from Kinect to help resolve ambiguities in shape-from-shading. In our framework, the partial depth information is used to overcome bas-relief ambiguity in normals estimation, as well as to assist in recovering relative albedos, which are needed to reliably estimate the lighting environment and to separate shading from albedo. This refinement of surface normals using a noisy depth map leads to high-quality 3D surfaces. The effectiveness of our algorithm is demonstrated through several challenging real-world examples.

5 0.69489145 71 cvpr-2013-Boundary Cues for 3D Object Shape Recovery

Author: Kevin Karsch, Zicheng Liao, Jason Rock, Jonathan T. Barron, Derek Hoiem

Abstract: Early work in computer vision considered a host of geometric cues for both shape reconstruction [11] and recognition [14]. However, since then, the vision community has focused heavily on shading cues for reconstruction [1], and moved towards data-driven approaches for recognition [6]. In this paper, we reconsider these perhaps overlooked “boundary” cues (such as self occlusions and folds in a surface), as well as many other established constraints for shape reconstruction. In a variety of user studies and quantitative tasks, we evaluate how well these cues inform shape reconstruction (relative to each other) in terms of both shape quality and shape recognition. Our findings suggest many new directions for future research in shape reconstruction, such as automatic boundary cue detection and relaxing assumptions in shape from shading (e.g. orthographic projection, Lambertian surfaces).

6 0.68192035 354 cvpr-2013-Relative Volume Constraints for Single View 3D Reconstruction

7 0.59768826 466 cvpr-2013-Whitened Expectation Propagation: Non-Lambertian Shape from Shading and Shadow

8 0.55894959 21 cvpr-2013-A New Perspective on Uncalibrated Photometric Stereo

9 0.54043514 330 cvpr-2013-Photometric Ambient Occlusion

10 0.50131047 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures

11 0.4498859 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras

12 0.43557972 428 cvpr-2013-The Episolar Constraint: Monocular Shape from Shadow Correspondence

13 0.42850766 232 cvpr-2013-Joint Geodesic Upsampling of Depth Images

14 0.42200604 196 cvpr-2013-HON4D: Histogram of Oriented 4D Normals for Activity Recognition from Depth Sequences

15 0.42017794 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances

16 0.41860944 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns

17 0.41578814 115 cvpr-2013-Depth Super Resolution by Rigid Body Self-Similarity in 3D

18 0.40943527 397 cvpr-2013-Simultaneous Super-Resolution of Depth and Images Using a Single Camera

19 0.39310402 219 cvpr-2013-In Defense of 3D-Label Stereo

20 0.38169456 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.139), (16, 0.03), (26, 0.047), (33, 0.226), (66, 0.048), (67, 0.029), (68, 0.182), (69, 0.035), (87, 0.113), (91, 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.8623504 194 cvpr-2013-Groupwise Registration via Graph Shrinkage on the Image Manifold

Author: Shihui Ying, Guorong Wu, Qian Wang, Dinggang Shen

Abstract: Recently, groupwise registration has been investigated for simultaneous alignment of all images without selecting any individual image as the template, thus avoiding the potential bias in image registration. However, none of current groupwise registration method fully utilizes the image distribution to guide the registration. Thus, the registration performance usually suffers from large inter-subject variations across individual images. To solve this issue, we propose a novel groupwise registration algorithm for large population dataset, guided by the image distribution on the manifold. Specifically, we first use a graph to model the distribution of all image data sitting on the image manifold, with each node representing an image and each edge representing the geodesic pathway between two nodes (or images). Then, the procedure of warping all images to theirpopulation center turns to the dynamic shrinking ofthe graph nodes along their graph edges until all graph nodes become close to each other. Thus, the topology ofimage distribution on the image manifold is always preserved during the groupwise registration. More importantly, by modeling , the distribution of all images via a graph, we can potentially reduce registration error since every time each image is warped only according to its nearby images with similar structures in the graph. We have evaluated our proposed groupwise registration method on both synthetic and real datasets, with comparison to the two state-of-the-art groupwise registration methods. All experimental results show that our proposed method achieves the best performance in terms of registration accuracy and robustness.

same-paper 2 0.84276682 56 cvpr-2013-Bayesian Depth-from-Defocus with Shading Constraints

Author: Chen Li, Shuochen Su, Yasuyuki Matsushita, Kun Zhou, Stephen Lin

Abstract: We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information. DFD suffers from important limitations, namely coarse shape reconstruction and poor accuracy on textureless surfaces, which can be overcome with the help of shading. We integrate both forms of data within a Bayesian framework that capitalizes on their relative strengths. Shading data, however, is challenging to recover accurately from surfaces that contain texture. To address this issue, we propose an iterative technique that utilizes depth information to improve shading estimation, which in turn is used to elevate depth estimation in the presence of textures. With this approach, we demonstrate improvements over existing DFD techniques, as well as effective shape reconstruction of textureless surfaces.

3 0.82879341 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials

Author: Zhenglong Zhou, Zhe Wu, Ping Tan

Abstract: We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique that works for general isotropic materials. Our data capture setup is simple, which consists of only a digital camera and a handheld light source. From a single viewpoint, we use a set of photometric stereo images to identify surface points with the same distance to the camera. We collect this information from multiple viewpoints and combine it with structure-from-motion to obtain a precise reconstruction of the complete 3D shape. The spatially varying isotropic bidirectional reflectance distributionfunction (BRDF) is captured by simultaneously inferring a set of basis BRDFs and their mixing weights at each surface point. According to our experiments, the captured shapes are accurate to 0.3 millimeters. The captured reflectance has relative root-mean-square error (RMSE) of 9%.

4 0.82648689 227 cvpr-2013-Intrinsic Scene Properties from a Single RGB-D Image

Author: Jonathan T. Barron, Jitendra Malik

Abstract: In this paper we extend the “shape, illumination and reflectance from shading ” (SIRFS) model [3, 4], which recovers intrinsic scene properties from a single image. Though SIRFS performs well on images of segmented objects, it performs poorly on images of natural scenes, which contain occlusion and spatially-varying illumination. We therefore present Scene-SIRFS, a generalization of SIRFS in which we have a mixture of shapes and a mixture of illuminations, and those mixture components are embedded in a “soft” segmentation of the input image. We additionally use the noisy depth maps provided by RGB-D sensors (in this case, the Kinect) to improve shape estimation. Our model takes as input a single RGB-D image and produces as output an improved depth map, a set of surface normals, a reflectance image, a shading image, and a spatially varying model of illumination. The output of our model can be used for graphics applications, or for any application involving RGB-D images.

5 0.82585275 430 cvpr-2013-The SVM-Minus Similarity Score for Video Face Recognition

Author: Lior Wolf, Noga Levy

Abstract: Face recognition in unconstrained videos requires specialized tools beyond those developed for still images: the fact that the confounding factors change state during the video sequence presents a unique challenge, but also an opportunity to eliminate spurious similarities. Luckily, a major source of confusion in visual similarity of faces is the 3D head orientation, for which image analysis tools provide an accurate estimation. The method we propose belongs to a family of classifierbased similarity scores. We present an effective way to discount pose induced similarities within such a framework, which is based on a newly introduced classifier called SVMminus. The presented method is shown to outperform existing techniques on the most challenging and realistic publicly available video face recognition benchmark, both by itself, and in concert with other methods.

6 0.82377678 127 cvpr-2013-Discovering the Structure of a Planar Mirror System from Multiple Observations of a Single Point

7 0.82363123 75 cvpr-2013-Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis

8 0.8208102 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities

9 0.81951231 298 cvpr-2013-Multi-scale Curve Detection on Surfaces

10 0.81888705 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances

11 0.8179822 394 cvpr-2013-Shading-Based Shape Refinement of RGB-D Images

12 0.81701195 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

13 0.81669909 344 cvpr-2013-Radial Distortion Self-Calibration

14 0.81472743 222 cvpr-2013-Incorporating User Interaction and Topological Constraints within Contour Completion via Discrete Calculus

15 0.81464362 393 cvpr-2013-Separating Signal from Noise Using Patch Recurrence across Scales

16 0.81413889 71 cvpr-2013-Boundary Cues for 3D Object Shape Recovery

17 0.81402135 19 cvpr-2013-A Minimum Error Vanishing Point Detection Approach for Uncalibrated Monocular Images of Man-Made Environments

18 0.81381422 248 cvpr-2013-Learning Collections of Part Models for Object Recognition

19 0.81282032 98 cvpr-2013-Cross-View Action Recognition via a Continuous Virtual Path

20 0.81163889 331 cvpr-2013-Physically Plausible 3D Scene Tracking: The Single Actor Hypothesis