
56 cvpr-2013-Bayesian Depth-from-Defocus with Shading Constraints


Source: pdf

Author: Chen Li, Shuochen Su, Yasuyuki Matsushita, Kun Zhou, Stephen Lin

Abstract: We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information. DFD suffers from important limitations, namely coarse shape reconstruction and poor accuracy on textureless surfaces, that can be overcome with the help of shading. We integrate both forms of data within a Bayesian framework that capitalizes on their relative strengths. Shading data, however, is challenging to recover accurately from surfaces that contain texture. To address this issue, we propose an iterative technique that utilizes depth information to improve shading estimation, which in turn is used to elevate depth estimation in the presence of textures. With this approach, we demonstrate improvements over existing DFD techniques, as well as effective shape reconstruction of textureless surfaces.
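To make the iterative scheme in the abstract concrete, below is a minimal, self-contained sketch of the alternation it describes: a coarse depth-from-defocus estimate, a depth-guided shading/reflectance separation, and a depth refinement that respects shading consistency. All function names, cues, and terms here (a Laplacian sharpness measure as the defocus cue, a toy depth-driven shading model, a shading-weighted smoothing standing in for the Bayesian update) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def sharpness(img):
    # Local defocus cue: magnitude of a discrete Laplacian (sharper = more in focus).
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0) +
           np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1) - 4.0 * img)
    return np.abs(lap)

def depth_from_defocus(focal_stack, focus_depths):
    # Coarse depth: per pixel, take the focus setting that maximizes sharpness.
    scores = np.stack([sharpness(img) for img in focal_stack], axis=0)
    return np.asarray(focus_depths, dtype=float)[np.argmax(scores, axis=0)]

def estimate_shading(image, depth, falloff=0.5):
    # Toy depth-guided shading model: brightness decays smoothly with depth;
    # the residual is treated as reflectance (texture).
    shading = np.exp(-falloff * (depth - depth.min()) / (np.ptp(depth) + 1e-8))
    reflectance = image / (shading + 1e-8)
    return shading, reflectance

def refine_depth(depth, shading, lam=0.3, sigma=0.1, iters=20):
    # Stand-in for the Bayesian refinement: smooth depth, trusting neighbors
    # more when their shading is similar (a shading-consistency prior).
    d = depth.copy()
    for _ in range(iters):
        acc = np.zeros_like(d)
        wsum = np.zeros_like(d)
        for axis in (0, 1):
            for shift in (1, -1):
                w = np.exp(-np.abs(shading - np.roll(shading, shift, axis)) / sigma)
                acc += w * np.roll(d, shift, axis)
                wsum += w
        d = (1.0 - lam) * d + lam * acc / wsum
    return d

# Alternation described in the abstract, on a synthetic two-image focal stack.
rng = np.random.default_rng(0)
stack = [rng.random((64, 64)), rng.random((64, 64))]
depth = depth_from_defocus(stack, focus_depths=[1.0, 2.0])
for _ in range(3):
    shading, reflectance = estimate_shading(stack[0], depth)
    depth = refine_depth(depth, shading)
print(depth.shape, float(depth.min()), float(depth.max()))
```

In the paper's setting, MAP inference over an MRF would replace the heuristic smoothing step; the point of the sketch is only the control flow the abstract emphasizes, in which depth informs shading estimation and shading in turn refines depth.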


reference text

[1] Image composite editor. http://research.microsoft.com/en-us/um/redmond/groups/ivm/ice.

[2] Light probe image gallery. http://www.pauldebevec.com/Probes/.

[3] J. T. Barron and J. Malik. Color constancy, intrinsic images, and shape estimation. In ECCV, 2012.

[4] A. Blake, P. Kohli, and C. Rother. Markov Random Fields for Vision and Image Processing. The MIT Press, 2011.

[5] J.-D. Durou, M. Falcone, and M. Sagona. Numerical methods for shape-from-shading: A new survey with benchmarks. Comput. Vision and Image Underst., 109(1):22–43, 2008.

[6] P. Favaro, S. Soatto, M. Burger, and S. Osher. Shape from defocus via diffusion. IEEE Trans. Patt. Anal. and Mach. Intel., 30(3):518–531, 2008.

[7] R. Grosse, M. K. Johnson, E. H. Adelson, and W. T. Freeman. Ground truth dataset and baseline evaluations for intrinsic image algorithms. In ICCV, 2009.

[8] R. Huang and W. Smith. Shape-from-shading under complex natural illumination. In ICIP, pages 13–16, 2011.

[9] T.-L. Hwang, J. Clark, and A. Yuille. A depth recovery algorithm using defocus information. In CVPR, pages 476–482, June 1989.

[10] M. K. Johnson and E. H. Adelson. Shape estimation in natural illumination. In CVPR, pages 2553–2560, 2011.

[11] R. Kimmel, M. Elad, D. Shaked, R. Keshet, and I. Sobel. A variational framework for retinex. Int. Journal of Computer Vision, 52:7–23, 2003.

[12] K. J. Lee, Q. Zhao, X. Tong, M. Gong, S. Izadi, S. U. Lee, P. Tan, and S. Lin. Estimation of intrinsic image sequences from image+depth video. In ECCV, pages 327–340, 2012.

[13] A. Levin, W. T. Freeman, and F. Durand. Understanding camera trade-offs through a Bayesian analysis of light field projections. In ECCV, pages 88–101, 2008.

[14] S. Nayar, M. Watanabe, and M. Noguchi. Real-time focus range sensor. In ICCV, pages 995–1001, June 1995.

[15] M. Noguchi and S. Nayar. Microscopic shape from focus using active illumination. In ICPR, volume 1, pages 147–152, October 1994.

[16] G. Oxholm and K. Nishino. Shape and reflectance from natural illumination. In ECCV, pages I:528–541, 2012.

[17] B. Peacock, N. Hastings, and M. Evans. Statistical Distributions. Wiley-Interscience, June 2000.

[18] A. P. Pentland. A new sense for depth of field. IEEE Trans. Patt. Anal. and Mach. Intel., 9(4):523–531, April 1987.

[19] E. Prados and O. Faugeras. A generic and provably convergent shape-from-shading method for orthographic and pinhole cameras. Int. Journal of Computer Vision, 65(1):97–125, 2005.

[20] A. Rajagopalan, S. Chaudhuri, and U. Mudenagudi. Depth estimation and image restoration using defocused stereo pairs. IEEE Trans. Patt. Anal. and Mach. Intel., 26(11):1521–1525, November 2004.

[21] A. N. Rajagopalan and S. Chaudhuri. Optimal selection of camera parameters for recovery of depth from defocused images. In CVPR, 1997.

[22] A. N. Rajagopalan and S. Chaudhuri. An MRF model-based approach to simultaneous recovery of depth and restoration from defocused images. IEEE Trans. Patt. Anal. and Mach. Intel., 21(7):577–589, July 1999.

[23] R. Ramamoorthi and P. Hanrahan. An efficient representation for irradiance environment maps. In ACM SIGGRAPH, pages 497–500, 2001.

[24] L. Shen, P. Tan, and S. Lin. Intrinsic image decomposition with non-local texture cues. In CVPR, pages 1–7, June 2008.

[25] M. Subbarao and N. Gurumoorthy. Depth recovery from blurred edges. In CVPR, pages 498–503, June 1988.

[26] M. Subbarao and G. Surya. Depth from defocus: A spatial domain approach. Int. Journal of Computer Vision, 13(3):271–294, 1994.

[27] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother. A comparative study of energy minimization methods for Markov random fields with smoothness-based priors. IEEE Trans. Patt. Anal. and Mach. Intel., 30(6):1068–1080, June 2008.

[28] M. F. Tappen, E. H. Adelson, and W. T. Freeman. Estimating intrinsic component images using non-linear regression. In CVPR, pages 1992–1999, 2006.

[29] M. Watanabe, S. Nayar, and M. Noguchi. Real-time computation of depth from defocus. In Proc. SPIE, volume 2599, pages 14–25, January 1996.

[30] M. Watanabe and S. K. Nayar. Rational filters for passive depth from defocus. Int. Journal of Computer Vision, 27(3):203–225, May 1998.

[31] Y. Weiss. Deriving intrinsic images from image sequences. In ICCV, pages 68–75, 2001.

[32] C. Wu, B. Wilburn, Y. Matsushita, and C. Theobalt. High-quality shape from multi-view stereo and shading under general illumination. In CVPR, pages 969–976, 2011.

[33] Y. Xiong and S. A. Shafer. Depth from focusing and defocusing. In CVPR, pages 68–73, 1993.

[34] R. Zhang, P.-S. Tsai, J. Cryer, and M. Shah. Shape-from-shading: A survey. IEEE Trans. Patt. Anal. and Mach. Intel., 21(8):690–706, August 1999.

[35] K. Zhou, X. Wang, Y. Tong, M. Desbrun, B. Guo, and H.-Y. Shum. TextureMontage: Seamless texturing of arbitrary surfaces from multiple images. ACM Transactions on Graphics, 24(3):1148–1155, 2005.