
232 cvpr-2013-Joint Geodesic Upsampling of Depth Images


Source: pdf

Author: Ming-Yu Liu, Oncel Tuzel, Yuichi Taguchi

Abstract: We propose an algorithm utilizing geodesic distances to upsample a low resolution depth image using a registered high resolution color image. Specifically, it computes depth for each pixel in the high resolution image using geodesic paths to the pixels whose depths are known from the low resolution image. Though this is closely related to the all-pairs shortest-path problem, which has O(n² log n) complexity, we develop a novel approximation algorithm whose complexity grows linearly with the image size, achieving real-time performance. We compare our algorithm with the state of the art on the benchmark dataset and show that our approach provides more accurate depth upsampling with fewer artifacts. In addition, we show that the proposed algorithm is well suited for upsampling depth images using binary edge maps, an important sensor fusion application.
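The idea in the abstract can be illustrated with a minimal sketch (not the authors' actual algorithm): treat the sparse low resolution depth samples as seeds, approximate a color-weighted geodesic distance from every high resolution pixel to its nearest seed with a two-pass raster-scan distance transform (in the spirit of Toivanen [18]), and copy the depth of the nearest seed. The function name, the `lam` weight, and the nearest-seed assignment below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def geodesic_upsample(color, sparse_depth, n_passes=2, lam=10.0):
    """Toy joint geodesic depth upsampling (illustrative sketch only).

    color        : (H, W, 3) float array, registered high resolution color image.
    sparse_depth : (H, W) float array, known depths at seed pixels, NaN elsewhere.
    lam          : assumed weight of the color difference term in the path cost.
    """
    color = color.astype(np.float64)
    H, W = sparse_depth.shape
    dist = np.full((H, W), np.inf)
    depth = np.zeros((H, W))
    seeds = ~np.isnan(sparse_depth)
    dist[seeds] = 0.0
    depth[seeds] = sparse_depth[seeds]

    # Forward and backward raster scans propagate an approximate geodesic
    # distance; each step's cost mixes spatial and color differences.
    offsets_fwd = [(-1, 0), (0, -1), (-1, -1), (-1, 1)]
    offsets_bwd = [(1, 0), (0, 1), (1, 1), (1, -1)]

    for _ in range(n_passes):
        for offsets, ys, xs in [
            (offsets_fwd, range(H), range(W)),
            (offsets_bwd, range(H - 1, -1, -1), range(W - 1, -1, -1)),
        ]:
            for y in ys:
                for x in xs:
                    for dy, dx in offsets:
                        py, px = y + dy, x + dx
                        if 0 <= py < H and 0 <= px < W:
                            step = np.hypot(dy, dx) + lam * np.linalg.norm(
                                color[y, x] - color[py, px]
                            )
                            if dist[py, px] + step < dist[y, x]:
                                dist[y, x] = dist[py, px] + step
                                depth[y, x] = depth[py, px]
    return depth
```

This nearest-seed assignment is only a crude stand-in for the paper's joint interpolation, but it conveys why the cost can grow linearly with the image size: each scan touches every pixel a constant number of times instead of solving an all-pairs shortest-path problem.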


reference text

[1] A. Adams, J. Baek, and M. A. Davis. Fast high-dimensional filtering using the permutohedral lattice. In EUROGRAPHICS, 2010.

[2] X. Bai and G. Sapiro. A geodesic framework for fast interactive image and video segmentation and matting. In CVPR, 2007.

[3] A. Buades and B. Coll. A non-local algorithm for image denoising. In CVPR, 2005.

[4] D. Chan, H. Buisman, C. Theobalt, and S. Thrun. A noiseaware filter for real-time depth upsampling. In ECCV Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications, 2008.

[5] A. Criminisi, T. Sharp, C. Rother, and P. Pérez. Geodesic image and video editing. ACM Transactions on Graphics, 29(5):134:1–134:15, 2010.

[6] J. Diebel and S. Thrun. An application of Markov random fields to range sensing. In NIPS, 2005.

[7] J. Dolson, J. Baek, C. Plagemann, and S. Thrun. Upsampling range data in dynamic environments. In CVPR, 2010.

[8] G. Facciolo and V. Caselles. Geodesic neighborhoods for piecewise affine interpolation of sparse data. In ICIP, 2009.

[9] K. He, J. Sun, and X. Tang. Guided image filtering. In ECCV, 2010.

[10] H. Hirschmüller and D. Scharstein. Evaluation of cost functions for stereo matching. In CVPR, 2007.

[11] J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele. Joint bilateral upsampling. ACM Transactions on Graphics, 26(3), 2007.

[12] A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization. ACM Transactions on Graphics, 23(3):689–694, 2004.

[13] M.-Y. Liu, O. Tuzel, A. Veeraraghavan, Y. Taguchi, T. K. Marks, and R. Chellappa. Fast object localization and pose estimation in heavy clutter for robotic bin picking. IJRR, 31(8):951–973, 2012.

[14] J. Park, H. Kim, Y.-W. Tai, M. S. Brown, and I. Kweon. High quality depth map upsampling for 3D-TOF cameras. In ICCV, 2011.

[15] R. Raskar, K.-H. Tan, R. Feris, J. Yu, and M. Turk. Non-photorealistic camera: Depth edge detection and stylized rendering using multi-flash imaging. ACM Transactions on Graphics, 23(3):679–688, 2004.

[16] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV, 47(1-3):7–42, 2002.

[17] M. Thorup and U. Zwick. Approximate distance oracles. In STOC, 2001.

[18] P. J. Toivanen. New geodesic distance transforms for gray-scale images. Pattern Recognition Letters, 17(5):437–450, 1996.

[19] J. Yang, X. Ye, K. Li, and C. Hou. Depth recovery using an adaptive color-guided auto-regressive model. In ECCV, 2012.

[20] Q. Yang. A non-local cost aggregation method for stereo matching. In CVPR, 2012.

[21] Q. Yang, K.-H. Tan, and N. Ahuja. Real-time O(1) bilateral filtering. In CVPR, 2009.

[22] Q. Yang, R. Yang, J. Davis, and D. Nistér. Spatial-depth super resolution for range images. In CVPR, 2007.

[23] L. Yatziv, A. Bartesaghi, and G. Sapiro. O(N) implementation of the fast marching algorithm. Journal of Computational Physics, 212(2):393–399, 2006.

[24] L. Yatziv and G. Sapiro. Fast image and video colorization using chrominance blending. TIP, 15(5):1120–1129, 2006.