
415 cvpr-2013-Structured Face Hallucination


Source: pdf

Author: Chih-Yuan Yang, Sifei Liu, Ming-Hsuan Yang

Abstract: The goal of face hallucination is to generate high-resolution face images with fidelity from low-resolution inputs. In contrast to existing methods based on patch similarity or holistic constraints in the image space, we propose to exploit local image structures for face hallucination. Each face image is represented in terms of facial components, contours, and smooth regions. The image structure is maintained by matching gradients in the reconstructed high-resolution output. For facial components, we align input images to generate accurate exemplars and transfer the high-frequency details to preserve structural consistency. For contours, we learn statistical priors to generate salient structures in the high-resolution images. A patch matching method is applied to the smooth regions, where the image gradients are preserved. Experimental results demonstrate that the proposed algorithm generates hallucinated face images of favorable quality and adaptability.
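The abstract's central idea of maintaining image structure by matching gradients can be illustrated with a toy sketch. This is a minimal stand-in, not the paper's actual optimization: the function names (`grads`, `match_gradients`) and the plain steepest-descent scheme are my own assumptions; the paper additionally constrains the estimate to be consistent with the low-resolution input and draws the target gradients from aligned exemplars and learned priors.

```python
import numpy as np

def grads(I):
    # Forward differences; the last column/row is replicated so the
    # output gradient fields have the same shape as I.
    gx = np.diff(I, axis=1, append=I[:, -1:])
    gy = np.diff(I, axis=0, append=I[-1:, :])
    return gx, gy

def match_gradients(init, gx_t, gy_t, iters=500, step=0.2):
    # Steepest descent on E(I) = ||dxI - gx_t||^2 + ||dyI - gy_t||^2:
    # nudge the estimate until its finite differences agree with the
    # target gradient field (gx_t, gy_t).
    I = init.astype(np.float64).copy()
    for _ in range(iters):
        gx, gy = grads(I)
        rx, ry = gx - gx_t, gy - gy_t
        # dE/dI is the adjoint (negative backward difference) of the residuals
        gradE = -np.diff(rx, axis=1, prepend=0.0) - np.diff(ry, axis=0, prepend=0.0)
        I -= step * gradE
    return I
```

Note that gradients leave the mean intensity (DC component) unconstrained; in a full pipeline the low-resolution reconstruction constraint pins it down.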


References

[1] S. Baker and T. Kanade. Hallucinating faces. In FG, 2000.

[2] C. Barnes, E. Shechtman, D. B. Goldman, and A. Finkelstein. The generalized patchmatch correspondence algorithm. In ECCV, 2010.

[3] H. Chang, D.-Y. Yeung, and Y. Xiong. Super-resolution through neighbor embedding. In CVPR, 2004.

[4] T. Cootes, G. Edwards, and C. Taylor. Active appearance models. PAMI, 23(6):681–685, 2001.

[5] R. Fattal. Image upsampling via imposed edge statistics. In SIGGRAPH, 2007.

[6] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker. Multi-PIE. In FG, 2008.

[7] M. Irani and S. Peleg. Improving resolution by image registration. CVGIP, 53(3):231–239, 1991.

[8] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. Attribute and simile classifiers for face verification. In ICCV, 2009.

[9] C. Liu, H.-Y. Shum, and W. T. Freeman. Face hallucination: Theory and practice. IJCV, 75(1):115–134, 2007.

[10] X. Ma, J. Zhang, and C. Qi. Hallucinating face by position-patch. PR, 43(6):2224–2236, 2010.

[11] A. Moorthy and A. Bovik. Blind image quality assessment: From natural scene statistics to perceptual quality. TIP, 20(12):3350–3364, 2011.

[12] J. Sun, J. Sun, Z. Xu, and H.-Y. Shum. Image super-resolution using gradient profile prior. In CVPR, 2008.

[13] M. F. Tappen and C. Liu. A Bayesian approach to alignment-based image hallucination. In ECCV, 2012.

[14] X. Wang and X. Tang. Hallucinating face by eigentransformation. SMC, 35(3):425–434, 2005.

[15] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 13(4):600–612, 2004.

[16] J. Yang, J. Wright, T. Huang, and Y. Ma. Image super-resolution via sparse representation. TIP, 2010.

[17] X. Zhu and D. Ramanan. Face detection, pose estimation, and landmark localization in the wild. In CVPR, 2012.

(Figures 6–11: the images themselves are not recoverable from the source text. Each figure compares (a) Input, (b) Irani91 [7], (c) Yang10 [16], (d) Ma10 [10], (e) Liu07 [9], (f) Proposed, and (g) Ground truth; the metric columns below correspond to panels (b)–(g). Results best viewed on a high-resolution display.)

Figure 6. Qualitative comparison for 4 times upsampled upright frontal faces.
              Irani91   Yang10    Ma10      Liu07     Proposed   Ground truth
PSNR          32.97     28.15     18.93     31.24     32.65      Infinite
SSIM          0.8842    0.7049    0.6847    0.8267    0.8649     1.0
DIIVINE idx.  48.43     32.85     54.05     39.92     30.18      22.37

Figure 7. Qualitative comparison for 4 times upsampled upright frontal faces.
              Irani91   Yang10    Ma10      Liu07     Proposed   Ground truth
PSNR          35.51     29.32     19.78     15.18     34.68      Infinite
SSIM          0.9361    0.7569    0.7497    0.6667    0.9156     1.0
DIIVINE idx.  48.61     25.96     47.78     42.58     29.00      25.62

Figure 8. Qualitative comparison for 4 times upsampled upright frontal faces.
              Irani91   Yang10    Ma10      Liu07     Proposed   Ground truth
PSNR          32.89     28.12     21.22     18.16     33.31      Infinite
SSIM          0.8970    0.7324    0.7620    0.7257    0.8887     1.0
DIIVINE idx.  61.61     35.24     56.45     36.04     32.43      25.23

Figure 9. Qualitative comparison for 4 times upsampled non-frontal faces.
              Irani91   Yang10    Ma10      Liu07     Proposed   Ground truth
PSNR          33.87     23.17     20.89     16.53     33.23      Infinite
SSIM          0.9126    0.4943    0.7968    0.6663    0.8873     1.0
DIIVINE idx.  57.94     29.57     36.23     50.96     32.82      30.05

Figure 10. Qualitative comparison for 4 times upsampled non-frontal faces.
              Irani91   Yang10    Ma10      Liu07     Proposed   Ground truth
PSNR          35.11     23.20     21.78     16.12     34.22      Infinite
SSIM          0.9028    0.4733    0.7595    0.6332    0.8711     1.0
DIIVINE idx.  48.48     27.06     40.52     49.71     30.36      22.21

Figure 11. Qualitative comparison for 4 times upsampled upright frontal faces.
              Irani91   Yang10    Ma10      Liu07     Proposed   Ground truth
PSNR          29.33     24.16     15.23     13.05     30.04      Infinite
SSIM          0.8338    0.5474    0.5230    0.4948    0.8798     1.0
DIIVINE idx.  50.05     23.14     47.25     38.91     33.92      29.34
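The figures report PSNR, SSIM [15], and the no-reference DIIVINE index [11]. As a quick reference for how the two full-reference metrics are defined, here is a minimal sketch (my own simplification: a single-window, global-statistics SSIM rather than the locally windowed index of [15]; DIIVINE is omitted since it requires a trained natural-scene-statistics model):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite when the images are identical.
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    # SSIM computed over the whole image as one window; the standard
    # index averages this over local sliding windows instead.
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

This makes the "Ground truth" column self-explanatory: an image compared against itself has zero MSE (PSNR = Infinite) and SSIM exactly 1.0, while DIIVINE, being no-reference, still assigns the ground truth a finite score.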