
23 iccv-2013-A New Image Quality Metric for Image Auto-denoising


Source: pdf

Author: Xiangfei Kong, Kuan Li, Qingxiong Yang, Liu Wenyin, Ming-Hsuan Yang

Abstract: This paper proposes a new non-reference image quality metric that can be adopted by state-of-the-art image/video denoising algorithms for auto-denoising. The proposed metric is extremely simple and can be implemented in four lines of Matlab code. The basic assumption employed by the proposed metric is that the noise should be independent of the original image. A direct measurement of this dependence is, however, impractical due to the relatively low accuracy of existing denoising methods. The proposed metric thus aims at maximizing the structure similarity between the input noisy image and the estimated image noise around homogeneous regions, and the structure similarity between the input noisy image and the denoised image around highly-structured regions; it is computed as the linear correlation coefficient of the two corresponding structure similarity maps. Numerous experimental results demonstrate that the proposed metric not only outperforms the current state-of-the-art non-reference quality metric quantitatively and qualitatively, but also better maintains temporal coherence when used for video denoising.
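To make the abstract's construction concrete, below is a minimal Python sketch of the idea: compute the per-pixel SSIM map between the noisy input and the estimated noise, the SSIM map between the noisy input and the denoised image, and take the linear correlation coefficient of the two maps. The function name auto_denoise_score is hypothetical, and the window size and SSIM constants are scikit-image defaults rather than the paper's exact four-line Matlab implementation.

```python
import numpy as np
from skimage.metrics import structural_similarity


def auto_denoise_score(noisy, denoised):
    """Correlation of the two SSIM maps described in the abstract (sketch)."""
    noisy = np.asarray(noisy, dtype=np.float64)
    denoised = np.asarray(denoised, dtype=np.float64)
    noise = noisy - denoised  # estimated image noise ("method noise")
    data_range = noisy.max() - noisy.min()

    # Per-pixel SSIM map between the noisy input and the estimated noise;
    # it should be high around homogeneous regions when denoising is good.
    _, ssim_noise = structural_similarity(
        noisy, noise, data_range=data_range, full=True)

    # Per-pixel SSIM map between the noisy input and the denoised image;
    # it should be high around highly-structured regions when structure is kept.
    _, ssim_denoised = structural_similarity(
        noisy, denoised, data_range=data_range, full=True)

    # The metric is the linear correlation coefficient of the two maps.
    return np.corrcoef(ssim_noise.ravel(), ssim_denoised.ravel())[0, 1]
```

For auto-denoising one would sweep a denoiser parameter (e.g. the iteration count or the assumed noise level σ) and keep the output that maximizes this score, which is how the iteration counts (itr) reported in the figure captions below are selected.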


reference text

[1] A. Buades, B. Coll, and J. Morel. Self-similarity-based image denoising. CACM, 54(5):109–117, 2011.

[2] A. Buades, B. Coll, and J. Morel. A non-local algorithm for image denoising. In CVPR, pages 60–65, 2005.

[3] G. H. Chen, C. L. Yang, and S. L. Xie. Gradient-based structural similarity for image quality assessment. In ICIP, pages 2929–2932, 2006.

[4] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. TIP, 16(8):2080–2095, 2007.

[5] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Color image denoising via sparse 3D collaborative filtering with grouping constraint in luminance-chrominance space. In ICIP, pages I–313–I–316, 2007.

Figure 6 (caption). Visual evaluation using SKR with relatively high synthetic noise levels (σ ≥ 15). (a) Ground truth. (b) Noisy. (c) Proposed. (d) Q-metric. (e) "Optimal". Note that the Q-metric tends to remove textures besides noise. Best viewed on high-resolution displays.

Figure 8 (caption). Video denoising, evaluated by PSNR and by the estimated noise level. From left to right: (i) experimental results for synthetic WGN with a constant noise level (σ = 15); (ii) experimental results for synthetic WGN whose noise level changes over time. Note that the performance of the proposed metric is higher than that of the Q-metric in both situations.

[6] D. Doermann. Unsupervised feature learning framework for no-reference image quality assessment. In CVPR, pages 1098–1105, 2012.

[7] R. Ferzli and L. J. Karam. A no-reference objective image sharpness metric based on the notion of just noticeable blur. TIP, 18(4):717–728, 2009.

Figure 7 (caption). Visual evaluation using BM3D and SKR with high synthetic noise (σ = 19). (a) Proposed (26.15 dB, itr = 8). (b) Q-metric (27.29 dB, itr = 30). Note that visual perception does not always agree with the PSNR metric, which suggests the left image should have lower performance. Best viewed on high-resolution displays.

[8] B. Girod. What's wrong with mean-squared error? In A. B. Watson, editor, Digital Images and Human Vision, pages 207–220. MIT Press, 1993.

[9] L. He, D. Tao, X. Li, and X. Gao. Sparse representation for blind image quality assessment. In CVPR, pages 1146–1153, 2012.

[10] K. Hirakawa and T. W. Parks. Joint demosaicing and denoising. TIP, 15(8):2146–2157, 2006.

[11] T. M. Kusuma and H. J. Zepernick. A reduced-reference perceptual quality metric for in-service image quality assessment. In Joint Workshop on Mobile Future and Symposium on Trends in Communications, pages 71–74, 2003.

[12] Q. Li and Z. Wang. General-purpose reduced-reference image quality assessment based on perceptually and statistically motivated image representation. In ICIP, pages 1192–1195, 2008.

[13] Q. Li and Z. Wang. Reduced-reference image quality assessment using divisive normalization-based image representation. JSTP, 3(2):202–211, 2009.

[14] M. Narwaria and W. Lin. SVD-based quality metric for image and video using machine learning. SMC-B, 42(2):347–364, 2012.

[15] N. Ponomarenko, V. Lukin, A. Zelensky, K. Egiazarian, M. Carli, and F. Battisti. TID2008 - a database for evaluation of full-reference visual quality assessment metrics. Adv. Modern Radioelectron., 10:30–45, 2009.

[16] H. Sheikh, Z. Wang, L. Cormack, and A. Bovik. LIVE image quality assessment database release 2.

[17] A. Shnayderman, A. Gusev, and A. Eskicioglu. An SVD-based grayscale image quality measure for local and global assessment. TIP, 15(2):422–429, 2006.

[18] H. Takeda, S. Farsiu, and P. Milanfar. Kernel regression for image processing and reconstruction. TIP, 16(2):349–399, 2007.

[19] Z. Wang and A. Bovik. Mean squared error: Love it or leave it? A new look at signal fidelity measures. Signal Processing Magazine, 26(1):98–117, 2009.

[20] Z. Wang and A. C. Bovik. A universal image quality index. SPL, 9(3):81–84, 2002.

[21] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. TIP, 13(4):600–612, 2004.

[22] Z. Wang and Q. Li. Information content weighting for perceptual image quality assessment. TIP, 20(5):1185–1198, 2011.

[23] Z. Wang, H. Sheikh, and A. Bovik. No-reference perceptual quality assessment of JPEG compressed images. In ICIP, pages I–477–I–480, volume 1, 2002.

[24] Z. Wang, E. Simoncelli, and A. Bovik. Multiscale structural similarity for image quality assessment. In Asilomar Conf. on Signals, Systems and Computers, pages 1398–1402, 2003.

[25] Z. Wang and E. P. Simoncelli. Reduced-reference image quality assessment using a wavelet-domain natural image statistic model. In SPIE Human Vision and Electronic Imaging, pages 149–159, 2005.

[26] L. Zhang, L. Zhang, X. Mou, and D. Zhang. FSIM: A feature similarity index for image quality assessment. TIP, 20(8):2378–2386, 2011.

[27] J. Zhu and N. Wang. Image quality assessment by visual gradient similarity. TIP, 21(3):919–933, 2012.

[28] X. Zhu and P. Milanfar. Automatic parameter selection for denoising algorithms using a no-reference measure of image content. TIP, 19(12):3116–3132, 2010.