iccv iccv2013 iccv2013-23 knowledge-graph by maker-knowledge-mining

23 iccv-2013-A New Image Quality Metric for Image Auto-denoising


Source: pdf

Author: Xiangfei Kong, Kuan Li, Qingxiong Yang, Liu Wenyin, Ming-Hsuan Yang

Abstract: This paper proposes a new non-reference image quality metric that can be adopted by the state-of-the-art image/video denoising algorithms for auto-denoising. The proposed metric is extremely simple and can be implemented in four lines of Matlab code1. The basic assumption employed by the proposed metric is that the noise should be independent of the original image. A direct measurement of this dependence is, however, impractical due to the relatively low accuracy of existing denoising methods. The proposed metric thus aims at maximizing the structure similarity between the input noisy image and the estimated image noise around homogeneous regions and the structure similarity between the input noisy image and the denoised image around highly-structured regions, and is computed as the linear correlation coefficient of the two corresponding structure similarity maps. Numerous experimental results demonstrate that the proposed metric not only outperforms the current state-of-the-art non-reference quality metric quantitatively and qualitatively, but also better maintains temporal coherence when used for video denoising.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract This paper proposes a new non-reference image quality metric that can be adopted by the state-of-the-art image/video denoising algorithms for auto-denoising. [sent-5, score-0.682]

2 The proposed metric is extremely simple and can be implemented in four lines of Matlab code1. [sent-6, score-0.271]

3 The basic assumption employed by the proposed metric is that the noise should be independent of the original image. [sent-7, score-0.427]

4 A direct measurement of this dependence is, however, impractical due to the relatively low accuracy of existing denoising methods. [sent-8, score-0.418]

5 Numerous experimental results demonstrate that the proposed metric not only outperforms the current state-of-the-art non-reference quality metric quantitatively and qualitatively, but also better maintains temporal coherence when used for video denoising. [sent-10, score-0.663]

6 Introduction Image denoising is one of the most fundamental tasks that finds numerous applications. [sent-12, score-0.342]

7 Numerous denoising algorithms have been proposed in the literature. [sent-14, score-0.305]

8 ˜qiyang/publications/iccv-13/ As the distortion-free reference image is not available, typical image quality assessment (IQA) metrics such as the mean squared error (MSE) and peak signal-to-noise ratio (PSNR) cannot be used to assess the denoised image quality. [sent-20, score-0.884]

9 However, this is impractical due to the relatively low accuracy of existing denoising methods (except when the noise level is extremely low). [sent-33, score-0.57]

10 This paper proposes to use a high-quality denoising algorithm (e.g. [sent-35, score-0.323]

11 BM3D [4] or SKR [18]) to compute two structure similarity maps: 1) between the input noisy image and the extracted MNI, and 2) between the input noisy image and the denoised image. [sent-37, score-0.72]

12 The experimental results demonstrate that the proposed metric not only outperforms the current state-of-the-art non-reference quality metric quantitatively and qualitatively, but also better maintains temporal coherence when used for video denoising. [sent-43, score-0.663]

13 A human subject study is also employed to demonstrate that the proposed metric perceptually outperforms the Q-metric when the obtained PSNR values are very close but the denoised images are visually different. [sent-44, score-0.657]

14 Although the proposed metric uses the entire input image, its computational complexity is very low because it can be decomposed into a number of box filters that can be computed very efficiently (in time linear in the number of image pixels). [sent-45, score-0.289]
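To make the complexity claim concrete, the sketch below shows the standard summed-area-table box filter such a decomposition relies on. It is an illustrative NumPy re-implementation, not the authors' Matlab code; the window radius r and the edge padding are assumed choices.

```python
import numpy as np

def box_filter(img, r):
    """Local mean over a (2r+1) x (2r+1) window via a summed-area table.

    Runs in time linear in the number of pixels, independent of the
    window radius r; edge padding is an illustrative border choice.
    """
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    win = 2 * r + 1
    padded = np.pad(img, r, mode='edge')
    sat = np.zeros((h + win, w + win))
    sat[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    # Each window sum is four corner lookups into the integral image.
    total = (sat[win:win + h, win:win + w] - sat[:h, win:win + w]
             - sat[win:win + h, :w] + sat[:h, :w])
    return total / float(win * win)
```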

15 The IQA metrics include the (root) mean square error (MSE or RMSE) and the peak signal-to-noise ratio (PSNR), and they can be computed efficiently with clear physical indications and desirable mathematical properties [19]. [sent-52, score-0.277]

16 The structure similarity (SSIM) metric [21] makes significant progress compared to PSNR and MSE. [sent-54, score-0.342]

17 Variants of the SSIM metric, including multi-scale SSIM [24] and information content-weighted SSIM [22], have made further progress based on the perceptual preferences of the HVS. [sent-56, score-0.294]

18 In addition, other metrics that exploit image structure have been proposed based on the feature similarity index [26], analysis with singular value decomposition [17, 14], and assessment of image gradients [27, 3]. [sent-57, score-0.285]

19 The proposed metric also exploits image structure for quality assessment of image denoising algorithms. [sent-59, score-0.836]

20 Our Metric A good parameter setting is important to guide the denoising algorithm to process a noisy image with a proper balance between preserving informative structural details and reducing noise. [sent-71, score-0.516]

21 For such purposes, the proposed method evaluates the denoised images with two measurements: (1) the noise reduction, and (2) the structure preservation. [sent-72, score-0.575]

22 However, different from SSIM, the proposed metric operates without the reference (noise-free) image. [sent-74, score-0.278]

23 Let I denote the input noisy image and Iˆh denote the denoised image obtained from a state-of-the-art denoising algorithm with parameter configuration h. [sent-78, score-0.803]

24 Two maps N and P, measuring the local structure similarity between the noisy image I and Mh and between I and Iˆh, are then computed based on SSIM, and the linear correlation coefficient of the two maps is used as an IQA metric. [sent-81, score-0.357]

25 This IQA metric can be employed by a parametric denoising algorithm for image auto-denoising. [sent-86, score-0.573]

26 Compute the MNI, which is the difference between the input noisy image I and the denoised image Iˆh: Mh = I − Iˆh. [sent-90, score-0.476]

27 Compute the structure similarity map N between the input noisy image I and the MNI Mh via the SSIM metric (Eq. [sent-91, score-0.452]

28 Compute the structure similarity map P between the input noisy image I and the denoised image Iˆh via the SSIM metric (Eq. [sent-93, score-0.818]

29 Compute image quality score e as the linear correlation coefficient of the two structure similarity maps N and P. [sent-95, score-0.335]

30 The parameter configuration h is selected so that the denoised image Iˆh has the best visual quality with respect to the input noisy image I: Iˆh = arg max_{Iˆhi} e(Iˆhi, I), (1) where hi ∈ (h1, h2, . . . , [sent-96, score-0.585]

31 hK) representing the K possible parameter configurations for the selected denoising algorithm, and e(·) is the proposed IQA metric. [sent-99, score-0.327]

32 In our problem, we assume that a denoising algorithm does not change the luminance or the contrast of a noisy image (which is true most of the time) and estimate the visual quality of a denoised image only with the structure comparison term. [sent-104, score-0.976]

33 The image structure compared here is independent of luminance and contrast, both of which affect the visual quality of an image less than the structure does [20]. [sent-119, score-0.263]
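As a concrete illustration of the structure comparison term S(·, ·) used by the maps described below, here is a hedged NumPy sketch that computes a per-pixel structure-similarity map from box-filtered local statistics. The window size, the SSIM-style constant C3, and the use of scipy.ndimage.uniform_filter are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def structure_map(x, y, size=11, C3=(0.03 * 255) ** 2 / 2):
    """Per-pixel SSIM structure term s = (sigma_xy + C3) / (sigma_x * sigma_y + C3).

    The luminance and contrast terms of SSIM are deliberately omitted,
    following the structure-only comparison described in the text.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x = uniform_filter(x, size)
    mu_y = uniform_filter(y, size)
    # Local (co)variances from box-filtered raw moments.
    var_x = uniform_filter(x * x, size) - mu_x ** 2
    var_y = uniform_filter(y * y, size) - mu_y ** 2
    cov_xy = uniform_filter(x * y, size) - mu_x * mu_y
    sigma_x = np.sqrt(np.maximum(var_x, 0.0))
    sigma_y = np.sqrt(np.maximum(var_y, 0.0))
    return (cov_xy + C3) / (sigma_x * sigma_y + C3)
```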

34 Noise and Structure Measurements The MNI is the difference between the input noisy image and the denoised image: Mh = I − Iˆh. [sent-122, score-0.476]

35 An example of the noise reduction and structure preservation maps. [sent-124, score-0.327]

36 (b) and (c) are two maps for measuring the noise reduction and structure preservation, respectively. [sent-126, score-0.286]

37 The BM3D denoising algorithm is used to obtain the denoised image with parameter σest set to σ. [sent-127, score-0.672]

38 Compared to the MNI Mh, the noisy image I and the denoised images Iˆh are rich in image contents. [sent-128, score-0.455]

39 This property makes the MNI potentially helpful to evaluate the nature of the denoising algorithms. [sent-130, score-0.305]

40 In the proposed metric, the noise reduction measurement is designed as a map of local structure similarity N computed from the noisy image I and the MNI Mh. [sent-133, score-0.512]

41 The noise reduction measurement at p is then computed as follows: Np = S(Ip, Mh,p). (3) [sent-135, score-0.264]

42 Figure 1(b) shows an example of the noise reduction measurement computed using Eq. 3. [sent-136, score-0.264]

43 The main motivation for using this measurement is that in homogeneous regions, a good denoising algorithm should reduce the image noise as much as possible, and the removed noise should be present in the MNI at the same location. [sent-138, score-0.716]

44 On the other hand, if the denoising algorithm fails, the structure should be dissimilar. [sent-140, score-0.352]

45 Like the noise reduction measurement, the structure preservation measurement is a local structure similarity map P, computed from the input noisy image I and the denoised image Iˆh: Pp = S(Ip, Iˆh,p). [sent-141, score-0.941]
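Putting the pieces together (the MNI, the maps N and P, and the correlation step of the algorithm above), a minimal sketch of the full score for one denoised candidate could look as follows. It assumes the hypothetical structure_map helper sketched earlier is in scope and reads "linear correlation coefficient" as the Pearson correlation.

```python
import numpy as np
# structure_map(...) is the hypothetical helper sketched earlier in this summary.

def auto_denoise_score(noisy, denoised, size=11):
    """Score a denoised candidate against its noisy input (higher is better).

    Follows the steps described in the text: MNI, the two
    structure-similarity maps N and P, and their linear correlation.
    """
    noisy = np.asarray(noisy, dtype=np.float64)
    denoised = np.asarray(denoised, dtype=np.float64)
    mni = noisy - denoised                          # method noise image, Mh = I - Ih
    n_map = structure_map(noisy, mni, size)         # noise-reduction map N
    p_map = structure_map(noisy, denoised, size)    # structure-preservation map P
    # Linear (Pearson) correlation coefficient of the two maps.
    return np.corrcoef(n_map.ravel(), p_map.ravel())[0, 1]
```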

46 The maps N and P incorporate not only the spatial information of the noise reduction and structure preservation but also their energy/strength. [sent-149, score-0.327]

47 A good denoising algorithm should maintain a good balance and maximize both terms. [sent-150, score-0.305]

48 Experimental Results To demonstrate the effectiveness of the proposed metric, visual and numerical evaluations are conducted on both real and synthetic noisy images and videos. [sent-158, score-0.252]

49 The proposed metric takes around 55 ms to process a 512 × 314 image using a Matlab implementation. [sent-162, score-0.269]

50 Denoising with Real Noisy Images This section presents experimental results to demonstrate the effectiveness of the proposed metric when real noise is present. [sent-165, score-0.432]

51 The CMOS noise is known to be much more complicated than WGN noise [10]. [sent-167, score-0.318]

52 The high ISO noise reduction function of the camera is turned off and the output image quality is set to JPEG fine. [sent-169, score-0.327]

53 Figure 2 demonstrates that using the proposed metric, the CBM3D [5] filter (a generalized version of the BM3D [4] algorithm for WGN denoising on color images) handles this noise well in practice. [sent-172, score-0.483]

54 The denoised images in Figure 2 (c) and (d) are obtained using the proposed metric and the Q-metric, respectively. [sent-173, score-0.595]

55 While the noise is reduced effectively in both images, the image obtained using the proposed metric better preserves image details. [sent-174, score-0.518]

56 More evaluations on denoising using real images are available in the supplementary material. [sent-175, score-0.326]

57 The proposed metric and Q-metric are used to estimate the noise level σest for the BM3D algorithm using Eq. [sent-180, score-0.445]

58 We note that incorrect parameter setting of these two denoising algorithms likely leads to either insufficient noise reduction or loss of details. [sent-182, score-0.545]

59 We use the PSNR metric for evaluating the quality of the denoised image in these experiments. [sent-183, score-0.704]

60 The denoised image obtained with parameter setting optimized using the PSNR metric (which requires the ground truth image) is used as the “optimal” solution, and the PSNR value obtained from this denoised image as well as the ground truth is considered as the “optimal” PSNR value. [sent-184, score-0.962]

61 The overall performance is then evaluated in terms of the PSNR error, which is defined as the absolute difference between this “optimal” PSNR value and the PSNR value of the denoised image obtained with parameter setting optimized using another metric (e. [sent-185, score-0.617]
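The PSNR-error protocol described above is straightforward to write down. The sketch below assumes 8-bit images (peak value 255) and a list of candidate denoised images, which are illustrative choices rather than details taken from the paper.

```python
import numpy as np

def psnr(img, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB against a ground-truth reference."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_error(candidates, ground_truth, metric_scores):
    """PSNR error: gap between the oracle (PSNR-optimal) candidate and the
    candidate picked by a no-reference metric (e.g. the proposed e or Q-metric)."""
    psnrs = [psnr(c, ground_truth) for c in candidates]
    optimal_psnr = max(psnrs)               # parameter chosen with the ground truth
    picked = int(np.argmax(metric_scores))  # parameter chosen by the metric
    return abs(optimal_psnr - psnrs[picked])
```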

62 From left to right: average PSNR error on the TID and LIVE databases, respectively; from top to bottom: average PSNR error using the BM3D and SKR image denoising algorithms, respectively. [sent-194, score-0.355]

63 Note that the proposed metric clearly outperforms the Q-metric except when the noise level is high. [sent-195, score-0.445]

64 The results in Figure 3(a) show that the proposed metric has a lower PSNR error than the Q-metric, especially when the BM3D denoising algorithm is employed (see Figure 3(a) and (b)). [sent-197, score-0.598]

65 According to the curves reported in Figure 3, the overall performance of the proposed metric is indeed a bit lower than that of the Q-metric when the noise level is high, especially when the SKR image denoising algorithm is used. [sent-201, score-0.75]

66 In fact, denoised images with higher PSNR values are not visually superior to those with lower PSNR in this case. [sent-207, score-0.69]

67 (a) is a real image captured with very little noise (when ISO is set to 200) for visual evaluation, while (b) is a noisy version of (a) captured with a high ISO value (set to 6400). [sent-210, score-0.269]

68 (c) Denoised image obtained from CBM3D [5] with noise level estimated using the proposed metric and Q-metric. [sent-211, score-0.464]

69 The optimal noise standard deviation values estimated for CBM3D are presented under the corresponding denoised images. [sent-212, score-0.523]

70 Denoising results obtained with a high noise level (σ = 19) using the SKR denoising algorithm. [sent-214, score-0.8]

71 Nevertheless, a numerical comparison of the two metrics using PSNR error with respect to large noise levels (σ ≥ 25) based on the BM3D denoising algorithm is presented in Figure 5. [sent-217, score-0.321]

72 Numerical comparison of the proposed metric and the Q-metric when the noise level is high (σ ≥ 25). [sent-221, score-0.445]

73 However, even the state-of-the-art denoising algorithm (BM3D) is weak when the noise level is high (see the average PSNR values in (b)); thus evaluation using PSNR error is not that suitable. [sent-224, score-0.525]

74 Video Denoising This section evaluates the proposed metric with the BM3D algorithm for video denoising. [sent-227, score-0.292]

75 Note that the curve of the proposed metric in Figure 8(c) is flatter than that of the Q-metric, which demonstrates the temporal consistency of the proposed metric. [sent-231, score-0.291]

76 The PSNR error curves presented in Figure 8(b) demonstrate that the proposed metric outperforms the Q-metric when the noise level is relatively low. [sent-234, score-0.516]

77 Figure 8(d) shows that the shape of the noise levels estimated using the proposed metric better agrees with the shape of the synthetic noise levels. [sent-235, score-0.67]

78 The correlation of the green curve (noise level estimated from the proposed metric) and the dark curve (synthetic noise level) in Figure 8(d) is 0. [sent-237, score-0.318]

79 Human Subject Study This section evaluates the perceptual performance of the proposed metric against the Q-metric and the PSNR metric using a human subject study. [sent-242, score-0.607]

80 56% of the participants prefer the results obtained from the proposed metric to those from the Q-metric, 29. [sent-249, score-0.279]

81 68% prefer the proposed metric to PSNR metric, and 17. [sent-250, score-0.279]

82 The results show that the proposed metric outperforms the Q-metric and is comparable to the PSNR metric when the noise level is relatively low. [sent-253, score-0.722]

83 Concluding Remarks This paper proposes a new metric for automating existing state-of-the-art image/video denoising algorithms. [sent-255, score-0.573]

84 Visual evaluation using BM3D with relatively low synthetic noise levels (σ ≤ 10). [sent-274, score-0.269]

85 Note that the proposed metric visually outperforms the Q-metric in preserving structural details. [sent-278, score-0.297]

86 Specifically, the proposed metric is used to search for the optimal parameter setting of a denoising algorithm by evaluating the quality of the denoised images. [sent-280, score-1.031]
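A minimal version of that parameter search (Eq. 1) is sketched below. The denoise callable and the candidate sigmas stand in for whatever parametric denoiser (e.g. a BM3D implementation) is being automated, and auto_denoise_score is the hypothetical helper sketched earlier; none of these names come from the paper itself.

```python
import numpy as np
# auto_denoise_score(...) is the hypothetical scoring helper sketched earlier.

def auto_denoise(noisy, denoise, sigmas):
    """Pick the candidate denoised image that maximizes the metric e (Eq. 1).

    `denoise(noisy, sigma)` is any parametric denoiser; `sigmas` enumerates
    the K candidate parameter configurations h_1, ..., h_K.
    """
    best_score, best_image, best_sigma = -np.inf, None, None
    for sigma in sigmas:
        candidate = denoise(noisy, sigma)
        score = auto_denoise_score(noisy, candidate)
        if score > best_score:
            best_score, best_image, best_sigma = score, candidate, sigma
    return best_image, best_sigma, best_score
```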

87 The proposed metric is extremely simple (it can be implemented in four lines of Matlab code) yet very robust and efficient. [sent-281, score-0.271]

88 Experimental results demonstrate that the proposed metric outperforms the current state-of-the-art Q-metric method on two popular image quality assessment data sets and a video sequence. [sent-282, score-0.502]

89 Our future work will extend the proposed work to other types of noise and distortion including spatially correlated noise and JPEG compression. [sent-283, score-0.318]

90 Color image denoising via sparse 3D collaborative filtering with grouping constraint in luminance-chrominance space. [sent-317, score-0.323]

91 Visual evaluation using SKR with relatively high synthetic noise levels (σ ≥ 15). [sent-331, score-0.269]

92 From left to right: (i) experimental results for synthetic WGN with a constant noise level (σ = 15); (ii) experimental results for synthetic WGN with dynamically changing noise levels over time. [sent-345, score-0.489]

93 Note that visual perception does not always agree with the PSNR metric (which indicates that the left image should have lower quality). [sent-361, score-0.273]

94 Sharpness metric based on the notion of just noticeable blur. [sent-363, score-0.27]

95 A reduced-reference perceptual quality metric for in-service image quality assessment. [sent-390, score-0.512]

96 General-purpose reduced-reference image quality assessment based on perceptually and statistically motivated image representation. [sent-395, score-0.257]

97 Reduced-reference image quality assessment using divisive normalization-based image representation. [sent-400, score-0.269]

98 SVD-based quality metric for image and video using machine learning. [sent-405, score-0.377]

99 (a) subjective comparison between the proposed metric and Q-metric; (b) subjective comparison between the proposed metric and PSNR metric; (c) subjective comparison between Q-metric and PSNR metric. [sent-451, score-0.596]

100 Automatic parameter selection for denoising algorithms using a no-reference measure of image content. [sent-526, score-0.327]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('psnr', 0.407), ('denoised', 0.345), ('denoising', 0.305), ('mni', 0.296), ('metric', 0.25), ('skr', 0.237), ('iqa', 0.228), ('wgn', 0.197), ('est', 0.172), ('ssim', 0.16), ('noise', 0.159), ('db', 0.135), ('itr', 0.131), ('assessment', 0.125), ('noisy', 0.11), ('quality', 0.109), ('qmetric', 0.099), ('iso', 0.089), ('mh', 0.082), ('tip', 0.076), ('jpeg', 0.074), ('metrics', 0.068), ('preservation', 0.062), ('luminance', 0.06), ('correlation', 0.06), ('reduction', 0.059), ('tid', 0.059), ('coefficient', 0.053), ('synthetic', 0.052), ('vel', 0.049), ('structure', 0.047), ('homogeneous', 0.047), ('measurement', 0.046), ('hvs', 0.046), ('similarity', 0.045), ('perceptual', 0.044), ('cmos', 0.044), ('dmos', 0.039), ('mhp', 0.039), ('measurements', 0.039), ('numerical', 0.038), ('numerous', 0.037), ('level', 0.036), ('icip', 0.035), ('divisive', 0.035), ('corrupted', 0.035), ('subjective', 0.032), ('levels', 0.031), ('conducted', 0.031), ('sheikh', 0.031), ('opinion', 0.031), ('kong', 0.03), ('buades', 0.029), ('coll', 0.029), ('prefer', 0.029), ('pages', 0.028), ('reference', 0.028), ('relatively', 0.027), ('live', 0.027), ('katkovnik', 0.027), ('foi', 0.026), ('pearson', 0.026), ('signal', 0.025), ('error', 0.025), ('evaluates', 0.024), ('perceptually', 0.023), ('hong', 0.023), ('mse', 0.023), ('presents', 0.023), ('ip', 0.023), ('perception', 0.023), ('impractical', 0.022), ('parameter', 0.022), ('curve', 0.022), ('input', 0.021), ('extremely', 0.021), ('matlab', 0.021), ('maps', 0.021), ('subject', 0.021), ('career', 0.021), ('evaluations', 0.021), ('digital', 0.02), ('structural', 0.02), ('noticeable', 0.02), ('qualitatively', 0.02), ('around', 0.019), ('city', 0.019), ('demonstrates', 0.019), ('estimated', 0.019), ('aims', 0.019), ('filters', 0.018), ('patches', 0.018), ('dependence', 0.018), ('maintains', 0.018), ('collaborative', 0.018), ('human', 0.018), ('proposes', 0.018), ('video', 0.018), ('coherence', 0.018), ('employed', 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 23 iccv-2013-A New Image Quality Metric for Image Auto-denoising

Author: Xiangfei Kong, Kuan Li, Qingxiong Yang, Liu Wenyin, Ming-Hsuan Yang

Abstract: This paper proposes a new non-reference image quality metric that can be adopted by the state-of-the-art image/video denoising algorithms for auto-denoising. The proposed metric is extremely simple and can be implemented in four lines of Matlab code1. The basic assumption employed by the proposed metric is that the noise should be independent of the original image. A direct measurement of this dependence is, however, impractical due to the relatively low accuracy of existing denoising methods. The proposed metric thus aims at maximizing the structure similarity between the input noisy image and the estimated image noise around homogeneous regions and the structure similarity between the input noisy image and the denoised image around highly-structured regions, and is computed as the linear correlation coefficient of the two corresponding structure similarity maps. Numerous experimental results demonstrate that the proposed metric not only outperforms the current state-of-the-art non-reference quality metric quantitatively and qualitatively, but also better maintains temporal coherence when used for video denoising.

2 0.25277129 223 iccv-2013-Joint Noise Level Estimation from Personal Photo Collections

Author: Yichang Shih, Vivek Kwatra, Troy Chinen, Hui Fang, Sergey Ioffe

Abstract: Personal photo albums are heavily biased towards faces of people, but most state-of-the-art algorithms for image denoising and noise estimation do not exploit facial information. We propose a novel technique for jointly estimating noise levels of all face images in a photo collection. Photos in a personal album are likely to contain several faces of the same people. While some of these photos would be clean and high quality, others may be corrupted by noise. Our key idea is to estimate noise levels by comparing multiple images of the same content that differ predominantly in their noise content. Specifically, we compare geometrically and photometrically aligned face images of the same person. Our estimation algorithm is based on a probabilistic formulation that seeks to maximize the joint probability of estimated noise levels across all images. We propose an approximate solution that decomposes this joint maximization into a two-stage optimization. The first stage determines the relative noise between pairs of images by pooling estimates from corresponding patch pairs in a probabilistic fashion. The second stage then jointly optimizes for all absolute noise parameters by conditioning them upon relative noise levels, which allows for a pairwise factorization of the probability distribution. We evaluate our noise estimation method using quantitative experiments to measure accuracy on synthetic data. Additionally, we employ the estimated noise levels for automatic denoising using “BM3D”, and evaluate the quality of denoising on real-world photos through a user study.

3 0.18083788 394 iccv-2013-Single-Patch Low-Rank Prior for Non-pointwise Impulse Noise Removal

Author: Ruixuan Wang, Emanuele Trucco

Abstract: This paper introduces a ‘low-rank prior’ for small oriented noise-free image patches: considering an oriented patch as a matrix, a low-rank matrix approximation is enough to preserve the texture details in the properly oriented patch. Based on this prior, we propose a single-patch method within a generalized joint low-rank and sparse matrix recovery framework to simultaneously detect and remove non-pointwise random-valued impulse noise (e.g., very small blobs). A weighting matrix is incorporated in the framework to encode an initial estimate of the spatial noise distribution. An accelerated proximal gradient method is adapted to estimate the optimal noise-free image patches. Experiments show the effectiveness of our framework in removing non-pointwise random-valued impulse noise.

4 0.16472185 312 iccv-2013-Perceptual Fidelity Aware Mean Squared Error

Author: Wufeng Xue, Xuanqin Mou, Lei Zhang, Xiangchu Feng

Abstract: How to measure the perceptual quality of natural images is an important problem in low level vision. It is known that the Mean Squared Error (MSE) is not an effective index to describe the perceptual fidelity of images. Numerous perceptual fidelity indices have been developed, while the representatives include the Structural SIMilarity (SSIM) index and its variants. However, most of those perceptual measures are nonlinear, and they cannot be easily adopted as an objective function to minimize in various low level vision tasks. Can MSE be perceptual fidelity aware after some minor adaptation? In this paper we propose a simple framework to enhance the perceptual fidelity awareness of MSE by introducing an l2-norm structural error term to it. Such a Structural MSE (SMSE) can lead to very competitive image quality assessment (IQA) results. More surprisingly, we show that by using certain structure extractors, SMSE can be further turned into a Gaussian smoothed MSE (i.e., the Euclidean distance between the original and distorted images after Gaussian smooth filtering), which is much simpler to calculate but achieves rather better IQA performance than SSIM. The so-called Perceptual-fidelity Aware MSE (PAMSE) can have great potentials in applications such as perceptual image coding and perceptual image restoration.

5 0.11612555 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition

Author: De-An Huang, Yu-Chiang Frank Wang

Abstract: Cross-domain image synthesis and recognition are typically considered as two distinct tasks in the areas of computer vision and pattern recognition. Therefore, it is not clear whether approaches addressing one task can be easily generalized or extended for solving the other. In this paper, we propose a unified model for coupled dictionary and feature space learning. The proposed learning model not only observes a common feature space for associating cross-domain image data for recognition purposes, the derived feature space is able to jointly update the dictionaries in each image domain for improved representation. This is why our method can be applied to both cross-domain image synthesis and recognition problems. Experiments on a variety of synthesis and recognition tasks such as single image super-resolution, cross-view action recognition, and sketchto-photo face recognition would verify the effectiveness of our proposed learning model.

6 0.11495528 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold

7 0.11227466 392 iccv-2013-Similarity Metric Learning for Face Recognition

8 0.10874195 101 iccv-2013-DCSH - Matching Patches in RGBD Images

9 0.09480115 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions

10 0.088064969 51 iccv-2013-Anchored Neighborhood Regression for Fast Example-Based Super-Resolution

11 0.087378442 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

12 0.080651432 373 iccv-2013-Saliency and Human Fixations: State-of-the-Art and Study of Comparison Metrics

13 0.079121038 354 iccv-2013-Robust Dictionary Learning by Error Source Decomposition

14 0.065122157 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

15 0.063412726 35 iccv-2013-Accurate Blur Models vs. Image Priors in Single Image Super-resolution

16 0.060830709 212 iccv-2013-Image Set Classification Using Holistic Multiple Order Statistics Features and Localized Multi-kernel Metric Learning

17 0.059501015 351 iccv-2013-Restoring an Image Taken through a Window Covered with Dirt or Rain

18 0.058238026 177 iccv-2013-From Point to Set: Extend the Learning of Distance Metrics

19 0.056601137 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes

20 0.048977081 222 iccv-2013-Joint Learning of Discriminative Prototypes and Large Margin Nearest Neighbor Classifiers


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.117), (1, -0.017), (2, -0.038), (3, -0.036), (4, -0.089), (5, 0.003), (6, 0.003), (7, -0.028), (8, 0.009), (9, -0.044), (10, -0.022), (11, -0.052), (12, 0.011), (13, -0.013), (14, 0.003), (15, 0.025), (16, -0.029), (17, -0.042), (18, -0.011), (19, 0.065), (20, 0.002), (21, 0.019), (22, -0.023), (23, -0.028), (24, -0.012), (25, 0.125), (26, 0.134), (27, -0.035), (28, -0.076), (29, 0.021), (30, -0.025), (31, -0.021), (32, 0.096), (33, 0.166), (34, 0.002), (35, -0.039), (36, 0.163), (37, -0.074), (38, 0.067), (39, -0.063), (40, 0.003), (41, 0.161), (42, 0.002), (43, -0.06), (44, 0.251), (45, 0.062), (46, 0.104), (47, 0.002), (48, 0.006), (49, -0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95851654 23 iccv-2013-A New Image Quality Metric for Image Auto-denoising

Author: Xiangfei Kong, Kuan Li, Qingxiong Yang, Liu Wenyin, Ming-Hsuan Yang

Abstract: This paper proposes a new non-reference image quality metric that can be adopted by the state-of-the-art image/video denoising algorithms for auto-denoising. The proposed metric is extremely simple and can be implemented in four lines of Matlab code1. The basic assumption employed by the proposed metric is that the noise should be independent of the original image. A direct measurement of this dependence is, however, impractical due to the relatively low accuracy of existing denoising methods. The proposed metric thus aims at maximizing the structure similarity between the input noisy image and the estimated image noise around homogeneous regions and the structure similarity between the input noisy image and the denoised image around highly-structured regions, and is computed as the linear correlation coefficient of the two corresponding structure similarity maps. Numerous experimental results demonstrate that the proposed metric not only outperforms the current state-of-the-art non-reference quality metric quantitatively and qualitatively, but also better maintains temporal coherence when used for video denoising.

2 0.79660243 394 iccv-2013-Single-Patch Low-Rank Prior for Non-pointwise Impulse Noise Removal

Author: Ruixuan Wang, Emanuele Trucco

Abstract: This paper introduces a ‘low-rank prior’ for small oriented noise-free image patches: considering an oriented patch as a matrix, a low-rank matrix approximation is enough to preserve the texture details in the properly oriented patch. Based on this prior, we propose a single-patch method within a generalized joint low-rank and sparse matrix recovery framework to simultaneously detect and remove non-pointwise random-valued impulse noise (e.g., very small blobs). A weighting matrix is incorporated in the framework to encode an initial estimate of the spatial noise distribution. An accelerated proximal gradient method is adapted to estimate the optimal noise-free image patches. Experiments show the effectiveness of our framework in removing non-pointwise random-valued impulse noise.

3 0.76660085 223 iccv-2013-Joint Noise Level Estimation from Personal Photo Collections

Author: Yichang Shih, Vivek Kwatra, Troy Chinen, Hui Fang, Sergey Ioffe

Abstract: Personal photo albums are heavily biased towards faces of people, but most state-of-the-art algorithms for image denoising and noise estimation do not exploit facial information. We propose a novel technique for jointly estimating noise levels of all face images in a photo collection. Photos in a personal album are likely to contain several faces of the same people. While some of these photos would be clean and high quality, others may be corrupted by noise. Our key idea is to estimate noise levels by comparing multiple images of the same content that differ predominantly in their noise content. Specifically, we compare geometrically and photometrically aligned face images of the same person. Our estimation algorithm is based on a probabilistic formulation that seeks to maximize the joint probability of estimated noise levels across all images. We propose an approximate solution that decomposes this joint maximization into a two-stage optimization. The first stage determines the relative noise between pairs of images by pooling estimates from corresponding patch pairs in a probabilistic fashion. The second stage then jointly optimizes for all absolute noise parameters by conditioning them upon relative noise levels, which allows for a pairwise factorization of the probability distribution. We evaluate our noise estimation method using quantitative experiments to measure accuracy on synthetic data. Additionally, we employ the estimated noise levels for automatic denoising using “BM3D”, and evaluate the quality of denoising on real-world photos through a user study.

4 0.73224795 312 iccv-2013-Perceptual Fidelity Aware Mean Squared Error

Author: Wufeng Xue, Xuanqin Mou, Lei Zhang, Xiangchu Feng

Abstract: How to measure the perceptual quality of natural images is an important problem in low level vision. It is known that the Mean Squared Error (MSE) is not an effective index to describe the perceptual fidelity of images. Numerous perceptual fidelity indices have been developed, while the representatives include the Structural SIMilarity (SSIM) index and its variants. However, most of those perceptual measures are nonlinear, and they cannot be easily adopted as an objective function to minimize in various low level vision tasks. Can MSE be perceptual fidelity aware after some minor adaptation? In this paper we propose a simple framework to enhance the perceptual fidelity awareness of MSE by introducing an l2-norm structural error term to it. Such a Structural MSE (SMSE) can lead to very competitive image quality assessment (IQA) results. More surprisingly, we show that by using certain structure extractors, SMSE can be further turned into a Gaussian smoothed MSE (i.e., the Euclidean distance between the original and distorted images after Gaussian smooth filtering), which is much simpler to calculate but achieves rather better IQA performance than SSIM. The so-called Perceptual-fidelity Aware MSE (PAMSE) can have great potentials in applications such as perceptual image coding and perceptual image restoration.

5 0.64783144 101 iccv-2013-DCSH - Matching Patches in RGBD Images

Author: Yaron Eshet, Simon Korman, Eyal Ofek, Shai Avidan

Abstract: We extend patch-based methods to work on patches in 3D space. We start with Coherency Sensitive Hashing [12] (CSH), which is an algorithm for matching patches between two RGB images, and extend it to work with RGBD images. This is done by warping all 3D patches to a common virtual plane in which CSH is performed. To avoid noise due to warping of patches of various normals and depths, we estimate a group of dominant planes and compute CSH on each plane separately, before merging the matching patches. The result is DCSH - an algorithm that matches world (3D) patches in order to guide the search for image plane matches. An independent contribution is an extension of CSH, which we term Social-CSH. It allows a major speedup of the k nearest neighbor (kNN) version of CSH - its runtime growing linearly, rather than quadratically, in k. Social-CSH is used as a subcomponent of DCSH when many NNs are required, as in the case of image denoising. We show the benefits of using depth information for image reconstruction and image denoising, demonstrated on several RGBD images.

6 0.5982244 15 iccv-2013-A Generalized Low-Rank Appearance Model for Spatio-temporally Correlated Rain Streaks

7 0.57830608 351 iccv-2013-Restoring an Image Taken through a Window Covered with Dirt or Rain

8 0.51871651 19 iccv-2013-A Learning-Based Approach to Reduce JPEG Artifacts in Image Matting

9 0.51164877 98 iccv-2013-Cross-Field Joint Image Restoration via Scale Map

10 0.47553766 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions

11 0.47023046 364 iccv-2013-SGTD: Structure Gradient and Texture Decorrelating Regularization for Image Decomposition

12 0.46024704 357 iccv-2013-Robust Matrix Factorization with Unknown Noise

13 0.44387275 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold

14 0.43781313 25 iccv-2013-A Novel Earth Mover's Distance Methodology for Image Matching with Gaussian Mixture Models

15 0.43111083 153 iccv-2013-Face Recognition Using Face Patch Networks

16 0.41736314 60 iccv-2013-Bayesian Robust Matrix Factorization for Image and Video Processing

17 0.41329169 227 iccv-2013-Large-Scale Image Annotation by Efficient and Robust Kernel Metric Learning

18 0.39887744 313 iccv-2013-Person Re-identification by Salience Matching

19 0.39187309 177 iccv-2013-From Point to Set: Extend the Learning of Distance Metrics

20 0.38488004 222 iccv-2013-Joint Learning of Discriminative Prototypes and Large Margin Nearest Neighbor Classifiers


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.05), (7, 0.022), (26, 0.093), (31, 0.034), (35, 0.013), (40, 0.016), (42, 0.111), (48, 0.011), (55, 0.255), (64, 0.024), (73, 0.116), (89, 0.125), (98, 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.75254345 23 iccv-2013-A New Image Quality Metric for Image Auto-denoising

Author: Xiangfei Kong, Kuan Li, Qingxiong Yang, Liu Wenyin, Ming-Hsuan Yang

Abstract: This paper proposes a new non-reference image quality metric that can be adopted by the state-of-the-art image/video denoising algorithms for auto-denoising. The proposed metric is extremely simple and can be implemented in four lines of Matlab code1. The basic assumption employed by the proposed metric is that the noise should be independent of the original image. A direct measurement of this dependence is, however, impractical due to the relatively low accuracy of existing denoising methods. The proposed metric thus aims at maximizing the structure similarity between the input noisy image and the estimated image noise around homogeneous regions and the structure similarity between the input noisy image and the denoised image around highly-structured regions, and is computed as the linear correlation coefficient of the two corresponding structure similarity maps. Numerous experimental results demonstrate that the proposed metric not only outperforms the current state-of-the-art non-reference quality metric quantitatively and qualitatively, but also better maintains temporal coherence when used for video denoising.

2 0.75136983 224 iccv-2013-Joint Optimization for Consistent Multiple Graph Matching

Author: Junchi Yan, Yu Tian, Hongyuan Zha, Xiaokang Yang, Ya Zhang, Stephen M. Chu

Abstract: The problem of graph matching in general is NP-hard and approaches have been proposed for its suboptimal solution, most focusing on finding the one-to-one node mapping between two graphs. A more general and challenging problem arises when one aims to find consistent mappings across a number of graphs more than two. Conventional graph pair matching methods often result in mapping inconsistency since the mapping between two graphs can either be determined by pair mapping or by an additional anchor graph. To address this issue, a novel formulation is derived which is maximized via alternating optimization. Our method enjoys several advantages: 1) the mappings are jointly optimized rather than sequentially performed by applying pair matching, allowing the global affinity information across graphs to be propagated and explored; 2) the number of concerned variables to optimize is linear in the number of graphs, being superior to local pair matching resulting in O(n2) variables; 3) the mapping consistency constraints are analytically satisfied during optimization; and 4) off-the-shelf graph pair matching solvers can be reused under the proposed framework in an ‘out-of-the-box’ fashion. Competitive results on both the synthesized data and the real data are reported, by varying the level of deformation, outliers and edge densities.

3 0.69655502 312 iccv-2013-Perceptual Fidelity Aware Mean Squared Error

Author: Wufeng Xue, Xuanqin Mou, Lei Zhang, Xiangchu Feng

Abstract: How to measure the perceptual quality of natural images is an important problem in low level vision. It is known that the Mean Squared Error (MSE) is not an effective index to describe the perceptual fidelity of images. Numerous perceptual fidelity indices have been developed, while the representatives include the Structural SIMilarity (SSIM) index and its variants. However, most of those perceptual measures are nonlinear, and they cannot be easily adopted as an objective function to minimize in various low level vision tasks. Can MSE be perceptual fidelity aware after some minor adaptation? In this paper we propose a simple framework to enhance the perceptual fidelity awareness of MSE by introducing an l2-norm structural error term to it. Such a Structural MSE (SMSE) can lead to very competitive image quality assessment (IQA) results. More surprisingly, we show that by using certain structure extractors, SMSE can be further turned into a Gaussian smoothed MSE (i.e., the Euclidean distance between the original and distorted images after Gaussian smooth filtering), which is much simpler to calculate but achieves rather better IQA performance than SSIM. The so-called Perceptual-fidelity Aware MSE (PAMSE) can have great potentials in applications such as perceptual image coding and perceptual image restoration.

4 0.69322681 406 iccv-2013-Style-Aware Mid-level Representation for Discovering Visual Connections in Space and Time

Author: Yong Jae Lee, Alexei A. Efros, Martial Hebert

Abstract: We present a weakly-supervised visual data mining approach that discovers connections between recurring midlevel visual elements in historic (temporal) and geographic (spatial) image collections, and attempts to capture the underlying visual style. In contrast to existing discovery methods that mine for patterns that remain visually consistent throughout the dataset, our goal is to discover visual elements whose appearance changes due to change in time or location; i.e., exhibit consistent stylistic variations across the label space (date or geo-location). To discover these elements, we first identify groups of patches that are stylesensitive. We then incrementally build correspondences to find the same element across the entire dataset. Finally, we train style-aware regressors that model each element’s range of stylistic differences. We apply our approach to date and geo-location prediction and show substantial improvement over several baselines that do not model visual style. We also demonstrate the method’s effectiveness on the related task of fine-grained classification.

5 0.68223572 183 iccv-2013-Geometric Registration Based on Distortion Estimation

Author: Wei Zeng, Mayank Goswami, Feng Luo, Xianfeng Gu

Abstract: Surface registration plays a fundamental role in many applications in computer vision and aims at finding a one-to-one correspondence between surfaces. Conformal mapping based surface registration methods conformally map 2D/3D surfaces onto 2D canonical domains and perform the matching on the 2D plane. This registration framework reduces dimensionality, and the result is intrinsic to the Riemannian metric and invariant under isometric deformation. However, conformal mapping will be affected by inconsistent boundaries and non-isometric deformations of surfaces. In this work, we quantify the effects of boundary variation and non-isometric deformation to conformal mappings, and give the theoretical upper bounds for the distortions of conformal mappings under these two factors. Besides giving the thorough theoretical proofs of the theorems, we verified them by concrete experiments using 3D human facial scans with dynamic expressions and varying boundaries. Furthermore, we used the distortion estimates for reducing the search range in feature matching of surface registration applications. The experimental results are consistent with the theoretical predictions and also demonstrate the performance improvements in feature tracking.

6 0.6814695 32 iccv-2013-A Unified Rolling Shutter and Motion Blur Model for 3D Visual Registration

7 0.66944677 363 iccv-2013-Rolling Shutter Stereo

8 0.65799129 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video

9 0.65035021 394 iccv-2013-Single-Patch Low-Rank Prior for Non-pointwise Impulse Noise Removal

10 0.63757223 98 iccv-2013-Cross-Field Joint Image Restoration via Scale Map

11 0.62524813 144 iccv-2013-Estimating the 3D Layout of Indoor Scenes and Its Clutter from Depth Sensors

12 0.61854088 223 iccv-2013-Joint Noise Level Estimation from Personal Photo Collections

13 0.61451465 58 iccv-2013-Bayesian 3D Tracking from Monocular Video

14 0.61320049 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

15 0.61247838 150 iccv-2013-Exemplar Cut

16 0.61108065 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

17 0.60767436 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

18 0.60693216 399 iccv-2013-Spoken Attributes: Mixing Binary and Relative Attributes to Say the Right Thing

19 0.60683548 330 iccv-2013-Proportion Priors for Image Sequence Segmentation

20 0.60679126 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions