cvpr cvpr2013 cvpr2013-216 knowledge-graph by maker-knowledge-mining

216 cvpr-2013-Improving Image Matting Using Comprehensive Sampling Sets


Source: pdf

Author: Ehsan Shahrian, Deepu Rajan, Brian Price, Scott Cohen

Abstract: In this paper, we present a new image matting algorithm that achieves state-of-the-art performance on a benchmark dataset of images. This is achieved by solving two major problems encountered by current sampling based algorithms. The first is that the range in which the foreground and background are sampled is often limited to such an extent that the true foreground and background colors are not present. Here, we describe a method by which a more comprehensive and representative set of samples is collected so as not to miss out on the true samples. This is accomplished by expanding the sampling range for pixels farther from the foreground or background boundary and ensuring that samples from each color distribution are included. The second problem is the overlap in color distributions of foreground and background regions. This causes sampling based methods to fail to pick the correct samples for foreground and background. Our design of an objective function forces those foreground and background samples to be picked that are generated from well-separated distributions. Comparison on the dataset at and evaluation by www.alphamatting.com shows that the proposed method ranks first in terms of error measures used in the website.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 Abstract In this paper, we present a new image matting algorithm that achieves state-of-the-art performance on a benchmark dataset of images. [sent-6, score-0.6]

2 The first is that the range in which the foreground and background are sampled is often limited to such an extent that the true foreground and background colors are not present. [sent-8, score-1.192]

3 This is accomplished by expanding the sampling range for pixels farther from the foreground or background boundary and ensuring that samples from each color distribution are included. [sent-10, score-1.119]

4 The second problem is the overlap in color distributions of foreground and background regions. [sent-11, score-0.756]

5 This causes sampling based methods to fail to pick the correct samples for foreground and background. [sent-12, score-0.733]

6 Our design of an objective function forces those foreground and background samples to be picked that are generated from well-separated distributions. [sent-13, score-0.756]

7 Introduction Accurate extraction of a foreground object from an image is known as alpha or digital matting. [sent-18, score-0.772]

8 Typically, matting approaches rely on constraints such as assumptions on image statistics [10, 9] or the availability of a trimap to reduce the solution space. [sent-26, score-0.668]

9 Trimaps partition the image into three regions - known foreground, known background and unknown regions that consist of a mixture of foreground (F) and background (B) colors. [sent-27, score-1.108]

10 Current alpha matting approaches can be categorized into alpha propagation based and color sampling based methods. [sent-29, score-1.534]

11 Alpha propagation based matting methods [10, 15, 6, 8, 2] assume that neighboring pixels are correlated under some image statistics and use their affinities to propagate alpha values of known regions toward unknown ones. [sent-30, score-1.306]

12 A closed form solution for alpha matting is proposed in [10] by minimizing a quadratic cost function based on α. [sent-31, score-0.93]

13 The assumptions of large kernels by [8] and local color line of [10] are relaxed in KNN matting [2] using nonlocal principles and K nearest neighbors. [sent-32, score-0.743]

14 Color sampling based methods collect a set of known foreground and background samples to estimate alpha values of unknown pixels. [sent-33, score-1.495]

15 Different combinations of spatial, photometric and probabilistic characteristics of an image are used [5, 18] to find the known samples that best represent the true foreground and background colors of unknown pixels. [sent-34, score-1.187]

16 Once the best known foreground and background samples are selected for pixel z, its alpha value is computed as αz = ((Iz − Bz) · (Fz − Bz)) / ‖Fz − Bz‖². [sent-35, score-1.186]
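
For reference, a minimal Python sketch (my own illustration of this standard compositing-based estimate, not code from the paper) of computing the alpha value of one pixel from a candidate (F, B) pair:

```python
import numpy as np

def estimate_alpha(I_z, F_z, B_z, eps=1e-8):
    """Compositing-based alpha estimate for one pixel.

    I_z, F_z, B_z are RGB colors (length-3 arrays). The observed color is
    projected onto the line joining the candidate foreground and background
    colors, and the result is clipped to [0, 1].
    """
    I_z, F_z, B_z = map(np.asarray, (I_z, F_z, B_z))
    diff = F_z - B_z
    alpha = np.dot(I_z - B_z, diff) / (np.dot(diff, diff) + eps)
    return float(np.clip(alpha, 0.0, 1.0))
```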

17 Parametric sampling methods like [3, 13, 16] usually fit parametric statistical models to the known foreground and background samples and then estimate alpha by considering the distance of unknown pixels to known foreground and background distributions. [sent-39, score-2.096]

18 Non-parametric methods including [11, 1, 18, 5, 7, 14] simply collect a set of known F and B samples to estimate the alpha values of unknown pixels. [sent-40, score-0.852]

19 (a) Original Image and Sampling strategies of Proposed, Robust, Shared and Global matting methods. [sent-70, score-0.573]

20 (b) Ground truth matte and estimated mattes by Proposed, Robust [18], Shared [5] and Global matting [7] methods. [sent-71, score-0.943]

21 It degrades when the true foreground and background colors of unknown pixels are not in the sample sets. [sent-73, score-0.95]

22 A comprehensive review on image matting methods can be found in [17]. [sent-77, score-0.68]

23 In the knockout system [1], known regions are extrapolated into the unknown region and a weighted sum of known samples is used to estimate the true foreground and background of unknown samples. [sent-79, score-1.55]

24 Robust matting [18] collects a few samples that are spatially close to the unknown pixel as shown in the first row of Fig. [sent-81, score-1.113]

25 1(c) in which the unknown pixel is shown in yellow and the foreground and background samples are shown in red and blue, respectively. [sent-82, score-0.95]

26 The selection of best known background and foreground samples from the candidate set is done with respect to a color fitness parameter. [sent-83, score-1.021]

27 It works better than the knockout system because only good samples that linearly explain the observed color of unknown pixels are used for matting. [sent-84, score-0.61]

28 However, the quality of estimated mattes degrades when the true samples are not in the sets of known samples. [sent-85, score-0.623]

29 Shared matting [5] divides the image plane into disjoint sectors containing equal planar angles and collects samples that lie along rays emanating from unknown pixels, as shown in the first row of Fig. [sent-86, score-1.048]

30 It collects these samples from the boundaries of foreground and background regions as specified by a trimap. [sent-89, score-0.845]

31 The weighted color and texture matting [14] uses the same sampling approach. [sent-91, score-0.959]

32 In order to avoid missing true samples, the largest set of known samples among all sampling based approaches is built by collecting all known boundary samples in Global matting [7], as shown in the first row of Fig. [sent-93, score-1.369]

33 A simple cost function and an efficient random search are used to find the best samples among a huge number of known samples for every unknown pixel. [sent-95, score-0.697]

34 Once again, the true samples may still be missed if they are not on the boundary of the trimap from where the samples are collected. [sent-97, score-0.675]

35 The doll has black and light brown colors but only light brown color samples are on or near the foreground boundaries. [sent-103, score-0.917]

36 Thus, the sets of known foreground samples collected by the robust and shared matting methods do not contain black colors; therefore, the black region of the doll is wrongly estimated as background, as shown in Fig. [sent-104, score-1.875]

37 1(e) for global sampling, the matte is still inaccurate because foreground black color samples are inside the region and are excluded from the set of candidate samples. [sent-108, score-1.07]

38 Hence, it is important that the set of candidate samples be comprehensive enough to represent all color variations in the foreground and background regions. [sent-109, score-1.041]

39 The color statistics help especially when the foreground and background color distributions overlap, which leads to erroneous samples for F and B. [sent-112, score-1.118]

40 Proposed Method In this section, we first describe how a comprehensive set of samples is generated, followed by the process by which candidate samples are selected. [sent-120, score-0.649]

41 Next, we illustrate the problem when the F and B color distributions overlap and finally formulate an objective function whose optimization leads to the true foreground-background (F, B) pair for an unknown sample. [sent-121, score-0.599]

42 First, the range over which samples are gathered is varied according to the distance of a given pixel to the known foreground and background. [sent-126, score-0.735]

43 The motivation for this is that the closer an unknown sample is to the known regions, the higher the likelihood of a high correlation with known samples, and thus the known samples can estimate the true samples robustly. [sent-127, score-1.25]

44 By removing this restriction and instead adjusting the sampling range, we collect samples near boundaries as well as from inside F and B regions generating a more comprehensive sample set. [sent-130, score-0.659]
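
A rough sketch of this distance-adaptive sampling range, under my own simplifying assumptions (the trimap coding, band widths, helper names and thresholds are illustrative, not the paper's exact procedure): the known foreground is split into bands by depth from its boundary, and unknown pixels farther from the foreground are allowed to draw samples from more bands.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def foreground_bands(trimap, num_bands=4):
    """Split the known foreground into `num_bands` bands by distance to its
    boundary; band 1 hugs the boundary, the last band is the deep interior.
    Assumes trimap coding 255 = foreground, 0 = background, 128 = unknown."""
    fg = trimap == 255
    depth = distance_transform_edt(fg)                  # depth inside foreground
    edges = np.quantile(depth[fg], np.linspace(0, 1, num_bands + 1))
    bands = np.digitize(depth, edges[1:-1]) + 1         # values 1..num_bands
    return np.where(fg, bands, 0)

def num_allowed_bands(dist_to_fg, thresholds=(8, 20, 40)):
    """Map an unknown pixel's distance to the foreground boundary to how many
    foreground bands it may sample from (thresholds are placeholders)."""
    return 1 + int(np.searchsorted(thresholds, dist_to_fg))
```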

45 (a) Original Image, (b) Colored trimap showing foreground, background and unknown region, (c) - (f) Region 1, Region 2, Region 3 and Region 4 in foreground from where potential samples are obtained, (g) Unknown samples at varying distance from foreground. [sent-265, score-1.195]

46 Each of them will receive known foreground samples from different regions based on its distance to the foreground, e. [sent-266, score-0.738]

47 The trimap is divided into regions to obtain a set of known F and B samples which form foreground- background pairs for an unknown pixel. [sent-271, score-0.803]

48 2(a) shows part of an original image whose trimap, consisting of background, foreground and unknown regions labeled as B, F and U, is shown in Fig. [sent-276, score-0.702]

49 The foreground region is divided into four regions (for illustration only), labeled as activated known foreground samples, as shown in Fig. [sent-278, score-0.974]

50 The number of regions is determined by the size of the foreground region; however, we need as many regions as are required to cover the entire foreground, as seen in Fig. [sent-287, score-0.888]

51 In the first level, the samples are clustered with respect to color through Gaussian mixture models (GMM), in which the number of GMM components is the same as the number of peaks in the color histogram of the samples in the region. [sent-295, score-0.724]
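
A small sketch of this clustering step, assuming scikit-learn's GaussianMixture and a smoothed 1-D intensity histogram as a stand-in for the color-histogram peak count (the peak detector and the smoothing are my assumptions; the paper does not pin them down here):

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.mixture import GaussianMixture

def cluster_region_colors(colors, bins=32):
    """Cluster the (N, 3) RGB colors of one known region and return the
    cluster means, which serve as candidate foreground/background samples.
    The number of GMM components equals the number of histogram peaks."""
    intensity = colors.mean(axis=1)
    hist, _ = np.histogram(intensity, bins=bins)
    hist = np.convolve(hist, np.ones(3) / 3.0, mode="same")  # light smoothing
    peaks, _ = find_peaks(hist)
    n_components = max(1, len(peaks))
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=0).fit(colors)
    return gmm.means_
```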

52 Thus, we obtain a comprehensive sample set that includes samples from all color distributions thereby handling the missing samples problem. [sent-298, score-0.877]

53 We observe that constructing a comprehensive sampling set covering all possible foreground and background colors is more important than whether that set was constructed parametrically or non-parametrically. [sent-300, score-0.886]

54 Choosing candidate samples Each pixel in the unknown region collects a set of candidate samples, each in the form of a foreground-background pair. [sent-303, score-0.946]

55 Since it is close to the foreground region, its candidate pixels will come from the region closest to the boundary of the foreground, viz. [sent-307, score-0.621]

56 Thus, the foreground candidate samples for pixel a are the means of the clusters generated in the corresponding region, as discussed in the previous subsection. [sent-311, score-0.754]

57 However, the pixel marked b is further away from the boundary and hence, would need a larger collection of foreground candidate samples. [sent-312, score-0.589]

58 Finally, the pixel marked c is very far from the boundary and the entire foreground region is utilized to generate foreground samples for this pixel. [sent-315, score-1.153]

59 Using exactly the same method, background candidate samples can be obtained for the unknown pixels. [sent-318, score-0.602]

60 The candidate samples for each unknown pixel are in the form of foreground-background pairs (F, B). [sent-319, score-0.569]

61 For example, if an unknown pixel obtains 4 candidate foreground samples and 3 candidate background samples, then the total number of candidate (F, B) pairs is 12. [sent-320, score-1.226]
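
In code, assembling these candidates is just the Cartesian product of the two lists (a trivial sketch; the function and argument names are mine):

```python
from itertools import product

def candidate_pairs(fg_candidates, bg_candidates):
    """All (F, B) pairs for one unknown pixel. With 4 foreground and
    3 background candidates this yields 12 pairs."""
    return list(product(fg_candidates, bg_candidates))
```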

62 (a) Color distribution of foreground and background regions, (b) Effect of overlapped distribution on α. [sent-335, score-0.554]

63 Handling overlapping color distributions In addition to selecting a representative set of candidate samples, the proposed method also addresses a problem encountered in current sampling based matting methods: overlapping color distributions of the foreground and background regions. [sent-338, score-1.9]

64 3(a), where the overlapping color distributions of the known foreground and background regions that generated the (F, B) pairs are shown in red and blue. [sent-342, score-1.005]

65 The estimated alpha values show that the pixel is considered as foreground by (F1, B1) and as background by (F2, B2). [sent-345, score-0.925]

66 We propose that the (F, B) pair that is generated from the least overlapping color distributions of a foreground and background cluster should be selected. [sent-348, score-0.82]
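
As a rough illustration of this idea only (not the paper's actual objective function), one could score how separated the two clusters behind a candidate pair are, for example by the distance between the cluster means relative to their spreads, and prefer the pair with the highest score:

```python
import numpy as np

def separation_score(mean_f, cov_f, mean_b, cov_b, eps=1e-8):
    """Illustrative separation measure for the foreground and background
    clusters that generated an (F, B) pair: squared distance between the
    cluster means divided by their average variance. Higher values mean
    less overlap. This is a stand-in, not the objective from the paper."""
    d2 = float(np.sum((np.asarray(mean_f) - np.asarray(mean_b)) ** 2))
    spread = float(np.trace(cov_f) + np.trace(cov_b)) / len(mean_f)
    return d2 / (spread + eps)
```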

67 Selection of best (F, B) pair Once the set of candidate (F, B) pairs is determined for unknown pixels, the task is to select the best pair that can represent the true foreground and background colors and estimate its α using eq. [sent-352, score-1.045]

68 Pre- and Post-processing The proposed method uses a pre-processing step to expand known regions into the unknown region according to the following condition: an unknown pixel z is considered as foreground if, for a pixel q ∈ F, (D(z, q) < Ethr) ∧ (‖Iz − Iq‖ ≤ Cthr).
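
A minimal sketch of this expansion test for a single unknown pixel, assuming a Euclidean spatial distance D and a Euclidean color distance; the threshold values below are placeholders, not the Ethr and Cthr used in the paper:

```python
import numpy as np

def expand_to_foreground(z_pos, z_color, fg_positions, fg_colors,
                         E_thr=9.0, C_thr=0.04):
    """Return True if unknown pixel z should be relabeled as foreground,
    i.e. some known foreground pixel q is both spatially close
    (D(z, q) < E_thr) and similar in color (||I_z - I_q|| <= C_thr)."""
    d_spatial = np.linalg.norm(fg_positions - z_pos, axis=1)
    d_color = np.linalg.norm(fg_colors - z_color, axis=1)
    return bool(np.any((d_spatial < E_thr) & (d_color <= C_thr)))
```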

69 The alpha matte obtained by estimating α for each pixel using the best (F, B) pair in eq. [sent-400, score-0.543]

70 In particular, we adopt the postprocessing method of [5] where a cost function consisting of the data term ˆα and a confidence value f together with a smoothness term consisting of the matting Laplacian [10] is minimized with respect to α. [sent-402, score-0.573]

71 Σ is a diagonal matrix with values 1 for known foreground and background pixels and 0 for unknown ones, while the diagonal matrix Γ̂ has values 0 for known foreground and background pixels and f for unknown pixels. [sent-406, score-1.568]
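
A hedged sketch of how such a post-processing step can be posed as a sparse linear solve, following the generic form of Laplacian-based smoothing (construction of the matting Laplacian L is omitted, and the weight lam is a placeholder; this is my reading of the general recipe, not the paper's exact implementation):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def smooth_alpha(L, alpha_hat, known_mask, confidence, lam=100.0):
    """Refine an initial matte alpha_hat (flattened, length N) by minimizing
    a matting-Laplacian smoothness term plus a data term weighted by lam on
    known pixels (Sigma) and by the confidence f on unknown pixels (Gamma).

    L          : (N, N) sparse matting Laplacian, assumed precomputed
    known_mask : boolean length-N array, True for known F/B pixels
    confidence : length-N array with the confidence values f
    """
    sigma = known_mask.astype(float)                 # 1 on known, 0 on unknown
    gamma = np.where(known_mask, 0.0, confidence)    # f on unknown, 0 on known
    W = sp.diags(lam * sigma + gamma)
    alpha = spsolve((L + W).tocsc(), W @ alpha_hat)  # (L + W) a = W a_hat
    return np.clip(alpha, 0.0, 1.0)
```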

72 Experimental Results In the first experiment, the performance of the proposed matting method is evaluated on a benchmark dataset [12]. [sent-408, score-0.6]

73 Finally, we evaluate the performance on the dataset described in [14], which contains images with significant overlap in color distributions of foreground and background; we show that the proposed method outperforms other color sampling based methods. [sent-414, score-0.925]

74 Evaluation on benchmark dataset Table 1 shows the quantitative evaluation of the proposed matting method when compared to current matting methods. [sent-417, score-1.173]

75 Evaluation of matting methods by the alpha matting website over a set of benchmark images with three trimaps, with respect to SAD, MSE and Gradient errors. [sent-418, score-0.98]

76 Estimated mattes by (c) Robust [18], (d) Shared [5], (e) Global sampling [7], (f) Weighted color and texture [14] and (g) proposed method. [sent-524, score-0.553]

77 SVR matting [19], Weighted Color and Texture [14], Shared [5] and Global [7] matting methods have SAD ranks of 5. [sent-533, score-1.196]

78 A visual comparison of the proposed method with some other sampling based matting methods is shown in Fig. [sent-536, score-0.726]

79 The estimated mattes for zoomed areas by sampling based matting methods of Robust [18], Shared [5], Global sampling [7] and Weighted Color and Texture [14] are shown in Fig. [sent-540, score-1.229]

80 The elephant (first row) has a color similar to the background, which makes it hard for color sampling based methods (Robust, Shared and Global) to discriminate between foreground and background, as shown in Fig. [sent-542, score-0.931]

81 Using only boundary samples makes it hard for the Robust, Shared and Global matting methods to estimate the true foreground colors for the plant’s leaves in the unknown region. [sent-544, score-1.027]

82 However, it uses a sampling strategy similar to shared matting and hence still suffers from the missing true samples problem, as seen in Fig. [sent-548, score-1.201]

83 The proposed method takes advantage of comprehensive sampling to cover all true samples and also selects the best foreground and background pairs generated from well-separated distributions. [sent-550, score-1.13]

84 Moreover, the standard deviation of matting methods over three types of trimaps on the set of benchmark images with respect to SAD is computed. [sent-553, score-0.684]

85 625, among more than 25 matting methods on the site. [sent-555, score-0.573]

86 Estimated mattes by (d) Proposed, (e) Global [7], (f) Shared [5] and (g) Robust [18] matting methods. [sent-605, score-0.774]

87 Missing true samples Current sampling based matting methods fail to estimate the true foreground and background colors of pixels when the set of collected samples does not contain the true colors. [sent-608, score-2.1]

88 Zoomed regions and ground truth alpha mattes of zoomed regions are shown in Fig. [sent-611, score-0.771]

89 Because of this, the samples collected by the global, shared and robust matting methods do not cover the blue color distribution, and these methods are unable to estimate the true foreground colors of these parts of the ball. [sent-614, score-1.698]

90 A similar situation arises in the second image whereby global, shared and robust matting miss the true black and white colors in the foreground as shown in Figs. [sent-617, score-1.301]

91 The proposed method uses samples inside the known regions to complement the set of highly correlated boundary samples, solving the problem of missing true samples by sampling from all color distributions. [sent-619, score-1.313]

92 The color similarity between foreground and background in the images is illustrated in Fig. [sent-627, score-0.634]

93 In the first row, the background texture contains colors similar to the leaves making it hard for sampling based methods to find the true samples as shown in Fig. [sent-629, score-0.747]

94 False correlations are increased due to color similarity for closed form matting in Fig. [sent-631, score-0.751]

95 For the flower image, the performance of weighted color and texture matting is better, probably because the texture is not as strong as in the other two images. [sent-638, score-0.861]

96 Conclusion A new sampling based image matting method is proposed that builds a comprehensive set of known samples by sampling from all color distributions in the known regions. [sent-648, score-1.77]

97 This set includes highly correlated boundary samples as well as samples inside the F and B regions to capture all color variations and solve the problem of missing true samples. [sent-649, score-0.913]

98 Moreover, the problem of overlapping color distributions of foreground and background regions is addressed by selecting (F, B) pairs generated from well-separated distributions. [sent-650, score-0.641]

99 Visual comparison of matting methods on the dataset of [14] to illustrate cases when foreground and background color distributions overlap. [sent-651, score-1.324]

100 (a) Original image, (b) Zoomed area, (c) Foreground and background color distributions on red channel, (d) Closed form [10], (e) Robust [18], (f) Shared [5], (g) Weighted color and texture [14], (h) Proposed method, (i) Ground truth. [sent-652, score-0.566]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('matting', 0.573), ('foreground', 0.362), ('alpha', 0.323), ('samples', 0.218), ('mattes', 0.201), ('unknown', 0.174), ('sampling', 0.153), ('color', 0.144), ('background', 0.128), ('matte', 0.125), ('comprehensive', 0.107), ('colors', 0.107), ('zoomed', 0.105), ('shared', 0.105), ('distributions', 0.095), ('trimap', 0.095), ('known', 0.087), ('true', 0.086), ('trimaps', 0.084), ('candidate', 0.082), ('regions', 0.071), ('pixel', 0.068), ('region', 0.066), ('missing', 0.066), ('sad', 0.065), ('fi', 0.064), ('overlapped', 0.064), ('doll', 0.061), ('compositing', 0.058), ('boundary', 0.058), ('iz', 0.057), ('bi', 0.057), ('texture', 0.055), ('ranks', 0.05), ('estimated', 0.044), ('conference', 0.043), ('cohen', 0.041), ('bz', 0.041), ('cthr', 0.041), ('knockout', 0.041), ('overlapping', 0.04), ('chromatic', 0.039), ('mse', 0.039), ('collects', 0.038), ('weighted', 0.034), ('closed', 0.034), ('pixels', 0.033), ('ethr', 0.032), ('degrades', 0.031), ('kz', 0.03), ('pairs', 0.03), ('sample', 0.029), ('parametrically', 0.029), ('cz', 0.029), ('patent', 0.029), ('widths', 0.029), ('collect', 0.028), ('boundaries', 0.028), ('collected', 0.028), ('overlap', 0.027), ('benchmark', 0.027), ('rhemann', 0.027), ('correlated', 0.027), ('pair', 0.027), ('nonlocal', 0.026), ('activated', 0.026), ('photometric', 0.025), ('pattern', 0.025), ('black', 0.025), ('inside', 0.025), ('correlation', 0.024), ('adobe', 0.024), ('generated', 0.024), ('objective', 0.024), ('robust', 0.024), ('global', 0.023), ('farther', 0.023), ('wrongly', 0.022), ('illustrate', 0.022), ('cover', 0.022), ('pages', 0.022), ('row', 0.022), ('laplacian', 0.022), ('distortion', 0.022), ('receives', 0.022), ('estimate', 0.022), ('gmm', 0.021), ('plant', 0.021), ('close', 0.02), ('knn', 0.02), ('narrow', 0.02), ('encountered', 0.02), ('collecting', 0.019), ('spatial', 0.019), ('sg', 0.019), ('extent', 0.019), ('parametric', 0.019), ('miss', 0.019), ('marked', 0.019), ('propagation', 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 216 cvpr-2013-Improving Image Matting Using Comprehensive Sampling Sets

Author: Ehsan Shahrian, Deepu Rajan, Brian Price, Scott Cohen

Abstract: In this paper, we present a new image matting algorithm that achieves state-of-the-art performance on a benchmark dataset of images. This is achieved by solving two major problems encountered by current sampling based algorithms. The first is that the range in which the foreground and background are sampled is often limited to such an extent that the true foreground and background colors are not present. Here, we describe a method by which a more comprehensive and representative set of samples is collected so as not to miss out on the true samples. This is accomplished by expanding the sampling range for pixels farther from the foreground or background boundary and ensuring that samples from each color distribution are included. The second problem is the overlap in color distributions of foreground and background regions. This causes sampling based methods to fail to pick the correct samples for foreground and background. Our design of an objective function forces those foreground and background samples to be picked that are generated from well-separated distributions. Comparison on the dataset at and evaluation by www.alphamatting.com shows that the proposed method ranks first in terms of error measures used in the website.

2 0.6844517 211 cvpr-2013-Image Matting with Local and Nonlocal Smooth Priors

Author: Xiaowu Chen, Dongqing Zou, Steven Zhiying Zhou, Qinping Zhao, Ping Tan

Abstract: In this paper we propose a novel alpha matting method with local and nonlocal smooth priors. We observe that the manifold preserving editing propagation [4] essentially introduced a nonlocal smooth prior on the alpha matte. This nonlocal smooth prior and the well known local smooth priorfrom matting Laplacian complement each other. So we combine them with a simple data term from color sampling in a graph model for nature image matting. Our method has a closed-form solution and can be solved efficiently. Compared with the state-of-the-art methods, our method produces more accurate results according to the evaluation on standard benchmark datasets.

3 0.22055773 453 cvpr-2013-Video Editing with Temporal, Spatial and Appearance Consistency

Author: Xiaojie Guo, Xiaochun Cao, Xiaowu Chen, Yi Ma

Abstract: Given an area of interest in a video sequence, one may want to manipulate or edit the area, e.g. remove occlusions from or replace with an advertisement on it. Such a task involves three main challenges including temporal consistency, spatial pose, and visual realism. The proposed method effectively seeks an optimal solution to simultaneously deal with temporal alignment, pose rectification, as well as precise recovery of the occlusion. To make our method applicable to long video sequences, we propose a batch alignment method for automatically aligning and rectifying a small number of initial frames, and then show how to align the remaining frames incrementally to the aligned base images. From the error residual of the robust alignment process, we automatically construct a trimap of the region for each frame, which is used as the input to alpha matting methods to extract the occluding foreground. Experimental results on both simulated and real data demonstrate the accurate and robust performance of our method.

4 0.16268341 148 cvpr-2013-Ensemble Video Object Cut in Highly Dynamic Scenes

Author: Xiaobo Ren, Tony X. Han, Zhihai He

Abstract: We consider video object cut as an ensemble of framelevel background-foreground object classifiers which fuses information across frames and refine their segmentation results in a collaborative and iterative manner. Our approach addresses the challenging issues of modeling of background with dynamic textures and segmentation of foreground objects from cluttered scenes. We construct patch-level bagof-words background models to effectively capture the background motion and texture dynamics. We propose a foreground salience graph (FSG) to characterize the similarity of an image patch to the bag-of-words background models in the temporal domain and to neighboring image patches in the spatial domain. We incorporate this similarity information into a graph-cut energy minimization framework for foreground object segmentation. The background-foreground classification results at neighboring frames are fused together to construct a foreground probability map to update the graph weights. The resulting object shapes at neighboring frames are also used as constraints to guide the energy minimization process during graph cut. Our extensive experimental results and performance comparisons over a diverse set of challenging videos with dynamic scenes, including the new Change Detection Challenge Dataset, demonstrate that the proposed ensemble video object cut method outperforms various state-ofthe-art algorithms.

5 0.12960127 450 cvpr-2013-Unsupervised Joint Object Discovery and Segmentation in Internet Images

Author: Michael Rubinstein, Armand Joulin, Johannes Kopf, Ce Liu

Abstract: We present a new unsupervised algorithm to discover and segment out common objects from large and diverse image collections. In contrast to previous co-segmentation methods, our algorithm performs well even in the presence of significant amounts of noise images (images not containing a common object), as typical for datasets collected from Internet search. The key insight to our algorithm is that common object patterns should be salient within each image, while being sparse with respect to smooth transformations across images. We propose to use dense correspondences between images to capture the sparsity and visual variability of the common object over the entire database, which enables us to ignore noise objects that may be salient within their own images but do not commonly occur in others. We performed extensive numerical evaluation on es- tablished co-segmentation datasets, as well as several new datasets generated using Internet search. Our approach is able to effectively segment out the common object for diverse object categories, while naturally identifying images where the common object is not present.

6 0.11591058 378 cvpr-2013-Sampling Strategies for Real-Time Action Recognition

7 0.11145677 352 cvpr-2013-Recovering Stereo Pairs from Anaglyphs

8 0.10683358 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras

9 0.10411137 318 cvpr-2013-Optimized Pedestrian Detection for Multiple and Occluded People

10 0.09940064 55 cvpr-2013-Background Modeling Based on Bidirectional Analysis

11 0.092557207 222 cvpr-2013-Incorporating User Interaction and Topological Constraints within Contour Completion via Discrete Calculus

12 0.088072196 332 cvpr-2013-Pixel-Level Hand Detection in Ego-centric Videos

13 0.08628758 22 cvpr-2013-A Non-parametric Framework for Document Bleed-through Removal

14 0.082419164 30 cvpr-2013-Accurate Localization of 3D Objects from RGB-D Data Using Segmentation Hypotheses

15 0.08209125 130 cvpr-2013-Discriminative Color Descriptors

16 0.07972493 327 cvpr-2013-Pattern-Driven Colorization of 3D Surfaces

17 0.078301623 207 cvpr-2013-Human Pose Estimation Using a Joint Pixel-wise and Part-wise Formulation

18 0.073693223 111 cvpr-2013-Dense Reconstruction Using 3D Object Shape Priors

19 0.073552921 10 cvpr-2013-A Fully-Connected Layered Model of Foreground and Background Flow

20 0.067837164 54 cvpr-2013-BRDF Slices: Accurate Adaptive Anisotropic Appearance Acquisition


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.165), (1, 0.019), (2, 0.046), (3, 0.042), (4, 0.025), (5, -0.02), (6, 0.005), (7, -0.006), (8, -0.016), (9, -0.007), (10, 0.03), (11, -0.052), (12, 0.013), (13, -0.027), (14, 0.034), (15, -0.027), (16, -0.038), (17, -0.143), (18, 0.085), (19, 0.048), (20, 0.002), (21, 0.188), (22, -0.139), (23, -0.285), (24, 0.1), (25, -0.302), (26, 0.29), (27, 0.281), (28, -0.141), (29, -0.076), (30, 0.067), (31, 0.094), (32, -0.152), (33, -0.077), (34, -0.118), (35, -0.187), (36, -0.053), (37, -0.154), (38, 0.154), (39, 0.056), (40, 0.076), (41, -0.125), (42, -0.002), (43, 0.046), (44, 0.006), (45, 0.033), (46, 0.062), (47, -0.08), (48, 0.052), (49, -0.12)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.96828216 211 cvpr-2013-Image Matting with Local and Nonlocal Smooth Priors

Author: Xiaowu Chen, Dongqing Zou, Steven Zhiying Zhou, Qinping Zhao, Ping Tan

Abstract: In this paper we propose a novel alpha matting method with local and nonlocal smooth priors. We observe that the manifold preserving editing propagation [4] essentially introduced a nonlocal smooth prior on the alpha matte. This nonlocal smooth prior and the well known local smooth priorfrom matting Laplacian complement each other. So we combine them with a simple data term from color sampling in a graph model for nature image matting. Our method has a closed-form solution and can be solved efficiently. Compared with the state-of-the-art methods, our method produces more accurate results according to the evaluation on standard benchmark datasets.

same-paper 2 0.96380401 216 cvpr-2013-Improving Image Matting Using Comprehensive Sampling Sets

Author: Ehsan Shahrian, Deepu Rajan, Brian Price, Scott Cohen

Abstract: In this paper, we present a new image matting algorithm that achieves state-of-the-art performance on a benchmark dataset of images. This is achieved by solving two major problems encountered by current sampling based algorithms. The first is that the range in which the foreground and background are sampled is often limited to such an extent that the true foreground and background colors are not present. Here, we describe a method by which a more comprehensive and representative set of samples is collected so as not to miss out on the true samples. This is accomplished by expanding the sampling range for pixels farther from the foreground or background boundary and ensuring that samples from each color distribution are included. The second problem is the overlap in color distributions of foreground and background regions. This causes sampling based methods to fail to pick the correct samples for foreground and background. Our design of an objective function forces those foreground and background samples to be picked that are generated from well-separated distributions. Comparison on the dataset at and evaluation by www.alphamatting.com shows that the proposed method ranks first in terms of error measures used in the website.

3 0.59357989 453 cvpr-2013-Video Editing with Temporal, Spatial and Appearance Consistency

Author: Xiaojie Guo, Xiaochun Cao, Xiaowu Chen, Yi Ma

Abstract: Given an area of interest in a video sequence, one may want to manipulate or edit the area, e.g. remove occlusions from or replace with an advertisement on it. Such a task involves three main challenges including temporal consistency, spatial pose, and visual realism. The proposed method effectively seeks an optimal solution to simultaneously deal with temporal alignment, pose rectification, as well as precise recovery of the occlusion. To make our method applicable to long video sequences, we propose a batch alignment method for automatically aligning and rectifying a small number of initial frames, and then show how to align the remaining frames incrementally to the aligned base images. From the error residual of the robust alignment process, we automatically construct a trimap of the region for each frame, which is used as the input to alpha matting methods to extract the occluding foreground. Experimental results on both simulated and real data demonstrate the accurate and robust performance of our method.

4 0.58323169 22 cvpr-2013-A Non-parametric Framework for Document Bleed-through Removal

Author: Róisín Rowley-Brooke, François Pitié, Anil Kokaram

Abstract: This paper presents recent work on a new framework for non-blind document bleed-through removal. The framework includes image preprocessing to remove local intensity variations, pixel region classification based on a segmentation of the joint recto-verso intensity histogram and connected component analysis on the subsequent image labelling. Finally restoration of the degraded regions is performed using exemplar-based image inpainting. The proposed method is evaluated visually and numerically on a freely available database of 25 scanned manuscript image pairs with ground truth, and is shown to outperform recent non-blind bleed-through removal techniques.

5 0.54153192 55 cvpr-2013-Background Modeling Based on Bidirectional Analysis

Author: Atsushi Shimada, Hajime Nagahara, Rin-ichiro Taniguchi

Abstract: Background modeling and subtraction is an essential task in video surveillance applications. Most traditional studies use information observed in past frames to create and update a background model. To adapt to background changes, the backgroundmodel has been enhancedby introducing various forms of information including spatial consistency and temporal tendency. In this paper, we propose a new framework that leverages information from a future period. Our proposed approach realizes a low-cost and highly accurate background model. The proposed framework is called bidirectional background modeling, and performs background subtraction based on bidirectional analysis; i.e., analysis from past to present and analysis from future to present. Although a result will be output with some delay because information is takenfrom a futureperiod, our proposed approach improves the accuracy by about 30% if only a 33-millisecond of delay is acceptable. Furthermore, the memory cost can be reduced by about 65% relative to typical background modeling.

6 0.49789679 352 cvpr-2013-Recovering Stereo Pairs from Anaglyphs

7 0.47710878 235 cvpr-2013-Jointly Aligning and Segmenting Multiple Web Photo Streams for the Inference of Collective Photo Storylines

8 0.46875203 148 cvpr-2013-Ensemble Video Object Cut in Highly Dynamic Scenes

9 0.46555039 327 cvpr-2013-Pattern-Driven Colorization of 3D Surfaces

10 0.45959798 263 cvpr-2013-Learning the Change for Automatic Image Cropping

11 0.42919558 450 cvpr-2013-Unsupervised Joint Object Discovery and Segmentation in Internet Images

12 0.39975685 130 cvpr-2013-Discriminative Color Descriptors

13 0.35466695 261 cvpr-2013-Learning by Associating Ambiguously Labeled Images

14 0.34062469 171 cvpr-2013-Fast Trust Region for Segmentation

15 0.33551392 193 cvpr-2013-Graph Transduction Learning with Connectivity Constraints with Application to Multiple Foreground Cosegmentation

16 0.30464858 332 cvpr-2013-Pixel-Level Hand Detection in Ego-centric Videos

17 0.29401729 210 cvpr-2013-Illumination Estimation Based on Bilayer Sparse Coding

18 0.28672835 145 cvpr-2013-Efficient Object Detection and Segmentation for Fine-Grained Recognition

19 0.28408277 54 cvpr-2013-BRDF Slices: Accurate Adaptive Anisotropic Appearance Acquisition

20 0.28017187 10 cvpr-2013-A Fully-Connected Layered Model of Foreground and Background Flow


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.088), (16, 0.055), (26, 0.05), (28, 0.01), (33, 0.324), (53, 0.203), (67, 0.063), (69, 0.053), (87, 0.063)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.92290115 129 cvpr-2013-Discriminative Brain Effective Connectivity Analysis for Alzheimer's Disease: A Kernel Learning Approach upon Sparse Gaussian Bayesian Network

Author: Luping Zhou, Lei Wang, Lingqiao Liu, Philip Ogunbona, Dinggang Shen

Abstract: Analyzing brain networks from neuroimages is becoming a promising approach in identifying novel connectivitybased biomarkers for the Alzheimer’s disease (AD). In this regard, brain “effective connectivity ” analysis, which studies the causal relationship among brain regions, is highly challenging and of many research opportunities. Most of the existing works in this field use generative methods. Despite their success in data representation and other important merits, generative methods are not necessarily discriminative, which may cause the ignorance of subtle but critical disease-induced changes. In this paper, we propose a learning-based approach that integrates the benefits of generative and discriminative methods to recover effective connectivity. In particular, we employ Fisher kernel to bridge the generative models of sparse Bayesian networks (SBN) and the discriminative classifiers of SVMs, and convert the SBN parameter learning to Fisher kernel learning via minimizing a generalization error bound of SVMs. Our method is able to simultaneously boost the discriminative power of both the generative SBN models and the SBN-induced SVM classifiers via Fisher kernel. The proposed method is tested on analyzing brain effective connectivity for AD from ADNI data, and demonstrates significant improvements over the state-of-the-art work.

2 0.90827012 453 cvpr-2013-Video Editing with Temporal, Spatial and Appearance Consistency

Author: Xiaojie Guo, Xiaochun Cao, Xiaowu Chen, Yi Ma

Abstract: Given an area of interest in a video sequence, one may want to manipulate or edit the area, e.g. remove occlusions from or replace with an advertisement on it. Such a task involves three main challenges including temporal consistency, spatial pose, and visual realism. The proposed method effectively seeks an optimal solution to simultaneously deal with temporal alignment, pose rectification, as well as precise recovery of the occlusion. To make our method applicable to long video sequences, we propose a batch alignment method for automatically aligning and rectifying a small number of initial frames, and then show how to align the remaining frames incrementally to the aligned base images. From the error residual of the robust alignment process, we automatically construct a trimap of the region for each frame, which is used as the input to alpha matting methods to extract the occluding foreground. Experimental results on both simulated and real data demonstrate the accurate and robust performance of our method.

3 0.90565658 211 cvpr-2013-Image Matting with Local and Nonlocal Smooth Priors

Author: Xiaowu Chen, Dongqing Zou, Steven Zhiying Zhou, Qinping Zhao, Ping Tan

Abstract: In this paper we propose a novel alpha matting method with local and nonlocal smooth priors. We observe that the manifold preserving editing propagation [4] essentially introduced a nonlocal smooth prior on the alpha matte. This nonlocal smooth prior and the well known local smooth priorfrom matting Laplacian complement each other. So we combine them with a simple data term from color sampling in a graph model for nature image matting. Our method has a closed-form solution and can be solved efficiently. Compared with the state-of-the-art methods, our method produces more accurate results according to the evaluation on standard benchmark datasets.

same-paper 4 0.8990286 216 cvpr-2013-Improving Image Matting Using Comprehensive Sampling Sets

Author: Ehsan Shahrian, Deepu Rajan, Brian Price, Scott Cohen

Abstract: In this paper, we present a new image matting algorithm that achieves state-of-the-art performance on a benchmark dataset of images. This is achieved by solving two major problems encountered by current sampling based algorithms. The first is that the range in which the foreground and background are sampled is often limited to such an extent that the true foreground and background colors are not present. Here, we describe a method by which a more comprehensive and representative set of samples is collected so as not to miss out on the true samples. This is accomplished by expanding the sampling range for pixels farther from the foreground or background boundary and ensuring that samples from each color distribution are included. The second problem is the overlap in color distributions of foreground and background regions. This causes sampling based methods to fail to pick the correct samples for foreground and background. Our design of an objective function forces those foreground and background samples to be picked that are generated from well-separated distributions. Comparison on the dataset at and evaluation by www.alphamatting.com shows that the proposed method ranks first in terms of error measures used in the website.

5 0.89189273 288 cvpr-2013-Modeling Mutual Visibility Relationship in Pedestrian Detection

Author: Wanli Ouyang, Xingyu Zeng, Xiaogang Wang

Abstract: Detecting pedestrians in cluttered scenes is a challenging problem in computer vision. The difficulty is added when several pedestrians overlap in images and occlude each other. We observe, however, that the occlusion/visibility statuses of overlapping pedestrians provide useful mutual relationship for visibility estimation - the visibility estimation of one pedestrian facilitates the visibility estimation of another. In this paper, we propose a mutual visibility deep model that jointly estimates the visibility statuses of overlapping pedestrians. The visibility relationship among pedestrians is learned from the deep model for recognizing co-existing pedestrians. Experimental results show that the mutual visibility deep model effectively improves the pedestrian detection results. Compared with existing image-based pedestrian detection approaches, our approach has the lowest average miss rate on the CaltechTrain dataset, the Caltech-Test dataset and the ETHdataset. Including mutual visibility leads to 4% −8% improvements on mluudlitnipglem ubteunaclh vmiasibrki ditayta lesaedtss.

6 0.88623339 44 cvpr-2013-Area Preserving Brain Mapping

7 0.86032313 82 cvpr-2013-Class Generative Models Based on Feature Regression for Pose Estimation of Object Categories

8 0.86009699 202 cvpr-2013-Hierarchical Saliency Detection

9 0.85997611 464 cvpr-2013-What Makes a Patch Distinct?

10 0.85974419 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases

11 0.85971957 43 cvpr-2013-Analyzing Semantic Segmentation Using Hybrid Human-Machine CRFs

12 0.85938382 173 cvpr-2013-Finding Things: Image Parsing with Regions and Per-Exemplar Detectors

13 0.85922188 322 cvpr-2013-PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures with Spatial Priors

14 0.85900682 384 cvpr-2013-Segment-Tree Based Cost Aggregation for Stereo Matching

15 0.85894012 284 cvpr-2013-Mesh Based Semantic Modelling for Indoor and Outdoor Scenes

16 0.85883576 221 cvpr-2013-Incorporating Structural Alternatives and Sharing into Hierarchy for Multiclass Object Recognition and Detection

17 0.85880953 355 cvpr-2013-Representing Videos Using Mid-level Discriminative Patches

18 0.85867333 109 cvpr-2013-Dense Non-rigid Point-Matching Using Random Projections

19 0.85854471 326 cvpr-2013-Patch Match Filter: Efficient Edge-Aware Filtering Meets Randomized Search for Fast Correspondence Field Estimation

20 0.85853189 250 cvpr-2013-Learning Cross-Domain Information Transfer for Location Recognition and Clustering