cvpr cvpr2013 cvpr2013-211 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Xiaowu Chen, Dongqing Zou, Steven Zhiying Zhou, Qinping Zhao, Ping Tan
Abstract: In this paper we propose a novel alpha matting method with local and nonlocal smooth priors. We observe that the manifold preserving editing propagation [4] essentially introduced a nonlocal smooth prior on the alpha matte. This nonlocal smooth prior and the well-known local smooth prior from the matting Laplacian complement each other. So we combine them with a simple data term from color sampling in a graph model for natural image matting. Our method has a closed-form solution and can be solved efficiently. Compared with the state-of-the-art methods, our method produces more accurate results according to the evaluation on standard benchmark datasets.
Reference: text
sentIndex sentText sentNum sentScore
1 We observe that the manifold preserving editing propagation [4] essentially introduced a nonlocal smooth prior on the alpha matte. [sent-2, score-1.258]
2 This nonlocal smooth prior and the well-known local smooth prior from the matting Laplacian complement each other. [sent-3, score-1.3]
3 So we combine them with a simple data term from color sampling in a graph model for natural image matting. [sent-4, score-0.157]
4 Introduction Image matting seeks to decompose an image I into the foreground F and the background B. [sent-8, score-0.774]
5 I = αF + (1 − α)B. (1) Here, the alpha matte α defines the opacity of each pixel and its value lies in [0, 1]. [sent-10, score-0.714]
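For concreteness, a minimal NumPy sketch of this compositing model; the array names and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def composite(alpha, F, B):
    """Apply the matting equation I = alpha * F + (1 - alpha) * B.

    alpha : (H, W) matte with values in [0, 1]
    F, B  : (H, W, 3) foreground and background color images
    """
    a = alpha[..., None]            # broadcast alpha over the color channels
    return a * F + (1.0 - a) * B
```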
6 Accurate matting plays an important role in various image and video editing applications. [sent-11, score-0.749]
7 It is a typical practice to include a trimap or some user scribbles to simplify the problem. [sent-13, score-0.112]
8 At the same time, strong priors on the alpha matte can significantly improve the results. [sent-14, score-0.672]
9 In the closed-form matting [11], a matting Laplacian matrix is derived based on the color line model [14] to constrain the alpha matte within local windows. [sent-15, score-2.094]
10 This local smooth prior can be combined with data terms derived from color sampling [23, 15]. [sent-16, score-0.309]
11 This smooth prior is further improved in [20] for image regions with constant foreground. [sent-17, score-0.236]
12 [9] improves the color sampling with generalized PatchMatch [2]. [sent-22, score-0.126]
13 Such a combination of data term and local smooth term generates high quality results according to recent surveys [22, 18]. [sent-23, score-0.276]
14 However, as discussed in [10], it is nontrivial to set an appropriate local window size when computing the Laplacian matrix. [sent-24, score-0.084]
15 On the other hand, a large window breaks the color line model and also leads to poor results. [sent-26, score-0.116]
16 [4] proposed a manifold preserving editing propagation method, and applied it to alpha matting. [sent-28, score-0.775]
17 We observe that this method is essentially a novel nonlocal smooth prior on the alpha matte. [sent-29, score-0.983]
18 It helps to correlate alpha values at faraway pixels, which is complementary to the matting Laplacian. [sent-30, score-1.231]
19 When this nonlocal smooth prior is applied alone, it might not capture local structures of semitransparent objects. [sent-31, score-0.53]
20 So we propose to combine this nonlocal smooth term with the local Laplacian smooth term, and further include a trivial data term. [sent-32, score-0.624]
21 Our main contributions are: 1) new insights into the manifold preserving based matting propagation; 2) a novel matting algorithm that achieves superior performance on a standard benchmark database. [sent-34, score-1.523]
22 Sampling-Based Matting estimates the alpha matte, foreground and background color of a pixel simultaneously. [sent-39, score-0.685]
23 Many methods [16, 5, 15, 6, 3] applied different parametric or non-parametric models to collect nearby pixel samples from the known foreground and background. [sent-40, score-0.155]
24 Ruzon and Tomasi [16] assumed the unknown pixels are in a narrow band region around the foreground boundary. [sent-41, score-0.159]
25 (Figure 1: The pixel A can be generated by linearly combining the colors at B and C.) [sent-42, score-0.09]
26 These methods work well when the unknown pixels are near the foreground boundary, and the number of unknown pixels is relatively small. [sent-45, score-0.215]
27 [15] proposed an improved color model to collect samples according to the geodesic distance. [sent-47, score-0.11]
28 Shared matting [6] collected those samples along rays of different directions. [sent-48, score-0.703]
29 Affinity-Based Matting solves the alpha matte independently of the foreground and background colors. [sent-50, score-0.767]
30 Poisson matting [21] assumed that the matte gradient is proportional to the image gradient. [sent-51, score-0.851]
31 Random walk matting [8] employed the random walks algorithm [7] to solve the alpha values according to the neighboring color affinities. [sent-52, score-1.257]
32 Closed-form matting [11] assumed a color line model in local windows and solved the alpha matte by minimizing a cost function. [sent-53, score-1.415]
33 Spectral matting [12] extended [11] to an unsupervised setting by exploiting its relationship with spectral clustering. [sent-54, score-0.679]
34 The matting Laplacian has been combined with various ‘data constraints’ [23, 15], priors [17] or learning based method [24] for image matting. [sent-55, score-0.679]
35 However, the local smooth assumption is insufficient to deal with complex images. [sent-56, score-0.176]
36 Thus we further combine it with a nonlocal smooth prior to improve the results. [sent-57, score-0.466]
37 Robust matting [23] first collected samples with high confidence, and then used the Random Walk [7] to minimize the matting energy. [sent-59, score-1.382]
38 Global sampling matting [9] searched for globally optimal samples with a random search algorithm derived from the PatchMatch algorithm [2]. [sent-60, score-0.781]
39 [4] proposed a manifold preserving method for edit propagation and applied it to propagate the alpha matte from the definite foreground and background to the unknown regions. [sent-63, score-1.039]
40 Specifically, they first apply the locally linear embedding (LLE) [19] to represent each pixel as a linear combination of a few of its nearest neighbors in the RGBXY feature space, which is the RGB value concatenated with the image coordinate. [sent-64, score-0.118]
41 The propagation algorithm keeps the alpha of known pixels unchanged, and requires every pixel to be the same linear combination of its neighbors in the feature space. [sent-65, score-0.695]
42 For example, in Figure 1, the pixel A can be generated by linearly combining the colors at B and C according to the weight: A = w1B + (1 − w1)C. [sent-66, score-0.108]
43 In the alpha matte, the manifold preserving propagation requires the same equation to hold, i.e., αA = w1αB + (1 − w1)αC. [sent-67, score-1.205]
44 Here, the three scalars αA, αB and αC are the alpha values at the three pixels A, B and C. [sent-70, score-0.532]
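As a hedged illustration of this constraint, the sketch below solves for sum-to-one LLE reconstruction weights of a pixel in RGBXY space and transfers them to alpha; the feature values, the regularization constant, and the helper name lle_weights are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Weights w with sum(w) = 1 such that x is approximately w @ neighbors (LLE [19])."""
    Z = neighbors - x                       # shift neighbors so x is the origin
    G = Z @ Z.T                             # local Gram matrix (K x K)
    G = G + reg * np.eye(len(G))            # regularize in case G is singular
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                      # enforce the sum-to-one constraint

# Toy version of Figure 1: pixel A reconstructed from a foreground sample B
# and a background sample C in RGBXY space (the feature values are made up).
A = np.array([0.6, 0.4, 0.4, 10.0, 12.0])
B = np.array([0.9, 0.2, 0.2, 12.0, 14.0])
C = np.array([0.1, 0.7, 0.7,  8.0, 10.0])
w = lle_weights(A, np.stack([B, C]))
alpha_A = w[0] * 1.0 + w[1] * 0.0           # alpha_B = 1, alpha_C = 0, so alpha_A = w1
```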
45 When B and C are known foreground and background pixels (i.e. [sent-71, score-0.127]
46 αB = 1 and αC = 0), the manifold preserving condition simply requires αA = w1. [sent-73, score-0.135]
47 In fact, w1 is an estimation of αA from the color sampling [23]. [sent-74, score-0.126]
48 So this method includes color sampling in the RGBXY space to solve the alpha. [sent-75, score-0.126]
49 Furthermore, B and C can also be unknown pixels, so preserving the local manifold structure brings more constraints than color sampling. [sent-76, score-0.24]
50 Note that A could be far away from B and C, since the neighbors are found in the feature space. [sent-77, score-0.103]
51 So the manifold preserving constraint is essentially a nonlocal smooth constraint that relates the alpha values at faraway pixels. [sent-78, score-1.196]
52 It propagates information across the whole image, while the local smooth constraint from matting Laplacian can only propagate information within local windows. [sent-79, score-0.877]
53 This nonlocal smooth prior is complementary to the matting Laplacian. [sent-80, score-1.145]
54 As shown in the first row of Figure 2, a large window (41 × 41) is required to capture the large alpha matte structure by the method [11]. [sent-82, score-0.602]
55 However, for the example in the second row of Figure 2, a small window (3 × 3) is required to ensure the color line model is valid over the complicated background. [sent-83, score-0.119]
56 In comparison, when combined with the nonlocal smooth prior, the matting Laplacian with a small window (3 × 3) generates consistently good results on both examples. [sent-86, score-1.226]
57 Furthermore, a small window makes the matting Laplacian more sparse and more efficient to solve. [sent-87, score-0.679]
58 The nonlocal smooth prior alone is also insufficient to compute an accurate alpha matte. [sent-88, score-1.018]
59 As shown in the first row of Figure 3, when most of the alpha values are close to 0 or 1, the nonlocal smooth prior alone generates satisfactory results. [sent-89, score-1.055]
60 However, as shown in the second row, the nonlocal smooth prior alone cannot handle images with large semitransparent regions. [sent-90, score-0.545]
61 This is because for pixels with alpha values close to 0. [sent-91, score-0.532]
62 As a result, their alpha values are less constrained by the manifold preserving constraint. [sent-93, score-0.635]
63 In contrast, when the local smooth constraint from the matting Laplacian is applied, good results can be obtained on both examples. [sent-94, score-0.861]
64 It is not easy to set the local window size when computing the matting Laplacian. [sent-96, score-0.763]
65 In the first row, a large window (41 × 41) is required to capture the complex alpha matte by the method [11]. [sent-97, score-0.568]
66 The nonlocal prior alone [4] is also insufficient to compute an accurate matte. [sent-103, score-0.379]
67 It works for alpha mattes with mostly binary alpha values. [sent-104, score-1.172]
68 Graph Model for Matting We propose a new matting method by combining the local smooth term, nonlocal smooth term and a data term based on color sampling in a graph model. [sent-107, score-1.46]
69 Color Sampling We take a simple color sampling method [23] as the data term. [sent-111, score-0.126]
70 Given a selected foreground and background sample pair (Fi, Bj), the alpha value of pixel C can be estimated as α̂ = ((C − Bj) · (Fi − Bj)) / ‖Fi − Bj‖². [sent-113, score-0.661]
71 For the sampling, we simply take some spatially nearest pixels to form a candidate set; as Figure 4 shows, the spatially nearest pixels are selected as the candidate samples (orange points and purple points) by using FLANN [13]. [sent-125, score-0.16]
72 We choose the pair of samples with the highest confidence for each unknown pixel, as shown in Figure 4. [sent-126, score-0.127]
73 We find the K nearest samples for each unknown pixel by using FLANN [13]. [sent-127, score-0.132]
74 Though better sampling methods [23, 9] can be used, we find this simple method produces good results with the help of local and nonlocal smooth terms. [sent-128, score-0.532]
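A rough sketch of this data term is given below: the pairwise alpha estimate follows the projection formula above, while the confidence measure (the color-fitting residual) and the brute-force scan over candidate pairs are common choices assumed here, not necessarily the paper's exact ones.

```python
import numpy as np

def alpha_from_pair(C, F, B, eps=1e-8):
    """Estimate alpha by projecting color C onto the line between samples F and B."""
    d = F - B
    a = float(np.dot(C - B, d) / (np.dot(d, d) + eps))
    return min(max(a, 0.0), 1.0)

def best_pair_alpha(C, fg_samples, bg_samples):
    """Return the alpha of the (F, B) candidate pair that best explains color C.

    fg_samples, bg_samples : iterables of (3,) candidate colors, e.g. the
    spatially nearest known pixels found with a KNN search.
    """
    best_alpha, best_residual = 0.5, np.inf
    for F in fg_samples:
        for B in bg_samples:
            a = alpha_from_pair(C, F, B)
            residual = np.linalg.norm(C - (a * F + (1.0 - a) * B))
            if residual < best_residual:
                best_alpha, best_residual = a, residual
    return best_alpha
```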
75 Graph Model As shown in Figure 5, in our graph model, the white nodes represent the unknown pixels on the image lattice, the orange nodes and the purple nodes are known pixels marked by a trimap or user scribbles. [sent-130, score-0.325]
76 Two virtual nodes ΩF and ΩB representing the foreground and background are connected with each pixel. [sent-131, score-0.172]
77 Each pixel is connected with its neighboring pixels in a 3 × 3 window, and also connected with its neighbors in the RGBXY feature space. [sent-132, score-0.184]
78 The connections between each pixel and its feature space neighbors are indicated by red lines. [sent-133, score-0.093]
79 The data weights W(i,F) and W(i,B), which represent the probability of a pixel belonging to the foreground and background, are defined between each pixel and a virtual node. [sent-135, score-0.286]
80 Each pixel is further connected with its spatial neighbors and feature space neighbors. [sent-136, score-0.114]
81 The data term and smooth terms are defined on these edges. [sent-137, score-0.17]
82 The data weight of each pixel reflects its tendency towards the foreground or the background according to its initial alpha value. [sent-142, score-0.551]
83 Specifically, for each unknown pixel i, two data weights W(i,F) and W(i,B) are defined as: W(i,F) = γ α̂i and W(i,B) = γ (1 − α̂i), where α̂i is the initial alpha estimated by color sampling. [sent-143, score-0.083]
84 The local matting Laplacian enhances the local smoothness of the resulting alpha matte. [sent-162, score-1.195]
85 For the pixels i and j in a 3 × 3 window wk, the neighbor term is defined as W^lap_ij = δ((i, j) ∈ wk) · (1/|wk|) (1 + (Ii − μk)^T (Σk + (ε/|wk|) I3)^{−1} (Ij − μk)), i.e., the standard matting Laplacian affinity [11]. [sent-163, score-0.099]
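A compact sketch of this per-window affinity (the closed-form matting Laplacian block of [11]); the epsilon value and the dense per-window implementation are illustrative only.

```python
import numpy as np

def window_laplacian_block(I_win, eps=1e-5):
    """Laplacian block of one local window in closed-form matting [11].

    I_win : (n, 3) colors of the n pixels in the window (n = 9 for a 3 x 3 window).
    Returns the n x n block delta_ij - (1/n)(1 + (Ii - mu)^T (Sigma + eps/n I)^-1 (Ij - mu)).
    """
    n = I_win.shape[0]
    mu = I_win.mean(axis=0)
    X = I_win - mu
    Sigma = X.T @ X / n                                  # 3 x 3 color covariance of the window
    inv = np.linalg.inv(Sigma + (eps / n) * np.eye(3))
    W_lap = (1.0 + X @ inv @ X.T) / n                    # local affinity between window pixels
    return np.eye(n) - W_lap
```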
86 To enforce the nonlocal smooth constraint, for each pixel Xi, we connect it to its K nearest neighbors Xi1, . . . , XiK. [sent-180, score-0.556]
87 Here Xi represents the (ri, gi, bi, xi, yi) feature vector for the pixel i. [sent-193, score-0.088]
88 The resulting matrix W^lle (whose ij-th element is W^lle_ij) encodes the nonlocal manifold constraint. [sent-197, score-0.371]
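A sketch of assembling this sparse nonlocal weight matrix is shown below; scikit-learn's NearestNeighbors stands in for FLANN, and the naive per-pixel loop is kept for clarity rather than speed.

```python
import numpy as np
from scipy import sparse
from sklearn.neighbors import NearestNeighbors   # stand-in for FLANN [13]

def nonlocal_weights(img, K=10, reg=1e-3):
    """Build a sparse W_lle matrix: each pixel is an LLE combination of its
    K nearest neighbors in RGBXY feature space."""
    H, W, _ = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    feat = np.hstack([img.reshape(-1, 3),
                      xs.reshape(-1, 1), ys.reshape(-1, 1)]).astype(float)
    _, idx = NearestNeighbors(n_neighbors=K + 1).fit(feat).kneighbors(feat)
    rows, cols, vals = [], [], []
    for i in range(feat.shape[0]):
        nbrs = idx[i, 1:]                        # drop the pixel itself
        Z = feat[nbrs] - feat[i]
        G = Z @ Z.T + reg * np.eye(K)            # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(K))
        w /= w.sum()                             # sum-to-one LLE weights
        rows.extend([i] * K); cols.extend(nbrs); vals.extend(w)
    N = feat.shape[0]
    return sparse.csr_matrix((vals, (rows, cols)), shape=(N, N))
```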
89 Closed-form Solution We first collect a subset of pixels S, which includes pixels of known alpha values from the trimap or user scribbles. [sent-200, score-0.687]
90 The alpha matte is obtained by minimizing E(α) = Σ_{i=1}^{N} ( Σ_{j∈Ni} Wij (αi − αj) )², (5) where N is the number of all nodes in the graph model, including all nodes in the image lattice plus the two virtual nodes ΩF and ΩB. [sent-209, score-0.138]
91 Wij represents three kinds of weights, containing the local smooth term W^lap_ij, the nonlocal smooth term W^lle_ij, and the data terms W(i,F) and W(i,B). [sent-210, score-0.686]
92 The set Ni is the set of neighbors of the pixel i, including the neighboring pixels in the 3 × 3 window, the K nearest neighbors in the RGBXY space, and the two virtual nodes. [sent-211, score-0.286]
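To make the closed-form solution concrete, here is a hedged sketch: with row-normalized weights, equation (5) becomes ||(I − W)α||², and the known pixels in S are handled with a soft quadratic penalty. The balancing weight lam and the soft (rather than hard) constraint are assumptions of this sketch, not necessarily the paper's exact choices.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def solve_alpha(W, known_mask, known_alpha, lam=100.0):
    """Minimize E(alpha) = sum_i (sum_{j in Ni} W_ij (alpha_i - alpha_j))^2
    plus a soft penalty lam * sum_{i in S} (alpha_i - known_alpha_i)^2.

    W           : (N, N) sparse weights combining W_lap, W_lle, W(i,F) and W(i,B)
    known_mask  : (N,) boolean, True for the pixels in S (trimap / scribbles / virtual nodes)
    known_alpha : (N,) known alpha values (ignored outside S)
    """
    N = W.shape[0]
    row_sum = np.asarray(W.sum(axis=1)).ravel()
    Wn = sparse.diags(1.0 / np.maximum(row_sum, 1e-12)) @ W     # row-normalize the weights
    M = sparse.identity(N, format="csr") - Wn                   # so E(alpha) = ||M alpha||^2
    D = sparse.diags(lam * known_mask.astype(float))
    A = (M.T @ M + D).tocsr()
    b = lam * known_mask * known_alpha
    return np.clip(spsolve(A, b), 0.0, 1.0)
```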
93 Our method ranks first according to the measurements of SAD, MSE and gradient error, and ranks third according to the connectivity error. [sent-245, score-0.114]
94 Global Sampling Matting achieves comparable accuracy by introducing a strong data term which optimizes color sampling over all pixel pairs. [sent-250, score-0.199]
95 Several of these methods include the local smoothness term from the matting Laplacian. [sent-253, score-0.726]
96 In comparison, with the help of the nonlocal smoothness constraint, our method generates consistently good results with a small fixed window size. [sent-255, score-0.408]
97 The foreground and background in the Elephant example have similar color distributions. [sent-260, score-0.143]
98 Conclusion We observe that the manifold preserving alpha matte propagation is effectively a nonlocal smooth constraint on the alpha matte. [sent-267, score-1.842]
99 We combine it with the local smooth constraint from the matting Laplacian and a simple data term from color sampling for natural image matting. [sent-268, score-1.018]
100 A global sampling method for alpha matting. [sent-352, score-0.578]
wordName wordTfidf (topN-words)
[('matting', 0.679), ('alpha', 0.5), ('nonlocal', 0.299), ('matte', 0.172), ('smooth', 0.139), ('laplacian', 0.09), ('sampling', 0.078), ('rhemann', 0.077), ('manifold', 0.072), ('propagation', 0.07), ('editing', 0.07), ('foreground', 0.069), ('window', 0.068), ('preserving', 0.063), ('trimap', 0.06), ('rgbxy', 0.058), ('faraway', 0.052), ('neighbors', 0.051), ('color', 0.048), ('semitransparent', 0.048), ('pixel', 0.042), ('unknown', 0.041), ('generates', 0.041), ('rother', 0.039), ('suzhou', 0.039), ('wiljap', 0.039), ('wiljle', 0.039), ('wilmle', 0.039), ('wlle', 0.039), ('bj', 0.039), ('patchmatch', 0.037), ('shared', 0.033), ('walks', 0.033), ('pixels', 0.032), ('bij', 0.032), ('chuang', 0.032), ('alone', 0.031), ('term', 0.031), ('nodes', 0.031), ('singapore', 0.03), ('snd', 0.03), ('flann', 0.03), ('steven', 0.03), ('benchmark', 0.03), ('gi', 0.029), ('prior', 0.028), ('walk', 0.027), ('constraint', 0.027), ('scribbles', 0.026), ('edit', 0.026), ('user', 0.026), ('background', 0.026), ('nearest', 0.025), ('pages', 0.025), ('supplement', 0.025), ('ping', 0.025), ('mse', 0.025), ('sad', 0.025), ('virtual', 0.025), ('samples', 0.024), ('cf', 0.024), ('connectivity', 0.022), ('purple', 0.022), ('insufficient', 0.021), ('connected', 0.021), ('lattice', 0.02), ('confidence', 0.02), ('collect', 0.02), ('gs', 0.02), ('poisson', 0.019), ('orange', 0.019), ('ranks', 0.019), ('measurements', 0.018), ('according', 0.018), ('wij', 0.018), ('levin', 0.017), ('fuorre', 0.017), ('neol', 0.017), ('estr', 0.017), ('resenting', 0.017), ('singaraju', 0.017), ('tixhee', 0.017), ('arned', 0.017), ('beihang', 0.017), ('email', 0.017), ('eoafc', 0.017), ('fgo', 0.017), ('fisi', 0.017), ('gastal', 0.017), ('lisa', 0.017), ('nces', 0.017), ('omer', 0.017), ('resents', 0.017), ('tnwdo', 0.017), ('toev', 0.017), ('xiaowu', 0.017), ('essentially', 0.017), ('row', 0.017), ('local', 0.016), ('uploaded', 0.016)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000004 211 cvpr-2013-Image Matting with Local and Nonlocal Smooth Priors
Author: Xiaowu Chen, Dongqing Zou, Steven Zhiying Zhou, Qinping Zhao, Ping Tan
Abstract: In this paper we propose a novel alpha matting method with local and nonlocal smooth priors. We observe that the manifold preserving editing propagation [4] essentially introduced a nonlocal smooth prior on the alpha matte. This nonlocal smooth prior and the well-known local smooth prior from the matting Laplacian complement each other. So we combine them with a simple data term from color sampling in a graph model for natural image matting. Our method has a closed-form solution and can be solved efficiently. Compared with the state-of-the-art methods, our method produces more accurate results according to the evaluation on standard benchmark datasets.
2 0.6844517 216 cvpr-2013-Improving Image Matting Using Comprehensive Sampling Sets
Author: Ehsan Shahrian, Deepu Rajan, Brian Price, Scott Cohen
Abstract: In this paper, we present a new image matting algorithm that achieves state-of-the-art performance on a benchmark dataset of images. This is achieved by solving two major problems encountered by current sampling based algorithms. The first is that the range in which the foreground and background are sampled is often limited to such an extent that the true foreground and background colors are not present. Here, we describe a method by which a more comprehensive and representative set of samples is collected so as not to miss out on the true samples. This is accomplished by expanding the sampling range for pixels farther from the foreground or background boundary and ensuring that samples from each color distribution are included. The second problem is the overlap in color distributions of foreground and background regions. This causes sampling based methods to fail to pick the correct samples for foreground and background. Our design of an objective function forces those foreground and background samples to be picked that are generated from well-separated distributions. Comparison on the dataset at and evaluation by www.alphamatting.com shows that the proposed method ranks first in terms of error measures used in the website.
3 0.21128826 453 cvpr-2013-Video Editing with Temporal, Spatial and Appearance Consistency
Author: Xiaojie Guo, Xiaochun Cao, Xiaowu Chen, Yi Ma
Abstract: Given an area of interest in a video sequence, one may want to manipulate or edit the area, e.g. remove occlusions from or replace with an advertisement on it. Such a task involves three main challenges including temporal consistency, spatial pose, and visual realism. The proposed method effectively seeks an optimal solution to simultaneously deal with temporal alignment, pose rectification, as well as precise recovery of the occlusion. To make our method applicable to long video sequences, we propose a batch alignment method for automatically aligning and rectifying a small number of initial frames, and then show how to align the remaining frames incrementally to the aligned base images. From the error residual of the robust alignment process, we automatically construct a trimap of the region for each frame, which is used as the input to alpha matting methods to extract the occluding foreground. Experimental results on both simulated and real data demonstrate the accurate and robust performance of our method.
4 0.05775664 427 cvpr-2013-Texture Enhanced Image Denoising via Gradient Histogram Preservation
Author: Wangmeng Zuo, Lei Zhang, Chunwei Song, David Zhang
Abstract: Image denoising is a classical yet fundamental problem in low level vision, as well as an ideal test bed to evaluate various statistical image modeling methods. One of the most challenging problems in image denoising is how to preserve the fine scale texture structures while removing noise. Various natural image priors, such as gradient based prior, nonlocal self-similarity prior, and sparsity prior, have been extensively exploited for noise removal. The denoising algorithms based on these priors, however, tend to smooth the detailed image textures, degrading the image visual quality. To address this problem, in this paper we propose a texture enhanced image denoising (TEID) method by enforcing the gradient distribution of the denoised image to be close to the estimated gradient distribution of the original image. A novel gradient histogram preservation (GHP) algorithm is developed to enhance the texture structures while removing noise. Our experimental results demonstrate that theproposed GHP based TEID can well preserve the texture features of the denoised images, making them look more natural.
5 0.055222694 450 cvpr-2013-Unsupervised Joint Object Discovery and Segmentation in Internet Images
Author: Michael Rubinstein, Armand Joulin, Johannes Kopf, Ce Liu
Abstract: We present a new unsupervised algorithm to discover and segment out common objects from large and diverse image collections. In contrast to previous co-segmentation methods, our algorithm performs well even in the presence of significant amounts of noise images (images not containing a common object), as typical for datasets collected from Internet search. The key insight to our algorithm is that common object patterns should be salient within each image, while being sparse with respect to smooth transformations across images. We propose to use dense correspondences between images to capture the sparsity and visual variability of the common object over the entire database, which enables us to ignore noise objects that may be salient within their own images but do not commonly occur in others. We performed extensive numerical evaluation on es- tablished co-segmentation datasets, as well as several new datasets generated using Internet search. Our approach is able to effectively segment out the common object for diverse object categories, while naturally identifying images where the common object is not present.
6 0.054169584 378 cvpr-2013-Sampling Strategies for Real-Time Action Recognition
7 0.053396247 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds
8 0.049435463 352 cvpr-2013-Recovering Stereo Pairs from Anaglyphs
10 0.047557428 10 cvpr-2013-A Fully-Connected Layered Model of Foreground and Background Flow
11 0.045600012 148 cvpr-2013-Ensemble Video Object Cut in Highly Dynamic Scenes
12 0.045376722 107 cvpr-2013-Deformable Spatial Pyramid Matching for Fast Dense Correspondences
13 0.044750012 306 cvpr-2013-Non-rigid Structure from Motion with Diffusion Maps Prior
14 0.0441515 152 cvpr-2013-Exemplar-Based Face Parsing
15 0.043597717 180 cvpr-2013-Fully-Connected CRFs with Non-Parametric Pairwise Potential
16 0.041943461 178 cvpr-2013-From Local Similarity to Global Coding: An Application to Image Classification
17 0.041718427 222 cvpr-2013-Incorporating User Interaction and Topological Constraints within Contour Completion via Discrete Calculus
18 0.040162224 169 cvpr-2013-Fast Patch-Based Denoising Using Approximated Patch Geodesic Paths
19 0.039949391 259 cvpr-2013-Learning a Manifold as an Atlas
20 0.038810249 375 cvpr-2013-Saliency Detection via Graph-Based Manifold Ranking
topicId topicWeight
[(0, 0.098), (1, 0.022), (2, 0.017), (3, 0.039), (4, 0.022), (5, -0.012), (6, -0.004), (7, -0.035), (8, -0.029), (9, -0.025), (10, 0.04), (11, -0.024), (12, -0.018), (13, -0.024), (14, 0.024), (15, -0.025), (16, -0.064), (17, -0.121), (18, 0.059), (19, 0.041), (20, 0.0), (21, 0.168), (22, -0.093), (23, -0.237), (24, 0.083), (25, -0.281), (26, 0.261), (27, 0.247), (28, -0.133), (29, -0.067), (30, 0.062), (31, 0.034), (32, -0.124), (33, -0.069), (34, -0.128), (35, -0.182), (36, -0.054), (37, -0.166), (38, 0.187), (39, 0.061), (40, 0.067), (41, -0.155), (42, -0.033), (43, 0.032), (44, 0.029), (45, 0.039), (46, 0.048), (47, -0.079), (48, 0.036), (49, -0.163)]
simIndex simValue paperId paperTitle
same-paper 1 0.96370476 211 cvpr-2013-Image Matting with Local and Nonlocal Smooth Priors
Author: Xiaowu Chen, Dongqing Zou, Steven Zhiying Zhou, Qinping Zhao, Ping Tan
Abstract: In this paper we propose a novel alpha matting method with local and nonlocal smooth priors. We observe that the manifold preserving editing propagation [4] essentially introduced a nonlocal smooth prior on the alpha matte. This nonlocal smooth prior and the well-known local smooth prior from the matting Laplacian complement each other. So we combine them with a simple data term from color sampling in a graph model for natural image matting. Our method has a closed-form solution and can be solved efficiently. Compared with the state-of-the-art methods, our method produces more accurate results according to the evaluation on standard benchmark datasets.
2 0.90667832 216 cvpr-2013-Improving Image Matting Using Comprehensive Sampling Sets
Author: Ehsan Shahrian, Deepu Rajan, Brian Price, Scott Cohen
Abstract: In this paper, we present a new image matting algorithm that achieves state-of-the-art performance on a benchmark dataset of images. This is achieved by solving two major problems encountered by current sampling based algorithms. The first is that the range in which the foreground and background are sampled is often limited to such an extent that the true foreground and background colors are not present. Here, we describe a method by which a more comprehensive and representative set of samples is collected so as not to miss out on the true samples. This is accomplished by expanding the sampling range for pixels farther from the foreground or background boundary and ensuring that samples from each color distribution are included. The second problem is the overlap in color distributions of foreground and background regions. This causes sampling based methods to fail to pick the correct samples for foreground and background. Our design of an objective function forces those foreground and background samples to be picked that are generated from well-separated distributions. Comparison on the dataset at and evaluation by www.alphamatting.com shows that the proposed method ranks first in terms of error measures used in the website.
3 0.54243237 453 cvpr-2013-Video Editing with Temporal, Spatial and Appearance Consistency
Author: Xiaojie Guo, Xiaochun Cao, Xiaowu Chen, Yi Ma
Abstract: Given an area of interest in a video sequence, one may want to manipulate or edit the area, e.g. remove occlusions from or replace with an advertisement on it. Such a task involves three main challenges including temporal consistency, spatial pose, and visual realism. The proposed method effectively seeks an optimal solution to simultaneously deal with temporal alignment, pose rectification, as well as precise recovery of the occlusion. To make our method applicable to long video sequences, we propose a batch alignment method for automatically aligning and rectifying a small number of initial frames, and then show how to align the remaining frames incrementally to the aligned base images. From the error residual of the robust alignment process, we automatically construct a trimap of the region for each frame, which is used as the input to alpha matting methods to extract the occluding foreground. Experimental results on both simulated and real data demonstrate the accurate and robust performance of our method.
4 0.49393094 22 cvpr-2013-A Non-parametric Framework for Document Bleed-through Removal
Author: Róisín Rowley-Brooke, François Pitié, Anil Kokaram
Abstract: This paper presents recent work on a new framework for non-blind document bleed-through removal. The framework includes image preprocessing to remove local intensity variations, pixel region classification based on a segmentation of the joint recto-verso intensity histogram and connected component analysis on the subsequent image labelling. Finally restoration of the degraded regions is performed using exemplar-based image inpainting. The proposed method is evaluated visually and numerically on a freely available database of 25 scanned manuscript image pairs with ground truth, and is shown to outperform recent non-blind bleed-through removal techniques.
5 0.43961 352 cvpr-2013-Recovering Stereo Pairs from Anaglyphs
Author: Armand Joulin, Sing Bing Kang
Abstract: An anaglyph is a single image created by selecting complementary colors from a stereo color pair; the user can perceive depth by viewing it through color-filtered glasses. We propose a technique to reconstruct the original color stereo pair given such an anaglyph. We modified SIFT-Flow and use it to initially match the different color channels across the two views. Our technique then iteratively refines the matches, selects the good matches (which defines the “anchor” colors), and propagates the anchor colors. We use a diffusion-based technique for the color propagation, and added a step to suppress unwanted colors. Results on a variety of inputs demonstrate the robustness of our technique. We also extended our method to anaglyph videos by using optic flow between time frames.
6 0.43143904 55 cvpr-2013-Background Modeling Based on Bidirectional Analysis
8 0.38527414 327 cvpr-2013-Pattern-Driven Colorization of 3D Surfaces
9 0.36931738 148 cvpr-2013-Ensemble Video Object Cut in Highly Dynamic Scenes
10 0.35809034 263 cvpr-2013-Learning the Change for Automatic Image Cropping
11 0.34782025 450 cvpr-2013-Unsupervised Joint Object Discovery and Segmentation in Internet Images
12 0.3397857 130 cvpr-2013-Discriminative Color Descriptors
13 0.30231747 261 cvpr-2013-Learning by Associating Ambiguously Labeled Images
14 0.29551864 193 cvpr-2013-Graph Transduction Learning with Connectivity Constraints with Application to Multiple Foreground Cosegmentation
15 0.26303798 171 cvpr-2013-Fast Trust Region for Segmentation
16 0.2391507 10 cvpr-2013-A Fully-Connected Layered Model of Foreground and Background Flow
17 0.23679945 410 cvpr-2013-Specular Reflection Separation Using Dark Channel Prior
18 0.2319216 177 cvpr-2013-FrameBreak: Dramatic Image Extrapolation by Guided Shift-Maps
19 0.22503538 210 cvpr-2013-Illumination Estimation Based on Bilayer Sparse Coding
20 0.22435011 20 cvpr-2013-A New Model and Simple Algorithms for Multi-label Mumford-Shah Problems
topicId topicWeight
[(10, 0.083), (16, 0.043), (26, 0.038), (28, 0.012), (33, 0.262), (52, 0.142), (53, 0.177), (67, 0.037), (69, 0.05), (87, 0.052)]
simIndex simValue paperId paperTitle
same-paper 1 0.84725583 211 cvpr-2013-Image Matting with Local and Nonlocal Smooth Priors
Author: Xiaowu Chen, Dongqing Zou, Steven Zhiying Zhou, Qinping Zhao, Ping Tan
Abstract: In this paper we propose a novel alpha matting method with local and nonlocal smooth priors. We observe that the manifold preserving editing propagation [4] essentially introduced a nonlocal smooth prior on the alpha matte. This nonlocal smooth prior and the well-known local smooth prior from the matting Laplacian complement each other. So we combine them with a simple data term from color sampling in a graph model for natural image matting. Our method has a closed-form solution and can be solved efficiently. Compared with the state-of-the-art methods, our method produces more accurate results according to the evaluation on standard benchmark datasets.
Author: Luping Zhou, Lei Wang, Lingqiao Liu, Philip Ogunbona, Dinggang Shen
Abstract: Analyzing brain networks from neuroimages is becoming a promising approach in identifying novel connectivitybased biomarkers for the Alzheimer’s disease (AD). In this regard, brain “effective connectivity ” analysis, which studies the causal relationship among brain regions, is highly challenging and of many research opportunities. Most of the existing works in this field use generative methods. Despite their success in data representation and other important merits, generative methods are not necessarily discriminative, which may cause the ignorance of subtle but critical disease-induced changes. In this paper, we propose a learning-based approach that integrates the benefits of generative and discriminative methods to recover effective connectivity. In particular, we employ Fisher kernel to bridge the generative models of sparse Bayesian networks (SBN) and the discriminative classifiers of SVMs, and convert the SBN parameter learning to Fisher kernel learning via minimizing a generalization error bound of SVMs. Our method is able to simultaneously boost the discriminative power of both the generative SBN models and the SBN-induced SVM classifiers via Fisher kernel. The proposed method is tested on analyzing brain effective connectivity for AD from ADNI data, and demonstrates significant improvements over the state-of-the-art work.
3 0.82882696 453 cvpr-2013-Video Editing with Temporal, Spatial and Appearance Consistency
Author: Xiaojie Guo, Xiaochun Cao, Xiaowu Chen, Yi Ma
Abstract: Given an area of interest in a video sequence, one may want to manipulate or edit the area, e.g. remove occlusions from or replace with an advertisement on it. Such a task involves three main challenges including temporal consistency, spatial pose, and visual realism. The proposed method effectively seeks an optimal solution to simultaneously deal with temporal alignment, pose rectification, as well as precise recovery of the occlusion. To make our method applicable to long video sequences, we propose a batch alignment method for automatically aligning and rectifying a small number of initial frames, and then show how to align the remaining frames incrementally to the aligned base images. From the error residual of the robust alignment process, we automatically construct a trimap of the region for each frame, which is used as the input to alpha matting methods to extract the occluding foreground. Experimental results on both simulated and real data demonstrate the accurate and robust performance of our method.
4 0.82489455 413 cvpr-2013-Story-Driven Summarization for Egocentric Video
Author: Zheng Lu, Kristen Grauman
Abstract: We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video subshots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a randomwalk based metric of influence between subshots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary ’s diversity or representativeness, ours explicitly accounts for how one sub-event “leads to ” another—which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.
5 0.82369113 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras
Author: Donald G. Dansereau, Oscar Pizarro, Stefan B. Williams
Abstract: Plenoptic cameras are gaining attention for their unique light gathering and post-capture processing capabilities. We describe a decoding, calibration and rectification procedurefor lenselet-basedplenoptic cameras appropriatefor a range of computer vision applications. We derive a novel physically based 4D intrinsic matrix relating each recorded pixel to its corresponding ray in 3D space. We further propose a radial distortion model and a practical objective function based on ray reprojection. Our 15-parameter camera model is of much lower dimensionality than camera array models, and more closely represents the physics of lenselet-based cameras. Results include calibration of a commercially available camera using three calibration grid sizes over five datasets. Typical RMS ray reprojection errors are 0.0628, 0.105 and 0.363 mm for 3.61, 7.22 and 35.1 mm calibration grids, respectively. Rectification examples include calibration targets and real-world imagery.
6 0.81643057 216 cvpr-2013-Improving Image Matting Using Comprehensive Sampling Sets
8 0.81143075 288 cvpr-2013-Modeling Mutual Visibility Relationship in Pedestrian Detection
9 0.80741286 44 cvpr-2013-Area Preserving Brain Mapping
10 0.80350137 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features
11 0.80103391 63 cvpr-2013-Binary Code Ranking with Weighted Hamming Distance
12 0.78066427 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases
13 0.7800104 194 cvpr-2013-Groupwise Registration via Graph Shrinkage on the Image Manifold
14 0.77994412 384 cvpr-2013-Segment-Tree Based Cost Aggregation for Stereo Matching
15 0.77984113 425 cvpr-2013-Tensor-Based High-Order Semantic Relation Transfer for Semantic Scene Segmentation
16 0.77950054 329 cvpr-2013-Perceptual Organization and Recognition of Indoor Scenes from RGB-D Images
17 0.77944845 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras
18 0.77921921 355 cvpr-2013-Representing Videos Using Mid-level Discriminative Patches
19 0.77890587 330 cvpr-2013-Photometric Ambient Occlusion
20 0.77888554 250 cvpr-2013-Learning Cross-Domain Information Transfer for Location Recognition and Clustering