iccv iccv2013 iccv2013-329 knowledge-graph by maker-knowledge-mining

329 iccv-2013-Progressive Multigrid Eigensolvers for Multiscale Spectral Segmentation


Source: pdf

Author: Michael Maire, Stella X. Yu

Abstract: We reexamine the role of multiscale cues in image segmentation using an architecture that constructs a globally coherent scale-space output representation. This characteristic is in contrast to many existing works on bottom-up segmentation, which prematurely compress information into a single scale. The architecture is a standard extension of Normalized Cuts from an image plane to an image pyramid, with cross-scale constraints enforcing consistency in the solution while allowing emergence of coarse-to-fine detail. We observe that multiscale processing, in addition to improving segmentation quality, offers a route by which to speed computation. We make a significant algorithmic advance in the form of a custom multigrid eigensolver for constrained Angular Embedding problems possessing coarse-to-fine structure. Multiscale Normalized Cuts is a special case. Our solver builds atop recent results on randomized matrix approximation, using a novel interpolation operation to mold its computational strategy according to cross-scale constraints in the problem definition. Applying our solver to multiscale segmentation problems demonstrates speedup by more than an order of magnitude. This speedup is at the algorithmic level and carries over to any implementation target.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 We reexamine the role of multiscale cues in image segmentation using an architecture that constructs a globally coherent scale-space output representation. [sent-4, score-0.369]

2 We observe that multiscale processing, in addition to improving segmentation quality, offers a route by which to speed computation. [sent-7, score-0.369]

3 We make a significant algorithmic advance in the form of a custom multigrid eigensolver for constrained Angular Embedding problems possessing coarse-to-fine structure. [sent-8, score-0.945]

4 Our solver builds atop recent results on randomized matrix approximation, using a novel interpolation operation to mold its computational strategy according to cross-scale constraints in the problem definition. [sent-10, score-0.306]

5 Applying our solver to multiscale segmentation problems demonstrates speedup by more than an order of magnitude. [sent-11, score-0.532]

6 The spectral relaxation of the Normalized Cuts problem [14] appears as the driving method in much research on image segmentation [5, 13, 17, 15, 1, 11]. [sent-15, score-0.178]
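
For concreteness, the relaxation referred to here reduces to an eigenvector computation on a normalized affinity matrix. The following is a minimal, self-contained sketch of that baseline step (not the paper's constrained multiscale solver); the function name and the use of scipy's eigsh are illustrative choices.

```python
# Minimal sketch of the Normalized Cuts spectral relaxation [14]: the relaxed
# cut is read off the leading eigenvectors of D^{-1/2} W D^{-1/2}, where W is
# a sparse affinity matrix and D its diagonal degree matrix.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def ncuts_eigenvectors(W, m):
    """Return m leading generalized eigenvectors of (D - W) x = lambda D x."""
    d = np.asarray(W.sum(axis=1)).ravel()              # node degrees
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    M = D_inv_sqrt @ W @ D_inv_sqrt                    # normalized affinity
    _, vecs = eigsh(M, k=m, which='LA')                # largest eigenvalues of M
    return D_inv_sqrt @ vecs                           # map back: x = D^{-1/2} z
```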

7 The importance of exploiting multiscale cues to generate high quality segmentations is widely recognized. [sent-16, score-0.288]

8 Yu [15] formulates segmentation as clustering on the average of multiscale affinity matrices. [sent-17, score-0.199]

9 [1] derive entries of a single affinity matrix from a combination of multiscale features. [sent-19, score-0.423]

10 Both of these systems summarize multiscale cues and optimize a single output that best explains the summary. [sent-20, score-0.288]

11 [5] present an alternative, suggesting multirange or multiscale approaches which couple sparse affinity matrices at coarse and fine levels of an image pyramid using constraints [17]. [sent-22, score-0.714]

12 Though this work lacks the sophisticated post-processing steps found in other spectral segmentation pipelines, such as gPb [1], it offers the insight that scale-space representation should be preserved throughout the clustering procedure. [sent-23, score-0.217]

13 Our key observation is that multigrid and multiscale techniques are complementary and should be intertwined, producing fast high-quality segmentation algorithms. [sent-29, score-0.914]

14 Unlike generic multigrid methods [4, 2, 8], we explicitly address constrained problems [17, 5] with the intuition that the constraints themselves can guide the schedule of computations within the solver. [sent-30, score-0.59]

15 Sections 2 and 3 present eigensolver technical details in the more general setting of Angular Embedding, an extension of Normalized Cuts to handle both grouping and ordering relationships [16], recently used in simultaneously resolving segmentation, figure-ground, and object detection [9, 10]. [sent-32, score-0.371]

16 We develop our solver within the framework of randomized matrix approximation [7], a new mathematical technique which naturally fits our interpolation strategy. [sent-33, score-0.261]

17 Section 4 demonstrates speedup results as well as segmentation improvements, in the form of high quality contours, achieved by combining our efficient solver with the best aspects of previous systems [5, 1]. [sent-35, score-0.244]

18 Figure 1 panel labels (garbled in extraction): Multiscale; Progressive Multigrid Multiscale; Transformed Progressive Multigrid Multiscale. [sent-54, score-0.195]

19 Multigrid techniques exploit coarse-to-fine structure within a spectral clustering problem by adapting the optimization routine used to solve it. [sent-56, score-0.136]

20 Multirange or multiscale techniques instead adapt the problem definition to explicitly encode such structure. [sent-57, score-0.288]

21 We combine both approaches, producing a progressive multigrid algorithm for the solver. [sent-58, score-0.664]

22 Top Left: Consider a sparse matrix W defining pairwise affinities between nodes in a graph (e. [sent-59, score-0.162]

23 Generic multigrid eigensolvers [4, 2, 8], applied to the corresponding Normalized Cuts eigenproblem [14], coarsen the problem by subsampling nodes and interpolating weights. [sent-62, score-0.786]
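
As a reference point for this generic coarsening step, the sketch below subsamples nodes via a fine-to-coarse parent assignment and interpolates weights with a Galerkin-style product; the parent-map prolongation is an illustrative choice, not the specific scheme of [4, 2, 8].

```python
# Sketch of generic graph coarsening: subsample nodes (via a parent map) and
# interpolate weights onto the coarse graph with a Galerkin product P^T W P.
import numpy as np
import scipy.sparse as sp

def coarsen_affinity(W, parent):
    """W: n x n sparse affinity; parent[i]: coarse node owning fine node i."""
    n = W.shape[0]
    n_coarse = int(parent.max()) + 1
    # Prolongation P maps coarse nodes to fine nodes (one parent per fine node).
    P = sp.csr_matrix((np.ones(n), (np.arange(n), parent)), shape=(n, n_coarse))
    return (P.T @ W @ P).tocsr()                       # coarse affinity matrix
```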

24 Solution eigenvectors from iteratively coarsened problems, Ψ(W) and Ψ(Ψ(W)), initialize the solver on the next finer problems, W and Ψ(W), respectively (blue arrows). [sent-63, score-0.314]

25 Top Middle: Multirange [5] simulates the effect of a dense affinity matrix by sampling longer-range affinities on coarser pixel grids (τ(W)) and tying graphs together using constraints (Us, red links) [17]. [sent-64, score-0.307]

26 Top Right: Rather than resampling, a true multiscale approach [5, 15] ties together level-dependent information, in the form of different affinities, W0, W1, W2, on each subgraph. [sent-65, score-0.288]

27 Bottom: Our custom eigensolver maps a multiscale constrained spectral clustering problem onto a progressive multigrid computation strategy. [sent-66, score-1.488]

28 Unlike generic multigrid methods, the constraints from the problem definition shape computation within the solver. [sent-67, score-0.59]

29 Constraints both tie levels in the expanded problem and determine interpolation functions ΦUs for moving work between levels. [sent-69, score-0.09]

30 When dropping a level, say W0, we can optionally use the constraints to fold it into the next coarsest level, substituting transformed affinity τU (W0, W1) for W1. [sent-70, score-0.151]

31 We build these vectors progressively over scale, at each step applying diffusion and projection based on the graph weights and constraints, and then checking whether the r vectors lie in the span of the l vectors. [sent-75, score-0.144]

32 We first initialize (orange) the coarse-scale n1×(l + r) vectors (top block) with random Gaussian noise, and follow with diffusion and projection, repeating until convergence (pink). [sent-76, score-0.255]

33 We then use the top block to initialize the (bottom-block) fine-scale n0×(l+r) vectors via interpolation (blue) defined by inter-scale constraints. [sent-77, score-0.131]

34 This is followed again by diffusion, projection, and checking the entire matrix A. [sent-78, score-0.09]

35 We apply diffusion to the l-block before collapsing it to a core l×l matrix B. [sent-80, score-0.113]

36 We extract m eigenvectors of this much smaller matrix B and then interpolate back to recover the desired m eigenvectors of length nr0. [sent-81, score-0.484]
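
The recovery step described in this sentence is essentially a Rayleigh-Ritz projection; a minimal sketch under that reading follows, with the function name, the dense linear algebra, and the descending-eigenvalue convention all illustrative assumptions.

```python
# Sketch of the final recovery: collapse the operator onto the approximate
# range A (n x l, orthonormal), diagonalize the small core matrix B, and lift
# the small eigenvectors back to full length by multiplying with A.
import numpy as np

def recover_eigenvectors(matvec, A, m):
    """matvec(X) = M @ X; A: n x l orthonormal basis; return m leading eigenpairs."""
    B = A.conj().T @ matvec(A)                         # core l x l matrix
    vals, vecs = np.linalg.eigh(B)                     # eigen-decompose the core
    order = np.argsort(vals)[::-1][:m]                 # keep the m leading ones
    return vals[order], A @ vecs[:, order]             # interpolate back to full length
```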

37 Right: In an equivalent view, A initially lives in a subspace of dimension n1(l + r), where diffusion and projection operations are cheap. [sent-82, score-0.194]

38 Performing most of the work in this subspace before interpolating to deal with the full problem in the larger space gives us a speedup. [sent-83, score-0.094]

39 Skew-symmetric n×n matrix Θ specifies relative ordering relationships between n nodes. [sent-86, score-0.087]

40 The task is to embed nodes into an m-dimensional space, such that location in this embedding space preserves the pairwise relationships. [sent-89, score-0.126]

41 The n×u matrix U specifies u linear constraints that the solution embedding x must satisfy: U∗x = 0, where ∗ denotes complex conjugate transpose. [sent-90, score-0.222]

42 In the case of multiscale segmentation, U will state that each coarse pixel must be consistent with the finer pixels in the scale below it; the coarse pixel’s embedding must be the average of the fine. [sent-91, score-0.56]
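
To make the constraint concrete, the sketch below assembles the cross-scale averaging constraints for a 2×2 pooling between a fine grid and its coarse grid; it builds the u×n matrix U∗ directly (the transpose of the n×u convention above), and the node ordering and pooling pattern are illustrative assumptions.

```python
# Sketch of cross-scale constraints: each coarse pixel's embedding must equal
# the mean of the four fine pixels beneath it, i.e. one row per constraint of
# the form  0.25*(sum of fine children) - (coarse parent) = 0.
import numpy as np
import scipy.sparse as sp

def cross_scale_constraints(h, w):
    """Constraints linking an h x w fine grid to its (h//2) x (w//2) coarse grid.

    Node ordering assumed: the h*w fine pixels first, then the coarse pixels.
    Returns a (u x n) sparse matrix C such that C @ x = 0 encodes the constraints.
    """
    n_fine, n_coarse = h * w, (h // 2) * (w // 2)
    rows, cols, vals = [], [], []
    for cy in range(h // 2):
        for cx in range(w // 2):
            c = cy * (w // 2) + cx                     # constraint / coarse index
            for dy in range(2):
                for dx in range(2):
                    f = (2 * cy + dy) * w + (2 * cx + dx)
                    rows.append(c); cols.append(f); vals.append(0.25)
            rows.append(c); cols.append(n_fine + c); vals.append(-1.0)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n_coarse, n_fine + n_coarse))
```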

43 The multiscale setting upgrades each of C, Θ, U to an array of matrices, C, Θ, U, indexed by level s. [sent-94, score-0.288]

44 Constraints must appear incrementally, associating nodes newly appearing at level s with nodes from coarser levels. [sent-97, score-0.129]

45 Eigensolver: Let Ms = QsPsQs denote the matrix whose leading eigenvectors solve the multiscale AE problem (C, Θ, U) restricted to levels s and coarser. [sent-100, score-0.553]

46 The intuition behind our eigensolver is to interpolate from the eigenvectors of Ms an initial solver state for Ms−1, eventually obtaining the eigenvectors of M0 and thereby solving the unrestricted problem. [sent-101, score-0.899]

47 Figure 3 column labels (garbled in extraction): Image; Multiscale Eigenvectors 2 through 7; 1 coarse iteration: 0.46 sec. [sent-103, score-0.091]

48 20 coarse iterations: 3 sec; 20 coarse, 1 medium: 5 sec; 20 c. [sent-104, score-0.298]

49 Top: Image and leading eigenvectors for multiscale Normalized Cuts applied across a three-level image pyramid with scales linked by constraints. [sent-108, score-0.505]

50 Bottom: Our progressive multigrid solver processes sub-pyramids in coarseto-fine order. [sent-109, score-0.764]

51 The baseline solver immediately starts work on the finest pyramid, taking far longer to converge (760 sec vs 27 sec). [sent-110, score-0.2]

52 To accomplish this, we borrow the randomized subspace iteration procedure from recent results concerning probabilistic algorithms for constructing matrix decompositions [7]. [sent-111, score-0.138]
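
For reference, a bare-bones version of the borrowed procedure looks like the following; the paper's solver adds convergence-test columns, constraint projection, and the cross-scale interpolation step, all of which are omitted here, and the function names are illustrative.

```python
# Minimal randomized subspace iteration in the spirit of [7]: start from a
# random Gaussian block, repeatedly apply the operator and re-orthonormalize,
# so the columns converge to a basis for the leading invariant subspace.
import numpy as np

def subspace_iteration(matvec, n, l, iters, seed=0):
    """matvec(X) = M @ X for an n x n operator M; return an n x l orthonormal basis."""
    rng = np.random.default_rng(seed)
    A, _ = np.linalg.qr(matvec(rng.standard_normal((n, l))))   # random init + reorth
    for _ in range(iters):
        A, _ = np.linalg.qr(matvec(A))                         # "diffusion" + reorth
    return A
```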

53 We add a novel interpolation step to randomized subspace iteration in order to initialize As−1 from As. [sent-117, score-0.177]

54 As coarse and fine subproblems share the coarse levels, initialization is a copy operation on these coarse levels and an interpolation for the finest level (see dotted and solid blue arrows in Figure 1). [sent-118, score-0.506]

55 U[n1]∗x1 + U[n0]∗x0 = 0 (5), where U[n1] is the n1×u upper block of U with values for the n1 coarse nodes and U[n0] is the n0×u lower block with values for the n0 fine nodes. [sent-125, score-0.247]

56 Given only x1, solving this underconstrained equation for x0 in the least squares sense allows us to interpolate a fine representation from a coarse one. [sent-127, score-0.247]
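
A direct way to read this interpolation is as a minimum-norm least-squares solve of the constraint split U[n1]∗x1 + U[n0]∗x0 = 0 for the fine block x0; the dense sketch below follows that reading, with variable names as assumptions.

```python
# Sketch of coarse-to-fine interpolation: given the coarse part x1 of the
# solution, solve U[n0]* x0 = -U[n1]* x1 for x0 in the least-squares sense.
import numpy as np

def interpolate_fine(U_coarse, U_fine, x1):
    """U_coarse: (n1, u) block of U; U_fine: (n0, u) block; x1: (n1, k) coarse state."""
    rhs = -(U_coarse.conj().T @ x1)                     # u x k right-hand side
    x0, *_ = np.linalg.lstsq(U_fine.conj().T, rhs, rcond=None)
    return x0                                           # (n0, k) interpolated fine state
```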

57 We apply precisely the same interpolation procedure during coarse-to-fine subspace iteration, with x replaced by the appropriate subblock of A. [sent-128, score-0.089]

58 Lines 4-16 extract the active subproblem; here Diag(·) places its matrix arguments on the block diagonal of a larger matrix. [sent-133, score-0.092]

59 Lines 17-27 initialize A or interpolate from a coarser level. [sent-134, score-0.174]

60 Algorithm 1 (Matrix approximation via subspace iteration): Given functions f, g such that f(X) = MX and g(X) = M∗X for some n×n matrix M, compute an n×(l + r) matrix A whose leftmost l columns approximate the range of M and rightmost r columns test convergence [7]. [sent-136, score-0.254]

61 1: function MXAPPROXINIT(f, n, l, r); 2: draw an n×(l + r) Gaussian matrix Ω ∈ C^(n×(l+r)); 3: A ← MXAPPROXREORTH(f(Ω), l, r); 4: return A. Perform a single update to A to improve the approximation. [sent-138, score-0.137]

62 Also return Â, the left l columns of A just before the final update. [sent-142, score-0.146]

63 Ap ← the leftmost l columns of A; 27: Aq ← A[0:(n−1), l:(l+r−1)]; 28: return (Ap, Aq). [sent-155, score-0.146]

64 Algorithm 2 (Progressive multigrid Angular Embedding): Compute the m leading eigenvectors V and eigenvalues Λ of a multiscale constrained Angular Embedding problem. [sent-169, score-1.008]

65 Closing steps of Algorithm 2 (fragments garbled by extraction): the nr0×m eigenvectors; convergence test MXAPPROXTEST(Â, Aq); core matrix Ap∗f(Ap); recovery via D^(−1/2)·Ap·V; 39: return (V, Λ). Apply eigensolver diffusion and projection operations. [sent-195, score-0.737]

66 40: function DIFFUSE(Ps, Z): return Ps·Z; 41: function DIFFUSEPROJECT(Ps, Us, Rs, Z); 42: Z ← Z − Us(Rs \ Rs∗ \ Us∗Z); 43: Z ← Ps·Z; 44: Z ← Z − Us(Rs \ Rs∗ \ Us∗Z); 45: return Z. Lines 35-38 solve a trivially small eigenproblem and recover the solution to the original Angular Embedding problem. [sent-196, score-0.249]
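
A dense reading of the DIFFUSEPROJECT routine above, translating the MATLAB-style backslash solves into NumPy, is sketched below; it assumes Rs is an upper-triangular Cholesky factor of Us∗Us, which is how the two solves combine into the projector Z − U(U∗U)⁻¹U∗Z.

```python
# Sketch of DIFFUSEPROJECT: project Z onto the nullspace of the constraints
# (so that U* Z = 0), apply one diffusion step Z <- P Z, then re-project.
import numpy as np

def diffuse_project(P, U, R, Z):
    """P: n x n diffusion operator; U: n x u constraints; R: Cholesky factor of U* U."""
    def project(Z):
        # Z - U (U* U)^{-1} U* Z, via two solves against the factor R (R* R = U* U)
        y = np.linalg.solve(R.conj().T, U.conj().T @ Z)
        return Z - U @ np.linalg.solve(R, y)
    Z = project(Z)      # enforce constraints before diffusion
    Z = P @ Z           # diffusion step (line 43)
    return project(Z)   # re-enforce constraints (line 44)
```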

67 Algorithm 3 (Weight folding for transformed multigrid AE): Compute the degree matrix D and weight matrix W that are active for the pyramid based at level s when solving the multiscale AE problem (C, Θ, U) using transformed multigrid. [sent-204, score-1.092]

68 Ignoring our multigrid strategy, using randomized matrix approximation techniques for eigenproblems has the same computational complexity as traditional eigensolvers, but with a slightly larger constant factor. [sent-223, score-0.706]

69 We inherit this property; each step of our eigensolver operates simultaneously across at least m vectors. [sent-225, score-0.399]

70 Efficiently parallelizing a Lanczos eigensolver for running the gPb algorithm [1] on a GPU requires assuming the affinity matrix has a repeating stencil structure [3]. [sent-226, score-0.506]

71 For any fixed computational cost, running our solver using a progressive multigrid strategy produces significantly lower error than running it on the multiscale problem without using multigrid. [sent-229, score-1.052]

72 Jumps in error (arrows) occur as the multigrid solver switches levels. [sent-230, score-0.645]

73 Here, cost is the total amount of computation required for diffusion and projection operations, adjusted for the fact that operations are cheaper on coarser subproblems when using progressive multigrid. [sent-231, score-0.37]

74 Note the log scale; multigrid is 31 times faster by this measure. [sent-232, score-0.545]

75 Diffusion and projection operations applied to refine the matrix approximation dominate the running time of our solver in both baseline and progressive multigrid modes. [sent-233, score-0.871]

76 Even with a tighter error tolerance, the multigrid strategy offers an 11× speedup over the baseline. [sent-234, score-0.608]

77 Our eigensolver is parallelizable without any restrictions on the sparsity patterns in the problem definition. [sent-237, score-0.371]

78 As in [1], we take gradients of eigenvectors to turn the result of spectral clustering into a spectral probability of boundary (sPb) measure. [sent-242, score-0.408]
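
The sPb construction follows the recipe of [1]: sum per-eigenvector gradient magnitudes, down-weighted by the square root of the corresponding eigenvalue. The sketch below is a single-scale illustration of that recipe on a grid (the paper's version lives on a pyramid); the eigenvalue weighting and the skipping of the constant eigenvector follow [1], while the names are illustrative.

```python
# Sketch of a spectral probability of boundary (sPb) map: reshape each leading
# eigenvector to the image grid, take its gradient magnitude, and sum the maps
# weighted by 1 / sqrt(eigenvalue), skipping the constant first eigenvector.
import numpy as np

def spectral_pb(eigvecs, eigvals, shape):
    """eigvecs: (h*w, m) array of eigenvectors; eigvals: (m,); shape: (h, w)."""
    spb = np.zeros(shape)
    for k in range(1, eigvecs.shape[1]):
        v = eigvecs[:, k].reshape(shape)
        gy, gx = np.gradient(v)
        spb += np.hypot(gx, gy) / np.sqrt(max(eigvals[k], 1e-12))
    return spb
```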

79 Our eigenvectors live on an image pyramid, rather than a grid, so we end up with a set of consistent coarse-to-fine boundaries across three scales. [sent-243, score-0.175]

80 Comparing our results (center columns) to the original sPb (right) shows the advantage of preserving multiscale information throughout the spectral clustering stage. [sent-244, score-0.424]

81 Row 2: Our multiscale version correctly separates the right side of the bird’s head from the background; the corresponding original sPb edge is extremely weak. [sent-246, score-0.288]

82 Rows 5-7: Foreground objects pop out strongly at coarse scale; their boundaries are more salient across scales. [sent-249, score-0.091]

83 Experiments: We apply our eigensolver to image segmentation problems, defined on a multilevel pyramid [5], with scale-dependent affinities on each level. [sent-252, score-0.591]

84 Unlike [1], we refrain from collapsing the affinity matrix into a single scale before spectral clustering. [sent-255, score-0.264]

85 We preserve multiscale information through the entire segmentation pipeline. [sent-256, score-0.369]

86 Figure 3 provides a visual comparison of eigensolver convergence behavior on an example image segmentation problem. [sent-257, score-0.496]

87 Figures 4 and 5 quantify the speedup in terms of both counting operations and recording actual CPU runtime. [sent-258, score-0.114]

88 Note that the solver spends the vast majority of time on applying the inherently m-way parallel diffusion and projection operations to matrix A, as shown by the yellow blocks in Figure 5. [sent-259, score-0.317]

89 Though our eigensolver is amenable to parallelization, our experiments are all on a serial implementation in MATLAB and observed speedups are solely due to our use of constraints to shape multigrid computation. [sent-260, score-0.988]

90 Progressive multigrid gives more than a factor of 10 speedup over the baseline. [sent-261, score-0.608]

91 All benchmarks are for finding 32 eigenvectors and are averaged over the 100 images of the Berkeley segmentation dataset test set [12]. [sent-262, score-0.256]

92 Figure 6 compares the output from our multiscale spectral clustering problem to the single scale currently used in gPb. [sent-263, score-0.424]

93 Clear differences hint at future applications of our multiscale pipeline to improving image segmentation quality. [sent-264, score-0.369]

94 Conclusion: Our novel eigensolver merges constrained spectral clustering with progressive multigrid computation. [sent-266, score-1.171]

95 We demonstrate large speedups in solving multiscale image segmentation problems. [sent-267, score-0.396]

96 Our eigensolver is applicable to many problems with coarse-to-fine structure and our segmentation framework shows the benefits of a full multiscale pipeline. [sent-268, score-0.74]

97 A generalized eigensolver based on smoothed aggregation (GES-SA) for initializing smoothed aggregation multigrid (SA). [sent-287, score-0.916]

98 Spectral segmentation with multiscale graph decomposition. [sent-309, score-0.369]

99 Efficient multilevel eigensolvers with applications to data analysis tasks. [sent-330, score-0.114]

100 Object detection and segmentation from joint embedding of parts and pixels. [sent-341, score-0.171]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('multigrid', 0.545), ('eigensolver', 0.371), ('multiscale', 0.288), ('eigenvectors', 0.175), ('aq', 0.161), ('spb', 0.155), ('nrs', 0.131), ('progressive', 0.119), ('mxapproxreorth', 0.109), ('mxapproxsplit', 0.109), ('solver', 0.1), ('spectral', 0.097), ('coarse', 0.091), ('embedding', 0.09), ('eigenproblem', 0.087), ('eigensolvers', 0.087), ('multirange', 0.087), ('raions', 0.087), ('diag', 0.086), ('rp', 0.084), ('ps', 0.082), ('diffusion', 0.081), ('return', 0.081), ('segmentation', 0.081), ('cuts', 0.079), ('affinity', 0.079), ('interpolate', 0.078), ('rns', 0.077), ('angular', 0.077), ('affinities', 0.07), ('sec', 0.069), ('inccholesky', 0.065), ('lcolumns', 0.065), ('ltiscale', 0.065), ('mxapproxupdate', 0.065), ('speedup', 0.063), ('smax', 0.058), ('coarser', 0.057), ('interpolation', 0.056), ('matrix', 0.056), ('rs', 0.055), ('ap', 0.054), ('arbel', 0.054), ('operations', 0.051), ('folding', 0.051), ('randomized', 0.049), ('fine', 0.048), ('constraints', 0.045), ('convergence', 0.044), ('csmax', 0.044), ('diffuseproject', 0.044), ('imagemultiscale', 0.044), ('mxapproxinit', 0.044), ('mxapproxrefine', 0.044), ('mxapproxtest', 0.044), ('psz', 0.044), ('qspsqs', 0.044), ('stella', 0.044), ('usmax', 0.044), ('pyramid', 0.042), ('aez', 0.042), ('initialize', 0.039), ('clustering', 0.039), ('dp', 0.038), ('berkeley', 0.037), ('block', 0.036), ('nystr', 0.036), ('nodes', 0.036), ('ns', 0.035), ('subproblem', 0.035), ('wx', 0.034), ('levels', 0.034), ('checking', 0.034), ('subproblems', 0.033), ('subspace', 0.033), ('normalized', 0.033), ('collapsing', 0.032), ('maire', 0.032), ('arrows', 0.031), ('specifies', 0.031), ('urs', 0.031), ('interpolating', 0.031), ('sharon', 0.031), ('finest', 0.031), ('gpb', 0.031), ('nr', 0.031), ('us', 0.03), ('ms', 0.03), ('fowlkes', 0.03), ('ae', 0.03), ('projection', 0.029), ('custom', 0.029), ('inherit', 0.028), ('parallelization', 0.028), ('qs', 0.027), ('speedups', 0.027), ('yu', 0.027), ('transformed', 0.027), ('med', 0.027), ('multilevel', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 329 iccv-2013-Progressive Multigrid Eigensolvers for Multiscale Spectral Segmentation

Author: Michael Maire, Stella X. Yu

Abstract: We reexamine the role of multiscale cues in image segmentation using an architecture that constructs a globally coherent scale-space output representation. This characteristic is in contrast to many existing works on bottom-up segmentation, which prematurely compress information into a single scale. The architecture is a standard extension of Normalized Cuts from an image plane to an image pyramid, with cross-scale constraints enforcing consistency in the solution while allowing emergence of coarse-to-fine detail. We observe that multiscale processing, in addition to improving segmentation quality, offers a route by which to speed computation. We make a significant algorithmic advance in the form of a custom multigrid eigensolver for constrained Angular Embedding problems possessing coarse-to-fine structure. Multiscale Normalized Cuts is a special case. Our solver builds atop recent results on randomized matrix approximation, using a novel interpolation operation to mold its computational strategy according to cross-scale constraints in the problem definition. Applying our solver to multiscale segmentation problems demonstrates speedup by more than an order of magnitude. This speedup is at the algorithmic level and carries over to any implementation target.

2 0.074996971 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

Author: Ali Elqursh, Ahmed Elgammal

Abstract: The vast majority of work on motion segmentation adopts the affine camera model due to its simplicity. Under the affine model, the motion segmentation problem becomes that of subspace separation. Due to this assumption, such methods are mainly offline and exhibit poor performance when the assumption is not satisfied. This is made evident in state-of-the-art methods that relax this assumption by using piecewise affine spaces and spectral clustering techniques to achieve better results. In this paper, we formulate the problem of motion segmentation as that of manifold separation. We then show how label propagation can be used in an online framework to achieve manifold separation. The performance of our framework is evaluated on a benchmark dataset and achieves competitive performance while being online.

3 0.070161879 447 iccv-2013-Volumetric Semantic Segmentation Using Pyramid Context Features

Author: Jonathan T. Barron, Mark D. Biggin, Pablo Arbeláez, David W. Knowles, Soile V.E. Keranen, Jitendra Malik

Abstract: We present an algorithm for the per-voxel semantic segmentation of a three-dimensional volume. At the core of our algorithm is a novel “pyramid context” feature, a descriptive representation designed such that exact per-voxel linear classification can be made extremely efficient. This feature not only allows for efficient semantic segmentation but enables other aspects of our algorithm, such as novel learned features and a stacked architecture that can reason about self-consistency. We demonstrate our technique on 3D fluorescence microscopy data of Drosophila embryos for which we are able to produce extremely accurate semantic segmentations in a matter of minutes, and for which other algorithms fail due to the size and high-dimensionality of the data, or due to the difficulty of the task.

4 0.068160705 232 iccv-2013-Latent Space Sparse Subspace Clustering

Author: Vishal M. Patel, Hien Van Nguyen, René Vidal

Abstract: We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on the kernel methods are presented. One of the main advantages of our method is that it is computationally efficient as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than the competitive state-of-the-art subspace clustering methods.

5 0.065191761 342 iccv-2013-Real-Time Solution to the Absolute Pose Problem with Unknown Radial Distortion and Focal Length

Author: Zuzana Kukelova, Martin Bujnak, Tomas Pajdla

Abstract: The problem of determining the absolute position and orientation of a camera from a set of 2D-to-3D point correspondences is one of the most important problems in computer vision with a broad range of applications. In this paper we present a new solution to the absolute pose problem for a camera with unknown radial distortion and unknown focal length from five 2D-to-3D point correspondences. Our new solver is numerically more stable, more accurate, and significantly faster than the existing state-of-the-art minimal four-point absolute pose solvers for this problem. Moreover, our solver results in fewer solutions and can handle larger radial distortions. The new solver is straightforward and uses only simple concepts from linear algebra. Therefore it is simpler than the state-of-the-art Gröbner basis solvers. We compare our new solver with the existing state-of-the-art solvers and show its usefulness on synthetic and real datasets.

6 0.06051065 122 iccv-2013-Distributed Low-Rank Subspace Segmentation

7 0.059812866 208 iccv-2013-Image Co-segmentation via Consistent Functional Maps

8 0.059193794 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

9 0.058040671 404 iccv-2013-Structured Forests for Fast Edge Detection

10 0.055579703 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization

11 0.051890135 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

12 0.051874168 140 iccv-2013-Elastic Net Constraints for Shape Matching

13 0.051203363 379 iccv-2013-Semantic Segmentation without Annotating Segments

14 0.05089727 33 iccv-2013-A Unified Video Segmentation Benchmark: Annotation, Metrics and Analysis

15 0.050431732 110 iccv-2013-Detecting Curved Symmetric Parts Using a Deformable Disc Model

16 0.050200913 74 iccv-2013-Co-segmentation by Composition

17 0.049086139 204 iccv-2013-Human Attribute Recognition by Rich Appearance Dictionary

18 0.048794061 76 iccv-2013-Coarse-to-Fine Semantic Video Segmentation Using Supervoxel Trees

19 0.047128942 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation

20 0.045957968 225 iccv-2013-Joint Segmentation and Pose Tracking of Human in Natural Videos


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.12), (1, -0.023), (2, -0.013), (3, -0.011), (4, -0.028), (5, 0.056), (6, -0.028), (7, 0.05), (8, 0.056), (9, -0.05), (10, 0.011), (11, 0.025), (12, -0.042), (13, -0.005), (14, -0.002), (15, 0.037), (16, 0.01), (17, -0.008), (18, -0.017), (19, 0.005), (20, 0.016), (21, 0.035), (22, -0.044), (23, -0.003), (24, -0.042), (25, 0.012), (26, 0.006), (27, 0.02), (28, -0.012), (29, -0.01), (30, 0.007), (31, -0.001), (32, 0.021), (33, 0.018), (34, -0.009), (35, 0.001), (36, -0.016), (37, -0.043), (38, -0.012), (39, 0.004), (40, 0.04), (41, 0.014), (42, -0.02), (43, 0.008), (44, 0.015), (45, -0.013), (46, -0.016), (47, -0.006), (48, -0.058), (49, -0.074)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94462425 329 iccv-2013-Progressive Multigrid Eigensolvers for Multiscale Spectral Segmentation

Author: Michael Maire, Stella X. Yu

Abstract: We reexamine the role of multiscale cues in image segmentation using an architecture that constructs a globally coherent scale-space output representation. This characteristic is in contrast to many existing works on bottom-up segmentation, which prematurely compress information into a single scale. The architecture is a standard extension of Normalized Cuts from an image plane to an image pyramid, with cross-scale constraints enforcing consistency in the solution while allowing emergence of coarse-to-fine detail. We observe that multiscale processing, in addition to improving segmentation quality, offers a route by which to speed computation. We make a significant algorithmic advance in the form of a custom multigrid eigensolver for constrained Angular Embedding problems possessing coarse-to-fine structure. Multiscale Normalized Cuts is a special case. Our solver builds atop recent results on randomized matrix approximation, using a novel interpolation operation to mold its computational strategy according to cross-scale constraints in the problem definition. Applying our solver to multiscale segmentation problems demonstrates speedup by more than an order of magnitude. This speedup is at the algorithmic level and carries over to any implementation target.

2 0.7335009 122 iccv-2013-Distributed Low-Rank Subspace Segmentation

Author: Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I. Jordan

Abstract: Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data. Low-Rank Representation (LRR), a convex formulation of the subspace segmentation problem, is provably and empirically accurate on small problems but does not scale to the massive sizes of modern vision datasets. Moreover, past work aimed at scaling up low-rank matrix factorization is not applicable to LRR given its non-decomposable constraints. In this work, we propose a novel divide-and-conquer algorithm for large-scale subspace segmentation that can cope with LRR's non-decomposable constraints and maintains LRR's strong recovery guarantees. This has immediate implications for the scalability of subspace segmentation, which we demonstrate on a benchmark face recognition dataset and in simulations. We then introduce novel applications of LRR-based subspace segmentation to large-scale semi-supervised learning for multimedia event detection, concept detection, and image tagging. In each case, we obtain state-of-the-art results and order-of-magnitude speed ups.

3 0.69890779 112 iccv-2013-Detecting Irregular Curvilinear Structures in Gray Scale and Color Imagery Using Multi-directional Oriented Flux

Author: Engin Türetken, Carlos Becker, Przemyslaw Glowacki, Fethallah Benmansour, Pascal Fua

Abstract: We propose a new approach to detecting irregular curvilinear structures in noisy image stacks. In contrast to earlier approaches that rely on circular models of the cross-sections, ours allows for the arbitrarily-shaped ones that are prevalent in biological imagery. This is achieved by maximizing the image gradient flux along multiple directions and radii, instead of only two with a unique radius as is usually done. This yields a more complex optimization problem for which we propose a computationally efficient solution. We demonstrate the effectiveness of our approach on a wide range of challenging gray scale and color datasets and show that it outperforms existing techniques, especially on very irregular structures.

4 0.66843724 208 iccv-2013-Image Co-segmentation via Consistent Functional Maps

Author: Fan Wang, Qixing Huang, Leonidas J. Guibas

Abstract: Joint segmentation of image sets has great importance for object recognition, image classification, and image retrieval. In this paper, we aim to jointly segment a set of images starting from a small number of labeled images or none at all. To allow the images to share segmentation information with each other, we build a network that contains segmented as well as unsegmented images, and extract functional maps between connected image pairs based on image appearance features. These functional maps act as general property transporters between the images and, in particular, are used to transfer segmentations. We define and operate in a reduced functional space optimized so that the functional maps approximately satisfy cycle-consistency under composition in the network. A joint optimization framework is proposed to simultaneously generate all segmentation functions over the images so that they both align with local segmentation cues in each particular image, and agree with each other under network transportation. This formulation allows us to extract segmentations even with no training data, but can also exploit such data when available. The collective effect of the joint processing using functional maps leads to accurate information sharing among images and yields superior segmentation results, as shown on the iCoseg, MSRC, and PASCAL data sets.

5 0.65488231 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso

Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan

Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that the highly correlated data which are usually from the same subspace can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they are short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data correlation dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.

6 0.65458107 364 iccv-2013-SGTD: Structure Gradient and Texture Decorrelating Regularization for Image Decomposition

7 0.65001953 432 iccv-2013-Uncertainty-Driven Efficiently-Sampled Sparse Graphical Models for Concurrent Tumor Segmentation and Atlas Registration

8 0.63294518 63 iccv-2013-Bounded Labeling Function for Global Segmentation of Multi-part Objects with Geometric Constraints

9 0.62633884 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

10 0.62255859 264 iccv-2013-Minimal Basis Facility Location for Subspace Segmentation

11 0.62041229 401 iccv-2013-Stacked Predictive Sparse Coding for Classification of Distinct Regions in Tumor Histopathology

12 0.61618572 186 iccv-2013-GrabCut in One Cut

13 0.6122545 232 iccv-2013-Latent Space Sparse Subspace Clustering

14 0.60801393 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

15 0.59680516 388 iccv-2013-Shape Index Descriptors Applied to Texture-Based Galaxy Analysis

16 0.59336966 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering

17 0.59232259 447 iccv-2013-Volumetric Semantic Segmentation Using Pyramid Context Features

18 0.58999491 98 iccv-2013-Cross-Field Joint Image Restoration via Scale Map

19 0.57943231 148 iccv-2013-Example-Based Facade Texture Synthesis

20 0.56803733 125 iccv-2013-Drosophila Embryo Stage Annotation Using Label Propagation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.054), (6, 0.014), (26, 0.093), (27, 0.012), (31, 0.042), (34, 0.018), (35, 0.021), (40, 0.02), (42, 0.101), (48, 0.016), (64, 0.04), (73, 0.023), (89, 0.152), (92, 0.267), (95, 0.011), (98, 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.78308606 329 iccv-2013-Progressive Multigrid Eigensolvers for Multiscale Spectral Segmentation

Author: Michael Maire, Stella X. Yu

Abstract: We reexamine the role of multiscale cues in image segmentation using an architecture that constructs a globally coherent scale-space output representation. This characteristic is in contrast to many existing works on bottom-up segmentation, which prematurely compress information into a single scale. The architecture is a standard extension of Normalized Cuts from an image plane to an image pyramid, with cross-scale constraints enforcing consistency in the solution while allowing emergence of coarse-to-fine detail. We observe that multiscale processing, in addition to improving segmentation quality, offers a route by which to speed computation. We make a significant algorithmic advance in the form of a custom multigrid eigensolver for constrained Angular Embedding problems possessing coarse-to-fine structure. Multiscale Normalized Cuts is a special case. Our solver builds atop recent results on randomized matrix approximation, using a novel interpolation operation to mold its computational strategy according to cross-scale constraints in the problem definition. Applying our solver to multiscale segmentation problems demonstrates speedup by more than an order of magnitude. This speedup is at the algorithmic level and carries over to any implementation target.

2 0.73492992 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering

Author: Canyi Lu, Jinhui Tang, Min Lin, Liang Lin, Shuicheng Yan, Zhouchen Lin

Abstract: In this paper, we study the robust subspace clustering problem, which aims to cluster the given possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focus on the graph construction by different regularization of the representation coefficient. We instead focus on the robustness of the model to non-Gaussian noises. We propose a new robust clustering method by using the correntropy induced metric, which is robust for handling the non-Gaussian and impulsive noises. Also we further extend the method for handling the data with outlier rows/features. The multiplicative form of half-quadratic optimization is used to optimize the nonconvex correntropy objective function of the proposed models. Extensive experiments on face datasets well demonstrate that the proposed methods are more robust to corruptions and occlusions.

3 0.68406653 128 iccv-2013-Dynamic Probabilistic Volumetric Models

Author: Ali Osman Ulusoy, Octavian Biris, Joseph L. Mundy

Abstract: This paper presents a probabilistic volumetric framework for image based modeling of general dynamic 3-d scenes. The framework is targeted towards high quality modeling of complex scenes evolving over thousands of frames. Extensive storage and computational resources are required in processing large scale space-time (4-d) data. Existing methods typically store separate 3-d models at each time step and do not address such limitations. A novel 4-d representation is proposed that adaptively subdivides in space and time to explain the appearance of 3-d dynamic surfaces. This representation is shown to achieve compression of 4-d data and provide efficient spatio-temporal processing. The advances of the proposed framework are demonstrated on standard datasets using free-viewpoint video and 3-d tracking applications.

4 0.62672341 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.

5 0.62512779 150 iccv-2013-Exemplar Cut

Author: Jimei Yang, Yi-Hsuan Tsai, Ming-Hsuan Yang

Abstract: We present a hybrid parametric and nonparametric algorithm, exemplar cut, for generating class-specific object segmentation hypotheses. For the parametric part, we train a pylon model on a hierarchical region tree as the energy function for segmentation. For the nonparametric part, we match the input image with each exemplar by using regions to obtain a score which augments the energy function from the pylon model. Our method thus generates a set of highly plausible segmentation hypotheses by solving a series of exemplar augmented graph cuts. Experimental results on the Graz and PASCAL datasets show that the proposed algorithm achieves favorable segmentation performance against the state-of-the-art methods in terms of visual quality and accuracy.

6 0.62462425 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation

7 0.62427384 414 iccv-2013-Temporally Consistent Superpixels

8 0.62367439 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions

9 0.62334609 95 iccv-2013-Cosegmentation and Cosketch by Unsupervised Learning

10 0.62138516 21 iccv-2013-A Method of Perceptual-Based Shape Decomposition

11 0.62079704 330 iccv-2013-Proportion Priors for Image Sequence Segmentation

12 0.62064302 245 iccv-2013-Learning a Dictionary of Shape Epitomes with Applications to Image Labeling

13 0.620588 379 iccv-2013-Semantic Segmentation without Annotating Segments

14 0.61989993 196 iccv-2013-Hierarchical Data-Driven Descent for Efficient Optimal Deformation Estimation

15 0.61901385 6 iccv-2013-A Convex Optimization Framework for Active Learning

16 0.61810082 258 iccv-2013-Low-Rank Sparse Coding for Image Classification

17 0.61789709 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

18 0.61784381 121 iccv-2013-Discriminatively Trained Templates for 3D Object Detection: A Real Time Scalable Approach

19 0.61776996 61 iccv-2013-Beyond Hard Negative Mining: Efficient Detector Learning via Block-Circulant Decomposition

20 0.61741215 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation