iccv iccv2013 iccv2013-324 knowledge-graph by maker-knowledge-mining

324 iccv-2013-Potts Model, Parametric Maxflow and K-Submodular Functions


Source: pdf

Author: Igor Gridchyn, Vladimir Kolmogorov

Abstract: The problem of minimizing the Potts energy function frequently occurs in computer vision applications. One way to tackle this NP-hard problem was proposed by Kovtun [20, 21]. It identifies a part of an optimal solution by running k maxflow computations, where k is the number of labels. The number of “labeled” pixels can be significant in some applications, e.g. 50-93% in our tests for stereo. We show how to reduce the runtime to O(log k) maxflow computations (or one parametric maxflow computation). Furthermore, the output of our algorithm allows speeding up the subsequent alpha expansion for the unlabeled part, or can be used as is for time-critical applications. To derive our technique, we generalize the algorithm of Felzenszwalb et al. [7] for Tree Metrics. We also show a connection to k-submodular functions from combinatorial optimization, and discuss k-submodular relaxations for general energy functions.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Potts model, parametric maxflow and k-submodular functions. Igor Gridchyn, IST Austria. [sent-1, score-0.419]

2 It identifies a part of an optimal solution by running k maxflow computations, where k is the number of labels. [sent-6, score-0.297]

3 We show how to reduce the runtime to O(log k) maxflow computations (or one parametric maxflow computation). [sent-10, score-0.799]

4 Furthermore, the output of our algorithm allows speeding up the subsequent alpha expansion for the unlabeled part, or can be used as is for time-critical applications. [sent-11, score-0.172]

5 We also show a connection to k-submodular functions from combinatorial optimization, and discuss k-submodular relaxations for general energy functions. [sent-14, score-0.162]

6 [4] proposed an efficient approximation algorithm for this NP-hard problem, called alpha expansion. [sent-18, score-0.119]

7 The algorithm of [4] is based on the maxflow algorithm, also known as graph cuts. [sent-19, score-0.34]

8 Each iteration involves k maxflow computations, where k is the number of labels. [sent-20, score-0.297]

9 The most relevant to us is the method of Kovtun [20, 21] which computes a part of an optimal solution via k maxflow computations. [sent-22, score-0.297]

10 We can then fix “labeled” nodes and run the alpha expansion algorithm for the remaining nodes. [sent-23, score-0.244]

11 Our main contribution is to improve the efficiency of Kovtun’s method from k maxflow computations to O(log k). [sent-26, score-0.45]

12 We may get an improvement even when there are few labeled nodes: it is reported in [1] that using flow from Kovtun’s computations always speeds up the alpha expansion algorithm for unlabeled nodes. [sent-33, score-0.455]

13 Similarly, Kovtun’s approach does not improve on the standard Schlesinger’s LP relaxation of the energy [25]. [sent-40, score-0.139]

14 The default version of FastPD for the Potts energy produces the same answer as the alpha expansion algorithm but faster, since it maintains not only primal variables (current solution) but also dual variables (“messages”). [sent-43, score-0.228]

15 Intuitively, it allows reusing flow between different maxflow computations. [sent-44, score-0.361]

16 Preliminaries. The Potts energy for a labeling x ∈ L^V is given by f(x) = Σ_{i∈V} f_i(x_i) + Σ_{{i,j}∈E} λ_ij [x_i ≠ x_j]. [sent-48, score-0.168]
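
A minimal sketch of evaluating this energy in Python, assuming the notation above; the function and container names are ours, not from the paper.

def potts_energy(x, f_unary, lam):
    # x: dict node -> label; f_unary: dict node -> dict label -> cost f_i;
    # lam: dict mapping an undirected edge (i, j) to its weight lambda_ij >= 0.
    unary = sum(f_unary[i][x[i]] for i in x)
    pairwise = sum(w for (i, j), w in lam.items() if x[i] != x[j])
    return unary + pairwise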

17 Consequently, there exists a minimizer x∗ ∈ arg min_{x∈L^V} f(x) such that x_i∗ = a for all nodes i ∈ V with y_i = a. [sent-69, score-0.302]

18 A naive way to do this is to use k maxflow computations on a graph with |V| nodes and |E| edges, where k = |L|. [sent-71, score-0.493]

19 y_i = a if x_i = a, and y_i = o otherwise, for each a ∈ L. [sent-76, score-0.297]

20 (3), where d(·, ·) is a tree metric with respect to a certain tree T (Definition 2). [sent-83, score-0.146]

21 The terms g_i in (3) are set as follows: g_i(o) = 1. [sent-92, score-0.299]

22 It can be seen that minimizing f is equivalent to minimizing g(y) over y ∈ {a, o}^V. [sent-97, score-0.114]

23 Proposition 3: for any i ∈ V and a, b ∈ L with a ≠ b, the terms g_i satisfy the inequality generalized in (4) below. [sent-100, score-0.299]

24 More generally, we say that a function g_i : D → R is T-convex if for any pair of edges {a, b}, {b, c} ∈ E with a ≠ c there holds d(a, c) g_i(b) ≤ d(b, c) g_i(a) + d(a, b) g_i(c). (4) Clearly, the terms g_i constructed above are T-convex. [sent-106, score-0.628]
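
Inequality (4) is easy to check directly. Below is a small helper, a sketch under the assumption that the tree metric d is supplied as a dict keyed by ordered label pairs (both orders); the names are ours.

def is_T_convex(g, tree_edges, d, tol=1e-9):
    # g: dict label -> g_i value; tree_edges: iterable of frozenset edges of T;
    # d: full tree metric on D. Checks (4) for every pair of adjacent tree
    # edges {a, b}, {b, c} with a != c.
    edges = list(tree_edges)
    for e1 in edges:
        for e2 in edges:
            shared = e1 & e2
            if e1 == e2 or len(shared) != 1:
                continue
            (b,) = shared
            (a,) = e1 - shared
            (c,) = e2 - shared
            if d[(a, c)] * g[b] > d[(b, c)] * g[a] + d[(a, b)] * g[c] + tol:
                return False
    return True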

25 We will prove the following result for an arbitrary tree T and function g with T-convex unary terms gi. [sent-107, score-0.203]

26 For a labeling x ∈ D^V and an edge {a, b} ∈ E, define the binary labeling x[ab] ∈ {a, b}^V. [sent-111, score-0.112]

27 Note, part (a) and a repeated application of Theorem 1 give that any minimizer x ∈ D^V of g is a partially optimal labeling for f, i.e. [sent-116, score-0.232]

28 f has a minimizer x∗ ∈ L^V such that x_i∗ = x_i for all i with x_i ≠ o. [sent-118, score-0.154]

29 We now consider the case of an arbitrary tree T, and present an efficient algorithm for minimizing function g with T-convex unary terms. [sent-120, score-0.218]

30 They considered the case when unary functions g_i(·) are given by g_i(x_i) = λ_i d(x_i, c_i), where λ_i ≥ 0 and c_i is a constant node in D. [sent-127, score-0.802]
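
As a usage example, terms of this form should pass the T-convexity test above. The concrete star tree (center o, unit edge lengths) and the constants below are illustrative assumptions, not values from the paper.

labels = ['a', 'b', 'o']
tree_edges = [frozenset({'a', 'o'}), frozenset({'b', 'o'})]  # star tree, center o
d = {('a', 'o'): 1, ('o', 'a'): 1, ('b', 'o'): 1, ('o', 'b'): 1,
     ('a', 'b'): 2, ('b', 'a'): 2}                           # induced tree metric
g = {x: 2.0 * (d[(x, 'a')] if x != 'a' else 0.0) for x in labels}  # lambda_i = 2, c_i = 'a'
print(is_T_convex(g, tree_edges, d))  # expected: True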

31 [15] showed how to minimize it via |D| maximum flow computations on graphs with O(|V|) nodes and O(|E|) edges. [sent-129, score-0.153]

32 Using divide-and-conquer, [7] improved this to O(log |D|) maxflow computations (plus O(|D| log |D|) time for bookkeeping). [sent-131, score-0.153]

33 … Σ_{{i,j}} λ_ij |x_j − x_i| with convex terms g_i over x ∈ {1, …, k}^V. [sent-137, score-0.299]

34 with T-convex unary terms, and present a self-contained proof of correctness. [sent-158, score-0.157]

35 The main step of the algorithm is computing a minimizer y ∈ arg min{g(y) | y ∈ {a, b}^V} for some edge {a, b} ∈ E (this can be done via a maxflow algorithm). [sent-159, score-0.245]

36 Algorithm 1 SPLIT(g). Input: function g : D^V → R specified by graph (V, E), tree T = (D, E, d), unary terms g_i : D → R and edge weights λ_ij. Output: labeling x ∈ arg min{g(x) | x ∈ D^V}. 1: if D = {a} return (a, …, a). [sent-164, score-0.682]

37 Define g_c(x) = g(x̄), where x̄_i = x_i for i ∈ V_c and x̄_i = c for i ∈ V − V_c; let x_c := SPLIT(g_c); end for; merge labelings x_a, x_b into labeling x; return x. Note that the function g_c in line 7 is defined on the subgraph of (V, E) induced by V_c. [sent-169, score-0.388]
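
To make the control flow of SPLIT concrete, here is a self-contained sketch of the recursion. The binary step is brute force purely for exposition (the paper computes it with one maxflow per processed tree edge), the split edge is chosen arbitrarily, and all helper names are ours.

from itertools import product

def component(tree_edges, c, cut):
    # Labels and tree edges of the component of T minus the split edge `cut`
    # that contains label c.
    edges = [e for e in tree_edges if set(e) != set(cut)]
    comp, frontier = {c}, [c]
    while frontier:
        u = frontier.pop()
        for e in edges:
            if u in e:
                (v,) = set(e) - {u}
                if v not in comp:
                    comp.add(v)
                    frontier.append(v)
    return sorted(comp), [e for e in edges if set(e) <= comp]

def split(g, labels, tree_edges, nodes, fixed=None):
    # g: energy oracle taking a complete labeling dict node -> label.
    # `fixed` carries the labels already clamped for nodes outside this call
    # (it plays the role of g_c(x) = g(x-bar) in item 37 above).
    fixed = dict(fixed or {})
    if len(labels) == 1:
        return {i: labels[0] for i in nodes}
    a, b = tuple(tree_edges[0])  # split T at some edge {a, b}
    best_val, best_y = float('inf'), None
    for ys in product((a, b), repeat=len(nodes)):  # exponential placeholder
        y = dict(zip(nodes, ys))
        val = g({**fixed, **y})
        if val < best_val:
            best_val, best_y = val, y
    x = {}
    for c in (a, b):
        sub_nodes = [i for i in nodes if best_y[i] == c]
        sub_labels, sub_edges = component(tree_edges, c, (a, b))
        sub_fixed = {**fixed, **{i: c for i in nodes if best_y[i] != c}}
        x.update(split(g, sub_labels, sub_edges, sub_nodes, sub_fixed))
    return x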

38 In our view, the new proof shows more clearly why the extension to T-convex unary terms is possible. [sent-172, score-0.157]

39 Given a node a ∈ D, tree T is modified as follows: (i) add a new node b ∉ D; (ii) add new edge {a, b}; (iii) keep nodes c ∈ N as neighbors of a, but add new edges … [sent-180, score-0.13]

40 The following theorem implies that the algorithm is correct; its proof is given in section 3. [sent-185, score-0.167]

41 If x_c in line 9 is a minimizer of g_c over D_c^{V_c} for each c ∈ {a, b}, then labeling x in line 10 is a minimizer of g over D^V. [sent-188, score-0.232]

42 To deal with this issue, [7] proposed to expand tree T (and modify the input function accordingly) so that the new tree T′ … [sent-194, score-0.146]

43 (5), where we assume that g_i(b) = g_i(a), and u_i ∈ R is chosen in such a way that function g_i′ … [sent-219, score-0.733]

44 To summarize, we showed that the SPLIT algorithm remains correct if we replace line 2 with the tree modification step described above, and in line 3 compute y ∈ arg min{g′(y) | y ∈ {a, b}^V}. [sent-260, score-0.14]

45 Also, in line 10 we need to convert labeling x_b to (x_b)′ [sent-262, score-0.112]

46 before merging with x_a. Selecting u_i: it remains to show that the value u_i for node i ∈ V can be set in such a way that function (5) is T′-convex. [sent-264, score-0.339]

47 (a) There holds u_i^min ≤ u_i^max, and for any u_i ∈ [u_i^min, u_i^max] and … [sent-272, score-0.231]

48 Proof of Theorems 4 and 5. The proof is based on the theorem below. [sent-279, score-0.133]

49 In part (b) we exploit the fact that unary functions gi are T-convex, and make use of a well-known result about the parametric maxflow problem [8]. [sent-282, score-0.783]

50 Part (a). It is straightforward to check that the following holds for nodes i ∈ V and edges {i, j} ∈ E respectively: g_i(x_i) = …, λ_ij d(x_i, x_j) = … [sent-302, score-0.14]

51 This is a well-known fact about the parametric maxflow problem ([8], Lemma 2). [sent-353, score-0.349]

52 We say that a family of binary labelings y = (y^{ab} ∈ {a, b}^V | {a, b} ∈ E) is consistent if there exists a labeling x ∈ D^V such that x[ab] = y^{ab} for all {a, b} ∈ E. [sent-387, score-0.25]

53 Family y is consistent iff for any pair of edges {a, b}, {b, c} ∈ E with a ≠ c and any node i ∈ V there holds (y_i^{ab}, y_i^{bc}) ≠ (a, c). [sent-393, score-0.165]

54 Let us fix a node i ∈ V, and denote y_i = (y_i^{ab} | {a, b} ∈ E). [sent-395, score-0.179]

55 Clearly, there is a one-to-one correspondence between possible labelings y_i and orientations of tree T. [sent-396, score-0.288]

56 Namely, to each y_i we associate a directed graph G[y_i] = (D, A) with A = {(a, b) | {a, b} ∈ E, y_i^{ab} = b}. [sent-397, score-0.153]

57 There exists x_i ∈ D with y_i^{ab} = x_i[ab] for all {a, b} ∈ E iff graph G[y_i] has exactly one sink, i.e. [sent-400, score-0.191]

58 This is equivalent to the condition that each node a ∈ D has at most one outgoing edge in G[y_i]. [sent-403, score-0.157]
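
This out-degree test is straightforward in code. The sketch below assumes y_i is given as a dict from tree edges (a, b) to the endpoint chosen at node i; the names are ours.

def realizable_at_node(y_i, labels):
    # Orient each tree edge {a, b} toward the chosen label (arc a -> b when
    # y_i picks b). Per the equivalence above, y_i corresponds to some label
    # x_i in D iff every label has at most one outgoing arc, i.e. G[y_i]
    # has exactly one sink.
    out_deg = {a: 0 for a in labels}
    for (a, b), chosen in y_i.items():
        tail = a if chosen == b else b
        out_deg[tail] += 1
    return all(v <= 1 for v in out_deg.values())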

59 Consider the following algorithm for constructing a family of binary labelings y. [sent-409, score-0.138]

60 Initially, we set y = (y^{ab}), where y^{ab} is the labeling chosen in Theorem 4(b). [sent-410, score-0.208]

61 By Theorem 7(b), the constructed family of binary labelings y satisfies the following: y^{ab} ∈ arg min{g(y) | y ∈ {a, b}^V}. [sent-446, score-0.205]

62 This family is consistent, so Theorem 7(a) implies that x ∈ D^V is a minimizer of g. [sent-452, score-0.154]

63 Implementation details In this section we sketch implementation details of Algorithm 1 applied to the function constructed in section 2 (so T is a star graph with nodes D = L ∪ {o}). [sent-456, score-0.115]

64 Thus, computations can be described in terms of a binary tree, whose nodes correspond to subsets of labels A ⊆ L (Fig. [sent-460, score-0.225]
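
A sketch of such a binary tree over label subsets, assuming a balanced halving split (the text only says that nodes correspond to subsets A ⊆ L; the split rule is our choice).

def label_subset_tree(labels):
    # Returns the family Omega: root is L, each non-leaf subset is split into
    # two halves, leaves are singletons {a}. The depth is O(log k).
    omega = []
    def rec(A):
        omega.append(tuple(A))
        if len(A) > 1:
            m = len(A) // 2
            rec(A[:m])
            rec(A[m:])
    rec(sorted(labels))
    return omega

For k = 8 labels this gives subsets on 4 levels; the subsets on one level partition L, so each level can be treated as a single maxflow on a graph of the original size (sentence 65 below), which is where the O(log k) bound comes from.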

65 Therefore, these maxflow computations can be treated as a single maxflow on the graph of the original size. [sent-467, score-0.79]

66 For each A ∈ Ω we set up a graph with the set of nodes V_A ∪ {s, t} and the cut function f_A(S ∪ {s}, T ∪ {t}) = … [sent-472, score-0.115]

67 … g_i(a), −g_i(o) + min_{a∈A} g_i(a)] (9), where we use the current value of g_i(o) (it is zero initially and then gets decreased). [sent-489, score-0.299]

68 For a leaf A = {a} we use a different interpretation: s corresponds to label a and t corresponds to label o, therefore u_i^A = g_i(o) − g_i(a). [sent-490, score-0.63]

69 We perform all maxflow computations on a single graph. [sent-491, score-0.45]

70 We maintain values u_i for nodes i ∈ V that give the current cut functions encoded by the residual graph. [sent-493, score-0.239]

71 After computing the maxflow at a non-leaf node A, the residual graph is modified as follows. [sent-494, score-0.112]

72 Set u_i := u_i − λ_ij and u_j := u_j + λ_ij; this simulates pushing flow λ_ij along the path t → j → i → s. [sent-496, score-0.334]

73 Now we need to set unary costs for the maxflow computations at the children of A. [sent-501, score-0.601]

74 Suppose that node i ∈ V ended up at a leaf {a} ∈ Ω. [sent-509, score-0.129]

75 It can be shown that we only need to know a∗ ∈ arg min_{a∈L} g_i(a), g_i(a∗) and g_i(o) for that. [sent-517, score-0.964]

76 Such monotonicity implies that computations at non-leaf nodes fall into the framework of parametric maxflow of Gallo et al. [sent-520, score-0.608]

77 As shown in [8], all computations can be done with the same worst-case complexity as a single maxflow computation. [sent-522, score-0.45]

78 An important question is how to obtain an optimal flow corresponding to this computation; as reported in [1], using this flow speeds up the alpha expansion algorithm. [sent-528, score-0.337]

79 It suffices to specify the flow ξ_ij for each arc (i → j) with {i, j} ∈ E (the flow from the source and to the sink can then be easily computed). [sent-529, score-0.165]

80 For each node i ∈ V we also store the leaf A_i ∈ Ω at which node i ended up. [sent-535, score-0.17]

81 It was shown that if f is a quadratic pseudo-Boolean function then the tightest bisubmodular relaxation is equivalent to the roof duality relaxation [10]. [sent-580, score-0.395]

82 It was also proved that bisubmodular relaxations possess the persistency, or partial optimality property. [sent-581, score-0.23]

83 Let g be a k-submodular relaxation of f and y∗ ∈ D^V be a minimizer of g. [sent-584, score-0.203]

84 Function f has a minimizer x∗ ∈ L^V such that x_i∗ = y_i∗ for all i ∈ V with y_i∗ ∈ L. [sent-585, score-0.12]
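
In code, applying this persistency property is a one-liner. The sketch below is ours; the fallback rule for nodes left unlabeled (for instance the unary argmin used as a baseline in sentence 99's footnote) is an assumption, not the paper's prescription.

def fix_persistent_labels(y_star, fallback, o='o'):
    # y_star: minimizer of the k-submodular relaxation, values in L union {o}.
    # Wherever y_star takes a real label it can be fixed in an optimal
    # solution; `fallback(i)` just initializes the remaining nodes.
    return {i: v if v != o else fallback(i) for i, v in y_star.items()}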

85 Labeling x∗ is a minimizer of f since f(x∗) = g(x∗ [sent-603, score-0.12]

86 y∗) ≤ g(x) = f(x). Thus, k-submodular relaxations can be viewed as a generalization of the roof duality relaxation to the case of multiple labels. [sent-606, score-0.26]

87 (This was proved by showing the tightness of the Basic LP relaxation (BLP); when g is a sum of unary and pairwise terms, BLP is equivalent to the standard Schlesinger’s LP [27].) [sent-608, score-0.216]

88 g̃(y) = Σ_{i∈V} g̃_i(y_i) + Σ_{{i,j}∈E} λ_ij d(y_i, y_j) (11), where g̃_i is a k-submodular relaxation of f_i and d is the tree metric used in section 2. [sent-616, score-0.243]

89 The proposition below shows that minimizing g̃ yields the same or a smaller number of labeled nodes compared to Kovtun’s approach. [sent-618, score-0.23]

90 Although a k-submodular relaxation of the Potts energy turns out to be worse than Kovtun’s approach, there are clear similarities between the two (e.g. …). [sent-624, score-0.139]

91 As a by-product, Kovtun’s method produces a labeling which we call a Kovtun labeling: pixel i is assigned the label a where it ended up, as described in Sec. [sent-637, score-0.14]

92 Matching costs The number of labeled pixels strongly depends on the method for computing matching costs fi(·) and on the regularization parameter λ (which is the same for all edges). [sent-640, score-0.125]

93 It is reported in [1] that the flow from Kovtun’s computations speeds up the alpha-expansion algorithm. [sent-661, score-0.254]

94 However, we observed that initializing FastPD with Kovtun’s labeling speeds it up compared to the “…” initialization. [sent-663, score-0.149]

95 Quality of the Kovtun labeling. We found that in the majority of cases Kovtun’s labeling actually has a lower error rate compared to the alpha-expansion solution (even though the energy of the latter is better); see Fig. [sent-667, score-0.28]

96 Not surprisingly, Kovtun’s labeling is more reliable in the labeled part, i.e. [sent-671, score-0.141]

97 If the number of persistent pixels is low for a given application then one could use the cost aggregation trick to get more discriminative unary functions; as we saw for stereo, this only improves the accuracy. [sent-680, score-0.131]

98 For time-critical applications one could potentially skip the second phase and use Kovtun’s labeling as the final output. [sent-681, score-0.145]

99 On the theoretical side, we introduced several concepts (such as k-submodular relaxations) that may turn out to be … (Footnote 5: In this method we set x_i ∈ arg min_a f_i(a).) [sent-682, score-0.144]

100 Approximate labeling via graph cuts based on linear programming. [sent-805, score-0.155]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('kovtun', 0.538), ('gi', 0.299), ('maxflow', 0.297), ('computations', 0.153), ('ui', 0.135), ('potts', 0.132), ('minimizer', 0.12), ('alpha', 0.119), ('dv', 0.118), ('labeling', 0.112), ('yi', 0.11), ('labelings', 0.105), ('unary', 0.103), ('ijd', 0.102), ('ij', 0.101), ('bisubmodular', 0.096), ('yab', 0.096), ('proposition', 0.087), ('fi', 0.087), ('relaxation', 0.083), ('theorem', 0.079), ('fastpd', 0.077), ('xi', 0.077), ('const', 0.074), ('relaxations', 0.074), ('tree', 0.073), ('nodes', 0.072), ('node', 0.069), ('arg', 0.067), ('flow', 0.064), ('uia', 0.063), ('lv', 0.062), ('ssd', 0.062), ('edge', 0.058), ('iab', 0.058), ('uimax', 0.058), ('uimin', 0.058), ('ab', 0.057), ('energy', 0.056), ('proof', 0.054), ('expansion', 0.053), ('komodakis', 0.052), ('roof', 0.052), ('parametric', 0.052), ('ivn', 0.051), ('duality', 0.051), ('vc', 0.049), ('costs', 0.048), ('stereo', 0.046), ('alahari', 0.045), ('db', 0.044), ('graph', 0.043), ('minimizing', 0.042), ('blp', 0.038), ('coarea', 0.038), ('cwi', 0.038), ('gridchyn', 0.038), ('igor', 0.038), ('persistency', 0.038), ('schlesinger', 0.038), ('spli', 0.038), ('thapper', 0.038), ('toa', 0.038), ('vui', 0.038), ('yaib', 0.038), ('ybc', 0.038), ('holds', 0.038), ('speeds', 0.037), ('da', 0.037), ('dg', 0.035), ('implies', 0.034), ('uisn', 0.034), ('abr', 0.034), ('sfeotr', 0.034), ('shekhovtsov', 0.034), ('deg', 0.034), ('family', 0.033), ('phase', 0.033), ('fth', 0.033), ('iff', 0.033), ('leaf', 0.032), ('xb', 0.032), ('gc', 0.032), ('functions', 0.032), ('optimality', 0.032), ('bth', 0.032), ('gallo', 0.032), ('yj', 0.031), ('min', 0.031), ('let', 0.03), ('edges', 0.03), ('xa', 0.03), ('equivalent', 0.03), ('cia', 0.03), ('minimizers', 0.03), ('labeled', 0.029), ('ended', 0.028), ('persistent', 0.028), ('partial', 0.028), ('prove', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000017 324 iccv-2013-Potts Model, Parametric Maxflow and K-Submodular Functions

Author: Igor Gridchyn, Vladimir Kolmogorov

Abstract: The problem of minimizing the Potts energy function frequently occurs in computer vision applications. One way to tackle this NP-hard problem was proposed by Kovtun [20, 21]. It identifies a part of an optimal solution by running k maxflow computations, where k is the number of labels. The number of “labeled” pixels can be significant in some applications, e.g. 50-93% in our tests for stereo. We show how to reduce the runtime to O(log k) maxflow computations (or one parametric maxflow computation). Furthermore, the output of our algorithm allows speeding up the subsequent alpha expansion for the unlabeled part, or can be used as is for time-critical applications. To derive our technique, we generalize the algorithm of Felzenszwalb et al. [7] for Tree Metrics. We also show a connection to k-submodular functions from combinatorial optimization, and discuss k-submodular relaxations for general energy functions.

2 0.10994345 238 iccv-2013-Learning Graphs to Match

Author: Minsu Cho, Karteek Alahari, Jean Ponce

Abstract: Many tasks in computer vision are formulated as graph matching problems. Despite the NP-hard nature of the problem, fast and accurate approximations have led to significant progress in a wide range of applications. Learning graph models from observed data, however, still remains a challenging issue. This paper presents an effective scheme to parameterize a graph model, and learn its structural attributes for visual object matching. For this, we propose a graph representation with histogram-based attributes, and optimize them to increase the matching accuracy. Experimental evaluations on synthetic and real image datasets demonstrate the effectiveness of our approach, and show significant improvement in matching accuracy over graphs with pre-defined structures.

3 0.10746416 448 iccv-2013-Weakly Supervised Learning of Image Partitioning Using Decision Trees with Structured Split Criteria

Author: Christoph Straehle, Ullrich Koethe, Fred A. Hamprecht

Abstract: We propose a scheme that allows to partition an image into a previously unknown number of segments, using only minimal supervision in terms of a few must-link and cannot-link annotations. We make no use of regional data terms, learning instead what constitutes a likely boundary between segments. Since boundaries are only implicitly specified through cannot-link constraints, this is a hard and nonconvex latent variable problem. We address this problem in a greedy fashion using a randomized decision tree on features associated with interpixel edges. We use a structured purity criterion during tree construction and also show how a backtracking strategy can be used to prevent the greedy search from ending up in poor local optima. The proposed strategy is compared with prior art on natural images.

4 0.10459989 309 iccv-2013-Partial Enumeration and Curvature Regularization

Author: Carl Olsson, Johannes Ulén, Yuri Boykov, Vladimir Kolmogorov

Abstract: Energies with high-order non-submodular interactions have been shown to be very useful in vision due to their high modeling power. Optimization of such energies, however, is generally NP-hard. A naive approach that works for small problem instances is exhaustive search, that is, enumeration of all possible labelings of the underlying graph. We propose a general minimization approach for large graphs based on enumeration of labelings of certain small patches. This partial enumeration technique reduces complex highorder energy formulations to pairwise Constraint Satisfaction Problems with unary costs (uCSP), which can be efficiently solved using standard methods like TRW-S. Our approach outperforms a number of existing state-of-the-art algorithms on well known difficult problems (e.g. curvature regularization, stereo, deconvolution); it gives near global minimum and better speed. Our main application of interest is curvature regularization. In the context of segmentation, our partial enumeration technique allows to evaluate curvature directly on small patches using a novel integral geometry approach. 1

5 0.096419878 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold

Author: Jan Lellmann, Evgeny Strekalovskiy, Sabrina Koetter, Daniel Cremers

Abstract: While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories.

6 0.093915962 171 iccv-2013-Fix Structured Learning of 2013 ICCV paper k2opt.pdf

7 0.088947728 290 iccv-2013-New Graph Structured Sparsity Model for Multi-label Image Annotations

8 0.086099952 19 iccv-2013-A Learning-Based Approach to Reduce JPEG Artifacts in Image Matting

9 0.086080857 404 iccv-2013-Structured Forests for Fast Edge Detection

10 0.084252544 132 iccv-2013-Efficient 3D Scene Labeling Using Fields of Trees

11 0.082111664 237 iccv-2013-Learning Graph Matching: Oriented to Category Modeling from Cluttered Scenes

12 0.081914112 81 iccv-2013-Combining the Right Features for Complex Event Recognition

13 0.080347344 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables

14 0.079025023 429 iccv-2013-Tree Shape Priors with Connectivity Constraints Using Convex Relaxation on General Graphs

15 0.078772813 317 iccv-2013-Piecewise Rigid Scene Flow

16 0.077766143 98 iccv-2013-Cross-Field Joint Image Restoration via Scale Map

17 0.077501856 150 iccv-2013-Exemplar Cut

18 0.077416733 42 iccv-2013-Active MAP Inference in CRFs for Efficient Semantic Segmentation

19 0.073613115 116 iccv-2013-Directed Acyclic Graph Kernels for Action Recognition

20 0.072004445 433 iccv-2013-Understanding High-Level Semantics by Modeling Traffic Patterns


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.163), (1, -0.015), (2, -0.034), (3, -0.017), (4, 0.002), (5, 0.057), (6, -0.061), (7, 0.027), (8, 0.045), (9, -0.109), (10, -0.073), (11, 0.015), (12, 0.014), (13, 0.036), (14, 0.089), (15, 0.095), (16, -0.044), (17, -0.061), (18, -0.022), (19, 0.039), (20, 0.012), (21, -0.01), (22, -0.037), (23, -0.014), (24, 0.079), (25, 0.013), (26, 0.031), (27, -0.047), (28, 0.057), (29, 0.017), (30, -0.019), (31, 0.005), (32, 0.035), (33, 0.019), (34, 0.05), (35, 0.009), (36, -0.064), (37, -0.003), (38, 0.012), (39, -0.019), (40, -0.008), (41, -0.064), (42, 0.023), (43, -0.066), (44, 0.033), (45, 0.052), (46, 0.065), (47, -0.024), (48, 0.01), (49, -0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95454884 324 iccv-2013-Potts Model, Parametric Maxflow and K-Submodular Functions

Author: Igor Gridchyn, Vladimir Kolmogorov

Abstract: The problem of minimizing the Potts energy function frequently occurs in computer vision applications. One way to tackle this NP-hard problem was proposed by Kovtun [20, 21]. It identifies a part of an optimal solution by running k maxflow computations, where k is the number of labels. The number of “labeled” pixels can be significant in some applications, e.g. 50-93% in our tests for stereo. We show how to reduce the runtime to O(log k) maxflow computations (or one parametric maxflow computation). Furthermore, the output of our algorithm allows to speed-up the subsequent alpha expansion for the unlabeled part, or can be used as it is for time-critical applications. To derive our technique, we generalize the algorithm of Felzenszwalb et al. [7] for Tree Metrics. We also show a connection to k-submodular functions from combinatorial optimization, and discuss k-submodular relaxations for general energy functions.

2 0.77228791 171 iccv-2013-Fix Structured Learning of 2013 ICCV paper k2opt.pdf

Author: empty-author

Abstract: Submodular functions can be exactly minimized in polynomial time, and the special case that graph cuts solve with max flow [19] has had significant impact in computer vision [5, 21, 28]. In this paper we address the important class of sum-of-submodular (SoS) functions [2, 18], which can be efficiently minimized via a variant of max flow called submodular flow [6]. SoS functions can naturally express higher order priors involving, e.g., local image patches; however, it is difficult to fully exploit their expressive power because they have so many parameters. Rather than trying to formulate existing higher order priors as an SoS function, we take a discriminative learning approach, effectively searching the space of SoS functions for a higher order prior that performs well on our training set. We adopt a structural SVM approach [15, 34] and formulate the training problem in terms of quadratic programming; as a result we can efficiently search the space of SoS priors via an extended cutting-plane algorithm. We also show how the state-of-the-art max flow method for vision problems [11] can be modified to efficiently solve the submodular flow problem. Experimental comparisons are made against the OpenCV implementation of the GrabCut interactive segmentation technique [28], which uses hand-tuned parameters instead of machine learning. On a standard dataset [12] our method learns higher order priors with hundreds of parameter values, and produces significantly better segmentations. While our focus is on binary labeling problems, we show that our techniques can be naturally generalized to handle more than two labels.

3 0.6794315 429 iccv-2013-Tree Shape Priors with Connectivity Constraints Using Convex Relaxation on General Graphs

Author: Jan Stühmer, Peter Schröder, Daniel Cremers

Abstract: We propose a novel method to include a connectivity prior into image segmentation that is based on a binary labeling of a directed graph, in this case a geodesic shortest path tree. Specifically we make two contributions: First, we construct a geodesic shortest path tree with a distance measure that is related to the image data and the bending energy of each path in the tree. Second, we include a connectivity prior in our segmentation model, that allows to segment not only a single elongated structure, but instead a whole connected branching tree. Because both our segmentation model and the connectivity constraint are convex, a global optimal solution can be found. To this end, we generalize a recent primal-dual algorithm for continuous convex optimization to an arbitrary graph structure. To validate our method we present results on data from medical imaging in angiography and retinal blood vessel segmentation.

4 0.63551706 42 iccv-2013-Active MAP Inference in CRFs for Efficient Semantic Segmentation

Author: Gemma Roig, Xavier Boix, Roderick De_Nijs, Sebastian Ramos, Koljia Kuhnlenz, Luc Van_Gool

Abstract: Most MAP inference algorithms for CRFs optimize an energy function knowing all the potentials. In this paper, we focus on CRFs where the computational cost of instantiating the potentials is orders of magnitude higher than MAP inference. This is often the case in semantic image segmentation, where most potentials are instantiated by slow classifiers fed with costly features. We introduce Active MAP inference 1) to on-the-fly select a subset of potentials to be instantiated in the energy function, leaving the rest of the parameters of the potentials unknown, and 2) to estimate the MAP labeling from such incomplete energy function. Results for semantic segmentation benchmarks, namely PASCAL VOC 2010 [5] and MSRC-21 [19], show that Active MAP inference achieves similar levels of accuracy but with major efficiency gains.

5 0.62136108 290 iccv-2013-New Graph Structured Sparsity Model for Multi-label Image Annotations

Author: Xiao Cai, Feiping Nie, Weidong Cai, Heng Huang

Abstract: In multi-label image annotations, because each image is associated to multiple categories, the semantic terms (label classes) are not mutually exclusive. Previous research showed that such label correlations can largely boost the annotation accuracy. However, all existing methods only directly apply the label correlation matrix to enhance the label inference and assignment without further learning the structural information among classes. In this paper, we model the label correlations using the relational graph, and propose a novel graph structured sparse learning model to incorporate the topological constraints of relation graph in multi-label classifications. As a result, our new method will capture and utilize the hidden class structures in relational graph to improve the annotation results. In proposed objective, a large number of structured sparsity-inducing norms are utilized, thus the optimization becomes difficult. To solve this problem, we derive an efficient optimization algorithm with proved convergence. We perform extensive experiments on six multi-label image annotation benchmark data sets. In all empirical results, our new method shows better annotation results than the state-of-the-art approaches.

6 0.61637115 234 iccv-2013-Learning CRFs for Image Parsing with Adaptive Subgradient Descent

7 0.60729671 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold

8 0.60126895 448 iccv-2013-Weakly Supervised Learning of Image Partitioning Using Decision Trees with Structured Split Criteria

9 0.59354204 200 iccv-2013-Higher Order Matching for Consistent Multiple Target Tracking

10 0.58707511 120 iccv-2013-Discriminative Label Propagation for Multi-object Tracking with Sporadic Appearance Features

11 0.57876658 364 iccv-2013-SGTD: Structure Gradient and Texture Decorrelating Regularization for Image Decomposition

12 0.5769363 309 iccv-2013-Partial Enumeration and Curvature Regularization

13 0.57542765 98 iccv-2013-Cross-Field Joint Image Restoration via Scale Map

14 0.55584818 238 iccv-2013-Learning Graphs to Match

15 0.54725039 208 iccv-2013-Image Co-segmentation via Consistent Functional Maps

16 0.54706573 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

17 0.54686552 224 iccv-2013-Joint Optimization for Consistent Multiple Graph Matching

18 0.54518574 167 iccv-2013-Finding Causal Interactions in Video Sequences

19 0.54286838 148 iccv-2013-Example-Based Facade Texture Synthesis

20 0.52599043 140 iccv-2013-Elastic Net Constraints for Shape Matching


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.067), (7, 0.024), (13, 0.014), (16, 0.018), (26, 0.078), (27, 0.029), (31, 0.058), (40, 0.021), (42, 0.111), (48, 0.023), (64, 0.032), (73, 0.029), (85, 0.212), (89, 0.154), (95, 0.011), (98, 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.81881976 324 iccv-2013-Potts Model, Parametric Maxflow and K-Submodular Functions

Author: Igor Gridchyn, Vladimir Kolmogorov

Abstract: The problem of minimizing the Potts energy function frequently occurs in computer vision applications. One way to tackle this NP-hard problem was proposed by Kovtun [20, 21]. It identifies a part of an optimal solution by running k maxflow computations, where k is the number of labels. The number of “labeled” pixels can be significant in some applications, e.g. 50-93% in our tests for stereo. We show how to reduce the runtime to O(log k) maxflow computations (or one parametric maxflow computation). Furthermore, the output of our algorithm allows to speed-up the subsequent alpha expansion for the unlabeled part, or can be used as it is for time-critical applications. To derive our technique, we generalize the algorithm of Felzenszwalb et al. [7] for Tree Metrics. We also show a connection to k-submodular functions from combinatorial optimization, and discuss k-submodular relaxations for general energy functions.

2 0.77042019 29 iccv-2013-A Scalable Unsupervised Feature Merging Approach to Efficient Dimensionality Reduction of High-Dimensional Visual Data

Author: Lingqiao Liu, Lei Wang

Abstract: To achieve a good trade-off between recognition accuracy and computational efficiency, it is often needed to reduce high-dimensional visual data to medium-dimensional ones. For this task, even applying a simple full-matrix-based linear projection causes significant computation and memory use. When the number of visual data is large, how to efficiently learn such a projection could even become a problem. The recent feature merging approach offers an efficient way to reduce the dimensionality, which only requires a single scan of features to perform reduction. However, existing merging algorithms do not scale well with high-dimensional data, especially in the unsupervised case. To address this problem, we formulate unsupervised feature merging as a PCA problem imposed with a special structure constraint. By exploiting its connection with k-means, we transform this constrained PCA problem into a feature clustering problem. Moreover, we employ the hashing technique to improve its scalability. These produce a scalable feature merging algorithm for our dimensionality reduction task. In addition, we develop an extension of this method by leveraging the neighborhood structure in the data to further improve dimensionality reduction performance. Further, we explore the incorporation of bipolar merging (a variant of the merging function which allows the subtraction operation) into our algorithms. Through three applications in visual recognition, we demonstrate that our methods can not only achieve good dimensionality reduction performance with little computational cost but also help to create more powerful representation at both image level and local feature level.

3 0.75152701 44 iccv-2013-Adapting Classification Cascades to New Domains

Author: Vidit Jain, Sachin Sudhakar Farfade

Abstract: Classification cascades have been very effective for object detection. Such a cascade fails to perform well in data domains with variations in appearances that may not be captured in the training examples. This limited generalization severely restricts the domains for which they can be used effectively. A common approach to address this limitation is to train a new cascade of classifiers from scratch for each of the new domains. Building separate detectors for each of the different domains requires huge annotation and computational effort, making it not scalable to a large number of data domains. Here we present an algorithm for quickly adapting a pre-trained cascade of classifiers using a small number of labeled positive instances from a different yet similar data domain. In our experiments with images of human babies and human-like characters from movies, we demonstrate that the adapted cascade significantly outperforms both of the original cascade and the one trained from scratch using the given training examples.

4 0.7215426 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer visionfor leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robustfor substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.

5 0.71419507 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

Author: Bo Wang, Zhuowen Tu, John K. Tsotsos

Abstract: In graph-based semi-supervised learning approaches, the classification rate is highly dependent on the size of the available labeled data, as well as the accuracy of the similarity measures. Here, we propose a semi-supervised multi-class/multi-label classification scheme, dynamic label propagation (DLP), which performs transductive learning through propagation in a dynamic process. Existing semi-supervised classification methods often have difficulty in dealing with multi-class/multi-label problems due to the lack in consideration of label correlation; our algorithm instead emphasizes dynamic metric fusion with label information. Significant improvement over the state-of-the-art methods is observed on benchmark datasets for both multiclass and multi-label tasks.

6 0.71352851 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction

7 0.71344572 180 iccv-2013-From Where and How to What We See

8 0.71245998 376 iccv-2013-Scene Text Localization and Recognition with Oriented Stroke Detection

9 0.71154529 349 iccv-2013-Regionlets for Generic Object Detection

10 0.71146882 188 iccv-2013-Group Sparsity and Geometry Constrained Dictionary Learning for Action Recognition from Depth Maps

11 0.71135283 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization

12 0.71135056 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

13 0.71026242 327 iccv-2013-Predicting an Object Location Using a Global Image Representation

14 0.71019274 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition

15 0.70969212 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions

16 0.70948553 208 iccv-2013-Image Co-segmentation via Consistent Functional Maps

17 0.70883572 277 iccv-2013-Multi-channel Correlation Filters

18 0.70874286 354 iccv-2013-Robust Dictionary Learning by Error Source Decomposition

19 0.70826417 197 iccv-2013-Hierarchical Joint Max-Margin Learning of Mid and Top Level Representations for Visual Recognition

20 0.70817077 206 iccv-2013-Hybrid Deep Learning for Face Verification