cvpr cvpr2013 cvpr2013-95 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Mathieu Salzmann
Abstract: In this paper, we tackle the problem of performing inference in graphical models whose energy is a polynomial function of continuous variables. Our energy minimization method follows a dual decomposition approach, where the global problem is split into subproblems defined over the graph cliques. The optimal solution to these subproblems is obtained by making use of a polynomial system solver. Our algorithm inherits the convergence guarantees of dual decomposition. To speed up optimization, we also introduce a variant of this algorithm based on the augmented Lagrangian method. Our experiments illustrate the diversity of computer vision problems that can be expressed with polynomial energies, and demonstrate the benefits of our approach over existing continuous inference methods.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract In this paper, we tackle the problem of performing inference in graphical models whose energy is a polynomial function of continuous variables. [sent-4, score-0.797]
2 Our energy minimization method follows a dual decomposition approach, where the global problem is split into subproblems defined over the graph cliques. [sent-5, score-0.696]
3 The optimal solution to these subproblems is obtained by making use of a polynomial system solver. [sent-6, score-0.557]
4 Our algorithm inherits the convergence guarantees of dual decomposition. [sent-7, score-0.649]
5 To speed up optimization, we also introduce a variant of this algorithm based on the augmented Lagrangian method. [sent-8, score-0.154]
6 Our experiments illustrate the diversity of computer vision problems that can be expressed with polynomial energies, and demonstrate the benefits of our approach over existing continuous inference methods. [sent-9, score-0.686]
7 Introduction Many computer vision problems can be expressed as the minimization of an energy that is a polynomial function of the variables that define the problem. [sent-11, score-0.781]
8 With the availability of techniques such as homotopy continuation [19, 11], or Groebner basis [4, 21, 10] to solve systems of polynomial equations, one could think of solving such minimization problems by finding the zeros of the gradient of the energy. [sent-13, score-0.809]
9 However, existing polynomial system solvers can only cope with small numbers of variables, which precludes following this approach for most realistic computer vision problems. [sent-14, score-0.594]
10 A popular approach to tackling large computer vision problems consists in exploiting the structure of the variables of interest, and, in particular, the fact that these variables form a graph with cliques of reasonably small sizes. [sent-15, score-0.535]
11 A solution to the task at hand is then obtained by performing inference in the corresponding graphical model. [sent-18, score-0.262]
12 More recently, several inference techniques for continuous variables have been introduced. [sent-23, score-0.337]
13 However, these methods either are restricted to modeling very specific energies [sent-24, score-0.32]
14 (e.g., Gaussian belief propagation [2]), or do not offer convergence guarantees for general graphs [sent-26, score-0.575]
15 (e.g., nonparametric belief propagation [22], particle belief propagation [7, 15], fusion moves [12]). [sent-28, score-0.626]
16 In this paper, we introduce an approach to performing inference in graphical models with continuous variables and whose energies are polynomial functions. [sent-29, score-1.188]
17 By exploiting the structure of the graph, our method lets us cope with much larger numbers of variables than polynomial system solvers. [sent-30, score-0.744]
18 Furthermore, it comes with convergence guarantees for general graphs, while allowing us to handle more general energies than Gaussian belief propagation. [sent-31, score-0.749]
19 More specifically, we follow a dual decomposition approach [1, 9] and split the graph encoding the global problem into subsets of its variables. [sent-32, score-0.479]
20 Since the global energy is a polynomial function, so are the energies of the individual subgraphs. [sent-33, score-0.848]
21 However, these energies now depend on much smaller numbers of variables. [sent-34, score-0.354]
22 Therefore, we can exploit polynomial system solvers to find the minimum energy configuration of each subgraph by computing the zeros of its energy gradient. [sent-35, score-0.834]
23 The standard iterative dual decomposition procedure can be employed to encourage the solutions of the subgraphs to agree on their shared variables. [sent-36, score-0.504]
24 This just introduces an additional linear term in the original subgraph energies, which thus remain polynomial. [sent-37, score-0.104]
25 Since the optimal solution for each subgraph can be obtained, our method inherits the convergence guarantees of dual decomposition, even though the global energy is non-convex. [sent-38, score-0.895]
26 Furthermore, it can easily be parallelized and can inherently handle graphs with cycles and with cliques of size larger than two. [sent-39, score-0.227]
27 Finally, we introduce a variant of our algorithm that exploits the augmented Lagrangian method to speed up the optimization process. [sent-40, score-0.154]
28 Our experiments demonstrate the effectiveness of our approach, as well as its benefits over particle belief propagation and fusion moves. [sent-42, score-0.343]
29 For instance, minimal problems [21, 10] are commonly formulated in terms of polynomial equations, and addressed with Groebner basis solvers [4]. [sent-45, score-0.494]
30 Similarly, shape-from-shading has been expressed as the solution to a polynomial system [5] obtained by homotopy continuation [19, 11]. [sent-46, score-0.847]
31 Nonetheless, polynomial solvers can only handle small numbers of variables. [sent-47, score-0.496]
32 Alternatively, inference methods in graphical models have been widely used to minimize energy functions of many variables by accounting for their graph structure. [sent-49, score-0.527]
33 While most of the literature assumes discrete variables [sent-50, score-0.179]
34 (e.g., [24, 14, 8]), several methods have been proposed to address the continuous case. [sent-52, score-0.08]
35 For instance, Gaussian belief propagation [2] minimizes quadratic energies, which form a special case of the ones handled by our approach. [sent-53, score-0.259]
36 Instead of quadratic energies, piecewise convex functions were recently utilized [30]. [sent-54, score-0.13]
37 Our approach can cope with non-convex energies without requiring piecewise approximations. [sent-55, score-0.409]
38 Kernel belief propagation [20] can make use of general energies. [sent-56, score-0.222]
39 Nonparametric belief propagation [22] can also exploit general energies, but approximates the messages passed between the graph nodes as mixtures of Gaussians, thus modifying the original problem. [sent-58, score-0.259]
40 Currently, to the best of our knowledge, the most successful approaches to performing continuous inference in graphical models are particle (convex) belief propagation (PCBP) [7, 15] and Fusion Moves [12]. [sent-59, score-0.579]
41 The former iteratively samples particles to form candidate variable assignments and finds a MAP estimate of the resulting discrete problem with (convex) belief propagation. [sent-60, score-0.125]
42 While both approaches have shown good performance, they lack convergence guarantees for general graphs. [sent-64, score-0.304]
43 In this paper, we tackle the problem of performing continuous inference in graphical models where the energy function is polynomial. [sent-67, score-0.331]
44 Since our method is based on dual decomposition [1, 9], it inherits its convergence guarantees, while still allowing us to model a large class of energies. [sent-68, score-0.615]
45 Other methods (e.g., [28, 17]) have also been proposed to minimize different convex energies, or non-convex ones by convex approximations [6]. [sent-71, score-0.126]
46 Our Approach In this section, we present our approach to performing inference in graphical models with continuous variables and polynomial energies. [sent-73, score-0.868]
47 We first introduce our main algorithm and then propose a variant to speed up minimization. [sent-74, score-0.079]
48 Dual Decomposition with Polynomial Energies Let x ∈ RD be the vector containing the D continuous random variables of the problem of interest. [sent-78, score-0.08]
49 We consider the case where these random variables form a graph. [sent-79, score-0.179]
50 Thus the energy of a specific realization of these variables can be expressed as E(x) = Σi fi(xαi), where each fi is a function of the subset of variables xαi of one clique. [sent-80, score-0.331]
51 However, we assume that they are polynomial in x. [sent-84, score-0.386]
52 Our goal is to find the configuration x∗ that corresponds to the minimum energy E(x∗). [sent-85, score-0.108]
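To make this setup concrete, the sketch below (not from the paper; the energy and the solver choice are illustrative assumptions) builds a small polynomial clique energy, forms the polynomial system obtained by setting its gradient to zero, and picks the lowest-energy real stationary point. Here sympy.solve stands in for a dedicated polynomial system solver such as homotopy continuation or Groebner bases.

```python
import sympy as sp

# Toy polynomial clique energy f(x1, x2); any polynomial potential works the same way.
x1, x2 = sp.symbols('x1 x2', real=True)
f = (x1 - 2)**2 + (x1 * x2 - 1)**2

# The stationary points are the solutions of a polynomial system: grad f = 0.
grad = [sp.diff(f, v) for v in (x1, x2)]
stationary = sp.solve(grad, (x1, x2), dict=True)

# Keep the configuration with the lowest energy value.
best = min(stationary, key=lambda s: f.subs(s))
print(best, f.subs(best))   # {x1: 2, x2: 1/2} with energy 0
```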
53 To achieve this while accounting for the graph structure, we make use of dual decomposition [1, 9]. [sent-86, score-0.48]
54 Dual decomposition proceeds by introducing auxiliary variables {xiαi} and formulating the minimization of E(x) as min_{{xiαi}, x} Σi fi(xiαi), subject to xiαi = xαi for all i. [sent-87, score-0.395]
55 The Lagrange dual function of this problem can be written as g({λi}) = min_{{xiαi}, x} Σi [fi(xiαi) + λiT(xiαi − xαi)] , (3) [sent-91, score-0.263]
56 where λi is the vector of Lagrange multipliers for the constraints on the variables involved in fi. [sent-96, score-0.219]
57 The original variables x can be eliminated from g({λi}) by explicitly performing the minimization with respect to x. [sent-97, score-0.267]
58 This lets us write the Lagrange dual problem max_{{λi} ∈ Λ} Σi gi(λi) . (5) [sent-102, score-0.31]
59 Following standard practice, we use a projected subgradient method to solve the problem in Eq. (5). [sent-106, score-0.132]
60 It can be shown [9] that the subgradient is ∇gi = x̄iαi(λi), where x̄iαi(λi) is the optimal solution to the ith slave. [sent-108, score-0.171]
61 This yields an iterative algorithm where, at each iteration t, the optimal solution to each slave is computed given the current multipliers, which are then updated by a projected subgradient step. [sent-109, score-0.319]
62 The projection can be achieved by subtracting from each variable in x̄iαi(λi) its mean value over all slaves that contain this variable. [sent-117, score-0.16]
63 Similarly, the primal solution can be obtained by averaging over the variables shared across multiple slaves. [sent-118, score-0.311]
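A minimal sketch of this projected-subgradient loop is given below (assumptions: solve_slave(i, lam) is a hypothetical routine returning the minimizer of the ith slave for multipliers lam, e.g. obtained via a polynomial system solver; slaves and shared are bookkeeping structures mapping each slave to its variable indices and each variable to the slaves that contain it; the step-length schedule is a generic diminishing rule).

```python
import numpy as np

def dual_decomposition(slaves, shared, n_vars, n_iters=100, eta0=1.0):
    lam = [np.zeros(len(a)) for a in slaves]   # one multiplier per slave variable
    x = np.zeros(n_vars)                       # primal estimate
    for t in range(n_iters):
        # Solve every slave to optimality given the current multipliers.
        xs = [solve_slave(i, lam[i]) for i in range(len(slaves))]
        # Primal estimate / projection: average the copies of each shared variable.
        for j in range(n_vars):
            x[j] = np.mean([xs[i][list(slaves[i]).index(j)] for i in shared[j]])
        # Projected subgradient step with a diminishing step length.
        eta = eta0 / np.sqrt(t + 1)
        for i, a in enumerate(slaves):
            lam[i] += eta * (xs[i] - x[a])
    return x
```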
64 Under some conditions discussed below, dual decomposition with subgradient ascent is guaranteed to converge to the global optimum of Eq. (5). [sent-119, score-0.645]
65 Note, however, that this translates into global convergence of the original problem only when the variables shared by the subproblems agree and when the duality gap is zero. [sent-121, score-0.511]
66 Convergence of dual decomposition is conditioned on the ability to compute the optimal solution to each slave. [sent-123, score-0.695]
67 To this end, we search for a solution such that the gradient of the energy vanishes. [sent-125, score-0.147]
68 For each slave, this yields the system ∂fi/∂xiαi,j + λji = 0 for 1 ≤ j ≤ |αi|, where xiαi,j is the jth variable of slave i. [sent-126, score-0.087]
69 As long as each slave depends on a reasonable number of variables, this system can be solved using standard techniques. [sent-130, score-0.303]
70 In particular, we make use of homotopy continuation methods [19, 11]. [sent-131, score-0.323]
71 Homotopy continuation methods proceed by first replacing the original polynomial system P(x) = 0 with a simpler one P0(x) = 0 that has the same degree and the same number of roots. [sent-132, score-0.609]
72 Although relying on numerical methods, homotopy continuation guarantees to find all the complex roots of a polynomial system with probability 1. [sent-134, score-1.123]
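As an illustration of the idea (a toy univariate sketch, not the multivariate solvers [19, 11] used in the paper): start from a polynomial with known roots, deform it continuously into the target polynomial, and track the roots numerically with Newton corrections.

```python
import numpy as np

def track_roots(p, steps=200, newton_iters=5):
    """Roots of the polynomial with coefficients p (highest degree first)."""
    p = np.asarray(p, dtype=complex)
    n = len(p) - 1
    p0 = np.zeros(n + 1, dtype=complex)
    p0[0], p0[-1] = 1.0, -1.0                      # start system x^n - 1 = 0
    roots = np.exp(2j * np.pi * np.arange(n) / n)  # its known roots
    for t in np.linspace(0.0, 1.0, steps)[1:]:
        h = (1 - t) * p0 + t * p                   # homotopy between the two systems
        dh = np.polyder(h)
        for _ in range(newton_iters):              # corrector (Newton) steps
            roots -= np.polyval(h, roots) / np.polyval(dh, roots)
    return roots

print(track_roots([1.0, 0.0, -2.0]))               # approx. +sqrt(2) and -sqrt(2)
```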
73 Even the fastest solver available still requires 9 hours to solve a system of 20 quadratic equations in 20 variables. [sent-139, score-0.183]
74 Many of these roots can be discarded based on the fact that we are not interested in complex values. [sent-143, score-0.109]
75 The variable assignment corresponding to the global minimum can then be obtained by comparing the energy value of the remaining roots. [sent-144, score-0.142]
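In code, this root post-processing step looks roughly as follows (a sketch; roots is the list of candidate solutions returned by the polynomial system solver, and energy evaluates the slave energy at a real configuration).

```python
import numpy as np

def best_real_root(roots, energy, tol=1e-8):
    # Discard roots with a non-negligible imaginary part, ...
    real = [np.real(r) for r in roots if np.all(np.abs(np.imag(r)) < tol)]
    if not real:
        return None
    # ... then keep the real root with the lowest energy value.
    return min(real, key=energy)
```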
76 Convergence of dual decomposition also depends on the properties of the sequence of subgradient step sizes ηt [1, 9]. [sent-145, score-0.54]
77 In our experiments, we considered two such sequences: The adaptive rule introduced in [9], and a non-summable diminishing step length rule [1]. [sent-146, score-0.265]
78 The adaptive rule sets δt+1 = τ0 δt if gt improved by δt, and δt+1 = max(τ1 δt, δm) otherwise, (7) where gt and ∇gt denote the dual function value and subgradient at iteration t, respectively, and gt∗ is the best dual value obtained so far. [sent-150, score-0.753]
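A small sketch of this adaptive rule as reconstructed above (the multiplicative factors and the floor value are assumed defaults, not values from the paper):

```python
def update_delta(delta, g_t, g_best, tau0=1.5, tau1=0.5, delta_min=1e-3):
    """Eq. (7): grow delta when the dual value improved by delta, shrink otherwise."""
    if g_t >= g_best + delta:      # dual value gt improved by at least delta
        return tau0 * delta
    return max(tau1 * delta, delta_min)
```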
79 Augmented Lagrangian Formulation: While dual decomposition with subgradient ascent guarantees convergence to the global optimum of Eq. (5), it may require a large number of iterations. [sent-161, score-0.753]
80 To speed up the process, we follow an approach based on the Alternating Direction Method of Multipliers (ADMM), which exploits the augmented Lagrangian of the original problem [13]. [sent-163, score-0.118]
81 This boils down to introducing a quadratic penalty in the Lagrange dual function, which yields g̃({λi}) = min_{{xiαi}, x} Σi [fi(xiαi) + λiT(xiαi − xαi) + (ρt/2) ‖xiαi − xαi‖²] . (8) [sent-164, score-0.368]
82 With this new term, which involves the original variables x, the objective of the corresponding dual problem does not decouple anymore. [sent-172, score-0.442]
83 Given a fixed x, the dual problem decomposes into slaves of the form g̃i(λi) = min_{xiαi} fi(xiαi) + λiT(xiαi − xαi) + (ρt/2) ‖xiαi − xαi‖² . [sent-174, score-0.613]
84 Algorithm outline: update ρt using Eq. 10 (ADMM-Poly only); for i = 1 to #slaves do: find the roots of the system in Eq. 9, [sent-184, score-0.164]
85 find the lowest energy root x̄iαi among the roots; end for; average the shared variables in {x̄iαi} and update the λi. [sent-186, score-0.217]
86 For ADMM-Poly, each slave solution is obtained by solving polynomial equations of the form ∂fi/∂xiαi,j + λji + ρt (xiαi,j − xαi,j) = 0 , (9) where xαi,j is fixed, and finding the solution with the lowest energy among the roots. [sent-194, score-0.849]
87 With fixed {xiαi}, the augmented Lagrangian has a closed-form solution for x. [sent-195, score-0.075]
88 This solution corresponds to the primal value of the original problem, and can thus be obtained by averaging over the variables shared across multiple slaves. [sent-196, score-0.272]
89 While, for some values of ρt and ηt, and for convex energies, ADMM has convergence guarantees, they do not apply to general polynomial energies. [sent-197, score-0.558]
90 In practice, we observed good convergence of the algorithm, especially when the augmented Lagrangian penalty was activated only after several iterations of the algorithm. [sent-198, score-0.236]
91 This inspired the rule ρt = ρ0 + (ρ1 − ρ0) / (1 + exp(−γ(t − t0))) , (10) with γ = 1/4, t0 = 50 and ρ0 = 0 (or a small value) in our experiments. [sent-199, score-0.106]
92 We also found that the ADMM approach was more stable with ηt following the rule in Eq. [sent-201, score-0.106]
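Putting the pieces together, a hedged sketch of the ADMM-Poly iteration (assumptions: solve_slave_admm is a hypothetical routine returning the lowest-energy root of Eq. (9) for the current x; the dual step size eta is kept fixed; rho1 is a free target value for the penalty weight; slaves and shared are the bookkeeping structures from the earlier sketch).

```python
import numpy as np

def admm_poly(slaves, shared, n_vars, n_iters=200,
              rho0=0.0, rho1=1.0, gamma=0.25, t0=50, eta=0.1):
    lam = [np.zeros(len(a)) for a in slaves]
    x = np.zeros(n_vars)
    for t in range(n_iters):
        # Eq. (10): sigmoid schedule that activates the penalty after roughly t0 iterations.
        rho = rho0 + (rho1 - rho0) / (1.0 + np.exp(-gamma * (t - t0)))
        # Slave updates: lowest-energy root of Eq. (9), with x held fixed.
        xs = [solve_slave_admm(i, lam[i], x[slaves[i]], rho) for i in range(len(slaves))]
        # x-update: average the copies of each shared variable (closed form).
        for j in range(n_vars):
            x[j] = np.mean([xs[i][list(slaves[i]).index(j)] for i in shared[j]])
        # Dual update.
        for i, a in enumerate(slaves):
            lam[i] += eta * (xs[i] - x[a])
    return x
```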
93 Graphs with cycles: As opposed to most existing approaches, the convergence guarantees of DD-Poly remain unchanged by the presence of cycles in the graph. [sent-210, score-0.413]
94 Higher order cliques: The cliques in the graph may be of order higher than two without affecting the convergence guarantees of DD-Poly. [sent-211, score-0.449]
95 The bottleneck comes from the speed of the polynomial system solver, which depends on the number of variables and on the degree of the polynomials. [sent-212, score-0.663]
96 In practice, we used cliques of size 18, 5 and 4. [sent-213, score-0.108]
97 The slave problems can be solved in parallel on individual cores. [sent-215, score-0.311]
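Because the slaves are independent given the current multipliers, they can be dispatched to separate cores; a minimal sketch using the standard library (reusing the hypothetical solve_slave routine from the earlier sketch).

```python
from multiprocessing import Pool

def solve_all_slaves(lams, processes=4):
    # Solve each slave i with its multipliers lams[i] in a worker process.
    with Pool(processes) as pool:
        return pool.starmap(solve_slave, list(enumerate(lams)))
```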
98 Non-polynomial energies: A straightforward extension of our method to non-polynomial energies can be achieved by approximating such functions with polynomials. [sent-217, score-0.354]
99 The most common polynomial function approximation methods are the Taylor and Chebyshev expansions. [sent-218, score-0.386]
100 Instead of such global approximations, better accuracy may be achieved by approximating the slave energies with piecewise polynomial functions. [sent-219, score-1.068]
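For instance, a non-polynomial robust penalty can be fitted by a Chebyshev expansion over a working interval (the function, interval and degree below are illustrative assumptions), after which the gradient system is polynomial again.

```python
import numpy as np

t = np.linspace(-2.0, 2.0, 201)                      # assumed working range
vals = np.log(1.0 + t**2)                            # a non-polynomial penalty
cheb = np.polynomial.chebyshev.Chebyshev.fit(t, vals, deg=6)
poly = cheb.convert(kind=np.polynomial.Polynomial)   # plain power-basis polynomial
print(poly)          # polynomial approximation of the penalty
print(poly.deriv())  # its (polynomial) derivative, usable in the gradient system
```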
wordName wordTfidf (topN-words)
[('polynomial', 0.386), ('energies', 0.32), ('dual', 0.263), ('slave', 0.248), ('homotopy', 0.186), ('guarantees', 0.179), ('variables', 0.179), ('slaves', 0.16), ('decomposition', 0.145), ('continuation', 0.137), ('subgradient', 0.132), ('belief', 0.125), ('convergence', 0.125), ('xi', 0.121), ('lagrangian', 0.111), ('roots', 0.109), ('energy', 0.108), ('cliques', 0.108), ('rule', 0.106), ('propagation', 0.097), ('gt', 0.095), ('ifi', 0.093), ('graphical', 0.09), ('admm', 0.088), ('inherits', 0.082), ('continuous', 0.08), ('groebner', 0.08), ('inifi', 0.08), ('inference', 0.078), ('subproblems', 0.077), ('solvers', 0.076), ('augmented', 0.075), ('lagrange', 0.073), ('ascent', 0.071), ('nau', 0.071), ('pcbp', 0.071), ('cycles', 0.07), ('fusion', 0.067), ('mxi', 0.066), ('subgraph', 0.065), ('moves', 0.061), ('ute', 0.059), ('shared', 0.057), ('system', 0.055), ('performing', 0.055), ('particle', 0.054), ('nicta', 0.053), ('diminishing', 0.053), ('comp', 0.05), ('mathieu', 0.05), ('graphs', 0.049), ('equations', 0.047), ('convex', 0.047), ('lets', 0.047), ('piecewise', 0.046), ('gi', 0.044), ('expressed', 0.044), ('solver', 0.044), ('speed', 0.043), ('cope', 0.043), ('ji', 0.043), ('mx', 0.043), ('multipliers', 0.04), ('remain', 0.039), ('solution', 0.039), ('agree', 0.039), ('graph', 0.037), ('quadratic', 0.037), ('penalty', 0.036), ('variant', 0.036), ('primal', 0.036), ('zeros', 0.036), ('xse', 0.035), ('loef', 0.035), ('chebyshev', 0.035), ('endc', 0.035), ('iters', 0.035), ('otop', 0.035), ('randleotm', 0.035), ('xmi', 0.035), ('accounting', 0.035), ('iin', 0.035), ('numbers', 0.034), ('global', 0.034), ('approximating', 0.034), ('diversity', 0.033), ('ainr', 0.033), ('ctahen', 0.033), ('npu', 0.033), ('oatn', 0.033), ('ofmorp', 0.033), ('illustrate', 0.033), ('minimization', 0.032), ('approximations', 0.032), ('yields', 0.032), ('problems', 0.032), ('parallel', 0.031), ('fi', 0.031), ('taylor', 0.031), ('wanitdh', 0.031)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999917 95 cvpr-2013-Continuous Inference in Graphical Models with Polynomial Energies
Author: Mathieu Salzmann
Abstract: In this paper, we tackle the problem of performing inference in graphical models whose energy is a polynomial function of continuous variables. Our energy minimization method follows a dual decomposition approach, where the global problem is split into subproblems defined over the graph cliques. The optimal solution to these subproblems is obtained by making use of a polynomial system solver. Our algorithm inherits the convergence guarantees of dual decomposition. To speed up optimization, we also introduce a variant of this algorithm based on the augmented Lagrangian method. Our experiments illustrate the diversity of computer vision problems that can be expressed with polynomial energies, and demonstrate the benefits of our approach over existing continuous inference methods.
2 0.16684532 308 cvpr-2013-Nonlinearly Constrained MRFs: Exploring the Intrinsic Dimensions of Higher-Order Cliques
Author: Yun Zeng, Chaohui Wang, Stefano Soatto, Shing-Tung Yau
Abstract: This paper introduces an efficient approach to integrating non-local statistics into the higher-order Markov Random Fields (MRFs) framework. Motivated by the observation that many non-local statistics (e.g., shape priors, color distributions) can usually be represented by a small number of parameters, we reformulate the higher-order MRF model by introducing additional latent variables to represent the intrinsic dimensions of the higher-order cliques. The resulting new model, called NC-MRF, not only provides the flexibility in representing the configurations of higher-order cliques, but also automatically decomposes the energy function into less coupled terms, allowing us to design an efficient algorithmic framework for maximum a posteriori (MAP) inference. Based on this novel modeling/inference framework, we achieve state-of-the-art solutions to the challenging problems of class-specific image segmentation and template-based 3D facial expression tracking, which demonstrate the potential of our approach.
3 0.15161204 23 cvpr-2013-A Practical Rank-Constrained Eight-Point Algorithm for Fundamental Matrix Estimation
Author: Yinqiang Zheng, Shigeki Sugimoto, Masatoshi Okutomi
Abstract: Due to its simplicity, the eight-point algorithm has been widely used in fundamental matrix estimation. Unfortunately, the rank-2 constraint of a fundamental matrix is enforced via a posterior rank correction step, thus leading to non-optimal solutions to the original problem. To address this drawback, existing algorithms need to solve either a very high order polynomial or a sequence of convex relaxation problems, both of which are computationally ineffective and numerically unstable. In this work, we present a new rank-2 constrained eight-point algorithm, which directly incorporates the rank-2 constraint in the minimization process. To avoid singularities, we propose to solve seven subproblems and retrieve their globally optimal solutions by using tailored polynomial system solvers. Our proposed method is noniterative, computationally efficient and numerically stable. Experiment results have verified its superiority over existing algebraic error based algorithms in terms of accuracy, as well as its advantages when used to initialize geometric error based algorithms.
4 0.12639996 262 cvpr-2013-Learning for Structured Prediction Using Approximate Subgradient Descent with Working Sets
Author: Aurélien Lucchi, Yunpeng Li, Pascal Fua
Abstract: We propose a working set based approximate subgradient descent algorithm to minimize the margin-sensitive hinge loss arising from the soft constraints in max-margin learning frameworks, such as the structured SVM. We focus on the setting of general graphical models, such as loopy MRFs and CRFs commonly used in image segmentation, where exact inference is intractable and the most violated constraints can only be approximated, voiding the optimality guarantees of the structured SVM’s cutting plane algorithm as well as reducing the robustness of existing subgradient based methods. We show that the proposed method obtains better approximate subgradients through the use of working sets, leading to improved convergence properties and increased reliability. Furthermore, our method allows new constraints to be randomly sampled instead of computed using the more expensive approximate inference techniques such as belief propagation and graph cuts, which can be used to reduce learning time at only a small cost of performance. We demonstrate the strength of our method empirically on the segmentation of a new publicly available electron microscopy dataset as well as the popular MSRC data set and show state-of-the-art results.
Author: Jörg Hendrik Kappes, Markus Speth, Gerhard Reinelt, Christoph Schnörr
Abstract: Discrete graphical models (also known as discrete Markov random fields) are a major conceptual tool to model the structure of optimization problems in computer vision. While in the last decade research has focused on fast approximative methods, algorithms that provide globally optimal solutions have come more into the research focus in the last years. However, large scale computer vision problems seemed to be out of reach for such methods. In this paper we introduce a promising way to bridge this gap based on partial optimality and structural properties of the underlying problem factorization. Combining these preprocessing steps, we are able to solve grids of size 2048×2048 in less than 90 seconds. On the hitherto unsolvable Chinese character dataset of Nowozin et al. we obtain provably optimal results in 56% of the instances and achieve competitive runtimes on other recent benchmark problems. While in the present work only generalized Potts models are considered, an extension to general graphical models seems to be feasible.
6 0.10876636 165 cvpr-2013-Fast Energy Minimization Using Learned State Filters
7 0.10460063 24 cvpr-2013-A Principled Deep Random Field Model for Image Segmentation
8 0.099561639 143 cvpr-2013-Efficient Large-Scale Structured Learning
9 0.094296932 300 cvpr-2013-Multi-target Tracking by Lagrangian Relaxation to Min-cost Network Flow
10 0.093057536 164 cvpr-2013-Fast Convolutional Sparse Coding
11 0.089500397 317 cvpr-2013-Optimal Geometric Fitting under the Truncated L2-Norm
12 0.089087151 6 cvpr-2013-A Comparative Study of Modern Inference Techniques for Discrete Energy Minimization Problems
13 0.088677339 171 cvpr-2013-Fast Trust Region for Segmentation
14 0.087393478 128 cvpr-2013-Discrete MRF Inference of Marginal Densities for Non-uniformly Discretized Variable Space
15 0.086476624 74 cvpr-2013-CLAM: Coupled Localization and Mapping with Efficient Outlier Handling
16 0.085891575 121 cvpr-2013-Detection- and Trajectory-Level Exclusion in Multiple Object Tracking
17 0.084532723 397 cvpr-2013-Simultaneous Super-Resolution of Depth and Images Using a Single Camera
18 0.079109192 377 cvpr-2013-Sample-Specific Late Fusion for Visual Category Recognition
19 0.0787559 20 cvpr-2013-A New Model and Simple Algorithms for Multi-label Mumford-Shah Problems
20 0.076431163 466 cvpr-2013-Whitened Expectation Propagation: Non-Lambertian Shape from Shading and Shadow
topicId topicWeight
[(0, 0.146), (1, 0.059), (2, -0.012), (3, 0.023), (4, 0.06), (5, -0.006), (6, 0.036), (7, -0.027), (8, -0.064), (9, 0.019), (10, 0.081), (11, 0.034), (12, -0.108), (13, -0.028), (14, -0.071), (15, 0.037), (16, -0.021), (17, 0.037), (18, 0.139), (19, -0.003), (20, -0.091), (21, -0.054), (22, -0.041), (23, 0.045), (24, 0.124), (25, 0.004), (26, -0.136), (27, 0.099), (28, 0.01), (29, -0.077), (30, 0.085), (31, -0.031), (32, 0.055), (33, -0.035), (34, -0.128), (35, 0.009), (36, 0.076), (37, -0.017), (38, -0.048), (39, -0.035), (40, 0.012), (41, -0.04), (42, -0.085), (43, -0.024), (44, 0.003), (45, 0.041), (46, 0.071), (47, -0.028), (48, -0.022), (49, 0.102)]
simIndex simValue paperId paperTitle
same-paper 1 0.97901911 95 cvpr-2013-Continuous Inference in Graphical Models with Polynomial Energies
Author: Mathieu Salzmann
Abstract: In this paper, we tackle the problem of performing inference in graphical models whose energy is a polynomial function of continuous variables. Our energy minimization method follows a dual decomposition approach, where the global problem is split into subproblems defined over the graph cliques. The optimal solution to these subproblems is obtained by making use of a polynomial system solver. Our algorithm inherits the convergence guarantees of dual decomposition. To speed up optimization, we also introduce a variant of this algorithm based on the augmented Lagrangian method. Our experiments illustrate the diversity of computer vision problems that can be expressed with polynomial energies, and demonstrate the benefits of our approach over existing continuous inference methods.
Author: Jörg Hendrik Kappes, Markus Speth, Gerhard Reinelt, Christoph Schnörr
Abstract: Discrete graphical models (also known as discrete Markov random fields) are a major conceptual tool to model the structure of optimization problems in computer vision. While in the last decade research has focused on fast approximative methods, algorithms that provide globally optimal solutions have come more into the research focus in the last years. However, large scale computer vision problems seemed to be out of reach for such methods. In this paper we introduce a promising way to bridge this gap based on partial optimality and structural properties of the underlying problem factorization. Combining these preprocessing steps, we are able to solve grids of size 2048×2048 in less than 90 seconds. On the hitherto unsolvable Chinese character dataset of Nowozin et al. we obtain provably optimal results in 56% of the instances and achieve competitive runtimes on other recent benchmark problems. While in the present work only generalized Potts models are considered, an extension to general graphical models seems to be feasible.
3 0.8124491 6 cvpr-2013-A Comparative Study of Modern Inference Techniques for Discrete Energy Minimization Problems
Author: Jörg H. Kappes, Bjoern Andres, Fred A. Hamprecht, Christoph Schnörr, Sebastian Nowozin, Dhruv Batra, Sungwoong Kim, Bernhard X. Kausler, Jan Lellmann, Nikos Komodakis, Carsten Rother
Abstract: Seven years ago, Szeliski et al. published an influential study on energy minimization methods for Markov random fields (MRF). This study provided valuable insights in choosing the best optimization technique for certain classes of problems. While these insights remain generally useful today, the phenominal success of random field models means that the kinds of inference problems we solve have changed significantly. Specifically, the models today often include higher order interactions, flexible connectivity structures, large label-spaces of different cardinalities, or learned energy tables. To reflect these changes, we provide a modernized and enlarged study. We present an empirical comparison of 24 state-of-art techniques on a corpus of 2,300 energy minimization instances from 20 diverse computer vision applications. To ensure reproducibility, we evaluate all methods in the OpenGM2 framework and report extensive results regarding runtime and solution quality. Key insights from our study agree with the results of Szeliski et al. for the types of models they studied. However, on new and challenging types of models our findings disagree and suggest that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types.
4 0.80208009 448 cvpr-2013-Universality of the Local Marginal Polytope
Author: unkown-author
Abstract: We show that solving the LP relaxation of the MAP inference problem in graphical models (also known as the minsum problem, energy minimization, or weighted constraint satisfaction) is not easier than solving any LP. More precisely, any polytope is linear-time representable by a local marginal polytope and any LP can be reduced in linear time to a linear optimization (allowing infinite weights) over a local marginal polytope.
5 0.76805687 308 cvpr-2013-Nonlinearly Constrained MRFs: Exploring the Intrinsic Dimensions of Higher-Order Cliques
Author: Yun Zeng, Chaohui Wang, Stefano Soatto, Shing-Tung Yau
Abstract: This paper introduces an efficient approach to integrating non-local statistics into the higher-order Markov Random Fields (MRFs) framework. Motivated by the observation that many non-local statistics (e.g., shape priors, color distributions) can usually be represented by a small number of parameters, we reformulate the higher-order MRF model by introducing additional latent variables to represent the intrinsic dimensions of the higher-order cliques. The resulting new model, called NC-MRF, not only provides the flexibility in representing the configurations of higher-order cliques, but also automatically decomposes the energy function into less coupled terms, allowing us to design an efficient algorithmic framework for maximum a posteriori (MAP) inference. Based on this novel modeling/inference framework, we achieve state-of-the-art solutions to the challenging problems of class-specific image segmentation and template-based 3D facial expression tracking, which demonstrate the potential of our approach.
6 0.76678908 41 cvpr-2013-An Iterated L1 Algorithm for Non-smooth Non-convex Optimization in Computer Vision
7 0.75730908 128 cvpr-2013-Discrete MRF Inference of Marginal Densities for Non-uniformly Discretized Variable Space
8 0.74154556 23 cvpr-2013-A Practical Rank-Constrained Eight-Point Algorithm for Fundamental Matrix Estimation
9 0.68562621 51 cvpr-2013-Auxiliary Cuts for General Classes of Higher Order Functionals
10 0.66578388 262 cvpr-2013-Learning for Structured Prediction Using Approximate Subgradient Descent with Working Sets
11 0.64854211 24 cvpr-2013-A Principled Deep Random Field Model for Image Segmentation
12 0.63622612 9 cvpr-2013-A Fast Semidefinite Approach to Solving Binary Quadratic Problems
13 0.62338632 171 cvpr-2013-Fast Trust Region for Segmentation
14 0.58627492 20 cvpr-2013-A New Model and Simple Algorithms for Multi-label Mumford-Shah Problems
15 0.58366632 143 cvpr-2013-Efficient Large-Scale Structured Learning
16 0.56451005 466 cvpr-2013-Whitened Expectation Propagation: Non-Lambertian Shape from Shading and Shadow
17 0.54358274 7 cvpr-2013-A Divide-and-Conquer Method for Scalable Low-Rank Latent Matrix Pursuit
18 0.52980644 317 cvpr-2013-Optimal Geometric Fitting under the Truncated L2-Norm
19 0.524257 165 cvpr-2013-Fast Energy Minimization Using Learned State Filters
topicId topicWeight
[(10, 0.132), (16, 0.015), (26, 0.066), (33, 0.306), (57, 0.241), (67, 0.041), (69, 0.031), (87, 0.078)]
simIndex simValue paperId paperTitle
1 0.92548662 141 cvpr-2013-Efficient Computation of Shortest Path-Concavity for 3D Meshes
Author: Henrik Zimmer, Marcel Campen, Leif Kobbelt
Abstract: In the context of shape segmentation and retrieval object-wide distributions of measures are needed to accurately evaluate and compare local regions ofshapes. Lien et al. [16] proposed two point-wise concavity measures in the context of Approximate Convex Decompositions of polygons measuring the distance from a point to the polygon ’s convex hull: an accurate Shortest Path-Concavity (SPC) measure and a Straight Line-Concavity (SLC) approximation of the same. While both are practicable on 2D shapes, the exponential costs of SPC in 3D makes it inhibitively expensive for a generalization to meshes [14]. In this paper we propose an efficient and straight forward approximation of the Shortest Path-Concavity measure to 3D meshes. Our approximation is based on discretizing the space between mesh and convex hull, thereby reducing the continuous Shortest Path search to an efficiently solvable graph problem. Our approach works outof-the-box on complex mesh topologies and requires no complicated handling of genus. Besides presenting a rigorous evaluation of our method on a variety of input meshes, we also define an SPC-based Shape Descriptor and show its superior retrieval and runtime performance compared with the recently presented results on the Convexity Distribution by Lian et al. [12].
same-paper 2 0.8863517 95 cvpr-2013-Continuous Inference in Graphical Models with Polynomial Energies
Author: Mathieu Salzmann
Abstract: In this paper, we tackle the problem of performing inference in graphical models whose energy is a polynomial function of continuous variables. Our energy minimization method follows a dual decomposition approach, where the global problem is split into subproblems defined over the graph cliques. The optimal solution to these subproblems is obtained by making use of a polynomial system solver. Our algorithm inherits the convergence guarantees of dual decomposition. To speed up optimization, we also introduce a variant of this algorithm based on the augmented Lagrangian method. Our experiments illustrate the diversity of computer vision problems that can be expressed with polynomial energies, and demonstrate the benefits of our approach over existing continuous inference methods.
3 0.87311989 79 cvpr-2013-Cartesian K-Means
Author: Mohammad Norouzi, David J. Fleet
Abstract: A fundamental limitation of quantization techniques like the k-means clustering algorithm is the storage and runtime cost associated with the large numbers of clusters required to keep quantization errors small and model fidelity high. We develop new models with a compositional parameterization of cluster centers, so representational capacity increases super-linearly in the number of parameters. This allows one to effectively quantize data using billions or trillions of centers. We formulate two such models, Orthogonal k-means and Cartesian k-means. They are closely related to one another, to k-means, to methods for binary hash function optimization like ITQ [5], and to Product Quantization for vector quantization [7]. The models are tested on largescale ANN retrieval tasks (1M GIST, 1B SIFT features), and on codebook learning for object recognition (CIFAR-10). 1. Introduction and background Techniques for vector quantization, like the well-known k-means algorithm, are used widely in vision and learning. Common applications include codebook learning for object recognition [16], and approximate nearest neighbor (ANN) search for information retrieval [12, 14]. In general terms, such techniques involve partitioning an input vector space into multiple regions {Si}ik=1, mapping points x in each region uinlttiop region-specific representatives {ci}ik=1, known as cioennte irnst.o A resg siuonch-,s a quantizer, qse(nxt)a,t can b {ec expressed as k q(x) = ?1(x∈Si)ci, (1) i?= ?1 where 1(·) is the usual indicator function. eTrhee 1 quality oef u a quantizer oisr expressed in terms of expected distortion, a common measure of which is squared error ?x − q(x) ?22. In this case, given centers {ci}, the region t ?ox xw −hic qh(x a point nis t assigned gwivitehn m ceinnitemrasl {dcis}to,r tthioen r eisobtained by Euclidean nearest neighbor (NN) search. The k-means algorithm can be used to learn centers from data. To reduce expected distortion, crucial for many applications, one can shrink region volumes by increasing k, the number of regions. In practice, however, increasing k results in prohibitive storage and run-time costs. Even if one resorts to ANN search with approximate k-means [14] or hierarchical k-means [12], it is hard to scale to large k (e.g., k = 264), as storing the centers is untenable. This paper concerns methods for scalable quantization with tractable storage and run-time costs. Inspired by Product Quantization (PQ), a state-of-the-art algorithm for ANN search with high-dimensional data (e.g., [7]), compositionality is one key. By expressing data in terms of recurring, reusable parts, the representational capacity of compositional models grows exponentially in the number ofparameters. Compression techniques like JPEG accomplish this by encoding images as disjoint rectangular patches. PQ divides the feature space into disjoint subspaces that are quantized independently. Other examples include part-based recognition models (e.g., [18]), and tensor-based models for stylecontent separation (e.g., [17]). Here, with a compositional parameterization of region centers, we find a family of models that reduce the enc√oding cost of k centers down from k to between log2 k and √k. A model parameter controls the trade-off between model fidelity and compactness. We formulate two related algorithms, Orthogonal kmeans (ok-means) and Cartesian k-means (ck-means). They are natural extensions of k-means, and are closely related to other hashing and quantization methods. 
The okmeans algorithm is a generalization of the Iterative Quantization (ITQ) algorithm for finding locality-sensitive binary codes [5]. The ck-means model is an extension of okmeans, and can be viewed as a generalization of PQ.1 We then report empirical results on large-scale ANN search tasks, and on codebook learning for recognition. For ANN search we use datasets of 1M GIST and 1B SIFT features, with both symmetric and asymmetric distance measures on the latent codes. We consider codebook learning for a generic form of recognition on CIFAR-10. 2. k-means Given a dataset of n p-dim points, D ≡ {xj }jn=1, the kmeans algorithm partitions ithme n points in ≡to { kx c}lusters, and 1A very similar generalization of PQ, was developed concurrently by Ge, He, Ke and Sun, and also appears in CVPR 2013 [4]. 333000111755 represents each cluster by a center point. Let C ∈ Rp×k breep a mseantrtsix e wachhos celu sctoelurm byns a comprise tihnet .k Lceltus Cter ∈ centers, i.e., C = [c1, c2 , · · · , ck] . The k-means objective is to minimize the within,-c··lu·s ,tecr squared distances: ?k-means(C) = = x?∈Dmiin?x − ci?22 x?∈Db∈mHin1/k?x − Cb?22 (2) where H1/k ≡ {b | b ∈ {0, 1}k and ?b? = 1}, i.e., b is a binary vee Hctor comprising a ,1-1o}f-ka encoding. Lloyd’s b k- imse aa bni-s algorithm [11] finds a local minimum of (2) by iterative, alternating optimization with respect to C and the b’s. The k-means model is simple and intuitive, using NN search to assign points to centers. The assignment of points to centers can be represented with a log k-bit index per data point. The cost of storing the centers grows linearly with k. 3. Orthogonal k-means with 2m centers With a compositional model one can represent cluster centers more efficiently. One such approach is to re- construct each input with an additive combination of the columns of C. To this end, instead of the 1-of-k encoding in (2), we let b be a general m-bit vector, b ∈ Hm ≡ {0, 1}m, (a2n)d, wCe e∈ l Rt bp× bme. aA gse nsuerchal, meac-bhi tc vluesctteorr c, ebnt ∈er H His th≡e sum o}f a asundbs Cet o∈f Rthe columns of C. There are 2m possible subsets, and therefore k = 2m centers. While the number of parameters in the quantizer is linear in m, the number of centers increases exponentially. While efficient in representing cluster centers, the approach is problematic, because solving bˆ = abrg∈Hmmin?x − Cb?22,(3) is intractable; i.e., the discrete optimization is not submodular. Obviously, for small 2m one could generate all possible centers and then perform NN search to find the optimal solution, but this would not scale well to large values of m. One key observation is that if the columns of C are orthogonal, then optimization (3) becomes tractable. To explain this, without loss of generality, assume the bits belong to {−1, 1} instead of {0, 1}, i.e., b? ∈ Hm? ≡ {−1, 1}m. tToh e{−n, ?x − Cb??22 = xTx + b?TCTCb? − 2xTCb? .(4) For diagonal CTC, the middle quadratic term on the RHS becomes trace(CTC), independent of b?. As a consequence, when C has orthogonal columns, abr?g∈mH?min?x − Cb??22 = sgn(CTx) ,(5) where sgn(·) is the element-wise sign function. Cluster centers are depicted by dots, and cluster boundaries by dashed lines. (Left) Clusters formed by a 2-cube with no rotation, scaling or translation; centers = {b? |b? ∈ H2? }. (Right) Rotation, scaling and translation are used to reduce distances between data points and cluster centers; centers = {μ + RDb? | b? ∈ H2? }. 
To reduce quantization error further we can also introduce an offset, denoted μ, to translate x. Taken together with (5), this leads to the loss function for the model we call orthogonal k-means (ok-means):2 ?ok-means(C,μ) = x?∈Db?m∈iHn?m?x − μ − Cb??22. (6) Clearly, with a change of variables, b? = 2b −1, we can define new versions of μ and C, with ide=ntic 2abl −lo1s,s, w wfoer c wanh dicehthe unknowns are binary, but in {0, 1}m. T uhnek nook-wmnesa anrse quantizer uetn icnod {e0,s 1ea}ch data point as a vertex of a transformed m-dimensional unit hypercube. The transform, via C and μ, maps the hypercube vertices onto the input feature space, ideally as close as possible to the data points. The matrix C has orthogonal columns and can therefore be expressed in terms of rotation and scaling; i.e., C ≡ RD, where R ∈ Rp×m has orthonormal cinoglu;m i.ne.s, ( CRT ≡R = R DIm,) w, ahnerde eD R Ris ∈ diagonal and positive definite. The goal of learning is to find the parameters, R, D, and μ, which minimize quantization error. Fig. 1 depicts a small set of 2D data points (red x’s) and two possible quantizations. Fig. 1 (left) depicts the vertices of a 2-cube with C = I2 and zero translation. The cluster regions are simply the four quadrants of the 2D space. The distances between data points and cluster centers, i.e., the quantization errors, are relatively large. By comparison, Fig. 1 (right) shows how a transformed 2-cube, the full model, can greatly reduce quantization errors. 3.1. Learning ok-means To derive the learning algorithm for ok-means we first rewrite the objective in matrix terms. Given n data points, let X = [x1, x2, · · · , xn] ∈ Rp×n. Let B? ∈ {−1, 1}m×n denote the corresponding ∈clu Rster assignment {co−ef1f,ic1i}ents. Our 2While closely related to ITQ, we use the the relationship to k-means. term ok-means to emphasize 333000111 686 goal is to find the assignment coefficients B? and the transformation parameters, namely, the rotation R, scaling D, and translation μ, that minimize ?ok-means(B?, R, D,μ) = ?X − μ1T − RDB??f2 (7) = ?X? − RDB??f2 (8) where ?·?f denotes the Frobenius norm, X? ≡ X − μ1T, R iws hceornes ?tr·a?ined to have orthonormal columns≡ ≡(R XTR − μ=1 Im), and D is a positive diagonal matrix. Like k-means, coordinate descent is effective for optimizing (8). We first initialize μ and R, and then iteratively optimize ?ok-means with respect to B?, D, R, and μ: • Optimize B? and D: With straightforward algebraic manipulation, one can show that ?ok-means = ?RTX?−DB??f2 + ?R⊥TX??f2 , (9) where columns of R⊥ span the orthogonal complement of the column-space of R (i.e., the block matrix [R R⊥] is orthogonal). It follows that, given X? and R, we can optimize the first term in (9) to solve for B? and D. Here, DB? is the leastsquares approximation to RTX?, where RTX? and DB? × are m n. Further, the ith row of DB? can only contain earleem men×tsn .fr Fomur t{h−erd,i t,h +e dii} where di = Dii. Wherever tehleem corresponding delement} }o fw RheTrXe? d is positive (negative) we should put a positive (negative) value in DB?. The optimal di is determined by the mean absolute value of the elements in the ith row of RTX?: • • B? ← sgn ?RTX?? (10) D ← Diag?(mroewan?(abs(RTX?))) (11) Optimize R: Using the original objective (8), find R that minimizes ?X? −RA?f2 , subject to RTR = Im, and Am n≡i mDizBes?. ?TXhis i−s equivalent to an Orthogonal Procrustes problem [15], and can be solved exactly using SVD. 
In particular, by adding p − m rows of zeros to the bottom poaf rDtic, uDlaBr, bb eyc aodmdeins p p× − n. mT rhoewn sR o ifs z square a tnhde orthogoonf aDl ,a DndB can boem eessti pm ×at end. Twhiethn RSV isD s. qBuuatr es ainncde oDrtBho gisdegenerate we are only interested in the first m columns of R; the remaining columns are unique only up to rotation of the null-space.) Optimize μ: Given R, B? and D, the optimal μ is given by the column average of X −RDB?: μ ← mcoeluamn(X−RDB?) (12) 3.2. Approximate nearest neighbor search One application of scalable quantization is ANN search. Before introducing more advanced quantization techniques, we describe some experimental results with ok-means on Euclidean ANN search. The benefits of ok-means for ANN search are two-fold. Storage costs for the centers is reduced to O(log k), from O(k) with k-means. Second, substantial speedups are possible by exploiting fast methods for NN search on binary codes in Hamming space (e.g., [13]). Generally, in terms of a generic quantizer q(·), there are twoG neanteurraalll ways etrom messti omfa ate g ethneer dicis qtaunanceti z beetrw qe(·e)n, ttwheor vectors, v and u [7]. Using the Symmetric quantizer distance (SQD) ?v−u? is approximated by ?q(v)−q(u) ? . Using the Asymmetric quantizer doixsimtanacteed (A byQ ?Dq()v, only one o Uf sthineg tw thoe vectors is quantized, and ?v−u? is estimated as ?v−q(u) ? . vWechtiloer sS iQs qDu might b, aen slightly f?as itse ers t iom compute, vA−QqD(u i)?n-. curs lower quantization errors, and hence is more accurate. For ANN search, in a pre-processing stage, each database entry, u, is encoded into a binary vector corresponding to the cluster center index to which u is assigned. At test time, the queries may or may not be encoded into indices, depending on whether one uses SQD or AQD. In the ok-means model, the quantization of an input x is straightforwardly shown to be qok(x) = μ + RD sgn(RT(x − μ)) . (13) The corresponding m-bit cluster index is sgn(RT(x − μ)). Given two indices, namely b?1 , b?2 ∈ {−1, +1}m,( txhe − symmetric ok-means quantizer distance∈ ∈is { SQDok(b?1, b?2) = ?μ + RDb?1 − μ − RDb?2 ?22 = ?D(b?1 − b?2)?22 .(14) In effect, SQDok is a weighted Hamming distance. It is the sum of the squared diagonal entries of D corresponding to bits where b?1 and b?2 differ. Interestingly, in our experiments with ok-means, Hamming and weighted Hamming distances yield similar results. Thus, in ok-means experiments we simply report results for Hamming distance, to facilitate comparisons with other techniques. When the scaling in ok-means is constrained to be isotropic (i.e., D = αIm for α ∈ R+), then SQDok becomes a constant multiple off othre α αu ∈sua Rl Hamming distance. As discussed in Sec. 5, this isotropic ok-means is closely related to ITQ [5]. The ok-means AQD between a feature vector x and a cluster index b?, is defined as AQDok(x, b?) = ?x −μ − RDb??22 = ?RTx? − Db??22 + ?R⊥Tx??22 , (15) where x? = x−μ. For ANN search, in comparing distances from query x t−o a .d Faotars AetN oNf binary i,n idni ccoesm, ptharei snegc odnisdta tnecrems on the RHS of (15) is irrelevant, since it does not depend on b?. Without this term, AQDok becomes a form of asymmetric Hamming (AH) distance between RTx? and b?. While previous work [6] discussed various ad hoc AH distance measures for binary hashing, interestingly, the optimal AH distance for ok-means is derived directly in our model. 333000111977 1M SIFT, 64−bit encoding (k = 264) lRc@aeR0 . 
206148 10 RIoPT1Qk K−Q m( AHeDa)Hn )s (AH)10K Figure 2. Euclidean ANN retrieval results for different methods and distance functions on the 1M SIFT dataset. 3.3. Experiments with ok-means Following [7], we report ANN search results on 1M SIFT, a corpus of 128D SIFT descriptors with disjoint sets of 100K training, 1M base, and 10K test vectors. The training set is used to train models. The base set is the database, and the test points are queries. The number of bits m is typically less than p, but no pre-processing is needed for dimensionality reduction. Rather, to initialize learning, R is a random rotation of the first m principal directions of the training data, and μ is the mean of the data. For each query we find R nearest neighbors, and compute Recall@R, the fraction of queries for which the ground-truth Euclidean NN is found in the R retrieved items. Fig. 2 shows the Recall@R plots for ok-means with Hamming (H) ≈ SQDok and asymmetric Hamming (AH) H≡a mAmQiDngok ( Hd)is t≈an SceQ, vs. ITQ [5] and PQ [7]. The PQ ≡met AhoQdD exploits a more complex asymmetric distance func- tion denoted AD ≡ AQDck (defined in Sec. 6. 1). Note first tthioant od ke-nmoeteadns A improves upon ITQ, with both Hamming and asymmetric Hamming distances. This is due to the scaling parameters (i.e., D) in ok-means. If one is interested in Hamming distance efficient retrieval, ok-means is prefered over ITQ. But better results are obtained with the asymmetric distance function. Fig. 2 also shows that PQ achieves superior recall rates. This stems from its joint encoding of multiple feature dimensions. In ok-means, each bit represents a partition ofthe feature space into two clusters, separated by a hyperplane. The intersection of m orthogonal hyperplanes yields 2m regions. Hence we obtain just two clusters per dimension, and each dimension is encoded independently. In PQ, by comparison, multiple dimensions are encoded jointly, with arbitrary numbers of clusters. PQ thereby captures statistical dependencies among different dimensions. We next extend ok-means to jointly encode multiple dimensions. 4. Cartesian k-means In the Cartesian k-means (ck-means) model, each region center is expressed parametrically as an additive combination of multiple subcenters. Let there be m sets of subcen- Fidg2ure31.Ddep4icton5fCartde?1si2nqdu?4ati5z?3aton 4qDck(da)t= ,?wd i?5134t?h the first (last) two dimensions sub-quantized on the left (right). Cartesian k-means quantizer denoted qck, combines the subquantizations in subspaces, and produces a 4D reconstruction. ters, each with h elements.3 Let C(i) be a matrix whose columns comprise the elements of the ith subcenter set; C(i) ∈ Rp×h. Finally, assume that each cluster center, c, is the∈ sum of exactly one element from each subcenter set: = ?C(i)b(i) m c i?= ?1 , (16) where b(i) ∈ H1/h is a 1-of-h encoding. As a conc∈re Hte example (see Fig. 3), suppose we are given 4D inputs, x ∈ R4, and we split each datum into m = 2 parts: z(1) = ?I2 0? x , and Then, suppose w?e z(2) = ?0 I2? x .(17) qu?antize each part, z(?1) and? z(2) , sepa- × rately. As depicted in Fig. 3 (left and middle), we could use h = 5 subcenters for each one. Placing the corresponding subcenters in the columns of 4 5 matrices C(1) and C(2) , C(1)=?d1d20d2×35d4d5?, C(2)=?d?1d?20d2×?35d?4d?5?, we obtain a model (16) that provides 52 possible centers with which to quantize the data. More generally, the total number of model centers is k = hm. 
Each center is a member of the Cartesian product of the subcenter sets, hence the name Cartesian k-means. Importantly, while the number of centers is hm, the number of subcenters is only mh. The model provides a super-linear number of centers with a linear number of parameters. The learning objective for Cartesian k-means is ?ck-means(C) =x?∈D{b(mi)}inim=1???x −i?=m1C(i)b(i)??22 where b(i) ∈ H1/h, and C ≡ [C(1), ··· , (18) C(m)] ∈ Rp×mh. [b(1)T, ··· ,b(m)T] If we let bT ≡ then the second sum in (18) can be expressed succinctly as Cb. 3While here we assume a fixed cardinality for all subcenter sets, the model is easily extended to allow sets with different cardinalities. 333000112088 The key problem with this formulation is that the min- imization of (18) with respect to the b(i) ’s is intractable. Nevertheless, motivated by orthogonal k-means (Sec. 3), encoding can be shown to be both efficient and exact if we impose orthogonality constraints on the sets of subcenters. To that end, assume that all subcenters in different sets are pairwise orthogonal; i.e., ∀i,j | i = j C(i)TC(j) = 0h×h .(19) Each subcenter matrix C(i) spans a linear subspace of Rp, and the linear subspaces for different subcenter sets do not intersect. Hence, (19) implies that only the subcenters in C(i) can explain the projection of x onto the C(i) subspace. In the example depicted in Fig. 3, the input features are simply partitioned (17), and the subspaces clearly satisfy the orthogonality constraints. It is also clear that C ≡ [ C(1) C(2)] is block diagonal, Iwtit ihs 2 a ×lso o5 c bleloacrks t,h adte Cnote ≡d D(1) and D(]2 i)s . bTlohec quantization error t×he5re bfolorcek bse,c doemnoeste ?x − Cb?22 = ???zz((12))?−?D0(1) D0(2)? ?b ( 21) ? ?2 = ?????z(1)−D(1)b(1)??2+???z(2)−D(2??)b(2)??2. In words, the squa??zred quantization?? erro??r zis the sum of t??he squared errors on the subspaces. One can therefore solve for the binary coefficients of the subcenter sets independently. In the general case, assuming (19) is satisfied, C can be expressed as a product R D, where R has orthonormal columns, and D is block diagonal; i.e., C = R D where Ra=nd[hRe(n1c),e·C(,i)R=(mR)]i,Dan(di).DW=i⎢t⎡⎣⎢hDs0i.(1≡)Dra0(n2)k. C.(Di)0(.m,i)t⎦⎥ ⎤fo,l(2w0)s that D(i) ∈ Rsi×h and R(i) ∈ Rp×≡sira. Clearly, ? si ≤ p, because of∈ ∈th Re orthogonality ∈con Rstraints. Replacing C(i) with R(i)D(i) in the RHS of (18?), we find ?x−Cb?22 = ?m?z(i)−D(i)b(i)?22+?R⊥Tx?22, (21) i?= ?1 where z(i)≡R(i)Tx, and R⊥is the orthogonal complement of R. This≡ ≡shRows that, with orthogonality constraints (19), the ck-means encoding problem can be split into m independent sub-encoding problems, one for each subcenter set. To find the b(i) that minimizes ??z(i) we perform NN search with z(i) again??st h si-dim vec??tors in D(i) . This entails a cost of O(hsi).? Fortunately, all? the elements of b can be found very efficiently, in O(hs), where s ≡ ? si. If we also include the cost of rotating x to −D(i)b(i)?22, Taocbkml-emt1he.aoAnds um#ceh2nkmrtyeofskm-#lobgeiatknsh,cOo-rm( pOec(2oamkn+spt),hpan)dkO- m(pOecosa(n+mst(hipns)khtesro)m of number of centers, number of bits needed for indices (i.e., log #centers), and the storage cost of representation, which is the same as the encoding cost to convert inputs to indices. The last column shows the costs given an assumption that C has a rank of s ≥ m. obtain each z(i) , the total encoding cost is O(ps + hs), i.e., O(p2+hp). Alternatively, one could perform NN search in p-dim C(i) ’s, to find the b(i) ’s, which costs O(mhp). 
Table 1 summerizes the models in terms of their number of centers, index size, and cost of storage and encoding. 4.1. Learning ck-means We can re-write the ck-means objective (18) in matrix form with the Frobenius norm; i.e., ?ck-means(R, D, B) = ? X − RDB ?f2 (22) where the columns of X and B comprise the data points and the subcenter assignment coefficients. The input to the learning algorithm is the training data X, the number of subcenter sets m, the cardinality of the subcenter sets h, and an upper bound on the rank of C, i.e., s. In practice, we also let each rotation matrix R(i) have the same number of columns, i.e., si = s/m. The outputs are the matrices {R(i) } and {D(i) } that provide a local minima of (22). Learning begins hwaitth p trohev idneit aia lloizcaatlio mni noimf Ra oanf d(2 D2)., followed by iterative coordinate descent in B, D, and R: • Optimize B and D: With R fixed, the objective is given by (21) where ?R⊥TX?f2 R(i)TX, is constant. Given data pro- jections Z(i) ≡ to optimize for B and D we perform one step oRf k-means for each subcenter set: – Assignment: Perform NN searches for each subcenter set to find the assignment coefficients, B(i) , B(i)← arBg(mi)in?Z(i)− D(i)B(i)?f2 – • Update: D(i)← arDg(mi)in?Z(i)− D(i)B(i)?f2 Optimize R: Placing the D(i) ’s along the diagonal of D, as in (20), and concatenating B(i) ’s as rows of B, [B(1)T, ...,B(m)T], i.e., BT = the optimization of R reduces to the orthogonal Procrustes problem: R ← argRmin?X − RDB?f2. In experiments below, R ∈ Rp×p, and rank(C) ≤ p is unIcnon esxtpraeirniemde. tFso rb high-dimensional adnadta r awnhke(rCe ) ra ≤nk( pX is) ?np, fnostrr efficiency ri th may dbime eusnesfiuoln atol dcoantast wrahiner er anr akn(kC(X). 333000112199 5. Relations and related work As mentioned above, there are close mathematical relationships between ok-means, ck-means, ITQ for binary hashing [5], and PQ for vector quantization [7]. It is instructive to specify these relationships in some detail. Iterative Quantization vs. Orthogonal k-means. ITQ [5] is a variant of locality-sensitive hashing, mapping data to binary codes for fast retrieval. To extract m-bit codes, ITQ first zero-centers the data matrix to obtain X?. PCA is then used for dimensionality reduction, fromp down to m dimensions, after which the subspace representation is randomly rotated. The composition of PCA and the random rotation can be expressed as WTX? where W ∈ Rp×m. ITQ then solves for the m×m rotation mawthriex,r eR W, t ∈hatR minimizes ?ITQ(B?, R) = ?RTWTX?− B??f2 , s.t. RTR = Im×m, (23) where B? ∈ {−1, +1}n×p. ITQ ro∈tate {s− t1h,e+ subspace representation of the data to match the binary codes, and then minimizes quantization error within the subspace. By contrast, ok-means maps the binary codes into the original input space, and then considers both the quantization error within the subspace and the out-of-subspace projection error. A key difference is the inclusion of ?R⊥X? ?f2 in the ok-means objective (9). This is important s ?inRce one can often reduce quantization errors by projecting out significant portions of the feature space. Another key difference between ITQ and ok-means is the inclusion of non-uniform scaling in ok-means. This is important when the data are not isotropic, and contributes to the marked improvement in recall rates in Fig. 2. Orthogonal k-means vs. Cartesian k-means. We next show that ok-means is a special case of ck-means with h = 2, where each subcenter set has two elements. 
5. Relations and related work

As mentioned above, there are close mathematical relationships between ok-means, ck-means, ITQ for binary hashing [5], and PQ for vector quantization [7]. It is instructive to specify these relationships in some detail.

Iterative Quantization vs. Orthogonal k-means. ITQ [5] is a variant of locality-sensitive hashing, mapping data to binary codes for fast retrieval. To extract m-bit codes, ITQ first zero-centers the data matrix to obtain X'. PCA is then used for dimensionality reduction, from p down to m dimensions, after which the subspace representation is randomly rotated. The composition of PCA and the random rotation can be expressed as W^T X', where W ∈ R^{p×m}. ITQ then solves for the m×m rotation matrix R that minimizes

  ℓ_ITQ(B', R) = ‖R^T W^T X' − B'‖²_F ,  s.t.  R^T R = I_{m×m} ,   (23)

where B' ∈ {−1, +1}^{m×n}. ITQ rotates the subspace representation of the data to match the binary codes, and then minimizes quantization error within the subspace. By contrast, ok-means maps the binary codes into the original input space, and then considers both the quantization error within the subspace and the out-of-subspace projection error. A key difference is the inclusion of ‖R⊥^T X'‖²_F in the ok-means objective (9). This is important since one can often reduce quantization errors by projecting out significant portions of the feature space. Another key difference between ITQ and ok-means is the inclusion of non-uniform scaling in ok-means. This is important when the data are not isotropic, and contributes to the marked improvement in recall rates in Fig. 2.

Orthogonal k-means vs. Cartesian k-means. We next show that ok-means is a special case of ck-means with h = 2, where each subcenter set has two elements. To this end, let C(i) = [c1(i), c2(i)] and b(i) = [b1(i), b2(i)]^T be the ith subcenter matrix and selection vector. Since b(i) is a 1-of-2 encoding ((1,0)^T or (0,1)^T), it follows that

  b1(i) c1(i) + b2(i) c2(i) = (c1(i) + c2(i))/2 + b'_i (c1(i) − c2(i))/2 ,   (24)

where b'_i ≡ b1(i) − b2(i) ∈ {−1, +1}. With the following setting of the ok-means parameters,

  μ = Σ_{i=1}^m (c1(i) + c2(i))/2  and  C = [ (c1(1) − c2(1))/2, ..., (c1(m) − c2(m))/2 ] ,

it should be clear that Σ_i C(i) b(i) = μ + Cb', where b' ∈ {−1, +1}^m and b'_i is the ith bit of b', used in (24). Similarly, one can also map ok-means parameters onto corresponding subcenters for ck-means. Thus, there is a 1-to-1 mapping between the parameterizations of cluster centers in ok-means and ck-means for h = 2. The benefits of ok-means are its small number of parameters and its intuitive formulation. The benefit of the ck-means generalization is its joint encoding of multiple dimensions with an arbitrary number of centers, allowing it to capture intrinsic dependence among data dimensions.

Product Quantization vs. Cartesian k-means. PQ first splits the input vectors into m disjoint sub-vectors, and then quantizes each sub-vector separately using a sub-quantizer. Thus PQ is a special case of ck-means where the rotation R is not optimized; rather, R is fixed in both learning and retrieval. This is important because the sub-quantizers act independently, thereby ignoring intra-subspace statistical dependence. Thus the selection of subspaces is critical for PQ [7, 8]. Jégou et al. [8] suggest using PCA, followed by random rotation, before applying PQ to high-dimensional vectors. Finding the rotation to minimize quantization error is mentioned in [8], but it was considered too difficult to estimate. Here we show that one can find a rotation to minimize the quantization error. The ck-means learning algorithm optimizes sub-quantizers in the inner loop, but they interact in an outer loop that optimizes the rotation (Sec. 4.1).

6. Experiments

6.1. Euclidean ANN search

Euclidean ANN is a useful task for comparing quantization techniques. Given a database of ck-means indices and a query, we use the Asymmetric and Symmetric ck-means quantizer distances, denoted AQDck and SQDck. The AQDck between a query x and a binary index b ≡ [b(1)^T, ..., b(m)^T]^T, derived in (21), is

  AQDck(x, b) = Σ_{i=1}^m ‖z(i) − D(i)b(i)‖²_2 + ‖R⊥^T x‖²_2 .   (25)

Here, ‖z(i) − D(i)b(i)‖²_2 is the distance between the ith projection of x, i.e., z(i), and a subcenter projection from D(i) selected by b(i). Given a query, these distances, for each i and all h possible values of b(i), can be pre-computed and stored in a query-specific h×m lookup table. Once created, the lookup table is used to compare all database points to the query. So computing AQDck entails m lookups and additions, somewhat more costly than Hamming distance. The last term on the RHS of (25) is irrelevant for NN search. Since PQ is a special case of ck-means with pre-defined subspaces, the same distances are used for PQ (cf. [7]).

The SQDck between binary codes b1 and b2 is given by

  SQDck(b1, b2) = Σ_{i=1}^m ‖D(i)b1(i) − D(i)b2(i)‖²_2 .   (26)

Since b1(i) and b2(i) are 1-of-h encodings, an m×h×h lookup table can be created to store all pairwise sub-distances.
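A sketch of the table-based distance computation (again NumPy, with our own function names; the constant ‖R⊥^T x‖² term of (25) is dropped, since it does not affect the ranking):

```python
import numpy as np

def aqd_table(x, R_blocks, D_blocks):
    """Query-specific m x h table of per-subspace squared distances for (25)."""
    return np.stack([np.sum((Di - (Ri.T @ x)[:, None]) ** 2, axis=0)
                     for Ri, Di in zip(R_blocks, D_blocks)])        # shape (m, h)

def aqd_to_database(table, codes):
    """Asymmetric distances from the query to n database codes (n x m int array):
    m table lookups and additions per point."""
    return sum(table[i, codes[:, i]] for i in range(table.shape[0]))

def sqd_tables(D_blocks):
    """m x h x h tables of pairwise subcenter distances for (26); these depend only
    on the codebook, so they are built once rather than once per query."""
    return np.stack([np.sum((Di[:, :, None] - Di[:, None, :]) ** 2, axis=0)
                     for Di in D_blocks])
```

With m = 8 and h = 256, the per-query table has only 2048 entries, so building it is negligible compared with scanning a large database.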
Figure 4. Euclidean NN recall@R (number of items retrieved) based on different quantizers and corresponding distance functions on the 1M SIFT, 1M GIST, and 1B SIFT datasets, each with 64-bit encoding (k = 2^64). The dashed curves use symmetric distance. (AH ≡ AQDok, SD ≡ SQDck, AD ≡ AQDck)

While the cost of computing SQDck is the same as AQDck, SQDck could also be used to estimate the distance between indexed database entries, for diversifying the retrieval results, or for detecting near duplicates, for example.

Datasets. We use the 1M SIFT dataset, as in Sec. 3.3, along with two others, namely 1M GIST (960D features) and 1B SIFT, both comprising disjoint sets of training, base, and test vectors. 1M GIST has 500K training, 1M base, and 1K query vectors. 1B SIFT has 100M training, 1B base, and 10K query points. In each case, recall rates are averaged over queries in the test set for a database populated from the base set. For expedience, we use only the first 1M training points for the 1B SIFT experiments.

Parameters. In the ANN experiments below, for both ck-means and PQ, we use m = 8 and h = 256. Hence the number of clusters is k = 256^8 = 2^64, so 64 bits are used as database indices. Using h = 256 is particularly attractive because the resulting lookup tables are small, encoding is fast, and each subcenter index fits into one byte. As h increases we expect retrieval results to improve, but encoding and indexing of a query to become slower.

Initialization. To initialize the D(i)'s for learning, as in k-means, we simply begin with random samples from the set of Z(i)'s (see Sec. 4.1). To initialize R we consider the different methods that Jégou et al. [7] proposed for splitting the feature dimensions into m sub-vectors for PQ: (1) natural: sub-vectors comprise consecutive dimensions, (2) structured: dimensions with the same index modulo 8 are grouped, and (3) random: random permutations are used. For PQ in the experiments below, we use the orderings that produced the best results in [7], namely the structured ordering for 960D GIST and the natural ordering for 128D SIFT. For learning ck-means, R is initialized to the identity for the SIFT corpora. For 1M GIST, where the PQ ordering is significant, we consider all three orderings to initialize R.

Results. Fig. 4 shows Recall@R plots for ck-means and PQ [7] with symmetric and asymmetric distances (SD ≡ SQDck and AD ≡ AQDck) on the 3 datasets. The horizontal axis represents the number of retrieved items, R, on a log scale. The results consistently favor ck-means.

Figure 5. PQ and ck-means results on 1M GIST with 64-bit encoding (k = 2^64), using natural (1), structured (2), and random (3) ordering to define the (initial) subspaces.

On the high-dimensional GIST data, ck-means with AD significantly outperforms the other methods; even ck-means with SD performs on par with PQ with AD. On 1M SIFT, the Recall@10 numbers for PQ and ck-means, both using AD, are 59.9% and 63.7%. On 1B SIFT, the Recall@100 numbers are 56.5% and 64.9%. As expected, with increasing dataset size, the difference between the methods is more significant. In 1B SIFT, each feature vector is 128 bytes, hence a total of 119 GB.
Using any method in Fig. 4 (including ck-means) to index the database into 64 bits, this storage cost reduces to only 7.5 GB. This allows one to work with much larger datasets. In the experiments we use linear scan to find the nearest items according to quantizer distances. For NN search using 10K SIFT queries on 1B SIFT, this takes about 8 hours for AD and AH, and 4 hours for Hamming distance, on a 2×4-core computer. Search can be sped up significantly, e.g., by using a coarse initial quantization and an inverted-file structure for AD and AH, as suggested by [7, 1], and by using the multi-index hashing method of [13] for Hamming distance. In this paper we did not implement these efficiencies, as we focus primarily on the quality of quantization.

Fig. 5 compares ck-means to PQ when R in ck-means is initialized using the 3 orderings of [7]. It shows that ck-means is superior in all cases. It is also interesting that, despite the non-convexity of the optimization objective, ck-means learning tends to find similarly good encodings under different initial conditions. Finally, Fig. 6 compares the methods under different code dimensionality.

Figure 6. PQ and ck-means results on 1M GIST using different numbers of bits (64, 96, and 128) for encoding. In all cases asymmetric distance is used.

Table 2. Recognition accuracy rates on CIFAR-10 using different codebook learning algorithms.

6.2. Learning visual codebooks

While the ANN search tasks above require too many clusters for k-means, it is interesting to compare k-means and ck-means on a task with a moderate number of clusters. To this end we consider codebook learning for bag-of-words models [3, 10]. We use ck-means with m = 2 and h = √k, and hence k centers. The main advantage of ck-means here is that finding the closest cluster center is done in O(√k) time, much faster than standard NN search with k-means in O(k) (see the sketch at the end of this subsection). Alternatives for k-means that improve efficiency include approximate k-means [14] and hierarchical k-means [12]. Here we only compare to exact k-means.

CIFAR-10 [9] comprises 50K training and 10K test images (32×32 pixels). Each image belongs to one of 10 classes (airplane, bird, car, cat, deer, dog, frog, horse, ship, and truck). We use publicly available code from Coates et al. [2], with changes to the codebook learning and cluster assignment modules. Codebooks are built on 6×6 whitened color image patches. One histogram is created per image quadrant, and a linear SVM is applied to the 4k-dim feature vectors. Recognition accuracy rates on the test set for different models and k are given in Table 2. Despite having fewer parameters, ck-means performs on par with or better than k-means. This is consistent for different initializations of the algorithms. Although k-means has higher fidelity than ck-means, with fewer parameters ck-means may be less susceptible to overfitting. Table 2 also compares with the approach of [19], where PQ without learning a rotation is used for clustering features. As expected, learning the rotation has a significant impact on recognition rates, outperforming all three initializations of PQ.
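To put the O(√k) assignment cost in perspective, here is a back-of-the-envelope count of distance evaluations per patch (our own illustration; the particular codebook size is hypothetical):

```python
# Distance evaluations needed to assign one patch to its nearest of k codewords.
k = 1600                         # hypothetical codebook size
m, h = 2, int(round(k ** 0.5))   # ck-means with m = 2 and h = sqrt(k) = 40
cost_kmeans = k                  # exact NN against all k centers: O(k)
cost_ckmeans = m * h             # two independent searches over sqrt(k) subcenters: O(sqrt(k))
print(cost_kmeans, cost_ckmeans) # 1600 vs. 80, a 20x reduction per assignment
```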
7. Conclusions

We present the Cartesian k-means algorithm, a generalization of k-means with a parameterization of the cluster centers such that the number of centers is super-linear in the number of parameters. The method is also shown to be a generalization of the ITQ algorithm and Product Quantization. In experiments on large-scale retrieval and codebook learning for recognition the results are impressive, outperforming product quantization by a significant margin. An implementation of the method is available at https://github.com/norouzi/ckmeans.

References
[1] A. Babenko and V. Lempitsky. The inverted multi-index. CVPR, 2012.
[2] A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning. AISTATS, 2011.
[3] G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray. Visual categorization with bags of keypoints. ECCV Workshop on Statistical Learning in Computer Vision, 2004.
[4] T. Ge, K. He, Q. Ke, and J. Sun. Optimized product quantization for approximate nearest neighbor search. CVPR, 2013.
[5] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. CVPR, 2011.
[6] A. Gordo and F. Perronnin. Asymmetric distances for binary embeddings. CVPR, 2011.
[7] H. Jégou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Trans. PAMI, 2011.
[8] H. Jégou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. CVPR, 2010.
[9] A. Krizhevsky. Learning multiple layers of features from tiny images. MSc Thesis, Univ. Toronto, 2009.
[10] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. CVPR, 2006.
[11] S. P. Lloyd. Least squares quantization in PCM. IEEE Trans. IT, 28(2):129–137, 1982.
[12] D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. CVPR, 2006.
[13] M. Norouzi, A. Punjani, and D. J. Fleet. Fast search in Hamming space with multi-index hashing. CVPR, 2012.
[14] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. CVPR, 2007.
[15] P. Schönemann. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31, 1966.
[16] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. ICCV, 2003.
[17] J. Tenenbaum and W. Freeman. Separating style and content with bilinear models. Neural Comp., 12:1247–1283, 2000.
[18] A. Torralba, K. Murphy, and W. Freeman. Sharing visual features for multiclass and multiview object detection. IEEE T. PAMI, 29(5):854–869, 2007.
[19] S. Wei, X. Wu, and D. Xu. Partitioned k-means clustering for fast construction of unbiased visual vocabulary. The Era of Interactive Media, 2012.
4 0.85981387 421 cvpr-2013-Supervised Kernel Descriptors for Visual Recognition
Author: Peng Wang, Jingdong Wang, Gang Zeng, Weiwei Xu, Hongbin Zha, Shipeng Li
Abstract: In visual recognition tasks, the design of low-level image feature representation is fundamental. The advent of local patch features built from pixel attributes, such as SIFT and LBP, has precipitated dramatic progress. Recently, a kernel view of these features, called kernel descriptors (KDES) [1], generalizes the feature design in an unsupervised fashion and yields impressive results. In this paper, we present a supervised framework to embed the image-level label information into the design of patch-level kernel descriptors, which we call supervised kernel descriptors (SKDES). Specifically, we adopt the broadly applied bag-of-words (BOW) image classification pipeline and a large-margin criterion to learn the low-level patch representation, which makes the patch features much more compact and gives them better discriminative ability than KDES. With this method, we achieve competitive results over several public datasets compared with state-of-the-art methods.
5 0.85797471 255 cvpr-2013-Learning Separable Filters
Author: Roberto Rigamonti, Amos Sironi, Vincent Lepetit, Pascal Fua
Abstract: Learning filters to produce sparse image representations in terms of overcomplete dictionaries has emerged as a powerful way to create image features for many different purposes. Unfortunately, these filters are usually both numerous and non-separable, making their use computationally expensive. In this paper, we show that such filters can be computed as linear combinations of a smaller number of separable ones, thus greatly reducing the computational complexity at no cost in terms of performance. This makes filter learning approaches practical even for large images or 3D volumes, and we show that we significantly outperform state-of-the-art methods on the linear structure extraction task, in terms of both accuracy and speed. Moreover, our approach is general and can be used on generic filter banks to reduce the complexity of the convolutions.
6 0.83487874 362 cvpr-2013-Robust Monocular Epipolar Flow Estimation
7 0.81822819 69 cvpr-2013-Boosting Binary Keypoint Descriptors
8 0.8169595 242 cvpr-2013-Label Propagation from ImageNet to 3D Point Clouds
9 0.81672084 121 cvpr-2013-Detection- and Trajectory-Level Exclusion in Multiple Object Tracking
10 0.81643182 227 cvpr-2013-Intrinsic Scene Properties from a Single RGB-D Image
11 0.8162756 143 cvpr-2013-Efficient Large-Scale Structured Learning
12 0.81588566 360 cvpr-2013-Robust Estimation of Nonrigid Transformation for Point Set Registration
13 0.815561 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities
14 0.81550348 104 cvpr-2013-Deep Convolutional Network Cascade for Facial Point Detection
15 0.81508356 267 cvpr-2013-Least Soft-Threshold Squares Tracking
16 0.81507301 98 cvpr-2013-Cross-View Action Recognition via a Continuous Virtual Path
17 0.81505132 248 cvpr-2013-Learning Collections of Part Models for Object Recognition
18 0.81494725 187 cvpr-2013-Geometric Context from Videos
19 0.81493849 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking
20 0.81492978 14 cvpr-2013-A Joint Model for 2D and 3D Pose Estimation from a Single Image