nips nips2007 nips2007-23 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Pawan Mudigonda, Vladimir Kolmogorov, Philip Torr
Abstract: The problem of obtaining the maximum a posteriori estimate of a general discrete random field (i.e. a random field defined using a finite and discrete set of labels) is known to be NP-hard. However, due to its central importance in many applications, several approximate algorithms have been proposed in the literature. In this paper, we present an analysis of three such algorithms based on convex relaxations: (i) LP - S: the linear programming (LP) relaxation proposed by Schlesinger [20] for a special case and independently in [4, 12, 23] for the general case; (ii) QP - RL: the quadratic programming (QP) relaxation by Ravikumar and Lafferty [18]; and (iii) SOCP - MS: the second order cone programming (SOCP) relaxation first proposed by Muramatsu and Suzuki [16] for two label problems and later extended in [14] for a general label set. We show that the SOCP - MS and the QP - RL relaxations are equivalent. Furthermore, we prove that despite the flexibility in the form of the constraints/objective function offered by QP and SOCP, the LP - S relaxation strictly dominates (i.e. provides a better approximation than) QP - RL and SOCP - MS. We generalize these results by defining a large class of SOCP (and equivalent QP) relaxations which is dominated by the LP - S relaxation. Based on these results we propose some novel SOCP relaxations which strictly dominate the previous approaches.
Reference: text
sentIndex sentText sentNum sentScore
1 We show that the SOCP - MS and the QP - RL relaxations are equivalent. [sent-20, score-0.193]
2 Furthermore, we prove that despite the flexibility in the form of the constraints/objective function offered by QP and SOCP, the LP - S relaxation strictly dominates (i.e. provides a better approximation than) QP - RL and SOCP - MS. [sent-21, score-0.433]
3 We generalize these results by defining a large class of SOCP (and equivalent QP) relaxations which is dominated by the LP - S relaxation. [sent-24, score-0.26]
4 Based on these results we propose some novel SOCP relaxations which strictly dominate the previous approaches. [sent-25, score-0.268]
5 A particular labelling of variables v is specified by a function f whose domain corresponds to the indices of the random variables and whose range is the index of the label set, i.e. f maps the index of each random variable to the index of a label. [sent-35, score-0.162]
6 In other words, random variable va takes label lf (a) . [sent-38, score-0.259]
7 (a, b) ∈ E if, and only if, va and vb are neighbouring random variables. [sent-42, score-0.373]
8 The energy Q(f ; D, θ) is given by $Q(f; D, \theta) = \sum_{v_a \in v} \theta^1_{a;f(a)} + \sum_{(a,b) \in E} \theta^2_{ab;f(a)f(b)}$. [sent-45, score-0.256]
9 The term $\theta^1_{a;f(a)}$ is called a unary potential since its value depends on the labelling of one random variable at a time. [sent-46, score-0.123]
10 For simplicity, we assume that $\theta^2_{ab;f(a)f(b)} = w(a,b)\, d(f(a), f(b))$ where w(a, b) is the weight that indicates the strength of the pairwise relationship between variables va and vb , with w(a, b) = 0 if (a, b) ∉ E, and d(·, ·) is a distance function on the labels. [sent-48, score-0.404]
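As an illustration of the energy just defined, the following is a minimal sketch (not the authors' code); the container names unary, weight, dist and labelling are assumptions made here for readability.

```python
# Minimal sketch (illustrative names, not from the paper): evaluate the energy
# Q(f; D, theta) = sum_a theta^1_{a;f(a)} + sum_{(a,b) in E} w(a,b) d(f(a), f(b)).
def energy(unary, weight, dist, labelling):
    """unary[a][i]   : unary potential theta^1_{a;i}
       weight[(a,b)] : edge weight w(a, b), one entry per (a, b) in E
       dist[i][j]    : label distance d(l_i, l_j)
       labelling[a]  : label index f(a) taken by v_a"""
    e = sum(unary[a][labelling[a]] for a in range(len(unary)))
    for (a, b), w in weight.items():
        e += w * dist[labelling[a]][labelling[b]]
    return e

# Toy example: two variables, two labels, Potts-like distance.
unary = [[0.0, 1.0], [0.5, 0.2]]
weight = {(0, 1): 2.0}
dist = [[0, 1], [1, 0]]
print(energy(unary, weight, dist, [0, 1]))   # 0.0 + 0.2 + 2.0 * 1 = 2.2
```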
11 Specifically, we consider: (i) LP - S, the linear programming (LP) relaxation of [4, 12, 20, 23]; (ii) QP - RL, the quadratic programming (QP) relaxation of [18]; and (iii) SOCP - MS, the second order cone programming (SOCP) relaxation of [14, 16]. [sent-53, score-1.1]
12 We denote the element of x at index a · h + i as xa;i where va ∈ v and li ∈ l. [sent-57, score-0.309]
13 We say that the variable xa;i belongs to variable va since it defines which label va does (or does not) take. [sent-59, score-0.491]
14 We refer to the (a · h + i, b · h + j)th element of the matrix X as Xab;ij where va , vb ∈ v and li , lj ∈ l. [sent-61, score-0.497]
15 The MAP estimation problem is equivalent to the following integer program: IP : $x^* = \arg\min_x \sum_{v_a, l_i} \theta^1_{a;i}\, \tfrac{1+x_{a;i}}{2} + \sum_{(a,b)\in E,\, l_i,l_j} \theta^2_{ab;ij}\, \tfrac{1+x_{a;i}+x_{b;j}+X_{ab;ij}}{4}$, s.t. $x \in \{-1,1\}^{nh}$, $\sum_{l_i \in l} x_{a;i} = 2 - h$, $X = x x^\top$. [sent-64, score-0.393]
16 Note that one uniqueness constraint is specified for each variable va . [sent-70, score-0.309]
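Since the IP is just a reformulation of exhaustive minimization over labellings, a brute-force solver makes the objective concrete. This is an illustrative sketch only (exponential in the number of variables), reusing the container conventions of the energy example above.

```python
# Illustrative brute-force MAP estimation: enumerate all labellings f and keep
# the one with the lowest energy, which is exactly what the IP above encodes.
from itertools import product

def map_bruteforce(unary, weight, dist):
    n, h = len(unary), len(unary[0])
    best_e, best_f = float("inf"), None
    for f in product(range(h), repeat=n):            # all h^n labellings
        e = sum(unary[a][f[a]] for a in range(n))
        e += sum(w * dist[f[a]][f[b]] for (a, b), w in weight.items())
        if e < best_e:
            best_e, best_f = e, f
    return best_e, best_f
```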
17 2 Linear Programming Relaxation The LP relaxation (proposed by Schlesinger [20] for a special case and independently in [4, 12, 23] for the general case), which we call LP - S, is given as follows: [sent-75, score-0.287]
18 LP - S : $x^* = \arg\min_x \sum_{v_a, l_i} \theta^1_{a;i}\, \tfrac{1+x_{a;i}}{2} + \sum_{(a,b)\in E,\, l_i,l_j} \theta^2_{ab;ij}\, \tfrac{1+x_{a;i}+x_{b;j}+X_{ab;ij}}{4}$ (4), s.t. $x \in [-1,1]^{nh}$, $X \in [-1,1]^{nh \times nh}$, $\sum_{l_i \in l} x_{a;i} = 2 - h$ (5), $\sum_{l_j \in l} X_{ab;ij} = (2 - h)\, x_{a;i}$ (6), $X_{ab;ij} = X_{ba;ji}$ (7), $1 + x_{a;i} + x_{b;j} + X_{ab;ij} \ge 0$ (8). [sent-77, score-0.587]
19 In the LP - S relaxation only those elements Xab;ij of X are used for which (a, b) ∈ E and li , lj ∈ l. [sent-78, score-0.457]
20 Unlike the IP, the feasibility region of the above problem is relaxed such that the variables xa;i and Xab;ij lie in the interval [−1, 1]. [sent-79, score-0.125]
21 Further, the constraint (3) is replaced by equation (6) which is called the marginalization constraint [23]. [sent-80, score-0.2]
22 One marginalization constraint is specified for each (a, b) ∈ E and li ∈ l. [sent-81, score-0.2]
23 These constraints (7) and (8) are defined for all (a, b) ∈ E and li , lj ∈ l. [sent-84, score-0.237]
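A hedged sketch of LP - S on a small chain CRF using cvxpy is given below; it is not the authors' implementation, and the instance (random potentials, chain of three variables) is purely illustrative. Only one copy of X per edge is stored, so the symmetry constraint (7) is implicit.

```python
# Hedged sketch of the LP-S relaxation for a toy CRF with cvxpy (not the
# authors' code). theta1[a, i] = theta^1_{a;i}; theta2[(a,b)][i, j] = theta^2_{ab;ij}.
import cvxpy as cp
import numpy as np

n, h = 3, 2                                      # variables and labels
edges = [(0, 1), (1, 2)]
rng = np.random.default_rng(0)
theta1 = rng.normal(size=(n, h))
theta2 = {e: rng.normal(size=(h, h)) for e in edges}

x = cp.Variable((n, h))                          # x_{a;i}
X = {e: cp.Variable((h, h)) for e in edges}      # X_{ab;ij}, one copy per edge, so (7) is implicit

cons = [x >= -1, x <= 1, cp.sum(x, axis=1) == 2 - h]                        # (5)
obj = cp.sum(cp.multiply(theta1, (1 + x) / 2))
for (a, b) in edges:
    Xe = X[(a, b)]
    xa = cp.reshape(x[a, :], (h, 1))
    xb = cp.reshape(x[b, :], (1, h))
    pair = 1 + xa @ np.ones((1, h)) + np.ones((h, 1)) @ xb + Xe             # 1 + x_{a;i} + x_{b;j} + X_{ab;ij}
    cons += [Xe >= -1, Xe <= 1, pair >= 0]                                   # (5), (8)
    cons += [cp.sum(Xe, axis=1) == (2 - h) * x[a, :],                        # (6), marginalize over l_j
             cp.sum(Xe, axis=0) == (2 - h) * x[b, :]]                        # (6) for the reversed copy, via (7)
    obj += cp.sum(cp.multiply(theta2[(a, b)], pair / 4))

lp_s = cp.Problem(cp.Minimize(obj), cons)
lp_s.solve()
print("LP-S optimal value:", lp_s.value)
```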
24 Note that it is possible to specify other constraints for the problem of MAP estimation (as will be seen in the different relaxations described in the subsequent sections). [sent-87, score-0.317]
25 3 Quadratic Programming Relaxation We now describe the QP relaxation for the MAP estimation IP which was proposed by Ravikumar and Lafferty [18]. [sent-89, score-0.317]
26 To this end, it would be convenient to reformulate the objective function of the IP using a vector of unary potentials of length nh (denoted by $\hat{\theta}^1$) and a matrix of pairwise potentials of size nh × nh (denoted by $\hat{\theta}^2$). [sent-90, score-0.585]
27 The element of the unary potential vector at index (a · h + i) is defined as $\hat{\theta}^1_{a;i} = \theta^1_{a;i} - \sum_{v_c \in v} \sum_{l_k \in l} |\theta^2_{ac;ik}|$, where va ∈ v and li ∈ l. [sent-91, score-0.421]
28 The (a · h + i, b · h + j)th element of the pairwise potential matrix $\hat{\theta}^2$ is defined such that $\hat{\theta}^2_{ab;ij} = \sum_{v_c \in v} \sum_{l_k \in l} |\theta^2_{ac;ik}|$ if a = b and i = j, and $\hat{\theta}^2_{ab;ij} = \theta^2_{ab;ij}$ otherwise, (9) where va , vb ∈ v and li , lj ∈ l. [sent-92, score-0.622]
29 In other words, the potentials are modified by defining a pairwise potential $\hat{\theta}^2_{aa;ii}$ and subtracting the value of that potential from the corresponding unary potential $\hat{\theta}^1_{a;i}$. [sent-93, score-0.169]
30 The above relaxation is convex (since $\hat{\theta}^2 \succeq 0$) and can be solved in polynomial time; we call this relaxation QP - RL. [sent-103, score-0.287]
31 Note that in [18], the QP - RL relaxation was described using the variable y = (1 + x)/2. [sent-104, score-0.287]
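The reparameterization in (9) can be written in a few lines of numpy. This is a hedged sketch with illustrative names (not the code of [18]), assuming the nh × nh pairwise matrix has a zero diagonal (no self-edges).

```python
# Hedged sketch of the QP-RL reparameterization (illustrative):
# thetahat1_{a;i} = theta1_{a;i} - sum_{c,k} |theta2_{ac;ik}|, and thetahat2 keeps
# theta2 off the diagonal while its diagonal entries become sum_{c,k} |theta2_{ac;ik}|.
import numpy as np

def reparameterize(theta1, theta2):
    """theta1: length-nh vector; theta2: symmetric nh x nh matrix with zero diagonal."""
    abs_row_sum = np.abs(theta2).sum(axis=1)       # sum over v_c in v, l_k in l
    thetahat1 = theta1 - abs_row_sum
    thetahat2 = theta2.copy()
    idx = np.arange(theta2.shape[0])
    thetahat2[idx, idx] = abs_row_sum              # the new pairwise potentials thetahat2_{aa;ii}
    return thetahat1, thetahat2

# The modified matrix is (weakly) diagonally dominant, hence positive semidefinite.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)); theta2 = (A + A.T) / 2
theta2[np.arange(6), np.arange(6)] = 0.0
th1_hat, th2_hat = reparameterize(rng.normal(size=6), theta2)
print(np.linalg.eigvalsh(th2_hat).min() >= -1e-9)  # True
```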
32 4 Semidefinite Programming Relaxation The SDP relaxation of the MAP estimation problem replaces the non-convex constraint $X = x x^\top$ by the convex semidefinite constraint $X - x x^\top \succeq 0$ [6, 15], which can be expressed as $\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0$, (13) using Schur's complement [2]. [sent-107, score-0.493]
33 Further, like LP - S, it relaxes the integer constraints by allowing the variables xa;i and Xab;ij to lie in the interval [−1, 1] with Xaa;ii = 1 for all va ∈ v, li ∈ l. [sent-108, score-0.408]
34 The SDP relaxation is a well-studied approach which provides accurate solutions for the MAP estimation problem. [sent-109, score-0.317]
35 However, due to its computational inefficiency, it is not practically useful for large scale problems with nh > 1000. [sent-112, score-0.148]
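A quick numerical illustration (not part of the paper) of the Schur-complement fact behind (13): when X − xx⊤ is chosen PSD, the bordered matrix is PSD as well.

```python
# Numerical illustration of (13): X - x x^T >= 0 iff [[1, x^T], [x, X]] >= 0.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=4)
X = np.outer(x, x) + 0.3 * np.eye(4)                 # so that X - x x^T = 0.3 I is PSD
M = np.block([[np.ones((1, 1)), x[None, :]],
              [x[:, None],      X]])
print(np.linalg.eigvalsh(X - np.outer(x, x)).min() >= -1e-9)   # True
print(np.linalg.eigvalsh(M).min() >= -1e-9)                    # True
```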
36 5 Second Order Cone Programming Relaxation We now describe the SOCP relaxation that was proposed by Muramatsu and Suzuki [16] for the MAXCUT problem (i.e. the problem of finding the maximum cut of a given graph). [sent-115, score-0.287]
37 This relaxation, which we call SOCP - MS, is based on the technique of Kim and Kojima [10] who observed that the SDP constraint can be further relaxed to second order cone (SOC) constraints. [sent-118, score-0.139]
38 In order to describe the SOCP - MS relaxation, we consider a pair of neighbouring variables va and vb , i.e. (a, b) ∈ E, together with a pair of labels li and lj . [sent-126, score-0.405]
39 For each such pair of variables and labels, the SOCP - MS relaxation specifies two SOC constraints which involve only the above variables [14, 16]. [sent-130, score-0.418]
40 Using the variables va and vb (where (a, b) ∈ E) and labels li and lj , we define the submatrices $x^{(a,b,i,j)}$ and $X^{(a,b,i,j)}$ of x and X respectively as: $x^{(a,b,i,j)} = \begin{pmatrix} x_{a;i} \\ x_{b;j} \end{pmatrix}$, $X^{(a,b,i,j)} = \begin{pmatrix} X_{aa;ii} & X_{ab;ij} \\ X_{ba;ji} & X_{bb;jj} \end{pmatrix}$. (15) [sent-132, score-0.557]
41 The SOCP - MS relaxation specifies SOC constraints of the form (14) for all pairs of neighbouring variables (a, b) ∈ E and labels li , lj ∈ l. [sent-133, score-0.63]
42 2 Comparing Relaxations In order to compare the relaxations described above, we require the following definitions. [sent-139, score-0.193]
43 We say that a relaxation A dominates the relaxation B (alternatively, B is dominated by A) if and only if $\min_{(x,X)\in F(A)} e(x, X; \theta) \ge \min_{(x,X)\in F(B)} e(x, X; \theta), \; \forall \theta$, (21) where F(A) and F(B) are the feasibility regions of the relaxations A and B respectively. [sent-140, score-0.991]
44 Here e(x, X; θ) denotes the value of the objective function (i.e. the energy of the possibly fractional labelling (x, X)) for the MAP estimation problem defined over the CRF with parameter θ. [sent-143, score-0.125]
45 Thus the optimal value of the dominating relaxation A is always greater than or equal to the optimal value of relaxation B. [sent-144, score-0.574]
46 We note here that the concept of domination has been used previously in [4] (to compare LP - S with the linear programming relaxation in [11]). [sent-145, score-0.339]
47 Relaxations A and B are said to be equivalent if A dominates B and B dominates A, i.e. their optimal values are equal for all parameters θ. [sent-146, score-0.223]
48 A relaxation A is said to strictly dominate relaxation B if A dominates B but B does not dominate A, i.e. there exists at least one parameter θ for which $\min_{(x,X)\in F(A)} e(x, X; \theta) > \min_{(x,X)\in F(B)} e(x, X; \theta)$. (22) [sent-149, score-0.784]
49 Note that, by definition, the optimal value of any relaxation would always be less than or equal to the energy of the optimal (i.e. MAP) labelling. [sent-151, score-0.311]
50 Hence, the optimal value of a strictly dominating relaxation A is closer to the optimal value of the MAP estimation IP compared to that of relaxation B. [sent-154, score-0.647]
51 Our Results: We prove that LP - S strictly dominates SOCP - MS (see section 3). [sent-156, score-0.146]
52 This implies that LP - S strictly dominates the QP - RL relaxation. [sent-158, score-0.146]
53 In section 5 we generalize the above results by proving that a large class of SOCP (and equivalent QP) relaxations is dominated by LP - S. [sent-159, score-0.26]
54 Based on these results, we propose a novel set of constraints which result in SOCP relaxations that dominate LP - S, QP - RL and SOCP - MS. [sent-160, score-0.292]
55 These relaxations introduce SOC constraints on cycles and cliques formed by the neighbourhood relationship of the CRF. [sent-161, score-0.346]
56 In other words the feasibility region of LP - S is a strict subset of the feasibility region of SOCP - MS (i.e. F(LP - S) ⊂ F(SOCP - MS)). [sent-166, score-0.205]
57 Theorem 1: The LP - S relaxation strictly dominates the SOCP - MS relaxation. [sent-170, score-0.433]
58 For this vector, we show that the values of the objective functions of the QP - RL and SOCP - MS relaxations are equal. [sent-178, score-0.193]
59 Theorem 2: The QP - RL relaxation and the SOCP - MS relaxation are equivalent. [sent-181, score-0.574]
60 Theorems 1 and 2 prove that the LP - S relaxation strictly dominates the QP - RL and SOCP - MS relaxations. [sent-182, score-0.433]
61 5 QP and SOCP Relaxations over Trees and Cycles We now generalize the results of Theorem 1 by defining a large class of SOCP relaxations which is dominated by LP - S. [sent-194, score-0.243]
62 Specifically, we consider the SOCP relaxations which relax the non-convex constraint $X = x x^\top$ using a set of second order cone (SOC) constraints of the form $\|(U^k)^\top x\| \le C^k \bullet X$, k = 1, · · · , nC, where $C^k = U^k (U^k)^\top \succeq 0$ for all k = 1, · · · , nC. (23) [sent-195, score-0.399]
63 Note that each SOCP relaxation belonging to this class would define an equivalent QP relaxation (similar to the equivalent QP - RL relaxation defined by the SOCP - MS relaxation). [sent-196, score-0.895]
64 Hence, all these QP relaxations will also be dominated by the LP - S relaxation. [sent-197, score-0.243]
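For concreteness, a single constraint of the form (23) can be written with cvxpy's second-order-cone atom as below; U and the problem dimensions are placeholders chosen here, not values from the paper.

```python
# Hedged sketch of one SOC constraint ||(U^k)^T x|| <= C^k . X from (23),
# with C^k = U^k (U^k)^T positive semidefinite by construction.
import cvxpy as cp
import numpy as np

nh = 6
x = cp.Variable(nh)
X = cp.Variable((nh, nh), symmetric=True)
U = np.random.default_rng(2).normal(size=(nh, 2))    # placeholder U^k
C = U @ U.T                                          # C^k is PSD
soc_k = cp.SOC(cp.trace(C @ X), U.T @ x)             # ||U^T x||_2 <= trace(C X) = C . X
# soc_k can then be appended to the constraint list of any of the relaxations above.
```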
65 This CRF is defined over four variables v = {va , vb , vc , vd } (connected to form a cycle of length 4), each of which takes a label from the set l = {l0 , l1 }. [sent-211, score-0.277]
66 In other words E k is the subset of the edges in the graphical model of the CRF such that Ck specifies constraints for the random variables corresponding to those edges. [sent-214, score-0.118]
67 Recall that w(a, b) specifies the strength of the pairwise relationship between two neighbouring variables va and vb . [sent-224, score-0.45]
68 Theorem 4: SOCP relaxations (and the equivalent QP relaxations) which define constraints only using graphs Gk = (V k , E k ) which form (arbitrarily large) trees are dominated by the LP - S relaxation. [sent-229, score-0.327]
69 Theorem 5: When d(i, j) ≥ 0 for all li , lj ∈ l, the SOCP relaxations which define constraints only using non-overlapping graphs Gk which form (arbitrarily large) even cycles with all positive or all negative weights are dominated by the LP - S relaxation. [sent-232, score-0.536]
70 6 Some Useful SOC Constraints We now describe two SOCP relaxations which include all the marginalization constraints specified in LP - S. [sent-238, score-0.306]
71 1 The SOCP-C Relaxation The SOCP - C relaxation (where C denotes cycles) defines second order cone (SOC) constraints using positive semidefinite matrices C such that the graph G (defined in section 5) forms a cycle. [sent-241, score-0.436]
72 Let the variables corresponding to vertices of one such cycle G of length c be denoted as vC = {vb |b ∈ {a1 , a2 , · · · , ac }}. [sent-242, score-0.138]
73 In addition to the marginalization constraints, the SOCP - C relaxation specifies the following SOC constraint: ||U⊤ x|| ≤ C • X, (28) such that the graph G defined by the above constraint forms a cycle. [sent-244, score-0.43]
74 We choose to define the SOC constraint (28) for only those sets of labels lC which satisfy the following: $\sum_{(a_k,a_l)\in E} D_c(k, l)\, \theta^2_{a_k a_l; i_k i_l} \ge \sum_{(a_k,a_l)\in E} D_c(k, l)\, \theta^2_{a_k a_l; j_k j_l}, \; \forall \{j_1, j_2, \cdots, j_c\}$. (31) [sent-250, score-0.249]
75 Note that this choice is motivated by the fact that the variables $X_{a_k a_l; i_k i_l}$ corresponding to these sets vC and lC are assigned trivial values by the LP - S relaxation in the presence of non-submodular terms. [sent-251, score-0.417]
76 Since marginalization constraints are included in the SOCP - C relaxation, the value of the objective function obtained by solving this relaxation would at least be equal to the value obtained by the LP - S relaxation (i.e. SOCP - C dominates LP - S). [sent-252, score-0.687]
77 We can further show that in the case where |l| = 2 and the constraint (28) is defined over a frustrated cycle (i.e. a cycle with an odd number of non-submodular terms), SOCP - C strictly dominates LP - S. [sent-255, score-0.175]
79 The constraint defined in equation (28) is similar to the (linear) cycle inequality constraints [1] which are given by $\sum_{k,l} D_c(k, l)\, X_{a_k a_l; i_k i_l} \ge 2 - c$. (32) [sent-260, score-0.305]
80 We believe that the feasibility region defined by cycle inequalities is a strict subset of the feasibility region defined by equation (28). [sent-261, score-0.27]
81 In other words a relaxation defined by adding cycle inequalities to LP - S would strictly dominate SOCP - C. [sent-262, score-0.465]
82 2 The SOCP-Q Relaxation In the previous section we saw that LP - S dominates SOCP relaxations whose constraints are defined on trees. [sent-266, score-0.118]
83 However, the SOCP - C relaxation, which defines its constraints using cycles, strictly dominates LP - S. [sent-267, score-0.213]
84 This raises the question whether matrices C, which result in more complicated graphs G, would provide an even better relaxation for the MAP estimation problem. [sent-268, score-0.317]
85 To this end, we define an SOCP relaxation which specifies constraints such that the resulting graph G forms a clique. [sent-270, score-0.374]
86 We denote this relaxation by SOCP - Q (where Q indicates cliques). [sent-271, score-0.287]
87 The SOCP - Q relaxation contains the marginalization constraint and the cycle inequalities (defined above). [sent-272, score-0.494]
88 Given this set of variables vQ and labels lQ , we define an SOC constraint using a matrix C of size nh × nh which is zero everywhere except for the elements Cak al ;ik il = 1. [sent-276, score-0.562]
89 This implies that $C \succeq 0$, which enables us to obtain the following SOC constraint: $\left(\sum_k x_{a_k;i_k}\right)^2 \le q + \sum_{k \ne l} X_{a_k a_l; i_k i_l}$. (33) [sent-278, score-0.156]
90 We choose to specify the above constraint only for the set of labels lQ which satisfy the following condition: $\sum_{(a_k,a_l)\in E} \theta^2_{a_k a_l; i_k i_l} \ge \sum_{(a_k,a_l)\in E} \theta^2_{a_k a_l; j_k j_l}, \; \forall \{j_1, j_2, \cdots, j_q\}$. (34) [sent-279, score-0.276]
91 Again, this choice is motivated by the fact that the variables $X_{a_k a_l; i_k i_l}$ corresponding to these sets vQ and lQ are assigned trivial values by the LP - S relaxation in the presence of non-submodular pairwise potentials. [sent-280, score-0.462]
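A hedged sketch of how a constraint of the form (33) could be expressed in cvxpy is shown below; the helper name and the way the relevant x and X entries are collected are assumptions made here, not the authors' code.

```python
# Hedged sketch of the clique SOC constraint (33): (sum_k x_{a_k;i_k})^2 is bounded
# by q plus the sum of the off-diagonal entries X_{a_k a_l;i_k i_l} of the clique.
import cvxpy as cp

def clique_constraint(x_entries, X_offdiag, q):
    """x_entries: the q scalar expressions x_{a_k;i_k};
       X_offdiag: the expressions X_{a_k a_l;i_k i_l} with k != l."""
    return cp.square(sum(x_entries)) <= q + sum(X_offdiag)
```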
92 When the clique contains a frustrated cycle, it can be shown that SOCP - Q dominates the LP - S relaxation (similar to SOCP - C). [sent-281, score-0.425]
93 Further, using a counter-example, it can be proved that the feasibility region given by cycle inequalities is not a subset of the feasibility region defined by constraint (33). [sent-282, score-0.347]
94 The surprising result of our work is that despite the flexibility in the form of the objective function/constraints offered by QP and SOCP, the LP - S relaxation dominates a large class of QP and SOCP relaxations. [sent-285, score-0.39]
95 It appears that the authors who have previously used SOCP relaxations in the Combinatorial Optimization literature [16] and those who have reported QP relaxation in the Machine Learning literature [18] were unaware of this result. [sent-286, score-0.48]
96 We also proposed two new SOCP relaxations (SOCP - C and SOCP - Q) and presented some examples to prove that they provide a better approximation than LP - S. [sent-287, score-0.193]
97 Approximation algorithms for the metric labelling problem via a new linear programming formulation. [sent-313, score-0.123]
98 Second-order cone programming relaxation of nonconvex quadratic optimization problems. [sent-348, score-0.422]
99 A new second-order cone programming relaxation for max-cut problems. [sent-390, score-0.401]
100 Quadratic programming relaxations for metric labelling and Markov random field MAP estimation. [sent-402, score-0.316]
simIndex simValue paperId paperTitle
same-paper 1 0.9999997 23 nips-2007-An Analysis of Convex Relaxations for MAP Estimation
2 0.20229171 134 nips-2007-Multi-Task Learning via Conic Programming
3 0.16798699 120 nips-2007-Linear programming analysis of loopy belief propagation for weighted matching
4 0.15915146 92 nips-2007-Fixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations
5 0.13061735 128 nips-2007-Message Passing for Max-weight Independent Set
6 0.12334572 112 nips-2007-Learning Monotonic Transformations for Classification
7 0.11051696 141 nips-2007-New Outer Bounds on the Marginal Polytope
8 0.099459827 76 nips-2007-Efficient Convex Relaxation for Transductive Support Vector Machine
9 0.09483517 63 nips-2007-Convex Relaxations of Latent Variable Training
10 0.085012481 51 nips-2007-Comparing Bayesian models for multisensory cue combination without mandatory integration
11 0.080111861 172 nips-2007-Scene Segmentation with CRFs Learned from Partially Labeled Images
12 0.072164916 65 nips-2007-DIFFRAC: a discriminative and flexible framework for clustering
13 0.069284901 105 nips-2007-Infinite State Bayes-Nets for Structured Domains
14 0.069229089 187 nips-2007-Structured Learning with Approximate Inference
15 0.058022909 121 nips-2007-Local Algorithms for Approximate Inference in Minor-Excluded Graphs
16 0.057075724 62 nips-2007-Convex Learning with Invariances
17 0.052212834 4 nips-2007-A Constraint Generation Approach to Learning Stable Linear Dynamical Systems
18 0.04513735 193 nips-2007-The Distribution Family of Similarity Distances
19 0.044798214 64 nips-2007-Cooled and Relaxed Survey Propagation for MRFs
20 0.044406999 12 nips-2007-A Spectral Regularization Framework for Multi-Task Structure Learning
topicId topicWeight
[(0, -0.14), (1, 0.001), (2, -0.106), (3, 0.048), (4, -0.058), (5, -0.221), (6, 0.082), (7, 0.163), (8, 0.028), (9, -0.001), (10, 0.032), (11, -0.175), (12, -0.087), (13, 0.125), (14, 0.068), (15, -0.003), (16, -0.078), (17, 0.012), (18, 0.132), (19, -0.027), (20, -0.213), (21, 0.056), (22, -0.064), (23, -0.076), (24, -0.007), (25, -0.042), (26, 0.13), (27, 0.11), (28, 0.111), (29, 0.024), (30, -0.051), (31, 0.059), (32, 0.09), (33, 0.127), (34, 0.019), (35, -0.091), (36, -0.089), (37, 0.004), (38, -0.063), (39, -0.145), (40, -0.016), (41, -0.051), (42, -0.026), (43, -0.004), (44, -0.004), (45, 0.098), (46, 0.063), (47, 0.03), (48, 0.047), (49, -0.07)]
simIndex simValue paperId paperTitle
same-paper 1 0.97034848 23 nips-2007-An Analysis of Convex Relaxations for MAP Estimation
Author: Pawan Mudigonda, Vladimir Kolmogorov, Philip Torr
Abstract: The problem of obtaining the maximum a posteriori estimate of a general discrete random field (i.e. a random field defined using a finite and discrete set of labels) is known to be NP-hard. However, due to its central importance in many applications, several approximate algorithms have been proposed in the literature. In this paper, we present an analysis of three such algorithms based on convex relaxations: (i) LP - S: the linear programming (LP) relaxation proposed by Schlesinger [20] for a special case and independently in [4, 12, 23] for the general case; (ii) QP - RL: the quadratic programming (QP) relaxation by Ravikumar and Lafferty [18]; and (iii) SOCP - MS: the second order cone programming (SOCP) relaxation first proposed by Muramatsu and Suzuki [16] for two label problems and later extended in [14] for a general label set. We show that the SOCP - MS and the QP - RL relaxations are equivalent. Furthermore, we prove that despite the flexibility in the form of the constraints/objective function offered by QP and SOCP, the LP - S relaxation strictly dominates (i.e. provides a better approximation than) QP - RL and SOCP - MS. We generalize these results by defining a large class of SOCP (and equivalent QP) relaxations which is dominated by the LP - S relaxation. Based on these results we propose some novel SOCP relaxations which strictly dominate the previous approaches.
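To make the LP-S relaxation mentioned in the abstract concrete, here is a minimal sketch for a two-variable, two-label toy problem, assuming scipy is available. The potentials, variable ordering and use of linprog are illustrative choices, not code from the paper; the equality constraints are the standard local-polytope (marginalization) ones, and on a single edge the relaxation happens to be tight.

```python
import numpy as np
from scipy.optimize import linprog

theta_unary = np.array([[0.0, 1.0],    # theta^1_{a;i} for variable a = 0
                        [0.5, 0.0]])   # and for variable a = 1
theta_pair = np.array([[0.0, 2.0],     # theta^2_{ab;ij}, a Potts-like penalty
                       [2.0, 0.0]])

# unknowns: [x_{0;0}, x_{0;1}, x_{1;0}, x_{1;1}, X_{00}, X_{01}, X_{10}, X_{11}]
c = np.concatenate([theta_unary.ravel(), theta_pair.ravel()])

A_eq, b_eq = [], []
# each variable takes exactly one label
A_eq.append([1, 1, 0, 0, 0, 0, 0, 0]); b_eq.append(1)
A_eq.append([0, 0, 1, 1, 0, 0, 0, 0]); b_eq.append(1)
# marginalization: sum_j X_{ij} = x_{0;i}
A_eq.append([-1, 0, 0, 0, 1, 1, 0, 0]); b_eq.append(0)
A_eq.append([0, -1, 0, 0, 0, 0, 1, 1]); b_eq.append(0)
# marginalization: sum_i X_{ij} = x_{1;j}
A_eq.append([0, 0, -1, 0, 1, 0, 1, 0]); b_eq.append(0)
A_eq.append([0, 0, 0, -1, 0, 1, 0, 1]); b_eq.append(0)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8, method="highs")
print(res.fun, res.x)   # LP-S lower bound on the MAP energy and its minimizer
```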
2 0.67053187 120 nips-2007-Linear programming analysis of loopy belief propagation for weighted matching
Author: Sujay Sanghavi, Dmitry Malioutov, Alan S. Willsky
Abstract: Loopy belief propagation has been employed in a wide variety of applications with great empirical success, but it comes with few theoretical guarantees. In this paper we investigate the use of the max-product form of belief propagation for weighted matching problems on general graphs. We show that max-product converges to the correct answer if the linear programming (LP) relaxation of the weighted matching problem is tight and does not converge if the LP relaxation is loose. This provides an exact characterization of max-product performance and reveals connections to the widely used optimization technique of LP relaxation. In addition, we demonstrate that max-product is effective in solving practical weighted matching problems in a distributed fashion by applying it to the problem of self-organization in sensor networks. 1
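A minimal sketch of the LP relaxation of maximum-weight matching that this analysis ties max-product to, assuming scipy is available; the graph and weights are toy values, and on a bipartite instance like this one the LP optimum is integral.

```python
import numpy as np
from scipy.optimize import linprog

edges = [(0, 2), (0, 3), (1, 2), (1, 3)]          # a small bipartite graph
weights = np.array([3.0, 1.0, 2.0, 4.0])
n_nodes = 4

# one constraint per node: the incident edge variables sum to at most 1
A_ub = np.zeros((n_nodes, len(edges)))
for e, (u, v) in enumerate(edges):
    A_ub[u, e] = 1.0
    A_ub[v, e] = 1.0

res = linprog(-weights, A_ub=A_ub, b_ub=np.ones(n_nodes),
              bounds=[(0, 1)] * len(edges), method="highs")
print(-res.fun, res.x)   # here the LP returns the integral matching {(0,2), (1,3)}
```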
3 0.59045273 128 nips-2007-Message Passing for Max-weight Independent Set
Author: Sujay Sanghavi, Devavrat Shah, Alan S. Willsky
Abstract: We investigate the use of message-passing algorithms for the problem of finding the max-weight independent set (MWIS) in a graph. First, we study the performance of loopy max-product belief propagation. We show that, if it converges, the quality of the estimate is closely related to the tightness of an LP relaxation of the MWIS problem. We use this relationship to obtain sufficient conditions for correctness of the estimate. We then develop a modification of max-product – one that converges to an optimal solution of the dual of the MWIS problem. We also develop a simple iterative algorithm for estimating the max-weight independent set from this dual solution. We show that the MWIS estimate obtained using these two algorithms in conjunction is correct when the graph is bipartite and the MWIS is unique. Finally, we show that any problem of MAP estimation for probability distributions over finite domains can be reduced to an MWIS problem. We believe this reduction will yield new insights and algorithms for MAP estimation. 1
4 0.53647429 112 nips-2007-Learning Monotonic Transformations for Classification
Author: Andrew Howard, Tony Jebara
Abstract: A discriminative method is proposed for learning monotonic transformations of the training data while jointly estimating a large-margin classifier. In many domains such as document classification, image histogram classification and gene microarray experiments, fixed monotonic transformations can be useful as a preprocessing step. However, most classifiers only explore these transformations through manual trial and error or via prior domain knowledge. The proposed method learns monotonic transformations automatically while training a large-margin classifier without any prior knowledge of the domain. A monotonic piecewise linear function is learned which transforms data for subsequent processing by a linear hyperplane classifier. Two algorithmic implementations of the method are formalized. The first solves a convergent alternating sequence of quadratic and linear programs until it obtains a locally optimal solution. An improved algorithm is then derived using a convex semidefinite relaxation that overcomes initialization issues in the greedy optimization problem. The effectiveness of these learned transformations on synthetic problems, text data and image data is demonstrated. 1
5 0.47745472 134 nips-2007-Multi-Task Learning via Conic Programming
Author: Tsuyoshi Kato, Hisashi Kashima, Masashi Sugiyama, Kiyoshi Asai
Abstract: When we have several related tasks, solving them simultaneously is shown to be more effective than solving them individually. This approach is called multi-task learning (MTL) and has been studied extensively. Existing approaches to MTL often treat all the tasks as uniformly related to each other and the relatedness of the tasks is controlled globally. For this reason, the existing methods can lead to undesired solutions when some tasks are not highly related to each other, and some pairs of related tasks can have significantly different solutions. In this paper, we propose a novel MTL algorithm that can overcome these problems. Our method makes use of a task network, which describes the relation structure among tasks. This allows us to deal with intricate relation structures in a systematic way. Furthermore, we control the relatedness of the tasks locally, so all pairs of related tasks are guaranteed to have similar solutions. We apply the above idea to support vector machines (SVMs) and show that the optimization problem can be cast as a second order cone program, which is convex and can be solved efficiently. The usefulness of our approach is demonstrated through simulations with protein super-family classification and ordinal regression problems.
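A rough sketch of the task-network idea only, not the paper's SVM/SOCP formulation: related tasks are coupled by penalizing differences between their weight vectors, here with a squared penalty and a least-squares loss so the coupled problem has a closed-form solution. All data, the task network and the regularization constants are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tasks, d, n = 3, 5, 20
X = [rng.normal(size=(n, d)) for _ in range(n_tasks)]
w_true = rng.normal(size=d)
y = [X[t] @ (w_true + 0.1 * rng.normal(size=d)) for t in range(n_tasks)]
task_edges = [(0, 1), (1, 2)]            # the task network: which tasks are related
lam, gamma = 0.1, 1.0

# stack the per-task weights into one unknown and solve coupled least squares
A_blocks, b = [], []
for t in range(n_tasks):
    row = np.zeros((n, n_tasks * d))
    row[:, t * d:(t + 1) * d] = X[t]
    A_blocks.append(row); b.append(y[t])
for (s, t) in task_edges:                # soft constraint sqrt(gamma)*(w_s - w_t) = 0
    row = np.zeros((d, n_tasks * d))
    row[:, s * d:(s + 1) * d] = np.sqrt(gamma) * np.eye(d)
    row[:, t * d:(t + 1) * d] = -np.sqrt(gamma) * np.eye(d)
    A_blocks.append(row); b.append(np.zeros(d))
A = np.vstack(A_blocks + [np.sqrt(lam) * np.eye(n_tasks * d)])   # ridge term
rhs = np.concatenate(b + [np.zeros(n_tasks * d)])
W = np.linalg.lstsq(A, rhs, rcond=None)[0].reshape(n_tasks, d)
print(W)                                 # per-task weights pulled toward their neighbours
```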
6 0.46522713 76 nips-2007-Efficient Convex Relaxation for Transductive Support Vector Machine
7 0.45388013 63 nips-2007-Convex Relaxations of Latent Variable Training
8 0.45249593 141 nips-2007-New Outer Bounds on the Marginal Polytope
9 0.37719774 92 nips-2007-Fixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations
10 0.30842334 121 nips-2007-Local Algorithms for Approximate Inference in Minor-Excluded Graphs
11 0.28801903 12 nips-2007-A Spectral Regularization Framework for Multi-Task Structure Learning
12 0.27700758 187 nips-2007-Structured Learning with Approximate Inference
13 0.27692568 64 nips-2007-Cooled and Relaxed Survey Propagation for MRFs
14 0.2688809 105 nips-2007-Infinite State Bayes-Nets for Structured Domains
15 0.2672905 4 nips-2007-A Constraint Generation Approach to Learning Stable Linear Dynamical Systems
16 0.26227957 174 nips-2007-Selecting Observations against Adversarial Objectives
17 0.25941423 65 nips-2007-DIFFRAC: a discriminative and flexible framework for clustering
18 0.25064507 190 nips-2007-Support Vector Machine Classification with Indefinite Kernels
19 0.24842663 48 nips-2007-Collective Inference on Markov Models for Modeling Bird Migration
20 0.24408448 175 nips-2007-Semi-Supervised Multitask Learning
topicId topicWeight
[(5, 0.02), (13, 0.029), (18, 0.017), (21, 0.092), (31, 0.029), (34, 0.026), (35, 0.029), (47, 0.056), (49, 0.02), (76, 0.346), (83, 0.124), (85, 0.045), (90, 0.044)]
simIndex simValue paperId paperTitle
same-paper 1 0.73095733 23 nips-2007-An Analysis of Convex Relaxations for MAP Estimation
Author: Pawan Mudigonda, Vladimir Kolmogorov, Philip Torr
Abstract: The problem of obtaining the maximum a posteriori estimate of a general discrete random field (i.e. a random field defined using a finite and discrete set of labels) is known to be NP-hard. However, due to its central importance in many applications, several approximate algorithms have been proposed in the literature. In this paper, we present an analysis of three such algorithms based on convex relaxations: (i) LP - S: the linear programming (LP) relaxation proposed by Schlesinger [20] for a special case and independently in [4, 12, 23] for the general case; (ii) QP - RL: the quadratic programming (QP) relaxation by Ravikumar and Lafferty [18]; and (iii) SOCP - MS: the second order cone programming (SOCP) relaxation first proposed by Muramatsu and Suzuki [16] for two label problems and later extended in [14] for a general label set. We show that the SOCP - MS and the QP - RL relaxations are equivalent. Furthermore, we prove that despite the flexibility in the form of the constraints/objective function offered by QP and SOCP, the LP - S relaxation strictly dominates (i.e. provides a better approximation than) QP - RL and SOCP - MS. We generalize these results by defining a large class of SOCP (and equivalent QP) relaxations which is dominated by the LP - S relaxation. Based on these results we propose some novel SOCP relaxations which strictly dominate the previous approaches.
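For comparison with the LP-S sketch given after the earlier copy of this abstract, here is the raw quadratic (QP-style) view of the same two-variable, two-label toy potentials. QP-RL's convexifying adjustment of the unary terms is omitted, so this only illustrates the quadratic form of the energy on relaxed labellings, not the relaxation itself.

```python
import numpy as np

theta_unary = np.array([0.0, 1.0, 0.5, 0.0])   # [x_{0;0}, x_{0;1}, x_{1;0}, x_{1;1}]
Q = np.zeros((4, 4))
Q[0, 3] = Q[3, 0] = 1.0                        # theta^2 for labels (l0, l1) = 2, split symmetrically
Q[1, 2] = Q[2, 1] = 1.0                        # theta^2 for labels (l1, l0) = 2

def energy(x):
    return theta_unary @ x + x @ Q @ x

print(energy(np.array([1.0, 0.0, 1.0, 0.0])))  # integral labelling (l0, l0): 0.5
print(energy(np.array([0.5, 0.5, 0.5, 0.5])))  # a fractional point: 1.75
```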
2 0.68635708 137 nips-2007-Multiple-Instance Pruning For Learning Efficient Cascade Detectors
Author: Cha Zhang, Paul A. Viola
Abstract: Cascade detectors have been shown to operate extremely rapidly, with high accuracy, and have important applications such as face detection. Driven by this success, cascade learning has been an area of active research in recent years. Nevertheless, there are still challenging technical problems during the training process of cascade detectors. In particular, determining the optimal target detection rate for each stage of the cascade remains an unsolved issue. In this paper, we propose the multiple instance pruning (MIP) algorithm for soft cascades. This algorithm computes a set of thresholds which aggressively terminate computation with no reduction in detection rate or increase in false positive rate on the training dataset. The algorithm is based on two key insights: i) examples that are destined to be rejected by the complete classifier can be safely pruned early; ii) face detection is a multiple instance learning problem. The MIP process is fully automatic and requires no assumptions of probability distributions, statistical independence, or ad hoc intermediate rejection targets. Experimental results on the MIT+CMU dataset demonstrate significant performance advantages. 1
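A simplified sketch of the pruning idea, assuming per-stage scores for a set of positive examples are available: each intermediate threshold is set to the minimum running score among positives that the full cascade accepts, so none of them is rejected early. The multiple-instance ("any window in the bag counts") refinement from the paper is deliberately left out, and all scores are fake.

```python
import numpy as np

rng = np.random.default_rng(0)
stage_scores = rng.normal(0.2, 1.0, size=(200, 10))   # fake per-stage scores of positives
partial = np.cumsum(stage_scores, axis=1)             # running soft-cascade score
final_threshold = 0.0
accepted = partial[:, -1] > final_threshold           # examples the full classifier detects

# each intermediate threshold = min running score among eventually-accepted positives
thresholds = partial[accepted].min(axis=0)
print(thresholds)
```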
3 0.6205253 66 nips-2007-Density Estimation under Independent Similarly Distributed Sampling Assumptions
Author: Tony Jebara, Yingbo Song, Kapil Thadani
Abstract: A method is proposed for semiparametric estimation where parametric and nonparametric criteria are exploited in density estimation and unsupervised learning. This is accomplished by making sampling assumptions on a dataset that smoothly interpolate between the extreme of independently distributed (or id) sample data (as in nonparametric kernel density estimators) to the extreme of independent identically distributed (or iid) sample data. This article makes independent similarly distributed (or isd) sampling assumptions and interpolates between these two using a scalar parameter. The parameter controls a Bhattacharyya affinity penalty between pairs of distributions on samples. Surprisingly, the isd method maintains certain consistency and unimodality properties akin to maximum likelihood estimation. The proposed isd scheme is an alternative for handling nonstationarity in data without making drastic hidden variable assumptions which often make estimation difficult and laden with local optima. Experiments in density estimation on a variety of datasets confirm the value of isd over iid estimation, id estimation and mixture modeling.
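A minimal sketch of the two sampling extremes that the isd parameter interpolates between, iid (one Gaussian fitted to all samples) and id (one kernel per sample, i.e. a Parzen/KDE estimate); the data and bandwidth are invented, and the isd Bhattacharyya-affinity penalty itself is not implemented here.

```python
import numpy as np

x = np.array([0.1, 0.3, 0.2, 2.9, 3.1])
sigma = 0.3

def iid_density(t):                       # iid extreme: a single ML-fitted Gaussian
    mu, s = x.mean(), x.std()
    return np.exp(-(t - mu) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

def id_density(t):                        # id extreme: one Gaussian component per sample
    comps = np.exp(-(t - x) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return comps.mean()

print(iid_density(0.2), id_density(0.2))
```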
4 0.45225835 94 nips-2007-Gaussian Process Models for Link Analysis and Transfer Learning
Author: Kai Yu, Wei Chu
Abstract: This paper aims to model relational data on edges of networks. We describe appropriate Gaussian Processes (GPs) for directed, undirected, and bipartite networks. The inter-dependencies of edges can be effectively modeled by adapting the GP hyper-parameters. The framework suggests an intimate connection between link prediction and transfer learning, which were traditionally two separate research topics. We develop an efficient learning algorithm that can handle a large number of observations. The experimental results on several real-world data sets verify superior learning capacity. 1
5 0.44440493 175 nips-2007-Semi-Supervised Multitask Learning
Author: Qiuhua Liu, Xuejun Liao, Lawrence Carin
Abstract: A semi-supervised multitask learning (MTL) framework is presented, in which M parameterized semi-supervised classifiers, each associated with one of M partially labeled data manifolds, are learned jointly under the constraint of a softsharing prior imposed over the parameters of the classifiers. The unlabeled data are utilized by basing classifier learning on neighborhoods, induced by a Markov random walk over a graph representation of each manifold. Experimental results on real data sets demonstrate that semi-supervised MTL yields significant improvements in generalization performance over either semi-supervised single-task learning (STL) or supervised MTL. 1
6 0.4431963 209 nips-2007-Ultrafast Monte Carlo for Statistical Summations
7 0.44259778 69 nips-2007-Discriminative Batch Mode Active Learning
8 0.44149113 156 nips-2007-Predictive Matrix-Variate t Models
9 0.44027784 34 nips-2007-Bayesian Policy Learning with Trans-Dimensional MCMC
10 0.43980983 186 nips-2007-Statistical Analysis of Semi-Supervised Regression
11 0.43874067 76 nips-2007-Efficient Convex Relaxation for Transductive Support Vector Machine
12 0.43823999 97 nips-2007-Hidden Common Cause Relations in Relational Learning
13 0.43755683 63 nips-2007-Convex Relaxations of Latent Variable Training
14 0.43688536 45 nips-2007-Classification via Minimum Incremental Coding Length (MICL)
15 0.43679407 212 nips-2007-Using Deep Belief Nets to Learn Covariance Kernels for Gaussian Processes
16 0.43665221 141 nips-2007-New Outer Bounds on the Marginal Polytope
17 0.43661165 128 nips-2007-Message Passing for Max-weight Independent Set
18 0.43572503 201 nips-2007-The Value of Labeled and Unlabeled Examples when the Model is Imperfect
19 0.43571368 88 nips-2007-Fast and Scalable Training of Semi-Supervised CRFs with Application to Activity Recognition
20 0.435626 92 nips-2007-Fixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations