nips nips2013 nips2013-282 knowledge-graph by maker-knowledge-mining

282 nips-2013-Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching


Source: pdf

Author: Marcelo Fiori, Pablo Sprechmann, Joshua Vogelstein, Pablo Muse, Guillermo Sapiro

Abstract: Graph matching is a challenging problem with very important applications in a wide range of fields, from image and video analysis to biological and biomedical problems. We propose a robust graph matching algorithm inspired by sparsity-related techniques. We cast the problem, resembling group or collaborative sparsity formulations, as a non-smooth convex optimization problem that can be efficiently solved using augmented Lagrangian techniques. The method can deal with weighted or unweighted graphs, as well as multimodal data, where different graphs represent different types of data. The proposed approach is also naturally integrated with collaborative graph inference techniques, solving general network inference problems where the observed variables, possibly coming from different modalities, are not in correspondence. The algorithm is tested and compared with state-of-the-art graph matching techniques on both synthetic and real graphs. We also present results on multimodal graphs and applications to collaborative inference of brain connectivity from alignment-free functional magnetic resonance imaging (fMRI) data. The code is publicly available. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Graph matching is a challenging problem with very important applications in a wide range of fields, from image and video analysis to biological and biomedical problems. [sent-11, score-0.37]

2 We propose a robust graph matching algorithm inspired by sparsity-related techniques. [sent-12, score-0.572]

3 We cast the problem, resembling group or collaborative sparsity formulations, as a non-smooth convex optimization problem that can be efficiently solved using augmented Lagrangian techniques. [sent-13, score-0.259]

4 The method can deal with weighted or unweighted graphs, as well as multimodal data, where different graphs represent different types of data. [sent-14, score-0.563]

5 The proposed approach is also naturally integrated with collaborative graph inference techniques, solving general network inference problems where the observed variables, possibly coming from different modalities, are not in correspondence. [sent-15, score-0.453]

6 The algorithm is tested and compared with state-of-the-art graph matching techniques on both synthetic and real graphs. [sent-16, score-0.572]

7 We also present results on multimodal graphs and applications to collaborative inference of brain connectivity from alignment-free functional magnetic resonance imaging (fMRI) data. [sent-17, score-0.761]

8 (Section 1, Introduction.) Problems related to graph isomorphisms have been an important and enjoyable challenge for the scientific community for a long time. [sent-19, score-0.202]

9 The graph isomorphism problem itself consists in determining whether two given graphs are isomorphic or not, that is, if there exists an edge-preserving bijection between the vertex sets of the graphs. [sent-20, score-0.633]

10 The graph isomorphism problem is contained in the (harder) graph matching problem, which consists in finding the exact isomorphism between two graphs. [sent-23, score-0.916]

11 Graph matching is therefore a very challenging problem with several applications. [sent-24, score-0.37]

12 In this paper we address the problem of (potentially multimodal) graph matching when the graphs are not exactly isomorphic. [sent-27, score-0.86]

13 This is by far the most common scenario in real applications, since the graphs to be compared are the result of a measuring or description process, which is naturally affected by noise. [sent-28, score-0.288]

14 Given two graphs $G_A$ and $G_B$ with $p$ vertices, which we will characterize in terms of their $p \times p$ adjacency matrices $A$ and $B$, the graph matching problem consists in finding a correspondence between the nodes of $G_A$ and $G_B$ minimizing some matching error. [sent-29, score-1.477]

15 In terms of the adjacency matrices, this corresponds to finding a matrix $P$ in the set of permutation matrices $\mathcal{P}$, such that it minimizes some distance between $A$ and $PBP^T$. [sent-30, score-0.405]

16 A common choice is the Frobenius norm $\|A - PBP^T\|_F^2$, where $\|M\|_F^2 = \sum_{ij} M_{ij}^2$. [sent-31, score-0.185]

17 The graph matching problem can then be stated as $\min_{P \in \mathcal{P}} \|A - PBP^T\|_F^2 = \min_{P \in \mathcal{P}} \|AP - PB\|_F^2$ (1). [sent-32, score-0.757]

The combinatorial nature of the permutation search makes this problem NP-hard in general, although polynomial algorithms have been developed for a few special types of graphs, like trees or planar graphs for example (Conte et al.). [sent-33, score-0.477]

19 There are several diverse techniques addressing the graph matching problem, including spectral methods (Umeyama, 1988) and problem relaxations (Zaslavskiy et al.). [sent-35, score-0.615]

20 The relaxed version of the problem is $\hat{P} = \arg\min_{P \in \mathcal{D}} \|AP - PB\|_F^2$, which is a convex problem, though the result is a doubly stochastic matrix instead of a permutation. [sent-42, score-0.159]

21 The final node correspondence is obtained as the closest permutation matrix to $\hat{P}$: $P^* = \arg\min_{P \in \mathcal{P}} \|P - \hat{P}\|_F^2$, which is a linear assignment problem that can be solved in $O(p^3)$ by the Hungarian algorithm (Kuhn, 1955). [sent-43, score-0.195]
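As a concrete illustration of this rounding step, here is a minimal sketch (assuming NumPy/SciPy; this is illustrative and not the authors' released code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def closest_permutation(P_hat):
    """Solve P* = argmin_{P permutation} ||P - P_hat||_F^2.
    Since ||P||_F^2 is constant over permutation matrices, this is
    equivalent to maximizing <P, P_hat>, a linear assignment problem."""
    row, col = linear_sum_assignment(-P_hat)  # maximize the inner product
    P = np.zeros_like(P_hat)
    P[row, col] = 1.0
    return P
```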

22 However, this last step lacks any guarantee about the graph matching problem itself. [sent-44, score-0.572]

23 Firstly, we propose a new and versatile formulation for the graph matching problem which is more robust to noise and can naturally manage multimodal data. [sent-52, score-0.888]

24 The technique, which we call GLAG for Group lasso graph matching, is inspired by recent work on sparse modeling, in particular group and collaborative sparse coding. [sent-53, score-0.509]

25 In Section 2 we present the proposed graph matching formulation, and we show how to solve the optimization problem in Section 3. [sent-58, score-0.572]

26 The joint collaborative network and permutation learning application is described in Section 4. [sent-59, score-0.297]

27 (Section 2, Graph matching formulation.) We consider the problem of matching two graphs that are not necessarily perfectly isomorphic. [sent-61, score-1.09]

28 We will assume the following model: there is a noise-free graph characterized by an adjacency matrix $T$. [sent-62, score-0.43]

29 Then we want to match two graphs with adjacency matrices $A = T + O_A$ and $B = P_o^T T P_o + O_B$, where $O_A$ and $O_B$ have a sparse number of non-zero elements of arbitrary magnitude. [sent-63, score-0.498]

30 In this context, the QCP formulation tends to find a doubly stochastic matrix P which minimizes the “average error” between AP and PB. [sent-68, score-0.183]

31 In short, the group Lasso takes a set of groups of coefficients and encourages only some of these groups to be active, while the others remain zero. [sent-71, score-0.203]

32 In this particular graph matching application, we form $p^2$ groups, one per matrix entry $(i, j)$, each one consisting of the 2-dimensional vector $\big((AP)_{ij}, (PB)_{ij}\big)$. [sent-73, score-0.621]
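A minimal NumPy sketch of this group-lasso cost as it is described here (the function name is ours):

```python
import numpy as np

def glag_cost(P, A, B):
    """Group-lasso graph matching cost of Eq. (2): one 2-d group per
    entry (i, j), f(P) = sum_ij ||((AP)_ij, (PB)_ij)||_2."""
    AP, PB = A @ P, P @ B
    return np.sqrt(AP**2 + PB**2).sum()
```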

Ideally we would like to solve the graph matching problem by finding the minimum of the cost $f$ defined in Eq. (2) over the set of permutation matrices $\mathcal{P}$. [sent-75, score-0.81]

The relaxed problem is $\tilde{P} = \arg\min_{P \in \mathcal{D}} f(P)$ (3). As with the Frobenius formulation, the final step simply finds the closest permutation matrix to $\tilde{P}$. [sent-77, score-0.195]

35 Let us analyze the case when A and B are the adjacency matrices of two isomorphic undirected unweighted graphs with e edges and no self-loops. [sent-78, score-0.655]

36 Since the graphs are isomorphic, there exists a permutation matrix $P_o$ such that $A = P_o B P_o^T$. [sent-79, score-0.483]

$f(P) = \sum_k \sqrt{a_k^2 + b_k^2} \ge \frac{1}{\sqrt{2}} \sum_k (a_k + b_k)$ (4). Now, since $P$ is doubly stochastic, the sum of all the entries of $AP$ is equal to the sum of all the entries of $A$, which is two times the number of edges. [sent-85, score-0.144]

38 In particular, this is true for the permutation $P_o$, which completes the proof of all the statements. [sent-88, score-0.146]
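The mass-preservation step of the proof is easy to check numerically; a small sketch with illustrative sizes (our construction, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
p, e = 8, 12
A = np.zeros((p, p))
iu = np.triu_indices(p, k=1)
idx = rng.choice(iu[0].size, size=e, replace=False)
A[iu[0][idx], iu[1][idx]] = 1.0
A = A + A.T                      # undirected, unweighted, no self-loops

P = np.full((p, p), 1.0 / p)     # a (flat) doubly stochastic matrix
# 1^T (AP) 1 = 1^T A 1 because P 1 = 1, so the total mass is preserved.
assert np.isclose((A @ P).sum(), A.sum())
assert np.isclose(A.sum(), 2 * e)
```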

39 This Lemma shows that not comparing the magnitudes of the weights in $A$ and $B$ does not affect the matching performance when the two graphs are isomorphic and have equal weights. [sent-89, score-0.778]

40 Indeed, since the group lasso tends to set complete groups to zero, and the actual value of the non-zero coefficients is less important, it is possible to group very dissimilar coefficients together if that results in fewer active groups. [sent-91, score-0.309]

41 In contrast, the Frobenius-based approaches mentioned in the introduction are very susceptible to differences in edge magnitudes and not appropriate for multimodal matching. [sent-94, score-0.193]

42 The idea is to write the optimization problem as an equivalent artificially constrained problem, using two new variables $\alpha, \beta \in \mathbb{R}^{p \times p}$: $\min_{P \in \mathcal{D},\, \alpha, \beta} \sum_{ij} \|(\alpha_{ij}, \beta_{ij})\|_2$ subject to $\alpha = AP$ and $\beta = PB$. [sent-97, score-0.185]

43 The first subproblem is decomposable into $p^2$ scalar problems (one for each matrix entry): $\min_{\alpha_{ij}, \beta_{ij}} \|(\alpha_{ij}, \beta_{ij})\|_2 + \frac{c}{2}\big(\alpha_{ij} - (AP_t)_{ij} + (U_t)_{ij}\big)^2 + \frac{c}{2}\big(\beta_{ij} - (P_t B)_{ij} + (V_t)_{ij}\big)^2$. [sent-107, score-0.483]

44 From the optimality conditions on the subgradient of this subproblem, it can be seen that this can be solved in closed form by means of the well-known vector soft-thresholding operator (Yuan & Lin, 2006): $S_v(b, \lambda) = \left(1 - \frac{\lambda}{\|b\|_2}\right)_+ b$. [sent-108, score-0.555]
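A sketch of this operator in NumPy (the positive-part form is the standard one from the group-lasso literature):

```python
import numpy as np

def vector_soft_threshold(b, lam):
    """S_v(b, lam) = (1 - lam/||b||_2)_+ b, the closed-form minimizer of
    lam*||x||_2 + 0.5*||x - b||_2^2 over x."""
    norm = np.linalg.norm(b)
    if norm <= lam:
        return np.zeros_like(b)
    return (1.0 - lam / norm) * b
```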

45 The second term can be linearized analogously, so the minimization of the second step becomes $\min_{P \in \mathcal{D}} \frac{1}{2}\big\|P - \big(P_k + \tau A^T(\alpha^{t+1} + U_t - AP_k)\big)\big\|_F^2 + \frac{1}{2}\big\|P - \big(P_k + \tau(\beta^{t+1} + V_t - P_k B)B^T\big)\big\|_F^2$, where the two bracketed terms are fixed matrices $C$ and $D$; this is simply the projection of the matrix $\frac{1}{2}(C + D)$ onto $\mathcal{D}$. [sent-115, score-0.203]
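The projection onto the doubly stochastic matrices $\mathcal{D}$ has no simple closed form; one standard option (a sketch, not necessarily the scheme used by the authors) is Dykstra's alternating projections between the affine set of unit row/column sums, which does admit a closed-form projection, and the nonnegative orthant:

```python
import numpy as np

def project_doubly_stochastic(Y, n_iter=200):
    """Approximate Euclidean projection of Y onto the doubly stochastic
    matrices via Dykstra's method (a correction term is only needed for
    the non-affine nonnegativity constraint)."""
    n = Y.shape[0]
    J = np.full((n, n), 1.0 / n)    # averaging operator (1 1^T)/n

    def proj_affine(X):
        # Closed-form projection onto {X : X 1 = 1, X^T 1 = 1}.
        return X - J @ X - X @ J + J @ X @ J + J

    X = Y.astype(float)
    Q = np.zeros_like(X)
    for _ in range(n_iter):
        X = proj_affine(X)
        Z = np.maximum(X + Q, 0.0)  # project onto the nonnegative orthant
        Q = X + Q - Z               # Dykstra correction
        X = Z
    return X
```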

46 (Section 4, Application to joint graph inference of not pre-aligned data.) Estimating the inverse covariance matrix is a very active field of research. [sent-121, score-0.398]

47 In numerous applications, this matrix is known to be sparse, and in this regard the graphical Lasso has proven to be a good estimator for the inverse covariance matrix (Yuan & Lin, 2007; Fiori et al.). [sent-123, score-0.244]

48 The graphical Lasso estimator for $\Sigma^{-1}$ is the matrix $\Theta$ which solves the optimization problem $\min_{\Theta \succ 0} \mathrm{tr}(S\Theta) - \log\det\Theta + \lambda \sum_{i,j} |\Theta_{ij}|$ (6), which corresponds to the maximum likelihood estimator for $\Sigma^{-1}$ with an $\ell_1$ regularization. [sent-126, score-0.199]
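For reference, the estimator in (6) is available off the shelf; a minimal usage sketch with scikit-learn (synthetic data and regularization value are illustrative, and this is standard library usage rather than the authors' own solver):

```python
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))              # n_samples x p data matrix
S = np.cov(X, rowvar=False)                     # empirical covariance
cov_est, Theta = graphical_lasso(S, alpha=0.1)  # Theta: sparse Sigma^{-1}
```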

49 This problem consists of estimating two (or more) matrices $\Sigma_A^{-1}$ and $\Sigma_B^{-1}$ from data matrices $X_A$ and $X_B$ as above, with the additional prior information that the inverse covariance matrices share the same support. [sent-132, score-0.335]

That is, the graphs determined by the adjacency matrices $\Theta_A$ and $\Theta_B$ are aligned. [sent-136, score-0.498]

51 Motivated by the formulation presented in Section 2, we propose to overcome this limitation by incorporating a permutation matrix into the optimization problem and jointly learning it during the estimation process. [sent-138, score-0.257]

52 $\min_{\Theta_A, \Theta_B \succ 0,\; P \in \mathcal{P}} \mathrm{tr}(S_A\Theta_A) - \log\det\Theta_A + \mathrm{tr}(S_B\Theta_B) - \log\det\Theta_B + \lambda \sum_{i,j} \big\|\big((\Theta_A)_{ij}, (P\Theta_B P^T)_{ij}\big)\big\|_2$ (8). Even after the relaxation of the constraint $P \in \mathcal{P}$ to $P \in \mathcal{D}$, the joint minimization of (8) over $(\Theta_A, \Theta_B)$ and $P$ is a non-convex problem. [sent-140, score-0.212]

53 In the second subproblem, since (ΘA , ΘB ) are fixed, the only term to minimize is the last one, which corresponds to the graph matching formulation presented in Section 2. [sent-145, score-0.634]

54 (Section 5, Experimental results.) We now present the performance of our algorithm and compare it with the most recent techniques in several scenarios, including synthetic and real graphs, multimodal data, and fMRI experiments. [sent-146, score-0.193]

55 In the cases where there is a “ground truth,” the performance is measured in terms of the matching error, defined as $\|A_o - P B_o P^T\|_F^2$, where $P$ is the obtained permutation matrix and $(A_o, B_o)$ are the original adjacency matrices. [sent-147, score-0.683]
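A direct NumPy transcription of this evaluation metric:

```python
import numpy as np

def matching_error(P, A_o, B_o):
    """||A_o - P B_o P^T||_F^2 for a recovered permutation matrix P."""
    return np.linalg.norm(A_o - P @ B_o @ P.T, 'fro') ** 2
```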

56 (Section 5.1, Graph matching: Synthetic graphs.) We focus here on the traditional graph matching problem of undirected weighted graphs, both with and without noise. [sent-149, score-0.899]

57 More precisely, let $A_o$ be the adjacency matrix of a random weighted graph and $B_o$ a permuted version of it, generated with a random permutation matrix $P_o$, i.e., $B_o = P_o^T A_o P_o$. [sent-150, score-0.725]

We then add a certain number $N$ of random edges to $A_o$ with the same weight distribution as the original weights, and another $N$ random edges to $B_o$, and from these noisy versions we try to recover the original matching (or any matching between $A_o$ and $B_o$, since it may not be unique). [sent-153, score-0.824]
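A sketch of this synthetic protocol in NumPy; the uniform weight distribution and the sizes are illustrative placeholders, since the exact generators are not specified here:

```python
import numpy as np

def make_noisy_pair(p=100, n_edges=300, n_noise=20, seed=0):
    """Random weighted graph A_o, permuted copy B_o = P_o^T A_o P_o,
    and noisy versions A, B with N spurious edges added to each."""
    rng = np.random.default_rng(seed)

    def add_random_edges(M, n):
        added = 0
        while added < n:
            i, j = rng.integers(0, p, size=2)
            if i != j:
                M[i, j] = M[j, i] = rng.uniform(0.5, 1.5)
                added += 1
        return M

    A_o = add_random_edges(np.zeros((p, p)), n_edges)
    P_o = np.eye(p)[rng.permutation(p)]        # random permutation matrix
    B_o = P_o.T @ A_o @ P_o                    # permuted copy
    A = add_random_edges(A_o.copy(), n_noise)  # noisy versions
    B = add_random_edges(B_o.copy(), n_noise)
    return A, B, A_o, B_o, P_o
```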

59 These models are representative of a wide range of real-world graphs (Newman, 2010). [sent-156, score-0.288]

Figure 1 shows the matching error as a function of the noise level for graphs with $p = 100$ nodes (top row), and for $p = 150$ nodes (bottom row). [sent-161, score-0.793]

61 The number of edges varies between 200 and 400 for graphs with 100 nodes, and between 300 and 600 for graphs with 150 nodes, depending on the model. [sent-162, score-0.618]

62 This figure shows that our method is more stable and consistently outperforms the other methods (considered state-of-the-art), especially for noise levels in the low range (for large noise levels, it is not clear what a “true” matching is, and in addition the sparsity hypothesis is no longer valid). [sent-164, score-0.539]

63 (Section 5.2, Graph matching: Real graphs.) We now present experiments similar to those in the previous section but with real graphs. [sent-168, score-0.288]

The C. elegans chemical and electrical connection graphs (Varshney et al., 2011) and their corresponding adjacency matrices, $A_c$ and $A_e$, are publicly available. [sent-173, score-0.16]

65 We match both the chemical and the electrical connection graphs against noisy artificially permuted versions of them. [sent-174, score-0.516]

66 The permuted graphs are constructed following the same procedure used in Section 5.1. [sent-175, score-0.41]

67 (Section 5.3, Multimodal graph matching.) One of the advantages of the proposed approach is its capability to deal with multimodal data. [sent-182, score-0.765]

68 This makes it possible to match weighted graphs where the weights may follow completely different probability distributions. [sent-184, score-0.375]

69 This is commonly the case when dealing with multimodal data: when a network is measured using significantly different modalities, one expects the underlying connections to be the same but no relation can be assumed between the actual weights of these connections. [sent-185, score-0.241]

70 In what follows, we evaluate the performance of the proposed method in two examples of multimodal graph matching. [sent-187, score-0.395]

71 5 6 5 4 3 2 1 5 10 15 20 25 30 35 40 45 50 Noise 0 5 10 15 20 25 30 35 40 45 50 Noise (a) Electrical connection graph (b) Chemical connection graph Figure 2: Matching error for the C. [sent-193, score-0.404]

72 Note that in the chemical connection graph, the matching error of our algorithm is zero until noise levels of ≈ 50. [sent-196, score-0.499]

73 We first generate an auxiliary binary random graph $A_b$ and a permuted version $B_b = P_o^T A_b P_o$. [sent-197, score-0.324]

Then, we assign weights to the graphs according to distributions $p_A$ and $p_B$ (that will be specified for each experiment), thus obtaining the weighted graphs $A$ and $B$. [sent-198, score-0.241]

75 We then add noise consisting of spurious weighted edges following the same distribution as the original graphs. [sent-199, score-0.43]
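A sketch of this multimodal construction in NumPy; the two Gaussian weight distributions below are placeholders for $p_A$ and $p_B$:

```python
import numpy as np

def make_multimodal_pair(p=100, edge_prob=0.05, seed=0):
    """Common (permuted) binary support A_b / B_b with weights drawn
    from two different distributions, as in the multimodal experiments."""
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((p, p)) < edge_prob, k=1)
    A_b = (upper | upper.T).astype(float)      # shared binary support
    P_o = np.eye(p)[rng.permutation(p)]
    B_b = P_o.T @ A_b @ P_o                    # permuted support

    def weight(support, mu, sigma):
        W = support * np.abs(rng.normal(mu, sigma, (p, p)))
        W = np.triu(W, 1)
        return W + W.T                         # keep the graph undirected

    A = weight(A_b, mu=1.0, sigma=0.2)         # weights ~ p_A
    B = weight(B_b, mu=5.0, sigma=1.0)         # weights ~ p_B
    return A, B, A_b, B_b, P_o
```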

76 Finally, we run all four graph matching methods to recover the permutation. [sent-202, score-0.572]

77 The matching error is measured in the unweighted graphs as $\|A_b - P B_b P^T\|_F$. [sent-203, score-0.701]

78 Note that while this metric might not be appropriate for the optimization stage when considering multimodal data, it is appropriate for the actual error evaluation, measuring mismatches. [sent-204, score-0.193]

79 Comparing with the original permutation matrix may not be very informative since there is no guarantee that the matrix is unique, even for the original noise-free data. [sent-205, score-0.244]

80 Figures 3(a) and 3(b) show the comparison when the weights in both graphs are Gaussian distributed, but with different means and variances. [sent-206, score-0.336]

81 These results confirm the intuition described above, showing that our method is more suitable for multimodal graphs, especially in the low range of noise. [sent-209, score-0.24]

82 (Section 5.4, Collaborative inference.) In this last experiment, we illustrate the application of the permuted collaborative graph inference presented in Section 4 with real resting-state fMRI data, publicly available (Nooner, 2012). [sent-217, score-0.617]

83 To illustrate the potential of the proposed framework, we show that using only part of the data in XA and part of the data in a permuted version of XB , we are able to infer a connectivity matrix almost as accurately as using the whole data. [sent-223, score-0.21]

84 Since there is no ground truth for the connectivity, and as mentioned before the collaborative setting (7) has already been proven successful, we take as ground truth the result of the collaborative inference using the empirical covariance matrices of XA and XB , denoted by SA and SB . [sent-225, score-0.503]

85 The results of this collaborative inference procedure are the two inverse covariance matrices $\Theta_A^{GT}$ and $\Theta_B^{GT}$. [sent-226, score-0.352]

86 In short, the gold standards built for this experiment (obtained with the entire data) are found by solving $\min_{\Theta_A \succ 0,\, \Theta_B \succ 0} \mathrm{tr}(S_A\Theta_A) - \log\det\Theta_A + \mathrm{tr}(S_B\Theta_B) - \log\det\Theta_B + \lambda \sum_{i,j} \big\|\big((\Theta_A)_{ij}, (\Theta_B)_{ij}\big)\big\|_2$. [sent-227, score-0.212]

87 Now, let $X_A^H$ be the first 550 samples of $X_A$, and $X_B^H$ be the first 550 samples of $X_B$, which correspond to a little less than 6 minutes of study. [sent-228, score-0.429]

88 We compute the empirical covariance matrices $S_A^H$ and $S_B^H$ of these data matrices, and we artificially permute the second one: $\tilde{S}_B^H = P_o^T S_B^H P_o$. [sent-229, score-0.151]

89 With these two matrices $S_A^H$ and $\tilde{S}_B^H$ we run the algorithm described in Section 4, which alternately computes the inverse covariance matrices $\Theta_A^H$ and $\Theta_B^H$ and the matching $P$ between them. [sent-230, score-0.613]

90 Let $\Theta_A^s$ and $\Theta_B^s$ be the results of the graphical Lasso (6) using $S_A^H$ and $S_B^H$: $\Theta_K^s = \arg\min_{\Theta \succ 0} \mathrm{tr}(S_K^H \Theta) - \log\det\Theta + \lambda \sum_{i,j} |\Theta_{ij}|$, for $K \in \{A, B\}$. [sent-232, score-0.199]

91 Using less than 6 minutes of each study, with the variables not pre-aligned, the permuted collaborative inference procedure proposed in Section 4 outperforms the classical graphical Lasso using the full 10 minutes of study. [sent-236, score-0.485]

92 [Figure: per-subject comparison; x-axis: Subject.] (Section 6, Conclusions.) We have presented a new formulation for the graph matching problem, and proposed an optimization algorithm for minimizing the corresponding cost function. [sent-240, score-0.634]

93 The reported results show its suitability for the graph matching problem of weighted graphs, outperforming previous state-of-the-art methods on both synthetic and real graphs. [sent-241, score-0.611]

94 Since in the problem formulation the weights of the graphs are not compared explicitly, the method can deal with multimodal data, outperforming the other compared methods. [sent-242, score-0.591]

95 In addition, the proposed formulation naturally fits into the pre-alignment-free collaborative network inference framework, where the permutation is estimated together with the underlying common network, with promising preliminary results in applications with real data. [sent-243, score-0.409]

96 A linear programming approach for the weighted graph matching problem. [sent-247, score-0.611]

97 Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses. [sent-304, score-0.195]

98 Community structure and scale-free collections of Erdős–Rényi graphs. [sent-320, score-0.144]

99 Fast approximate quadratic programming for large (brain) graph matching. [sent-357, score-0.202]

100 A path following algorithm for the graph matching problem. [sent-372, score-0.613]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('matching', 0.37), ('graphs', 0.288), ('graph', 0.202), ('multimodal', 0.193), ('ap', 0.187), ('ij', 0.185), ('collaborative', 0.151), ('pb', 0.148), ('permutation', 0.146), ('nyi', 0.144), ('faq', 0.141), ('po', 0.131), ('permuted', 0.122), ('erd', 0.122), ('fiori', 0.118), ('zaslavskiy', 0.118), ('adjacency', 0.118), ('fmri', 0.115), ('det', 0.106), ('xb', 0.103), ('ao', 0.103), ('vogelstein', 0.098), ('pt', 0.095), ('bter', 0.094), ('conte', 0.094), ('elegans', 0.094), ('glag', 0.094), ('sb', 0.093), ('matrices', 0.092), ('sa', 0.089), ('lasso', 0.086), ('ut', 0.081), ('xa', 0.078), ('doubly', 0.072), ('isomorphic', 0.072), ('pablo', 0.072), ('bo', 0.072), ('apt', 0.071), ('isomorphism', 0.071), ('mus', 0.071), ('pbpt', 0.071), ('qcp', 0.071), ('group', 0.07), ('chemical', 0.068), ('subproblem', 0.064), ('marcelo', 0.062), ('apk', 0.062), ('formulation', 0.062), ('noise', 0.061), ('lin', 0.06), ('minutes', 0.059), ('covariance', 0.059), ('tr', 0.057), ('yuan', 0.056), ('linearized', 0.056), ('pk', 0.052), ('durham', 0.051), ('gt', 0.051), ('inference', 0.05), ('matrix', 0.049), ('sapiro', 0.049), ('vt', 0.049), ('weights', 0.048), ('specially', 0.047), ('almohamad', 0.047), ('blica', 0.047), ('caenorhabditis', 0.047), ('chiquet', 0.047), ('craddock', 0.047), ('duffuaa', 0.047), ('nooner', 0.047), ('seshadhri', 0.047), ('umeyama', 0.047), ('modalities', 0.046), ('groups', 0.045), ('graphical', 0.044), ('unweighted', 0.043), ('promotes', 0.043), ('et', 0.043), ('publicly', 0.042), ('duke', 0.042), ('edges', 0.042), ('ob', 0.042), ('barab', 0.042), ('oa', 0.042), ('varshney', 0.042), ('uruguay', 0.042), ('path', 0.041), ('brain', 0.04), ('weighted', 0.039), ('cially', 0.039), ('connectivity', 0.039), ('minp', 0.038), ('electrical', 0.038), ('active', 0.038), ('convex', 0.038), ('nodes', 0.037), ('entries', 0.036), ('loh', 0.036), ('varoquaux', 0.036)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999958 282 nips-2013-Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching

Author: Marcelo Fiori, Pablo Sprechmann, Joshua Vogelstein, Pablo Muse, Guillermo Sapiro

Abstract: Graph matching is a challenging problem with very important applications in a wide range of fields, from image and video analysis to biological and biomedical problems. We propose a robust graph matching algorithm inspired by sparsity-related techniques. We cast the problem, resembling group or collaborative sparsity formulations, as a non-smooth convex optimization problem that can be efficiently solved using augmented Lagrangian techniques. The method can deal with weighted or unweighted graphs, as well as multimodal data, where different graphs represent different types of data. The proposed approach is also naturally integrated with collaborative graph inference techniques, solving general network inference problems where the observed variables, possibly coming from different modalities, are not in correspondence. The algorithm is tested and compared with state-of-the-art graph matching techniques on both synthetic and real graphs. We also present results on multimodal graphs and applications to collaborative inference of brain connectivity from alignment-free functional magnetic resonance imaging (fMRI) data. The code is publicly available. 1

2 0.18589784 300 nips-2013-Solving the multi-way matching problem by permutation synchronization

Author: Deepti Pachauri, Risi Kondor, Vikas Singh

Abstract: The problem of matching not just two, but m different sets of objects to each other arises in many contexts, including finding the correspondence between feature points across multiple images in computer vision. At present it is usually solved by matching the sets pairwise, in series. In contrast, we propose a new method, Permutation Synchronization, which finds all the matchings jointly, in one shot, via a relaxation to eigenvector decomposition. The resulting algorithm is both computationally efficient, and, as we demonstrate with theoretical arguments as well as experimental results, much more stable to noise than previous methods. 1

3 0.14306514 5 nips-2013-A Deep Architecture for Matching Short Texts

Author: Zhengdong Lu, Hang Li

Abstract: Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents, etc.). The matching level of two objects is usually measured as the inner product in a certain feature space, while the modeling effort focuses on mapping of objects from the original space to the feature space. This schema, although proven successful on a range of matching tasks, is insufficient for capturing the rich structure in the matching process of more complicated objects. In this paper, we propose a new deep architecture to more effectively model the complicated matching relations between two objects from heterogeneous domains. More specifically, we apply this model to matching tasks in natural language, e.g., finding sensible responses for a tweet, or relevant answers to a given question. This new architecture naturally combines the localness and hierarchy intrinsic to the natural language problems, and therefore greatly improves upon the state-of-the-art models. 1

4 0.12269153 207 nips-2013-Near-optimal Anomaly Detection in Graphs using Lovasz Extended Scan Statistic

Author: James L. Sharpnack, Akshay Krishnamurthy, Aarti Singh

Abstract: The detection of anomalous activity in graphs is a statistical problem that arises in many applications, such as network surveillance, disease outbreak detection, and activity monitoring in social networks. Beyond its wide applicability, graph structured anomaly detection serves as a case study in the difficulty of balancing computational complexity with statistical power. In this work, we develop from first principles the generalized likelihood ratio test for determining if there is a well connected region of activation over the vertices in the graph in Gaussian noise. Because this test is computationally infeasible, we provide a relaxation, called the Lov´ sz extended scan statistic (LESS) that uses submodularity to approximate the a intractable generalized likelihood ratio. We demonstrate a connection between LESS and maximum a-posteriori inference in Markov random fields, which provides us with a poly-time algorithm for LESS. Using electrical network theory, we are able to control type 1 error for LESS and prove conditions under which LESS is risk consistent. Finally, we consider specific graph models, the torus, knearest neighbor graphs, and ǫ-random graphs. We show that on these graphs our results provide near-optimal performance by matching our results to known lower bounds. 1

5 0.11418654 73 nips-2013-Convex Relaxations for Permutation Problems

Author: Fajwel Fogel, Rodolphe Jenatton, Francis Bach, Alexandre D'Aspremont

Abstract: Seriation seeks to reconstruct a linear order between variables using unsorted similarity information. It has direct applications in archeology and shotgun gene sequencing for example. We prove the equivalence between the seriation and the combinatorial 2-SUM problem (a quadratic minimization problem over permutations) over a class of similarity matrices. The seriation problem can be solved exactly by a spectral algorithm in the noiseless case and we produce a convex relaxation for the 2-SUM problem to improve the robustness of solutions in a noisy setting. This relaxation also allows us to impose additional structural constraints on the solution, to solve semi-supervised seriation problems. We present numerical experiments on archeological data, Markov chains and gene sequences. 1

6 0.11188758 291 nips-2013-Sensor Selection in High-Dimensional Gaussian Trees with Nuisances

7 0.10997204 25 nips-2013-Adaptive Anonymity via $b$-Matching

8 0.10147551 289 nips-2013-Scalable kernels for graphs with continuous attributes

9 0.10040281 87 nips-2013-Density estimation from unweighted k-nearest neighbor graphs: a roadmap

10 0.098888859 125 nips-2013-From Bandits to Experts: A Tale of Domination and Independence

11 0.097888857 227 nips-2013-Online Learning in Markov Decision Processes with Adversarially Chosen Transition Probability Distributions

12 0.09354452 303 nips-2013-Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis

13 0.091755763 306 nips-2013-Speeding up Permutation Testing in Neuroimaging

14 0.086336017 146 nips-2013-Large Scale Distributed Sparse Precision Estimation

15 0.085664652 109 nips-2013-Estimating LASSO Risk and Noise Level

16 0.084790766 353 nips-2013-When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity

17 0.080152087 108 nips-2013-Error-Minimizing Estimates and Universal Entry-Wise Error Bounds for Low-Rank Matrix Completion

18 0.075079963 213 nips-2013-Nonparametric Multi-group Membership Model for Dynamic Networks

19 0.073805317 318 nips-2013-Structured Learning via Logistic Regression

20 0.072374165 116 nips-2013-Fantope Projection and Selection: A near-optimal convex relaxation of sparse PCA


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.222), (1, 0.07), (2, 0.043), (3, 0.022), (4, 0.007), (5, 0.031), (6, -0.028), (7, -0.068), (8, -0.054), (9, 0.003), (10, 0.096), (11, -0.129), (12, 0.152), (13, -0.094), (14, -0.059), (15, -0.054), (16, -0.013), (17, -0.069), (18, -0.14), (19, -0.006), (20, 0.0), (21, -0.103), (22, -0.008), (23, -0.054), (24, 0.038), (25, -0.118), (26, 0.019), (27, 0.103), (28, -0.139), (29, 0.127), (30, 0.073), (31, 0.085), (32, 0.041), (33, 0.067), (34, 0.006), (35, 0.077), (36, 0.01), (37, -0.118), (38, -0.045), (39, 0.1), (40, -0.04), (41, 0.0), (42, 0.065), (43, 0.041), (44, 0.016), (45, 0.011), (46, -0.077), (47, 0.041), (48, 0.021), (49, 0.109)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9637574 282 nips-2013-Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching

Author: Marcelo Fiori, Pablo Sprechmann, Joshua Vogelstein, Pablo Muse, Guillermo Sapiro

Abstract: Graph matching is a challenging problem with very important applications in a wide range of fields, from image and video analysis to biological and biomedical problems. We propose a robust graph matching algorithm inspired by sparsity-related techniques. We cast the problem, resembling group or collaborative sparsity formulations, as a non-smooth convex optimization problem that can be efficiently solved using augmented Lagrangian techniques. The method can deal with weighted or unweighted graphs, as well as multimodal data, where different graphs represent different types of data. The proposed approach is also naturally integrated with collaborative graph inference techniques, solving general network inference problems where the observed variables, possibly coming from different modalities, are not in correspondence. The algorithm is tested and compared with state-of-the-art graph matching techniques on both synthetic and real graphs. We also present results on multimodal graphs and applications to collaborative inference of brain connectivity from alignment-free functional magnetic resonance imaging (fMRI) data. The code is publicly available. 1

2 0.75218791 300 nips-2013-Solving the multi-way matching problem by permutation synchronization

Author: Deepti Pachauri, Risi Kondor, Vikas Singh

Abstract: The problem of matching not just two, but m different sets of objects to each other arises in many contexts, including finding the correspondence between feature points across multiple images in computer vision. At present it is usually solved by matching the sets pairwise, in series. In contrast, we propose a new method, Permutation Synchronization, which finds all the matchings jointly, in one shot, via a relaxation to eigenvector decomposition. The resulting algorithm is both computationally efficient, and, as we demonstrate with theoretical arguments as well as experimental results, much more stable to noise than previous methods. 1

3 0.71833646 25 nips-2013-Adaptive Anonymity via $b$-Matching

Author: Krzysztof M. Choromanski, Tony Jebara, Kui Tang

Abstract: The adaptive anonymity problem is formalized where each individual shares their data along with an integer value to indicate their personal level of desired privacy. This problem leads to a generalization of k-anonymity to the b-matching setting. Novel algorithms and theory are provided to implement this type of anonymity. The relaxation achieves better utility, admits theoretical privacy guarantees that are as strong, and, most importantly, accommodates a variable level of anonymity for each individual. Empirical results confirm improved utility on benchmark and social data-sets.

4 0.70808929 207 nips-2013-Near-optimal Anomaly Detection in Graphs using Lovasz Extended Scan Statistic

Author: James L. Sharpnack, Akshay Krishnamurthy, Aarti Singh

Abstract: The detection of anomalous activity in graphs is a statistical problem that arises in many applications, such as network surveillance, disease outbreak detection, and activity monitoring in social networks. Beyond its wide applicability, graph structured anomaly detection serves as a case study in the difficulty of balancing computational complexity with statistical power. In this work, we develop from first principles the generalized likelihood ratio test for determining if there is a well connected region of activation over the vertices in the graph in Gaussian noise. Because this test is computationally infeasible, we provide a relaxation, called the Lovász extended scan statistic (LESS), that uses submodularity to approximate the intractable generalized likelihood ratio. We demonstrate a connection between LESS and maximum a-posteriori inference in Markov random fields, which provides us with a poly-time algorithm for LESS. Using electrical network theory, we are able to control type 1 error for LESS and prove conditions under which LESS is risk consistent. Finally, we consider specific graph models, the torus, k-nearest neighbor graphs, and ǫ-random graphs. We show that on these graphs our results provide near-optimal performance by matching our results to known lower bounds. 1

5 0.65033096 350 nips-2013-Wavelets on Graphs via Deep Learning

Author: Raif Rustamov, Leonidas Guibas

Abstract: An increasing number of applications require processing of signals defined on weighted graphs. While wavelets provide a flexible tool for signal processing in the classical setting of regular domains, the existing graph wavelet constructions are less flexible – they are guided solely by the structure of the underlying graph and do not take directly into consideration the particular class of signals to be processed. This paper introduces a machine learning framework for constructing graph wavelets that can sparsely represent a given class of signals. Our construction uses the lifting scheme, and is based on the observation that the recurrent nature of the lifting scheme gives rise to a structure resembling a deep auto-encoder network. Particular properties that the resulting wavelets must satisfy determine the training objective and the structure of the involved neural networks. The training is unsupervised, and is conducted similarly to the greedy pre-training of a stack of auto-encoders. After training is completed, we obtain a linear wavelet transform that can be applied to any graph signal in time and memory linear in the size of the graph. Improved sparsity of our wavelet transform for the test signals is confirmed via experiments both on synthetic and real data. 1

6 0.6341303 73 nips-2013-Convex Relaxations for Permutation Problems

7 0.58163601 328 nips-2013-The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited

8 0.56920004 35 nips-2013-Analyzing the Harmonic Structure in Graph-Based Learning

9 0.55331099 87 nips-2013-Density estimation from unweighted k-nearest neighbor graphs: a roadmap

10 0.54236859 182 nips-2013-Manifold-based Similarity Adaptation for Label Propagation

11 0.53440523 306 nips-2013-Speeding up Permutation Testing in Neuroimaging

12 0.53313178 291 nips-2013-Sensor Selection in High-Dimensional Gaussian Trees with Nuisances

13 0.52435702 353 nips-2013-When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity

14 0.52333939 316 nips-2013-Stochastic blockmodel approximation of a graphon: Theory and consistent estimation

15 0.51455462 134 nips-2013-Graphical Models for Inference with Missing Data

16 0.50661647 332 nips-2013-Tracking Time-varying Graphical Structure

17 0.49537241 184 nips-2013-Marginals-to-Models Reducibility

18 0.49314904 5 nips-2013-A Deep Architecture for Matching Short Texts

19 0.4920387 8 nips-2013-A Graphical Transformation for Belief Propagation: Maximum Weight Matchings and Odd-Sized Cycles

20 0.49089974 45 nips-2013-BIG & QUIC: Sparse Inverse Covariance Estimation for a Million Variables


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.025), (16, 0.034), (33, 0.139), (34, 0.11), (41, 0.048), (49, 0.06), (56, 0.108), (70, 0.038), (85, 0.07), (89, 0.027), (92, 0.215), (93, 0.044), (95, 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.86871958 296 nips-2013-Sinkhorn Distances: Lightspeed Computation of Optimal Transport

Author: Marco Cuturi

Abstract: Optimal transport distances are a fundamental family of distances for probability measures and histograms of features. Despite their appealing theoretical properties, excellent performance in retrieval tasks and intuitive formulation, their computation involves the resolution of a linear program whose cost can quickly become prohibitive whenever the size of the support of these measures or the histograms’ dimension exceeds a few hundred. We propose in this work a new family of optimal transport distances that look at transport problems from a maximum-entropy perspective. We smooth the classic optimal transport problem with an entropic regularization term, and show that the resulting optimum is also a distance which can be computed through Sinkhorn’s matrix scaling algorithm at a speed that is several orders of magnitude faster than that of transport solvers. We also show that this regularized distance improves upon classic optimal transport distances on the MNIST classification problem.

same-paper 2 0.8315345 282 nips-2013-Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching

Author: Marcelo Fiori, Pablo Sprechmann, Joshua Vogelstein, Pablo Muse, Guillermo Sapiro

Abstract: Graph matching is a challenging problem with very important applications in a wide range of fields, from image and video analysis to biological and biomedical problems. We propose a robust graph matching algorithm inspired by sparsity-related techniques. We cast the problem, resembling group or collaborative sparsity formulations, as a non-smooth convex optimization problem that can be efficiently solved using augmented Lagrangian techniques. The method can deal with weighted or unweighted graphs, as well as multimodal data, where different graphs represent different types of data. The proposed approach is also naturally integrated with collaborative graph inference techniques, solving general network inference problems where the observed variables, possibly coming from different modalities, are not in correspondence. The algorithm is tested and compared with state-of-the-art graph matching techniques on both synthetic and real graphs. We also present results on multimodal graphs and applications to collaborative inference of brain connectivity from alignment-free functional magnetic resonance imaging (fMRI) data. The code is publicly available. 1

3 0.82659692 280 nips-2013-Robust Data-Driven Dynamic Programming

Author: Grani Adiwena Hanasusanto, Daniel Kuhn

Abstract: In stochastic optimal control the distribution of the exogenous noise is typically unknown and must be inferred from limited data before dynamic programming (DP)-based solution schemes can be applied. If the conditional expectations in the DP recursions are estimated via kernel regression, however, the historical sample paths enter the solution procedure directly as they determine the evaluation points of the cost-to-go functions. The resulting data-driven DP scheme is asymptotically consistent and admits an efficient computational solution when combined with parametric value function approximations. If training data is sparse, however, the estimated cost-to-go functions display a high variability and an optimistic bias, while the corresponding control policies perform poorly in out-of-sample tests. To mitigate these small sample effects, we propose a robust data-driven DP scheme, which replaces the expectations in the DP recursions with worst-case expectations over a set of distributions close to the best estimate. We show that the arising min-max problems in the DP recursions reduce to tractable conic programs. We also demonstrate that the proposed robust DP algorithm dominates various non-robust schemes in out-of-sample tests across several application domains. 1

4 0.74207509 30 nips-2013-Adaptive dropout for training deep neural networks

Author: Jimmy Ba, Brendan Frey

Abstract: Recently, it was shown that deep neural networks can perform very well if the activities of hidden units are regularized during learning, e.g., by randomly dropping out 50% of their activities. We describe a method called ‘standout’ in which a binary belief network is overlaid on a neural network and is used to regularize its hidden units by selectively setting activities to zero. This ‘adaptive dropout network’ can be trained jointly with the neural network by approximately computing local expectations of binary dropout variables, computing derivatives using back-propagation, and using stochastic gradient descent. Interestingly, experiments show that the learnt dropout network parameters recapitulate the neural network parameters, suggesting that a good dropout network regularizes activities according to magnitude. When evaluated on the MNIST and NORB datasets, we found that our method achieves lower classification error rates than other feature learning methods, including standard dropout, denoising auto-encoders, and restricted Boltzmann machines. For example, our method achieves 0.80% and 5.8% errors on the MNIST and NORB test sets, which is better than state-of-the-art results obtained using feature learning methods, including those that use convolutional architectures. 1

5 0.71806824 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

Author: David Carlson, Vinayak Rao, Joshua T. Vogelstein, Lawrence Carin

Abstract: With simultaneous measurements from ever increasing populations of neurons, there is a growing need for sophisticated tools to recover signals from individual neurons. In electrophysiology experiments, this classically proceeds in a two-step process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma process model. Importantly, we develop an online approximate inference scheme enabling real-time analysis, with performance exceeding the previous state-of-the-art. Via exploratory data analysis—using data with partial ground truth as well as two novel data sets—we find several features of our model collectively contribute to our improved performance including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels. We hope to enable novel experiments simultaneously measuring many thousands of neurons and possibly adapting stimuli dynamically to probe ever deeper into the mysteries of the brain. 1

6 0.71616483 104 nips-2013-Efficient Online Inference for Bayesian Nonparametric Relational Models

7 0.71340114 285 nips-2013-Robust Transfer Principal Component Analysis with Rank Constraints

8 0.71267718 173 nips-2013-Least Informative Dimensions

9 0.7098586 182 nips-2013-Manifold-based Similarity Adaptation for Label Propagation

10 0.70983005 45 nips-2013-BIG & QUIC: Sparse Inverse Covariance Estimation for a Million Variables

11 0.70977002 353 nips-2013-When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity

12 0.70974207 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

13 0.70956582 121 nips-2013-Firing rate predictions in optimal balanced networks

14 0.7093513 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

15 0.70865697 294 nips-2013-Similarity Component Analysis

16 0.70862734 350 nips-2013-Wavelets on Graphs via Deep Learning

17 0.70696509 79 nips-2013-DESPOT: Online POMDP Planning with Regularization

18 0.70692635 5 nips-2013-A Deep Architecture for Matching Short Texts

19 0.70546603 196 nips-2013-Modeling Overlapping Communities with Node Popularities

20 0.70484835 303 nips-2013-Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis