nips nips2010 nips2010-162 knowledge-graph by maker-knowledge-mining

162 nips-2010-Link Discovery using Graph Feature Tracking


Source: pdf

Author: Emile Richard, Nicolas Baskiotis, Theodoros Evgeniou, Nicolas Vayatis

Abstract: We consider the problem of discovering links of an evolving undirected graph given a series of past snapshots of that graph. The graph is observed through the time sequence of its adjacency matrix and only the presence of edges is observed. The absence of an edge on a certain snapshot cannot be distinguished from a missing entry in the adjacency matrix. Additional information can be provided by examining the dynamics of the graph through a set of topological features, such as the degrees of the vertices. We develop a novel methodology by building on both static matrix completion methods and the estimation of the future state of relevant graph features. Our procedure relies on the formulation of an optimization problem which can be approximately solved by a fast alternating linearized algorithm whose properties are examined. We show experiments with both simulated and real data which reveal the interest of our methodology. 1

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 We consider the problem of discovering links of an evolving undirected graph given a series of past snapshots of that graph. [sent-11, score-0.609]

2 The graph is observed through the time sequence of its adjacency matrix and only the presence of edges is observed. [sent-12, score-0.798]

3 The absence of an edge on a certain snapshot cannot be distinguished from a missing entry in the adjacency matrix. [sent-13, score-0.371]

4 Additional information can be provided by examining the dynamics of the graph through a set of topological features, such as the degrees of the vertices. [sent-14, score-0.493]

5 We develop a novel methodology by building on both static matrix completion methods and the estimation of the future state of relevant graph features. [sent-15, score-0.926]

6 Our procedure relies on the formulation of an optimization problem which can be approximately solved by a fast alternating linearized algorithm whose properties are examined. [sent-16, score-0.189]

7 Link prediction can also be seen as a special case of matrix completion where the goal is to estimate the missing entries of the adjacency matrix of the graph, where the entries can only be "0"s and "1"s. [sent-19, score-1.353]

8 Matrix completion became popular after the Netflix Challenge and has been extensively studied on both theoretical and algorithmic aspects [15]. [sent-20, score-0.284]

9 In this paper we consider a special case of predicting the evolution of a graph, where we only predict the new edges given a fixed set of vertices of an undirected graph by using the dynamics of the graph over time. [sent-21, score-1.254]

10 Most of the existing methods in matrix completion assume that weights over the entries (i.e. ...). [sent-22, score-0.502]

11 Consider for instance the issue of link prediction in recommender systems. [sent-28, score-0.345]

12 In that case, we consider a bipartite graph for which the vertices represent products and users, and the edges connect users with the products they have purchased in the past. [sent-29, score-0.722]

13 The setup we consider in the present paper corresponds to the binary case where we only observe purchase data, say the presence of a link in the graph, without any score or feedback on the product for a given user. [sent-30, score-0.31]

14 Hence, we will deal here with the situation where the components of snapshots of the adjacency matrix only consist of "1"s and missing values. [sent-31, score-0.51]

15 Moreover, link prediction methods typically use only one snapshot of the graph's adjacency matrix - the most recent one - to predict its missing entries [9], or rely on latent variables providing semantic information for each vertex [11]. [sent-32, score-1.052]

16 Since these methods do not use any information over time, they can be called static methods. [sent-33, score-0.189]

17 However, information about how the links of the graph and its topological features have been evolving over time may also be useful to predict future links. [sent-35, score-0.739]

18 In the example of recommender systems, knowing that a particular product has been purchased by increasingly more people in a short time window provides useful information about the type of the recommendations to be made in the next period. [sent-36, score-0.207]

19 The main idea underlying our work lies in the observation that a few graph features can capture the dynamics of the graph evolution and provide information for predicting future links. [sent-37, score-1.007]

20 The purpose of the paper is to present a procedure which exploits the dynamics of the evolution of the graph to find unrevealed links in the graph. [sent-38, score-0.647]

21 The main idea is to learn over time the evolution of well-chosen local features of the graph (at the level of the vertices) and then use the predicted values of these features on the next time period to discover the missing links. [sent-39, score-0.816]

22 Our approach is related to two theoretical streams of research: matrix completion and diffusion models. [sent-40, score-0.458]

23 In the latter only the dynamics over time of the degree of a particular vertex of the graph are modeled - the diffusion of the product corresponding to that vertex for example [17, 14]. [sent-41, score-0.556]

24 Beyond the large number of static matrix completion methods, only a few methods have been developed that combine static and dynamic information mainly using parametric methods – see [4] for a survey. [sent-42, score-0.844]

25 For example, [13] embeds graph vertices in a latent space and uses either a Markov model or a Gaussian one to track the position of the vertices in this space; [10] uses a probabilistic model of the time interval between the appearance of two edges or subgraphs to predict future edges or subgraphs. [sent-43, score-0.937]

26 The setup of dynamic feature-based matrix completion is presented in Section 2. [sent-45, score-0.466]

27 In Section 3, we develop a fast linearized algorithm for efficient link prediction. [sent-46, score-0.337]

28 2 Dynamic feature-based matrix completion - Setup. [sent-49, score-0.423]

29 We consider a sequence of $T$ undirected graphs with $n$ vertices and $n \times n$ binary adjacency matrices $A_t$, $t \in \{1, 2, \dots, T\}$. [sent-50, score-0.464]

30 For each $t$, the edges of the graph at time $t$ are also contained in the graph at time $t + 1$. [sent-53, score-0.711]

31 Given $A_t$, $t \in \{1, 2, \dots, T\}$, the goal is to predict the edges of the graph that are most likely to appear at time $T + 1$, that is, the most likely non-zero elements of the binary adjacency matrix $A_{T+1}$. [sent-57, score-0.885]

32 To this purpose we want to learn an $n \times n$ real-valued matrix $S$ whose elements indicate how likely it is that there is a non-zero value at the corresponding position of matrix $A_{T+1}$. [sent-58, score-0.278]

33 The edges that we predict to be the most likely ones at time $T + 1$ are the ones corresponding to the largest values in $S$. [sent-59, score-0.242]
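
To make this ranking step concrete, the following is a minimal Python/numpy sketch of reading predicted edges off $S$; the helper name and the explicit symmetrization are our own choices, not taken from the paper.

```python
import numpy as np

def top_new_edges(S, A_T, n_edges=10):
    """Rank candidate new edges by the score matrix S (a sketch).

    S   : (n, n) real-valued score matrix learned by the method.
    A_T : (n, n) binary adjacency matrix of the last observed snapshot.
    Returns the n_edges highest-scoring vertex pairs not already in A_T.
    """
    scores = (S + S.T) / 2.0                            # symmetrize for an undirected graph
    mask = np.triu(np.ones_like(A_T, dtype=bool), k=1)  # each pair once, no self-loops
    candidates = mask & (A_T == 0)                      # only unobserved pairs compete
    pairs = np.argwhere(candidates)                     # row-major order...
    order = np.argsort(scores[candidates])[::-1]        # ...matches boolean indexing
    return [tuple(p) for p in pairs[order[:n_edges]]]
```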

34 We assume that certain features of the matrices $A_t$ evolve smoothly over time. [sent-60, score-0.197]

35 Such an assumption is necessary to allow learnability of the evolution of $A_t$ over time. [sent-61, score-0.183]

36 For simplicity we consider a linear feature map $f : A_t \mapsto F_t$, where $F_t$ is an $n \times k$ matrix of the form $F_t = A_t \Phi$, with $\Phi$ an $n \times k$ matrix of features. [sent-62, score-0.366]

37 We discuss an example of such features $\Phi$ and a way to predict $F_{T+1}$ given past values of the feature map $F_1, F_2, \dots$ [sent-64, score-0.315]

38 $\dots, F_T$ in Section 4 – but other features or prediction methods can be used in combination with the main part of the proposed approach. [sent-67, score-0.145]

39 The procedure we propose for link prediction is based on the assumption that the dynamics of graph features also drive the discovery of the location of new links. [sent-70, score-0.799]

40 Given the last adjacency matrix $A_T$, a set of features $\Phi$, and an estimate $F$ of $F_{T+1}$ based on the sequence of adjacency matrices $A_t$, $t \in \{1, 2, \dots, T\}$, [sent-71, score-0.75]

41 we want to find a matrix $S$ which fulfills the following requirements: • $S$ has low rank - this is a standard assumption in matrix completion problems [15]. [sent-74, score-0.642]

42 • $S$ is close to the last adjacency matrix $A_T$ - the distance between these two matrices will provide a proxy for the training error. [sent-75, score-0.441]

43 For any matrix $M$, we denote by $\|M\|_F = \sqrt{\mathrm{Tr}(M^\top M)}$ the Frobenius norm of $M$, with $M^\top$ the transpose of $M$; the trace operator $\mathrm{Tr}(N)$ computes the sum of the diagonal elements of the $n \times n$ square matrix $N$. [sent-77, score-0.315]

44 We also define $\|M\|_* = \sum_{k=1}^{n} \sigma_k(M)$, the nuclear norm of a square matrix $M$ of size $n \times n$, where $\sigma_k(M)$ denotes the $k$-th largest singular value of $M$. [sent-78, score-0.363]

45 We recall that a singular value of matrix $M$ corresponds to the square root of an eigenvalue of $M^\top M$, ordered decreasingly. [sent-79, score-0.239]

46 The proposed optimization problem for feature-based matrix completion is then: $\min_S L(S, \tau, \nu)$ with $L(S, \tau, \nu) = \tau \|S\|_* + \frac{1}{2}\|S - A_T\|_F^2 + \frac{\nu}{2}\|S\Phi - F\|_F^2$ (1), where $\tau$ and $\nu$ are positive regularization parameters. [sent-80, score-0.423]
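
For reference, the objective in (1) is easy to evaluate directly; this numpy sketch is meant for checking candidate solutions, not as the solver itself.

```python
import numpy as np

def objective(S, A_T, Phi, F, tau, nu):
    """Value of the functional L(S, tau, nu) from (1) -- a minimal sketch."""
    nuclear = np.linalg.norm(S, ord='nuc')                        # sum of singular values
    fit     = 0.5 * np.linalg.norm(S - A_T, 'fro') ** 2           # closeness to A_T
    feature = 0.5 * nu * np.linalg.norm(S @ Phi - F, 'fro') ** 2  # feature tracking
    return tau * nuclear + fit + feature
```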

47 Each term of the functional L reflects the aforementioned requirements for the desired matrix S. [sent-81, score-0.215]

48 In the case where ν = 0, we do not use information about the dynamics of the graph. [sent-82, score-0.103]

49 The minimizer of L corresponds to the singular value thresholding approach developed in [2], which is therefore a special case of (1). [sent-83, score-0.15]

50 Note that a key difference between link prediction and matrix completion is that in (1) the training error uses all entries of the adjacency matrix, while in the case of matrix completion only the known entries (in our case the "1"s) are used. [sent-84, score-1.658]

51 with $L(S, \tau, \nu)$ as defined in (1) above. 3 An algorithm for link discovery - Solving (1) is computationally slow. [sent-87, score-0.273]

52 Here, the functional L(S, τ, ν) is continuous and convex but not differentiable with respect to S. [sent-89, score-0.115]

53 We propose to convert the minimization of the target functional $L(S, \tau, \nu)$ into a tractable problem through the following steps: 1. [sent-90, score-0.118]

54 Smoothing the nuclear norm - We recall the variational formulation of the nuclear norm: $\|S\|_* = \max_Z \{\langle S, Z \rangle : \sigma_1(Z) \le 1\}$. [sent-97, score-0.248]

55 Using the technique from [12], we can use a smooth approximation of the nuclear norm and replace $g$ in the functional by a surrogate function $g_\eta$, with $\eta > 0$ a smoothing parameter: $g_\eta(S, \tau) = \tau \cdot \max_Z \left\{ \langle S, Z \rangle - \frac{\eta}{2}\|Z\|_F^2 : \sigma_1(Z) \le 1 \right\}$. [sent-98, score-0.238]
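
The maximization defining $g_\eta$ decouples across singular values: for each $\sigma$, $\max_{z \in [0,1]} (\sigma z - \frac{\eta}{2} z^2)$ equals $\sigma^2/(2\eta)$ when $\sigma \le \eta$ and $\sigma - \eta/2$ otherwise, i.e. a Huber function. A small sketch of evaluating the surrogate, following this derivation (the helper is our own):

```python
import numpy as np

def smoothed_nuclear(S, tau, eta):
    """Smooth surrogate g_eta(S, tau) of tau * ||S||_* (a sketch).

    Each singular value sigma contributes the Huber value
    sigma**2 / (2*eta) if sigma <= eta, else sigma - eta/2.
    """
    sigma = np.linalg.svd(S, compute_uv=False)
    huber = np.where(sigma <= eta, sigma ** 2 / (2 * eta), sigma - eta / 2)
    return tau * huber.sum()
```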

56 3. Alternating minimization - We propose to minimize the resulting functional, which is continuous, differentiable and convex. [sent-99, score-0.157]

57 We denote by $m_G(\tilde{S})$ the minimizer of $G_{\eta,\mu}(S, \tilde{S})$ with respect to $S$, and by $m_H(S)$ the minimizer of $H_\mu(S, \tilde{S})$ with respect to $\tilde{S}$. [sent-104, score-0.1]

58 We can now formulate an algorithm for the fast minimization of the functional $L_\eta(S, \tilde{S})$, inspired by the algorithm FALM in [5] (see Algorithm 1). [sent-105, score-0.154]

59 Note that, in the alternating descent for the simultaneous minimization of the two functions $G_{\eta,\mu}$ and $H_\mu$, we use an auxiliary matrix $Z_k$. [sent-106, score-0.26]

60 This matrix is a linear combination of the updates for $S$ and $\tilde{S}$. [sent-107, score-0.139]

61 Key formulas in the link prediction algorithm are those of the minimizers $m_G(\tilde{S})$ and $m_H(S)$. [sent-109, score-0.357]

62 It turns out that in our case, these minimizers have explicit expressions which can be derived when solving the first-order optimality condition as Proposition 1 shows. [sent-110, score-0.107]

63 for $k = 1, 2, \dots$ do: $S_k \leftarrow m_G(Z_k)$ and $\tilde{S}_k \leftarrow m_H(S_k)$; $W_k \leftarrow \frac{1}{2}(S_k + \tilde{S}_k)$; $\alpha_{k+1} \leftarrow \frac{1}{2}\left(1 + \sqrt{1 + 4\alpha_k^2}\right)$; $Z_{k+1} \leftarrow W_k + \frac{1}{\alpha_{k+1}}\left(\alpha_k(\tilde{S}_k - W_{k-1}) - (W_k - W_{k-1})\right)$; end for. Proposition 1: Let $\hat{S} = \tilde{S} - \mu \nabla h(\tilde{S})$ and consider the singular value decomposition $\hat{S} = U\,\mathrm{Diag}(\sigma)\,V^\top$. [sent-114, score-0.1]
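
A skeleton of this accelerated alternating loop might look as follows. The minimizers $m_G$ and $m_H$ are passed in as callables since their closed forms (Proposition 1) depend on $\tau$, $\nu$, $\eta$, $\Phi$ and $F$; the update formulas mirror the reconstruction of Algorithm 1 above, so treat this as a sketch rather than the authors' reference implementation.

```python
import numpy as np

def falm_loop(Z0, m_G, m_H, n_iters=100):
    """Skeleton of the accelerated alternating scheme of Algorithm 1 (a sketch)."""
    Z, W_prev = Z0.copy(), Z0.copy()
    alpha = 1.0
    for _ in range(n_iters):
        S  = m_G(Z)                       # S_k  <- m_G(Z_k)
        St = m_H(S)                       # S~_k <- m_H(S_k)
        W  = 0.5 * (S + St)               # averaged iterate W_k
        alpha_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * alpha ** 2))
        # momentum step combining the two updates, per the Z_{k+1} formula above
        Z = W + (alpha * (St - W_prev) - (W - W_prev)) / alpha_next
        alpha, W_prev = alpha_next, W
    return W
```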

64 We also consider the singular value decomposition of $\tilde{S}$, denoted by $\tilde{S} = U\,\mathrm{Diag}(\eta\lambda)\,V^\top$. [sent-115, score-0.1]

65 With our notations, we can easily derive here: $\mu = \min\left\{\eta/\tau,\; 1/(1 + \nu\sigma_1(\Phi))\right\}$, where $\sigma_1(\Phi)$ is the largest singular value of $\Phi$. [sent-128, score-0.1]

66 4 Learning the graph features - As discussed above, one can use various features $\Phi$ and methods to predict the $n \times k$ matrix $F_{T+1}$ given past values of the feature map $F_1, F_2, \dots$ [sent-129, score-0.815]

67 In particular, we use as features $\Phi$ the first $k$ eigenvectors of the adjacency matrix $A_T$. [sent-134, score-0.448]

68 Note that $A_T \Phi = \Omega^{(:,1:k)}$ and that $\Omega^{(:,1:k)}$ is the most informative $n \times k$ matrix for the reconstruction of $A_T$. [sent-137, score-0.139]
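
A sketch of building $\Phi$ from the eigendecomposition of $A_T$; sorting by absolute eigenvalue is our own choice, since the paper only says the first $k$ eigenvectors.

```python
import numpy as np

def eigenvector_features(A_T, k):
    """Phi: the k leading eigenvectors of the last adjacency matrix (a sketch)."""
    eigvals, eigvecs = np.linalg.eigh(A_T)      # eigh: A_T is symmetric (undirected)
    order = np.argsort(np.abs(eigvals))[::-1]   # leading eigenvalues first
    return eigvecs[:, order[:k]]                # n x k feature matrix Phi
```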

69 This gives a time series $A_t \Phi_j$, $t \in \{1, \dots, T\}$, which describes the evolution of the $j$-th feature over the $n$ vertices of the graph. [sent-149, score-0.347]

70 We now describe the procedure for learning the evolution of this j-th graph feature over time: 1. [sent-150, score-0.511]

71 Fix an integer $m < T$ to learn a map between $m$ past values $(A_{t-m}\Phi_j, \dots)$ and the next one. [sent-151, score-0.095]

72 Doing this for each $j \in \{1, \dots, k\}$, we obtain the estimate $F$ for the matrix $F_{T+1}$ used in (1). [sent-167, score-0.1]
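
The paper leaves the choice of regressor open here; as one concrete possibility, a ridge-regression autoregressive map from the $m$ past values of a feature column to its next value could look like the sketch below (the helper name and penalty $\lambda$ are our own assumptions).

```python
import numpy as np

def predict_next_feature(series, m, lam=1e-2):
    """Predict the next value of one feature column A_t Phi_j (a sketch).

    series : list of T length-n vectors, series[t] = A_{t+1} Phi_j.
    Fits a ridge autoregression from m past vectors to the next one.
    """
    X = np.stack([np.concatenate(series[t - m:t]) for t in range(m, len(series))])
    Y = np.stack(series[m:])                      # targets: the following vector
    # ridge solution W = (X^T X + lam * I)^{-1} X^T Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return np.concatenate(series[-m:]) @ W        # estimate of the T+1 value
```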

73 Static matrix completion, corresponding to $\nu = 0$ in (1). [sent-175, score-0.423]

74 The Katz algorithm [8], considered one of the best static link prediction methods. [sent-177, score-0.478]

75 The Preferential Attachment method [1], for which the score ("likelihood") of an edge $\{u, v\}$ is $d_u d_v$, where $d_u$ and $d_v$ are the degrees of $u$ and $v$. [sent-179, score-0.217]
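
Both baselines take only a few lines; the Katz index sums path counts of length $\ell$ weighted by $\beta^\ell$, truncated here at a maximum length (the value of $\beta$ is illustrative, not taken from the paper).

```python
import numpy as np

def preferential_attachment(A):
    """Score of edge {u, v} is d_u * d_v, the product of vertex degrees."""
    d = A.sum(axis=1)
    return np.outer(d, d)

def katz(A, beta=0.005, max_len=10):
    """Truncated Katz index: sum over l of beta**l * (A**l)_{u,v}."""
    S = np.zeros(A.shape, dtype=float)
    Ap = np.eye(A.shape[0])
    for l in range(1, max_len + 1):
        Ap = Ap @ A                  # Ap = A^l counts paths of length l
        S += beta ** l * Ap
    return S
```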

76 We first generate a sequence of $T$ matrices $Q(t)$ of size $n \times r$ whose entries $Q_{i,j}(t)$ increase over time as a sigmoid function: $Q_{i,j}(t) = \frac{1}{2}\left(1 + \mathrm{erf}\left(\frac{t - \mu_{i,j}}{\sqrt{2\sigma_{i,j}^2}}\right)\right)$, where $\mu_{i,j} \in [0, T]$ and $\sigma_{i,j} \in [0, T/3]$ are picked uniformly for each $(i, j)$. [sent-182, score-0.235]

77 These matrices provide a synthetic model for the evolution of the graph over time. [sent-183, score-0.537]

78 We then add noise to the time dynamics as follows. [sent-184, score-0.141]

79 Having constructed the matrices $Q(t)$, we then generate matrices $S(t) = Q(t)Q(t)^\top$, which are of rank $r$. [sent-189, score-0.232]

80 We finally generate the adjacency matrix $A_t$ as $A(t) = \mathbf{1}_{[\theta, \infty)}(S(t))$ for a threshold $\theta$. [sent-190, score-0.365]
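
Putting the three generation steps together, a sketch of the simulator; the noise step described above is omitted for brevity, and the small lower bound on $\sigma_{i,j}$ (to avoid division by zero) is our own tweak.

```python
import numpy as np
from scipy.special import erf

def synthetic_graphs(n, r, T, theta, rng=None):
    """Generate T growing snapshots following the sigmoid model (a sketch)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mu    = rng.uniform(0.0,  T,     size=(n, r))
    sigma = rng.uniform(1e-3, T / 3, size=(n, r))   # lower bound avoids 1/0
    snapshots = []
    for t in range(1, T + 1):
        Q = 0.5 * (1 + erf((t - mu) / np.sqrt(2 * sigma ** 2)))
        S = Q @ Q.T                                 # rank-r score matrix S(t)
        snapshots.append((S >= theta).astype(int))  # threshold to adjacency A(t)
    return snapshots
```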

81 5.2 Real Data - Collaborative Filtering: We can see the purchase histories of e-commerce websites as graph sequences where links are established between a user and a product when the user purchases that product. [sent-196, score-0.532]

82 We use data from 10 months of music purchase history of a major e-commerce website to evaluate our method. [sent-197, score-0.134]

83 For our test we selected a set of $10^3$ users and $10^3$ products that had the highest degrees (number of sales). [sent-198, score-0.141]

84 We split the $5 \times 10^3$ edges of the graph (corresponding to purchases) into two parts following their occurrence time. [sent-200, score-0.395]

85 We used the data of the first 8 months to predict the features at the end of the 10th month, and used these features, as well as the matrix at the end of the 8th month, to discover the purchases during the last 2 months. [sent-201, score-0.695]

86 For the simulation data we report the average AUC over 10 simulation runs. [sent-205, score-0.144]
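
Concretely, the AUC would be computed over the vertex pairs that are unlinked at the end of the training window; a sketch using scikit-learn, where A_past and A_future denote the adjacency matrices at the split point and at the final time (our naming).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def link_auc(S, A_past, A_future):
    """AUC of scores S on pairs unobserved during training (a sketch)."""
    mask = np.triu(A_past == 0, k=1)        # candidate pairs: not yet linked
    y_true  = A_future[mask].astype(int)    # 1 if the edge appeared later
    y_score = ((S + S.T) / 2.0)[mask]
    return roc_auc_score(y_true, y_score)
```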

87 From the simulation results we observe that for low rank underlying matrices, our method outperforms the rivals. [sent-206, score-0.152]

88 Our method (as well as the static low-rank method, which relies on the low-rank hypothesis) however fails when the rank of $S(t)$ is high. [sent-208, score-0.429]

89 However, even in this case our method outperforms the method of static matrix completion. [sent-209, score-0.328]

90 The results with the real data further indicate the advantage of using information about the evolution of the graph over time. [sent-210, score-0.461]

91 Similarly to the simulation data, the proposed method outperforms the static matrix completion one. [sent-211, score-0.684]

92 6 Conclusion The main contribution of this work is the formulation of a learning problem that can be used to predict the evolution of the edges of a graph over time. [sent-212, score-0.665]

93 A regularization approach to combine both static graph information as well as information about the dynamics of the evolution of the graph over time is proposed and an optimization algorithm is developed. [sent-213, score-1.069]

94 Despite using simple graph features (note that we are looking to discover only unobserved links and not new occurrences of past links) [sent-214, score-0.581]

95 as well as estimation of the evolution of the feature values over time, experiments indicate that the proposed optimization method improves performance relative to benchmarks. [sent-281, score-0.233]

96 Testing, or learning, other graph features as well as other ways to model their dynamics over time may further improve performance and is part of future work. [sent-282, score-0.538]

97 A singular value thresholding algorithm for matrix completion. [sent-334, score-0.239]

98 The power of convex relaxation: Near-optimal matrix completion. [sent-338, score-0.139]

99 Fast alternating linearization methods for minimizing the sum of two convex functions. [sent-345, score-0.13]

100 Link propagation: A fast semi-supervised learning algorithm for link prediction. [sent-351, score-0.263]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('completion', 0.284), ('graph', 0.278), ('link', 0.227), ('adjacency', 0.226), ('ft', 0.205), ('static', 0.189), ('evolution', 0.183), ('auc', 0.166), ('matrix', 0.139), ('mg', 0.136), ('mh', 0.127), ('edges', 0.117), ('cachan', 0.116), ('cmla', 0.116), ('ens', 0.116), ('katz', 0.116), ('vertices', 0.114), ('sk', 0.112), ('dynamics', 0.103), ('singular', 0.1), ('wk', 0.095), ('diag', 0.089), ('purchases', 0.088), ('predict', 0.087), ('nuclear', 0.087), ('purchase', 0.083), ('features', 0.083), ('links', 0.083), ('rank', 0.08), ('entries', 0.079), ('alternating', 0.079), ('snapshot', 0.078), ('snapshots', 0.078), ('matrices', 0.076), ('functional', 0.076), ('linearized', 0.074), ('simulation', 0.072), ('collaborative', 0.072), ('topological', 0.069), ('minimizers', 0.068), ('purchased', 0.068), ('missing', 0.067), ('evolving', 0.065), ('zk', 0.064), ('attachment', 0.063), ('prediction', 0.062), ('month', 0.059), ('preferential', 0.059), ('emmanuel', 0.059), ('past', 0.057), ('france', 0.056), ('nicolas', 0.056), ('recommender', 0.056), ('users', 0.051), ('vertex', 0.051), ('mining', 0.051), ('simulated', 0.051), ('months', 0.051), ('linearization', 0.051), ('marketing', 0.051), ('goldfarb', 0.051), ('feature', 0.05), ('social', 0.05), ('minimizer', 0.05), ('benchmarks', 0.049), ('undirected', 0.048), ('products', 0.047), ('dv', 0.046), ('proposition', 0.046), ('predicting', 0.046), ('discover', 0.046), ('discovery', 0.046), ('recommendations', 0.045), ('cand', 0.045), ('saul', 0.044), ('dynamic', 0.043), ('degrees', 0.043), ('tr', 0.043), ('minimization', 0.042), ('picked', 0.042), ('ridge', 0.041), ('du', 0.041), ('differentiable', 0.039), ('optimality', 0.039), ('map', 0.038), ('surrogate', 0.038), ('time', 0.038), ('norm', 0.037), ('lawrence', 0.037), ('latent', 0.036), ('future', 0.036), ('fast', 0.036), ('diffusion', 0.035), ('occurences', 0.034), ('penetration', 0.034), ('sarkar', 0.034), ('masashi', 0.034), ('koren', 0.034), ('newsletter', 0.034), ('schubert', 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 162 nips-2010-Link Discovery using Graph Feature Tracking

Author: Emile Richard, Nicolas Baskiotis, Theodoros Evgeniou, Nicolas Vayatis

Abstract: We consider the problem of discovering links of an evolving undirected graph given a series of past snapshots of that graph. The graph is observed through the time sequence of its adjacency matrix and only the presence of edges is observed. The absence of an edge on a certain snapshot cannot be distinguished from a missing entry in the adjacency matrix. Additional information can be provided by examining the dynamics of the graph through a set of topological features, such as the degrees of the vertices. We develop a novel methodology by building on both static matrix completion methods and the estimation of the future state of relevant graph features. Our procedure relies on the formulation of an optimization problem which can be approximately solved by a fast alternating linearized algorithm whose properties are examined. We show experiments with both simulated and real data which reveal the interest of our methodology. 1

2 0.1828198 275 nips-2010-Transduction with Matrix Completion: Three Birds with One Stone

Author: Andrew Goldberg, Ben Recht, Junming Xu, Robert Nowak, Xiaojin Zhu

Abstract: We pose transductive classification as a matrix completion problem. By assuming the underlying matrix has a low rank, our formulation is able to handle three problems simultaneously: i) multi-label learning, where each item has more than one label, ii) transduction, where most of these labels are unspecified, and iii) missing data, where a large number of features are missing. We obtained satisfactory results on several real-world tasks, suggesting that the low rank assumption may not be as restrictive as it seems. Our method allows for different loss functions to apply on the feature and label entries of the matrix. The resulting nuclear norm minimization problem is solved with a modified fixed-point continuation method that is guaranteed to find the global optimum. 1

3 0.12525171 48 nips-2010-Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm

Author: Nathan Srebro, Ruslan Salakhutdinov

Abstract: We show that matrix completion with trace-norm regularization can be significantly hurt when entries of the matrix are sampled non-uniformly, but that a properly weighted version of the trace-norm regularizer works well with non-uniform sampling. We show that the weighted trace-norm regularization indeed yields significant gains on the highly non-uniformly sampled Netflix dataset.

4 0.12347706 182 nips-2010-New Adaptive Algorithms for Online Classification

Author: Francesco Orabona, Koby Crammer

Abstract: We propose a general framework to online learning for classification problems with time-varying potential functions in the adversarial setting. This framework allows to design and prove relative mistake bounds for any generic loss function. The mistake bounds can be specialized for the hinge loss, allowing to recover and improve the bounds of known online classification algorithms. By optimizing the general bound we derive a new online classification algorithm, called NAROW, that hybridly uses adaptive- and fixed- second order information. We analyze the properties of the algorithm and illustrate its performance using synthetic dataset. 1

5 0.12028517 254 nips-2010-Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models

Author: Han Liu, Kathryn Roeder, Larry Wasserman

Abstract: A challenging problem in estimating high-dimensional graphical models is to choose the regularization parameter in a data-dependent way. The standard techniques include K-fold cross-validation (K-CV), Akaike information criterion (AIC), and Bayesian information criterion (BIC). Though these methods work well for low-dimensional problems, they are not suitable in high dimensional settings. In this paper, we present StARS: a new stability-based method for choosing the regularization parameter in high dimensional inference for undirected graphs. The method has a clear interpretation: we use the least amount of regularization that simultaneously makes a graph sparse and replicable under random sampling. This interpretation requires essentially no conditions. Under mild conditions, we show that StARS is partially sparsistent in terms of graph estimation: i.e. with high probability, all the true edges will be included in the selected model even when the graph size diverges with the sample size. Empirically, the performance of StARS is compared with the state-of-the-art model selection procedures, including K-CV, AIC, and BIC, on both synthetic data and a real microarray dataset. StARS outperforms all these competing procedures.

6 0.11702031 150 nips-2010-Learning concept graphs from text with stick-breaking priors

7 0.11602692 210 nips-2010-Practical Large-Scale Optimization for Max-norm Regularization

8 0.11359324 128 nips-2010-Infinite Relational Modeling of Functional Connectivity in Resting State fMRI

9 0.113244 259 nips-2010-Subgraph Detection Using Eigenvector L1 Norms

10 0.11108918 110 nips-2010-Guaranteed Rank Minimization via Singular Value Projection

11 0.11021642 92 nips-2010-Fast global convergence rates of gradient methods for high-dimensional statistical recovery

12 0.10927834 136 nips-2010-Large-Scale Matrix Factorization with Missing Data under Additional Constraints

13 0.10693595 117 nips-2010-Identifying graph-structured activation patterns in networks

14 0.10465077 129 nips-2010-Inter-time segment information sharing for non-homogeneous dynamic Bayesian networks

15 0.10368912 94 nips-2010-Feature Set Embedding for Incomplete Data

16 0.098857962 148 nips-2010-Learning Networks of Stochastic Differential Equations

17 0.096343115 63 nips-2010-Distributed Dual Averaging In Networks

18 0.093995653 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

19 0.093267746 85 nips-2010-Exact learning curves for Gaussian process regression on large random graphs

20 0.088569358 105 nips-2010-Getting lost in space: Large sample analysis of the resistance distance


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.246), (1, 0.064), (2, 0.059), (3, 0.123), (4, -0.025), (5, -0.117), (6, 0.024), (7, -0.071), (8, -0.067), (9, 0.036), (10, -0.048), (11, 0.026), (12, 0.159), (13, 0.14), (14, 0.102), (15, -0.001), (16, 0.023), (17, -0.147), (18, -0.03), (19, 0.156), (20, 0.069), (21, 0.164), (22, -0.106), (23, -0.092), (24, 0.125), (25, 0.057), (26, 0.009), (27, 0.101), (28, 0.011), (29, 0.076), (30, -0.004), (31, -0.025), (32, -0.027), (33, 0.02), (34, -0.074), (35, 0.044), (36, 0.094), (37, 0.05), (38, 0.087), (39, 0.005), (40, -0.005), (41, 0.028), (42, 0.061), (43, 0.066), (44, 0.019), (45, 0.098), (46, 0.115), (47, 0.005), (48, -0.031), (49, -0.075)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96063966 162 nips-2010-Link Discovery using Graph Feature Tracking

Author: Emile Richard, Nicolas Baskiotis, Theodoros Evgeniou, Nicolas Vayatis

Abstract: We consider the problem of discovering links of an evolving undirected graph given a series of past snapshots of that graph. The graph is observed through the time sequence of its adjacency matrix and only the presence of edges is observed. The absence of an edge on a certain snapshot cannot be distinguished from a missing entry in the adjacency matrix. Additional information can be provided by examining the dynamics of the graph through a set of topological features, such as the degrees of the vertices. We develop a novel methodology by building on both static matrix completion methods and the estimation of the future state of relevant graph features. Our procedure relies on the formulation of an optimization problem which can be approximately solved by a fast alternating linearized algorithm whose properties are examined. We show experiments with both simulated and real data which reveal the interest of our methodology. 1

2 0.6653372 136 nips-2010-Large-Scale Matrix Factorization with Missing Data under Additional Constraints

Author: Kaushik Mitra, Sameer Sheorey, Rama Chellappa

Abstract: Matrix factorization in the presence of missing data is at the core of many computer vision problems such as structure from motion (SfM), non-rigid SfM and photometric stereo. We formulate the problem of matrix factorization with missing data as a low-rank semidefinite program (LRSDP) with the advantage that: 1) an efficient quasi-Newton implementation of the LRSDP enables us to solve large-scale factorization problems, and 2) additional constraints such as orthonormality, required in orthographic SfM, can be directly incorporated in the new formulation. Our empirical evaluations suggest that, under the conditions of matrix completion theory, the proposed algorithm finds the optimal solution, and also requires fewer observations compared to the current state-of-the-art algorithms. We further demonstrate the effectiveness of the proposed algorithm in solving the affine SfM problem, non-rigid SfM and photometric stereo problems.

3 0.65817958 204 nips-2010-Penalized Principal Component Regression on Graphs for Analysis of Subnetworks

Author: Ali Shojaie, George Michailidis

Abstract: Network models are widely used to capture interactions among component of complex systems, such as social and biological. To understand their behavior, it is often necessary to analyze functionally related components of the system, corresponding to subsystems. Therefore, the analysis of subnetworks may provide additional insight into the behavior of the system, not evident from individual components. We propose a novel approach for incorporating available network information into the analysis of arbitrary subnetworks. The proposed method offers an efficient dimension reduction strategy using Laplacian eigenmaps with Neumann boundary conditions, and provides a flexible inference framework for analysis of subnetworks, based on a group-penalized principal component regression model on graphs. Asymptotic properties of the proposed inference method, as well as the choice of the tuning parameter for control of the false positive rate are discussed in high dimensional settings. The performance of the proposed methodology is illustrated using simulated and real data examples from biology. 1

4 0.65691537 110 nips-2010-Guaranteed Rank Minimization via Singular Value Projection

Author: Prateek Jain, Raghu Meka, Inderjit S. Dhillon

Abstract: Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics. In this paper we propose a simple and fast algorithm SVP (Singular Value Projection) for rank minimization under affine constraints (ARMP) and show that SVP recovers the minimum rank solution for affine constraints that satisfy a restricted isometry property (RIP). Our method guarantees geometric convergence rate even in the presence of noise and requires strictly weaker assumptions on the RIP constants than the existing methods. We also introduce a Newton-step for our SVP framework to speed-up the convergence with substantial empirical gains. Next, we address a practically important application of ARMP - the problem of lowrank matrix completion, for which the defining affine constraints do not directly obey RIP, hence the guarantees of SVP do not hold. However, we provide partial progress towards a proof of exact recovery for our algorithm by showing a more restricted isometry property and observe empirically that our algorithm recovers low-rank incoherent matrices from an almost optimal number of uniformly sampled entries. We also demonstrate empirically that our algorithms outperform existing methods, such as those of [5, 18, 14], for ARMP and the matrix completion problem by an order of magnitude and are also more robust to noise and sampling schemes. In particular, results show that our SVP-Newton method is significantly robust to noise and performs impressively on a more realistic power-law sampling scheme for the matrix completion problem. 1

5 0.63995302 259 nips-2010-Subgraph Detection Using Eigenvector L1 Norms

Author: Benjamin Miller, Nadya Bliss, Patrick J. Wolfe

Abstract: When working with network datasets, the theoretical framework of detection theory for Euclidean vector spaces no longer applies. Nevertheless, it is desirable to determine the detectability of small, anomalous graphs embedded into background networks with known statistical properties. Casting the problem of subgraph detection in a signal processing context, this article provides a framework and empirical results that elucidate a “detection theory” for graph-valued data. Its focus is the detection of anomalies in unweighted, undirected graphs through L1 properties of the eigenvectors of the graph’s so-called modularity matrix. This metric is observed to have relatively low variance for certain categories of randomly-generated graphs, and to reveal the presence of an anomalous subgraph with reasonable reliability when the anomaly is not well-correlated with stronger portions of the background graph. An analysis of subgraphs in real network datasets confirms the efficacy of this approach. 1

6 0.6212579 275 nips-2010-Transduction with Matrix Completion: Three Birds with One Stone

7 0.62040764 48 nips-2010-Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm

8 0.60788804 254 nips-2010-Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models

9 0.58774906 85 nips-2010-Exact learning curves for Gaussian process regression on large random graphs

10 0.57514358 105 nips-2010-Getting lost in space: Large sample analysis of the resistance distance

11 0.5745331 117 nips-2010-Identifying graph-structured activation patterns in networks

12 0.53623796 195 nips-2010-Online Learning in The Manifold of Low-Rank Matrices

13 0.53194857 210 nips-2010-Practical Large-Scale Optimization for Max-norm Regularization

14 0.50409585 190 nips-2010-On the Convexity of Latent Social Network Inference

15 0.497926 148 nips-2010-Learning Networks of Stochastic Differential Equations

16 0.49640498 31 nips-2010-An analysis on negative curvature induced by singularity in multi-layer neural-network learning

17 0.4742018 129 nips-2010-Inter-time segment information sharing for non-homogeneous dynamic Bayesian networks

18 0.47032398 150 nips-2010-Learning concept graphs from text with stick-breaking priors

19 0.44740793 92 nips-2010-Fast global convergence rates of gradient methods for high-dimensional statistical recovery

20 0.44567287 30 nips-2010-An Inverse Power Method for Nonlinear Eigenproblems with Applications in 1-Spectral Clustering and Sparse PCA


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.059), (17, 0.01), (27, 0.072), (30, 0.065), (35, 0.023), (45, 0.198), (50, 0.06), (52, 0.063), (60, 0.031), (77, 0.082), (78, 0.015), (79, 0.23), (90, 0.033)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.94974703 115 nips-2010-Identifying Dendritic Processing

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. 1

2 0.85167408 204 nips-2010-Penalized Principal Component Regression on Graphs for Analysis of Subnetworks

Author: Ali Shojaie, George Michailidis

Abstract: Network models are widely used to capture interactions among component of complex systems, such as social and biological. To understand their behavior, it is often necessary to analyze functionally related components of the system, corresponding to subsystems. Therefore, the analysis of subnetworks may provide additional insight into the behavior of the system, not evident from individual components. We propose a novel approach for incorporating available network information into the analysis of arbitrary subnetworks. The proposed method offers an efficient dimension reduction strategy using Laplacian eigenmaps with Neumann boundary conditions, and provides a flexible inference framework for analysis of subnetworks, based on a group-penalized principal component regression model on graphs. Asymptotic properties of the proposed inference method, as well as the choice of the tuning parameter for control of the false positive rate are discussed in high dimensional settings. The performance of the proposed methodology is illustrated using simulated and real data examples from biology. 1

same-paper 3 0.80899304 162 nips-2010-Link Discovery using Graph Feature Tracking

Author: Emile Richard, Nicolas Baskiotis, Theodoros Evgeniou, Nicolas Vayatis

Abstract: We consider the problem of discovering links of an evolving undirected graph given a series of past snapshots of that graph. The graph is observed through the time sequence of its adjacency matrix and only the presence of edges is observed. The absence of an edge on a certain snapshot cannot be distinguished from a missing entry in the adjacency matrix. Additional information can be provided by examining the dynamics of the graph through a set of topological features, such as the degrees of the vertices. We develop a novel methodology by building on both static matrix completion methods and the estimation of the future state of relevant graph features. Our procedure relies on the formulation of an optimization problem which can be approximately solved by a fast alternating linearized algorithm whose properties are examined. We show experiments with both simulated and real data which reveal the interest of our methodology. 1

4 0.80001372 17 nips-2010-A biologically plausible network for the computation of orientation dominance

Author: Kritika Muralidharan, Nuno Vasconcelos

Abstract: The determination of dominant orientation at a given image location is formulated as a decision-theoretic question. This leads to a novel measure for the dominance of a given orientation θ, which is similar to that used by SIFT. It is then shown that the new measure can be computed with a network that implements the sequence of operations of the standard neurophysiological model of V1. The measure can thus be seen as a biologically plausible version of SIFT, and is denoted as bioSIFT. The network units are shown to exhibit trademark properties of V1 neurons, such as cross-orientation suppression, sparseness and independence. The connection between SIFT and biological vision provides a justification for the success of SIFT-like features and reinforces the importance of contrast normalization in computer vision. We illustrate this by replacing the Gabor units of an HMAX network with the new bioSIFT units. This is shown to lead to significant gains for classification tasks, leading to state-of-the-art performance among biologically inspired network models and performance competitive with the best non-biological object recognition systems. 1

5 0.73861653 238 nips-2010-Short-term memory in neuronal networks through dynamical compressed sensing

Author: Surya Ganguli, Haim Sompolinsky

Abstract: Recent proposals suggest that large, generic neuronal networks could store memory traces of past input sequences in their instantaneous state. Such a proposal raises important theoretical questions about the duration of these memory traces and their dependence on network size, connectivity and signal statistics. Prior work, in the case of gaussian input sequences and linear neuronal networks, shows that the duration of memory traces in a network cannot exceed the number of neurons (in units of the neuronal time constant), and that no network can out-perform an equivalent feedforward network. However a more ethologically relevant scenario is that of sparse input sequences. In this scenario, we show how linear neural networks can essentially perform compressed sensing (CS) of past inputs, thereby attaining a memory capacity that exceeds the number of neurons. This enhanced capacity is achieved by a class of “orthogonal” recurrent networks and not by feedforward networks or generic recurrent networks. We exploit techniques from the statistical physics of disordered systems to analytically compute the decay of memory traces in such networks as a function of network size, signal sparsity and integration time. Alternately, viewed purely from the perspective of CS, this work introduces a new ensemble of measurement matrices derived from dynamical systems, and provides a theoretical analysis of their asymptotic performance. 1

6 0.73343748 117 nips-2010-Identifying graph-structured activation patterns in networks

7 0.72375703 92 nips-2010-Fast global convergence rates of gradient methods for high-dimensional statistical recovery

8 0.72286081 51 nips-2010-Construction of Dependent Dirichlet Processes based on Poisson Processes

9 0.72264868 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

10 0.72261596 109 nips-2010-Group Sparse Coding with a Laplacian Scale Mixture Prior

11 0.72139859 148 nips-2010-Learning Networks of Stochastic Differential Equations

12 0.7207126 265 nips-2010-The LASSO risk: asymptotic results and real world examples

13 0.72034413 7 nips-2010-A Family of Penalty Functions for Structured Sparsity

14 0.71955669 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

15 0.71754831 63 nips-2010-Distributed Dual Averaging In Networks

16 0.71575654 30 nips-2010-An Inverse Power Method for Nonlinear Eigenproblems with Applications in 1-Spectral Clustering and Sparse PCA

17 0.71519732 158 nips-2010-Learning via Gaussian Herding

18 0.71500564 155 nips-2010-Learning the context of a category

19 0.71495122 18 nips-2010-A novel family of non-parametric cumulative based divergences for point processes

20 0.71488738 31 nips-2010-An analysis on negative curvature induced by singularity in multi-layer neural-network learning