nips nips2013 nips2013-11 knowledge-graph by maker-knowledge-mining

11 nips-2013-A New Convex Relaxation for Tensor Completion


Source: pdf

Author: Bernardino Romera-Paredes, Massimiliano Pontil

Abstract: We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable. 1

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: We study the problem of learning a tensor from a set of linear measurements. [sent-9, score-0.669]

2 A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. [sent-10, score-1.232]

3 In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. [sent-11, score-0.232]

4 We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. [sent-12, score-0.169]

5 Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable. [sent-13, score-1.269]

6 1 Introduction In recent years, there has been growing interest in the problem of learning a tensor from a set of linear measurements, such as a subset of its entries; see [9, 17, 22, 23, 25, 26, 27] and references therein. [sent-14, score-0.669]

7 This methodology, which is also referred to as tensor completion, has been applied to various fields, ranging from collaborative filtering [15] to computer vision [17] and medical imaging [9], among others. [sent-15, score-0.669]

8 In this paper, we propose a new method for tensor completion, based on a convex regularizer that encourages low-rank tensors, and develop an algorithm for solving the associated regularization problem. [sent-16, score-1.275]

9 Arguably the most widely used convex approach to tensor completion is based upon the extension of trace norm regularization [24] to that context. [sent-17, score-1.482]

10 This involves computing the average of the trace norm of each matricization of the tensor [16]. [sent-18, score-1.252]

11 A key insight behind using trace norm regularization for matrix completion is that this norm provides a tight convex relaxation of the rank of a matrix defined on the spectral unit ball [8]. [sent-19, score-1.366]

12 Unfortunately, the extension of this methodology to the more general tensor setting presents some difficulties. [sent-20, score-0.693]

13 In particular, we shall prove in this paper that the tensor trace norm is not a tight convex relaxation of the tensor rank. [sent-21, score-2.066]

14 The above negative result stems from the fact that the spectral norm, used to compute the convex relaxation for the trace norm, is not an invariant property of the matricization of a tensor. [sent-22, score-0.736]
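As a quick numerical illustration of this point (a sketch of our own, not from the paper): the Frobenius norm of every matricization equals the Euclidean norm of the tensor, while the spectral norms generally differ across modes.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3, 2))

for n in range(W.ndim):
    Wn = np.moveaxis(W, n, 0).reshape(W.shape[n], -1)   # mode-n unfolding
    print(n, np.linalg.norm(Wn, 2), np.linalg.norm(Wn, 'fro'))

# The spectral norms (largest singular values) differ from mode to mode,
# whereas the Frobenius norm is identical for every unfolding.
```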

15 This observation leads us to take a different route and study afresh the convex relaxation of tensor rank on the Euclidean ball. [sent-23, score-0.974]

16 We show that this relaxation is tighter than the tensor trace norm, and we describe a technique to solve the associated regularization problem. [sent-24, score-1.172]

17 This method builds upon the alternating direction method of multipliers and a subgradient method to compute the proximity operator of the proposed regularizer. [sent-25, score-0.354]

18 Furthermore, we present numerical experiments on one synthetic dataset and two real-life datasets, which indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable. [sent-26, score-1.289]

19 In Section 2, we describe the tensor completion framework. [sent-28, score-0.854]

20 In Section 3, we highlight some limitations of the tensor trace norm regularizer and present an alternative convex relaxation for the tensor rank. [sent-29, score-2.171]

21 An N-order tensor W ∈ Rp1×···×pN is a collection of real numbers (Wi1,...,iN), indexed by in ∈ [pn] for n ∈ [N]. [sent-42, score-0.669]

22 W will be used to denote tensors of order higher than two. [sent-48, score-0.19]

23 Vectors are 1-order tensors and will be denoted by lower case letters, e.g. x or a. [sent-49, score-0.19]

24 Matrices are 2-order tensors and will be denoted by upper case letters. [sent-51, score-0.19]

25 A mode-n fiber of a tensor W is a vector composed of the elements of W obtained by fixing all indices but one, corresponding to the n-th mode. [sent-62, score-0.725]

26 The mode-n matricization (or unfolding) of W, denoted by W(n), is a matrix obtained by arranging the mode-n fibers of W so that each of them is a column of W(n) ∈ Rpn×Jn, where Jn := ∏k≠n pk. [sent-64, score-0.139]
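To make the unfolding concrete, here is a minimal numpy sketch (the helper name and the ordering of the remaining modes are our own choices; the paper may order the columns differently):

```python
import numpy as np

def unfold(W, n):
    """Mode-n matricization: the mode-n fibers of W become the columns of a
    matrix of shape (p_n, prod_{k != n} p_k)."""
    return np.moveaxis(W, n, 0).reshape(W.shape[n], -1)

# Example: a 3-order tensor of shape (4, 3, 2)
W = np.arange(24).reshape(4, 3, 2)
print(unfold(W, 1).shape)  # (3, 8)
```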

27 We choose a linear operator I : Rp1×···×pN → Rm, representing a set of linear measurements obtained from a target tensor W0 as y = I(W0) + ξ, where ξ is some disturbance noise. [sent-67, score-0.721]

28 Tensor completion is an important example of this setting; in this case, the operator I returns the known elements of the tensor. [sent-68, score-0.242]

29 Our aim is to recover the tensor W0 from the data (I, y). [sent-73, score-0.669]

30 The role of the regularizer R is to encourage solutions W which have a simple structure in the sense that they involve a small number of “degrees of freedom”. [sent-75, score-0.135]

31 Specifically, we consider the combinatorial regularizer R(W) = (1/N) Σ_{n=1}^{N} rank(W(n)).  (2) [sent-77, score-0.135]

32 Finding a convex relaxation of this regularizer has been the subject of recent works [9, 17, 23]. [sent-78, score-0.367]

33 This is defined as the average of the trace norm of each matricization of W, that is, ‖W‖tr = (1/N) Σ_{n=1}^{N} ‖W(n)‖tr  (3), where ‖W(n)‖tr is the trace (or nuclear) norm of matrix W(n), namely the ℓ1-norm of the vector of singular values of W(n). [sent-80, score-1.343]
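For illustration, equation (3) can be evaluated directly from the unfoldings (a sketch, not the authors' implementation):

```python
import numpy as np

def tensor_trace_norm(W):
    """Average of the nuclear (trace) norms of the mode-n matricizations, cf. equation (3)."""
    total = 0.0
    for n in range(W.ndim):
        Wn = np.moveaxis(W, n, 0).reshape(W.shape[n], -1)  # mode-n unfolding
        total += np.linalg.norm(Wn, ord='nuc')             # sum of singular values
    return total / W.ndim
```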

34 Note that in the particular case of 2-order tensors, functions (2) and (3) coincide with the usual notion of rank and trace norm of a matrix, respectively. [sent-83, score-0.539]

35 A rationale behind the regularizer (3) is that the trace norm is the tightest convex lower bound to the rank of a matrix on the spectral unit ball; see [8]. [sent-84, score-0.901]

36 This lower bound is given by the convex envelope of the function Ψ(W) = rank(W) if ‖W‖∞ ≤ 1, and Ψ(W) = +∞ otherwise  (4). For simplicity we assume that pn ≥ 2 for every n ∈ [N]; otherwise we simply reduce the order of the tensor without loss of information. [sent-86, score-1.075]

37 The convex envelope can be derived by computing the double conjugate of Ψ. [sent-88, score-0.246]
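Concretely, the double conjugate is the standard Fenchel biconjugate (our restatement of a textbook fact, not the paper's derivation); with ⟨·, ·⟩ the trace inner product,

```latex
\Psi^{*}(Z) = \sup_{\|W\|_{\infty}\le 1}\big\{\langle W, Z\rangle - \operatorname{rank}(W)\big\},
\qquad
\Psi^{**}(W) = \sup_{Z}\big\{\langle W, Z\rangle - \Psi^{*}(Z)\big\},
```

and Ψ** is the largest convex lower bound of Ψ, i.e. its convex envelope.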

38 Note that Ψ is a spectral function, that is, Ψ(W) = ψ(σ(W)), where ψ : Rd+ → R denotes the associated symmetric gauge function. [sent-90, score-0.141]

39 3 Alternative Convex Relaxation In this section, we show that the tensor trace norm is not a tight convex relaxation of the tensor rank R in equation (2). [sent-97, score-2.164]

40 We then propose an alternative convex relaxation for this function. [sent-98, score-0.232]

41 Note that due to the composite nature of the function R, computing its convex envelope is a challenging task and one needs to resort to approximations. [sent-99, score-0.255]

42 In [22], the authors note that the tensor trace norm ‖·‖tr in equation (3) is a convex lower bound to R on the set G∞ := {W ∈ Rp1×···×pN : ‖W(n)‖∞ ≤ 1, ∀n ∈ [N]}. [sent-100, score-1.355]

43 However, the authors of [22] leave open the question of whether the tensor trace norm is the convex envelope of R on the set G∞ . [sent-102, score-1.36]

44 In the following, we will prove that this question has a negative answer by showing that there exists a convex function Ω ≠ ‖·‖tr which underestimates the function R on G∞ and such that for some tensor W ∈ G∞ it holds that Ω(W) > ‖W‖tr. [sent-103, score-0.938]

45 G2 := {W ∈ Rp1×···×pN : ‖W‖2 ≤ 1}, where ‖·‖2 is the Euclidean norm for tensors, that is, ‖W‖2² := Σ_{i1=1}^{p1} ··· Σ_{iN=1}^{pN} (Wi1,...,iN)². [sent-107, score-0.186]

46 We will choose Ω(W) = Ωα(W) := (1/N) Σ_{n=1}^{N} ωα**(σ(W(n)))  (6), where ωα** is the convex envelope of the cardinality of a vector on the ℓ2-ball of radius α, and we will choose α = √pmin. [sent-111, score-0.415]

47 Note, by Lemma 4 stated in Appendix A, that for every α > 0, function Ωα is a convex lower bound of function R on the set αG2 . [sent-112, score-0.142]

48 Let ωα be the convex envelope of the cardinality on the ℓ2 -ball of radius α. [sent-115, score-0.264]

49 The function ωα resembles the norm developed in [1], which corresponds to the convex envelope of the indicator function of the cardinality of a vector in the ℓ2 ball. [sent-118, score-0.45]

50 The extension of its application to tensors is not straightforward though, as it is required to specify beforehand the rank of each matricization. [sent-119, score-0.263]

51 The next lemma provides, together with Lemma 1, a sufficient condition for the existence of a tensor W ∈ G∞ at which the regularizer in equation (6) is strictly larger than the tensor trace norm. [sent-120, score-1.813]

52 If p1, . . . , pN are not all equal to each other, then there exists W ∈ Rp1×···×pN such that: (a) ‖W‖2 = √pmin, (b) W ∈ G∞, (c) min_{n∈[N]} rank(W(n)) < max_{n∈[N]} rank(W(n)). [sent-125, score-0.151]

53 For p1, . . . , pN ∈ N, let ‖·‖tr be the tensor trace norm in equation (3) and let Ωα be the function in equation (6) for α = √pmin. [sent-132, score-0.669]

54 If pmin < pmax, then there are infinitely many tensors W ∈ G∞ such that Ωα(W) > ‖W‖tr. [sent-133, score-0.451]

55 Since G∞ ⊂ αG2, Ωα is a convex lower bound for the tensor rank R on the set G∞ as well. [sent-137, score-0.863]

56 Indeed, all tensors obtained following the process described in the proof of Lemma 2 (in Appendix C) have the property that ‖W‖tr < (1/N)(pmin(N − 1) + pmin + 1) = Ω(W) = R(W). [sent-139, score-0.415]

57 Here ‖W‖tr = (1/N) Σ_{n=1}^{N} ‖σ(W(n))‖1; furthermore, there are infinitely many such tensors which satisfy this claim (see Appendix C). [sent-140, score-0.519]

58 With respect to the second claim, given that ω1** is the convex envelope of the cardinality card on the Euclidean unit ball, it follows that ω1**(σ) ≥ ‖σ‖1 for every vector σ such that ‖σ‖2 ≤ 1. [sent-141, score-0.317]

59 The above result stems from the fact that the spectral norm is not an invariant property of the matricization of a tensor, whereas the Euclidean (Frobenius) norm is. [sent-143, score-0.596]

60 4 Optimization Method In this section, we explain how to solve the regularization problem associated with the regularizer (6). [sent-145, score-0.222]

61 For this purpose, we first recall the alternating direction method of multipliers (ADMM) [4], which was conveniently applied to tensor trace norm regularization in [9, 22]. [sent-146, score-1.3]

62 In particular, if ψ is the ℓ1 norm then problem (7) corresponds to tensor trace norm regularization, whereas if ψ = ωα** it implements the proposed regularizer. [sent-150, score-1.341]

63 By completing the square in the right hand side, the solution of this problem is given by B̂n(n) = prox_{(1/β)Ψ}(X) := argmin_{Bn(n)} { (1/β) Ψ(Bn(n)) + (1/2) ‖Bn(n) − X‖2² }, where X = W(n) − (1/β) An(n). [sent-169, score-0.161]

64 We know that if ψ is a gauge function, then prox_{(1/β)Ψ}(X) = UX diag(prox_{(1/β)ψ}(σ(X))) VX⊤, where UX and VX are the orthogonal matrices formed by the left and right singular vectors of X, respectively. [sent-174, score-0.321]

65 If we choose ψ = ‖·‖1, the associated proximity operator is the well-known soft thresholding operator, that is, prox_{(1/β)‖·‖1}(σ) = v, where the vector v has components vi = sign(σi)(|σi| − 1/β)+. [sent-175, score-0.303]
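Putting the last two statements together for the trace-norm case gives a simple proximity step; a minimal sketch (our own naming) is:

```python
import numpy as np

def prox_trace_norm(X, beta):
    """prox of (1/beta)*||.||_tr at matrix X: soft-threshold the singular values by 1/beta."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - 1.0 / beta, 0.0)   # soft thresholding; singular values are nonnegative
    return (U * s) @ Vt                   # U diag(s) V^T
```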

66 4.2 Computation of the Proximity Operator. To compute the proximity operator of the function (1/β) ωα** we will use several properties of proximity calculus. [sent-179, score-0.276]

67 Finally, by the scaling property of proximity operators [7], we have that proxg(x) = (1/β) prox_{βωα}(βx). [sent-184, score-0.17]

68 The somewhat cumbersome notation Bn(n) denotes the mode-n matricization of tensor Bn, that is, Bn(n) = (Bn)(n). [sent-185, score-0.786]
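To show how the pieces of this section fit together, here is a schematic ADMM loop for tensor completion. It is a sketch under our own assumptions: a squared loss on the observed entries, the sign conventions implied by the Bn update above, and a closed-form W update for that loss; the paper's exact formulation (penalty parameter, stopping rule, and the subgradient routine for ωα**) may differ.

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold for the ordering used above."""
    rest = [shape[k] for k in range(len(shape)) if k != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def admm_completion(Y, mask, prox, beta=1.0, iters=200):
    """Schematic ADMM for problem (7) with a squared loss on the observed entries.

    Y    : tensor holding the observed values (arbitrary elsewhere)
    mask : boolean tensor, True where an entry is observed
    prox : (X, beta) -> prox of (1/beta)*Psi at matrix X, e.g. singular-value soft
           thresholding for the trace norm, or a routine for the proposed regularizer
    """
    shape, N = Y.shape, Y.ndim
    W = np.where(mask, Y, 0.0)
    B = [np.zeros(shape) for _ in range(N)]   # one auxiliary copy of W per mode
    A = [np.zeros(shape) for _ in range(N)]   # dual variables

    for _ in range(iters):
        # B_n update: prox applied to the mode-n unfolding of W - A_n / beta
        for n in range(N):
            B[n] = fold(prox(unfold(W - A[n] / beta, n), beta), n, shape)
        # W update: closed form for the squared loss on observed entries
        S = sum(B[n] + A[n] / beta for n in range(N))
        W = S / N
        W[mask] = (Y[mask] + beta * S[mask]) / (1.0 + N * beta)
        # dual ascent
        for n in range(N):
            A[n] = A[n] + beta * (B[n] - W)
    return W
```

With prox = prox_trace_norm from the sketch above this gives trace-norm-regularized completion; plugging in a proximity routine for ωα** would give the proposed method.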

69 5 Experiments We have conducted a set of experiments to assess whether there is any advantage in using the proposed regularizer over the tensor trace norm for tensor completion. [sent-204, score-1.99]

70 Then, we have tried both methods on two real-data tensor completion problems. [sent-206, score-0.856]

71 It should take the value of the Euclidean norm of the underlying tensor. [sent-212, score-0.186]

72 This estimator assumes that each value in the tensor is sampled from N (mean(w), var(w)), where mean(w) and var(w) are the average and the variance of the elements in w. [sent-214, score-0.699]

73 Figure 1: Synthetic dataset: (Left) Root Mean Squared Error (RMSE) of tensor trace norm and the proposed regularizer, as a function of the noise level log σ²; the right panel's horizontal axis is the tensor dimension p. [sent-223, score-1.155]

74 5.1 Synthetic Dataset We have generated a 3-order tensor W0 ∈ R40×20×10 by the following procedure. [sent-226, score-0.669]

75 First we generated a tensor W with ranks (12, 6, 3) using the Tucker decomposition. [sent-227, score-0.689]

76 We then created the ground truth tensor W0 by the equation W0i1,i2,i3 = (Wi1,i2,i3 − mean(W)) / (√N std(W)) + ξi1,i2,i3, where mean(W) and std(W) are the mean and standard deviation of the elements of W, N is the total number of elements of W, and the ξi1,i2,i3 are i.i.d. noise variables. [sent-230, score-0.754]
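As an illustration of this construction, a random tensor with ranks (12, 6, 3) can be generated by a Tucker product of a small core with one factor matrix per mode; the Gaussian core/factors and the Gaussian noise level σ below are our own assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
p, r = (40, 20, 10), (12, 6, 3)

# Tucker construction: core tensor multiplied by a factor matrix along each mode.
G = rng.standard_normal(r)                                 # core of size 12 x 6 x 3
U = [rng.standard_normal((p[n], r[n])) for n in range(3)]  # factor matrices
W = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])    # 40 x 20 x 10, ranks (12, 6, 3)

# Normalization and additive noise, following the equation in the text.
Ntot, sigma = W.size, 0.1                                  # sigma: assumed noise level
W0 = (W - W.mean()) / (np.sqrt(Ntot) * W.std()) + sigma * rng.standard_normal(p)
```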

77 We have randomly sampled 10% of the elements of the tensor to compose the training set, 45% for the validation set, and the remaining 45% for the test set. [sent-234, score-0.747]
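One possible way to produce such a 10%/45%/45% split over the entries (a sketch; W0 refers to the tensor generated above):

```python
import numpy as np

rng = np.random.default_rng(0)
perm = rng.permutation(W0.size)
n_tr, n_val = int(0.10 * W0.size), int(0.45 * W0.size)

train_mask = np.zeros(W0.size, dtype=bool); train_mask[perm[:n_tr]] = True
val_mask = np.zeros(W0.size, dtype=bool); val_mask[perm[n_tr:n_tr + n_val]] = True
test_mask = ~(train_mask | val_mask)
train_mask, val_mask, test_mask = (m.reshape(W0.shape) for m in (train_mask, val_mask, test_mask))

# Test RMSE of an estimate W_hat would then be:
# rmse = np.sqrt(np.mean((W_hat[test_mask] - W0[test_mask]) ** 2))
```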

78 We have generated tensors W0 ∈ Rp×p×p for different values of p ∈ {20, 40, . . .}. [sent-239, score-0.19]

79 For low values of p, the ratio between the running time of our approach and that of the trace norm regularization method is quite high. [sent-244, score-0.532]

80 However, as the volume of the tensor increases, the ratio quickly decreases. [sent-247, score-0.669]

81 Therefore, we can conclude that even though our approach is slower than the trace norm based method, this difference becomes much smaller as the size of the tensor increases. [sent-252, score-1.135]

82 The ILEA dataset is composed of examination marks, ranging from 0 to 70, of 15362 students who are described by a set of attributes such as school and ethnic group. [sent-255, score-0.122]

83 Most of these attributes are categorical, so we can think of exam mark prediction as a tensor completion problem where each of the modes corresponds to a categorical attribute. [sent-256, score-0.877]
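A rough sketch of this mapping from categorical records to tensor entries (the field names and helper below are our own placeholders, not the actual ILEA schema):

```python
import numpy as np

def records_to_tensor(records, attributes):
    """Place each record's mark into a tensor with one mode per categorical attribute."""
    levels = [sorted({r[a] for r in records}) for a in attributes]  # distinct categories per mode
    shape = tuple(len(l) for l in levels)
    T, mask = np.zeros(shape), np.zeros(shape, dtype=bool)
    for r in records:
        idx = tuple(levels[k].index(r[a]) for k, a in enumerate(attributes))
        T[idx], mask[idx] = r['mark'], True
    return T, mask

# e.g. records = [{'school': 'A', 'gender': 'F', 'vr_band': 1, 'ethnic': 3, 'year': 1985, 'mark': 42}, ...]
# T, mask = records_to_tensor(records, ['school', 'gender', 'vr_band', 'ethnic', 'year'])
```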

84 In particular, we have used the following attributes: school (139), gender (2), VR-band (3), ethnic (11), and year (3), leading to a 5-order tensor W ∈ R139×2×3×11×3 . [sent-257, score-0.739]

85 4 26 24 4000 6000 8000 10000 m (Training Set Size) 12000 2 4 6 8 10 12 m (Training Set Size) 14 16 4 x 10 Figure 2: Root Mean Squared Error (RMSE) of tensor trace norm and the proposed regularizer for ILEA dataset (Left) and Ocean video (Right). [sent-264, score-1.353]

86 Our approach shows a distinguishable improvement over tensor trace norm regularization for values of m > 7000. [sent-268, score-1.201]

87 5.3 Video Completion In the second real-data experiment we have performed a video completion test. [sent-273, score-0.223]

88 Any video can be treated as a 4-order tensor: “width” × “height” × “RGB” × “video length”, so we can use tensor completion algorithms to rebuild a video from a few inputs, a procedure that can be useful for compression purposes. [sent-274, score-0.955]

89 This video sequence can be treated as a tensor W ∈ R160×112×3×32 . [sent-276, score-0.732]

90 We have randomly sampled m tensor elements as training data, 5% of them as validation data, and the remaining ones composed the test set. [sent-277, score-0.294]

91 The proposed approach is noticeably better than the tensor trace norm in this experiment. [sent-279, score-1.155]

92 6 Conclusion In this paper, we proposed a convex relaxation for the average of the rank of the matricizations of a tensor. [sent-282, score-0.364]

93 We compared this relaxation to a commonly used convex relaxation in the context of tensor completion, which is based on the trace norm. [sent-283, score-1.292]

94 We proved that this second relaxation is not tight and argued that the proposed convex regularizer may be advantageous. [sent-284, score-0.417]

95 Our numerical experience indicates that our method consistently improves in terms of estimation error over tensor trace norm regularization, while being computationally comparable on the range of problems we considered. [sent-285, score-1.176]

96 In the future it would be interesting to study methods to speed up the computation of the proximity operator of our regularizer and investigate its utility in tensor learning problems beyond tensor completion such as multilinear multitask learning [20]. [sent-286, score-1.848]

97 Tensor completion and low-n-rank tensor recovery via convex optimization. [sent-352, score-0.95]

98 Multiverse recommendation: n-dimensional tensor factorization for context-aware collaborative filtering. [sent-392, score-0.669]

99 Tensor completion for estimating missing values in visual data. [sent-407, score-0.16]

100 Learning with tensors: a framework based on convex optimization and spectral regularization. [sent-438, score-0.184]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('tensor', 0.669), ('trace', 0.28), ('bn', 0.255), ('tensors', 0.19), ('norm', 0.186), ('completion', 0.16), ('pn', 0.16), ('pmin', 0.151), ('regularizer', 0.135), ('convex', 0.121), ('prox', 0.118), ('matricization', 0.117), ('proximity', 0.112), ('relaxation', 0.111), ('envelope', 0.104), ('tr', 0.074), ('rank', 0.073), ('subgradient', 0.071), ('regularization', 0.066), ('rmse', 0.065), ('video', 0.063), ('spectral', 0.063), ('admm', 0.059), ('proxg', 0.058), ('gauge', 0.057), ('signoretto', 0.054), ('operator', 0.052), ('bers', 0.046), ('london', 0.045), ('ethnic', 0.044), ('ilea', 0.044), ('malet', 0.044), ('argmin', 0.043), ('argyriou', 0.043), ('tomioka', 0.043), ('multipliers', 0.042), ('rd', 0.039), ('paired', 0.039), ('matricizations', 0.039), ('cardinality', 0.039), ('euclidean', 0.039), ('appendix', 0.038), ('lemar', 0.036), ('ocean', 0.036), ('pmax', 0.036), ('vx', 0.036), ('alternating', 0.035), ('lemma', 0.035), ('micchelli', 0.032), ('card', 0.032), ('std', 0.032), ('conducted', 0.031), ('jn', 0.031), ('elements', 0.03), ('tight', 0.03), ('composite', 0.03), ('var', 0.029), ('hayashi', 0.028), ('multilinear', 0.028), ('pontil', 0.028), ('singular', 0.028), ('centre', 0.028), ('ux', 0.028), ('wk', 0.027), ('tried', 0.027), ('demanding', 0.027), ('claim', 0.027), ('composed', 0.026), ('validation', 0.026), ('synthetic', 0.026), ('tucker', 0.026), ('school', 0.026), ('attributes', 0.026), ('equation', 0.025), ('describe', 0.025), ('ball', 0.025), ('stems', 0.024), ('ps', 0.024), ('repeating', 0.024), ('methodology', 0.024), ('multitask', 0.023), ('routine', 0.023), ('wt', 0.023), ('sup', 0.023), ('matrix', 0.022), ('direction', 0.022), ('remaining', 0.022), ('categorical', 0.022), ('every', 0.021), ('behind', 0.021), ('associated', 0.021), ('conjugate', 0.021), ('argmax', 0.021), ('experience', 0.021), ('letters', 0.021), ('numerical', 0.02), ('invariant', 0.02), ('college', 0.02), ('proposed', 0.02), ('decomposition', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 11 nips-2013-A New Convex Relaxation for Tensor Completion

Author: Bernardino Romera-Paredes, Massimiliano Pontil

Abstract: We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable. 1

2 0.42121699 295 nips-2013-Simultaneous Rectification and Alignment via Robust Recovery of Low-rank Tensors

Author: Xiaoqin Zhang, Di Wang, Zhengyuan Zhou, Yi Ma

Abstract: In this work, we propose a general method for recovering low-rank three-order tensors, in which the data can be deformed by some unknown transformation and corrupted by arbitrary sparse errors. Since the unfolding matrices of a tensor are interdependent, we introduce auxiliary variables and relax the hard equality constraints by the augmented Lagrange multiplier method. To improve the computational efficiency, we introduce a proximal gradient step to the alternating direction minimization method. We have provided proof for the convergence of the linearized version of the problem which is the inner loop of the overall algorithm. Both simulations and experiments show that our methods are more efficient and effective than previous work. The proposed method can be easily applied to simultaneously rectify and align multiple images or videos frames. In this context, the state-of-the-art algorithms “RASL” and “TILT” can be viewed as two special cases of our work, and yet each only performs part of the function of our method.

3 0.41765621 113 nips-2013-Exact and Stable Recovery of Pairwise Interaction Tensors

Author: Shouyuan Chen, Michael R. Lyu, Irwin King, Zenglin Xu

Abstract: Tensor completion from incomplete observations is a problem of significant practical interest. However, it is unlikely that there exists an efficient algorithm with provable guarantee to recover a general tensor from a limited number of observations. In this paper, we study the recovery algorithm for pairwise interaction tensors, which has recently gained considerable attention for modeling multiple attribute data due to its simplicity and effectiveness. Specifically, in the absence of noise, we show that one can exactly recover a pairwise interaction tensor by solving a constrained convex program which minimizes the weighted sum of nuclear norms of matrices from O(nr log2 (n)) observations. For the noisy cases, we also prove error bounds for a constrained convex program for recovering the tensors. Our experiments on the synthetic dataset demonstrate that the recovery performance of our algorithm agrees well with the theory. In addition, we apply our algorithm on a temporal collaborative filtering task and obtain state-of-the-art results. 1

4 0.39066082 155 nips-2013-Learning Hidden Markov Models from Non-sequence Data via Tensor Decomposition

Author: Tzu-Kuo Huang, Jeff Schneider

Abstract: Learning dynamic models from observed data has been a central issue in many scientific studies or engineering tasks. The usual setting is that data are collected sequentially from trajectories of some dynamical system operation. In quite a few modern scientific modeling tasks, however, it turns out that reliable sequential data are rather difficult to gather, whereas out-of-order snapshots are much easier to obtain. Examples include the modeling of galaxies, chronic diseases such Alzheimer’s, or certain biological processes. Existing methods for learning dynamic model from non-sequence data are mostly based on Expectation-Maximization, which involves non-convex optimization and is thus hard to analyze. Inspired by recent advances in spectral learning methods, we propose to study this problem from a different perspective: moment matching and spectral decomposition. Under that framework, we identify reasonable assumptions on the generative process of non-sequence data, and propose learning algorithms based on the tensor decomposition method [2] to provably recover firstorder Markov models and hidden Markov models. To the best of our knowledge, this is the first formal guarantee on learning from non-sequence data. Preliminary simulation results confirm our theoretical findings. 1

5 0.38015366 179 nips-2013-Low-Rank Matrix and Tensor Completion via Adaptive Sampling

Author: Akshay Krishnamurthy, Aarti Singh

Abstract: We study low rank matrix and tensor completion and propose novel algorithms that employ adaptive sampling schemes to obtain strong performance guarantees. Our algorithms exploit adaptivity to identify entries that are highly informative for learning the column space of the matrix (tensor) and consequently, our results hold even when the row space is highly coherent, in contrast with previous analyses. In the absence of noise, we show that one can exactly recover a n ⇥ n matrix of rank r from merely ⌦(nr3/2 log(r)) matrix entries. We also show that one can recover an order T tensor using ⌦(nrT 1/2 T 2 log(r)) entries. For noisy recovery, our algorithm consistently estimates a low rank matrix corrupted with noise using ⌦(nr3/2 polylog(n)) entries. We complement our study with simulations that verify our theory and demonstrate the scalability of our algorithms. 1

6 0.3776015 74 nips-2013-Convex Tensor Decomposition via Structured Schatten Norm Regularization

7 0.30125532 203 nips-2013-Multilinear Dynamical Systems for Tensor Time Series

8 0.2144344 281 nips-2013-Robust Low Rank Kernel Embeddings of Multivariate Distributions

9 0.21149209 185 nips-2013-Matrix Completion From any Given Set of Observations

10 0.20849305 222 nips-2013-On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization

11 0.17736188 263 nips-2013-Reasoning With Neural Tensor Networks for Knowledge Base Completion

12 0.11773869 353 nips-2013-When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity

13 0.1042442 70 nips-2013-Contrastive Learning Using Spectral Methods

14 0.10080372 75 nips-2013-Convex Two-Layer Modeling

15 0.098655738 307 nips-2013-Speedup Matrix Completion with Side Information: Application to Multi-Label Learning

16 0.090913571 108 nips-2013-Error-Minimizing Estimates and Universal Entry-Wise Error Bounds for Low-Rank Matrix Completion

17 0.082120813 254 nips-2013-Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms

18 0.080828957 91 nips-2013-Dirty Statistical Models

19 0.0764575 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

20 0.072288841 285 nips-2013-Robust Transfer Principal Component Analysis with Rank Constraints


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.204), (1, 0.138), (2, 0.207), (3, 0.441), (4, 0.012), (5, -0.424), (6, 0.046), (7, -0.06), (8, 0.234), (9, 0.079), (10, 0.003), (11, -0.048), (12, -0.046), (13, -0.003), (14, 0.004), (15, -0.001), (16, -0.06), (17, -0.032), (18, 0.061), (19, 0.024), (20, 0.021), (21, 0.129), (22, -0.006), (23, 0.05), (24, -0.086), (25, 0.029), (26, -0.012), (27, 0.021), (28, 0.055), (29, 0.004), (30, -0.007), (31, 0.041), (32, -0.031), (33, -0.028), (34, 0.012), (35, -0.011), (36, -0.017), (37, -0.019), (38, -0.059), (39, 0.01), (40, 0.048), (41, -0.006), (42, 0.008), (43, 0.047), (44, 0.003), (45, -0.005), (46, -0.008), (47, 0.01), (48, 0.018), (49, -0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97142297 11 nips-2013-A New Convex Relaxation for Tensor Completion

Author: Bernardino Romera-Paredes, Massimiliano Pontil

Abstract: We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable. 1

2 0.93961954 295 nips-2013-Simultaneous Rectification and Alignment via Robust Recovery of Low-rank Tensors

Author: Xiaoqin Zhang, Di Wang, Zhengyuan Zhou, Yi Ma

Abstract: In this work, we propose a general method for recovering low-rank three-order tensors, in which the data can be deformed by some unknown transformation and corrupted by arbitrary sparse errors. Since the unfolding matrices of a tensor are interdependent, we introduce auxiliary variables and relax the hard equality constraints by the augmented Lagrange multiplier method. To improve the computational efficiency, we introduce a proximal gradient step to the alternating direction minimization method. We have provided proof for the convergence of the linearized version of the problem which is the inner loop of the overall algorithm. Both simulations and experiments show that our methods are more efficient and effective than previous work. The proposed method can be easily applied to simultaneously rectify and align multiple images or videos frames. In this context, the state-of-the-art algorithms “RASL” and “TILT” can be viewed as two special cases of our work, and yet each only performs part of the function of our method.

3 0.89817816 74 nips-2013-Convex Tensor Decomposition via Structured Schatten Norm Regularization

Author: Ryota Tomioka, Taiji Suzuki

Abstract: We study a new class of structured Schatten norms for tensors that includes two recently proposed norms (“overlapped” and “latent”) for convex-optimizationbased tensor decomposition. We analyze the performance of “latent” approach for tensor decomposition, which was empirically found to perform better than the “overlapped” approach in some settings. We show theoretically that this is indeed the case. In particular, when the unknown true tensor is low-rank in a specific unknown mode, this approach performs as well as knowing the mode with the smallest rank. Along the way, we show a novel duality result for structured Schatten norms, which is also interesting in the general context of structured sparsity. We confirm through numerical simulations that our theory can precisely predict the scaling behaviour of the mean squared error. 1

4 0.88158739 203 nips-2013-Multilinear Dynamical Systems for Tensor Time Series

Author: Mark Rogers, Lei Li, Stuart Russell

Abstract: Data in the sciences frequently occur as sequences of multidimensional arrays called tensors. How can hidden, evolving trends in such data be extracted while preserving the tensor structure? The model that is traditionally used is the linear dynamical system (LDS) with Gaussian noise, which treats the latent state and observation at each time slice as a vector. We present the multilinear dynamical system (MLDS) for modeling tensor time series and an expectation–maximization (EM) algorithm to estimate the parameters. The MLDS models each tensor observation in the time series as the multilinear projection of the corresponding member of a sequence of latent tensors. The latent tensors are again evolving with respect to a multilinear projection. Compared to the LDS with an equal number of parameters, the MLDS achieves higher prediction accuracy and marginal likelihood for both artificial and real datasets. 1

5 0.87513036 113 nips-2013-Exact and Stable Recovery of Pairwise Interaction Tensors

Author: Shouyuan Chen, Michael R. Lyu, Irwin King, Zenglin Xu

Abstract: Tensor completion from incomplete observations is a problem of significant practical interest. However, it is unlikely that there exists an efficient algorithm with provable guarantee to recover a general tensor from a limited number of observations. In this paper, we study the recovery algorithm for pairwise interaction tensors, which has recently gained considerable attention for modeling multiple attribute data due to its simplicity and effectiveness. Specifically, in the absence of noise, we show that one can exactly recover a pairwise interaction tensor by solving a constrained convex program which minimizes the weighted sum of nuclear norms of matrices from O(nr log2 (n)) observations. For the noisy cases, we also prove error bounds for a constrained convex program for recovering the tensors. Our experiments on the synthetic dataset demonstrate that the recovery performance of our algorithm agrees well with the theory. In addition, we apply our algorithm on a temporal collaborative filtering task and obtain state-of-the-art results. 1

6 0.69019634 155 nips-2013-Learning Hidden Markov Models from Non-sequence Data via Tensor Decomposition

7 0.67537785 179 nips-2013-Low-Rank Matrix and Tensor Completion via Adaptive Sampling

8 0.50920737 281 nips-2013-Robust Low Rank Kernel Embeddings of Multivariate Distributions

9 0.44988298 222 nips-2013-On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization

10 0.4322854 254 nips-2013-Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms

11 0.4033528 307 nips-2013-Speedup Matrix Completion with Side Information: Application to Multi-Label Learning

12 0.40286633 185 nips-2013-Matrix Completion From any Given Set of Observations

13 0.40156019 342 nips-2013-Unsupervised Spectral Learning of Finite State Transducers

14 0.3892794 263 nips-2013-Reasoning With Neural Tensor Networks for Knowledge Base Completion

15 0.3785108 70 nips-2013-Contrastive Learning Using Spectral Methods

16 0.33140731 214 nips-2013-On Algorithms for Sparse Multi-factor NMF

17 0.31845835 353 nips-2013-When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity

18 0.31573203 117 nips-2013-Fast Algorithms for Gaussian Noise Invariant Independent Component Analysis

19 0.28760925 108 nips-2013-Error-Minimizing Estimates and Universal Entry-Wise Error Bounds for Low-Rank Matrix Completion

20 0.28365558 247 nips-2013-Phase Retrieval using Alternating Minimization


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(16, 0.025), (33, 0.187), (34, 0.086), (41, 0.281), (49, 0.031), (56, 0.155), (70, 0.02), (85, 0.035), (89, 0.022), (93, 0.034), (95, 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.98810595 257 nips-2013-Projected Natural Actor-Critic

Author: Philip S. Thomas, William C. Dabney, Stephen Giguere, Sridhar Mahadevan

Abstract: Natural actor-critics form a popular class of policy search algorithms for finding locally optimal policies for Markov decision processes. In this paper we address a drawback of natural actor-critics that limits their real-world applicability—their lack of safety guarantees. We present a principled algorithm for performing natural gradient descent over a constrained domain. In the context of reinforcement learning, this allows for natural actor-critic algorithms that are guaranteed to remain within a known safe region of policy space. While deriving our class of constrained natural actor-critic algorithms, which we call Projected Natural ActorCritics (PNACs), we also elucidate the relationship between natural gradient descent and mirror descent. 1

2 0.96797526 193 nips-2013-Mixed Optimization for Smooth Functions

Author: Mehrdad Mahdavi, Lijun Zhang, Rong Jin

Abstract: It is well known that the optimal convergence rate for stochastic optimization of √ smooth functions is O(1/ T ), which is same as stochastic optimization of Lipschitz continuous convex functions. This is in contrast to optimizing smooth functions using full gradients, which yields a convergence rate of O(1/T 2 ). In this work, we consider a new setup for optimizing smooth functions, termed as Mixed Optimization, which allows to access both a stochastic oracle and a full gradient oracle. Our goal is to significantly improve the convergence rate of stochastic optimization of smooth functions by having an additional small number of accesses to the full gradient oracle. We show that, with an O(ln T ) calls to the full gradient oracle and an O(T ) calls to the stochastic oracle, the proposed mixed optimization algorithm is able to achieve an optimization error of O(1/T ). 1

3 0.94176626 189 nips-2013-Message Passing Inference with Chemical Reaction Networks

Author: Nils E. Napp, Ryan P. Adams

Abstract: Recent work on molecular programming has explored new possibilities for computational abstractions with biomolecules, including logic gates, neural networks, and linear systems. In the future such abstractions might enable nanoscale devices that can sense and control the world at a molecular scale. Just as in macroscale robotics, it is critical that such devices can learn about their environment and reason under uncertainty. At this small scale, systems are typically modeled as chemical reaction networks. In this work, we develop a procedure that can take arbitrary probabilistic graphical models, represented as factor graphs over discrete random variables, and compile them into chemical reaction networks that implement inference. In particular, we show that marginalization based on sum-product message passing can be implemented in terms of reactions between chemical species whose concentrations represent probabilities. We show algebraically that the steady state concentration of these species correspond to the marginal distributions of the random variables in the graph and validate the results in simulations. As with standard sum-product inference, this procedure yields exact results for tree-structured graphs, and approximate solutions for loopy graphs.

4 0.93984658 258 nips-2013-Projecting Ising Model Parameters for Fast Mixing

Author: Justin Domke, Xianghang Liu

Abstract: Inference in general Ising models is difficult, due to high treewidth making treebased algorithms intractable. Moreover, when interactions are strong, Gibbs sampling may take exponential time to converge to the stationary distribution. We present an algorithm to project Ising model parameters onto a parameter set that is guaranteed to be fast mixing, under several divergences. We find that Gibbs sampling using the projected parameters is more accurate than with the original parameters when interaction strengths are strong and when limited time is available for sampling.

5 0.91435099 62 nips-2013-Causal Inference on Time Series using Restricted Structural Equation Models

Author: Jonas Peters, Dominik Janzing, Bernhard Schölkopf

Abstract: Causal inference uses observational data to infer the causal structure of the data generating system. We study a class of restricted Structural Equation Models for time series that we call Time Series Models with Independent Noise (TiMINo). These models require independent residual time series, whereas traditional methods like Granger causality exploit the variance of residuals. This work contains two main contributions: (1) Theoretical: By restricting the model class (e.g. to additive noise) we provide general identifiability results. They cover lagged and instantaneous effects that can be nonlinear and unfaithful, and non-instantaneous feedbacks between the time series. (2) Practical: If there are no feedback loops between time series, we propose an algorithm based on non-linear independence tests of time series. We show empirically that when the data are causally insufficient or the model is misspecified, the method avoids incorrect answers. We extend the theoretical and the algorithmic part to situations in which the time series have been measured with different time delays. TiMINo is applied to artificial and real data and code is provided. 1

6 0.91211623 175 nips-2013-Linear Convergence with Condition Number Independent Access of Full Gradients

7 0.90529126 180 nips-2013-Low-rank matrix reconstruction and clustering via approximate message passing

same-paper 8 0.87773001 11 nips-2013-A New Convex Relaxation for Tensor Completion

9 0.83660346 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

10 0.81183207 311 nips-2013-Stochastic Convex Optimization with Multiple Objectives

11 0.80834597 20 nips-2013-Accelerating Stochastic Gradient Descent using Predictive Variance Reduction

12 0.78347886 113 nips-2013-Exact and Stable Recovery of Pairwise Interaction Tensors

13 0.77653807 239 nips-2013-Optimistic policy iteration and natural actor-critic: A unifying view and a non-optimality result

14 0.77550972 184 nips-2013-Marginals-to-Models Reducibility

15 0.7645371 318 nips-2013-Structured Learning via Logistic Regression

16 0.76439053 54 nips-2013-Bayesian optimization explains human active search

17 0.76392519 295 nips-2013-Simultaneous Rectification and Alignment via Robust Recovery of Low-rank Tensors

18 0.76360208 333 nips-2013-Trading Computation for Communication: Distributed Stochastic Dual Coordinate Ascent

19 0.7605837 240 nips-2013-Optimization, Learning, and Games with Predictable Sequences

20 0.75742644 282 nips-2013-Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching