nips nips2011 nips2011-211 knowledge-graph by maker-knowledge-mining

211 nips-2011-Penalty Decomposition Methods for Rank Minimization


Source: pdf

Author: Yong Zhang, Zhaosong Lu

Abstract: In this paper we consider general rank minimization problems with rank appearing in either objective function or constraint. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems. The convergence results of the PD methods have been shown in the longer version of the paper [19]. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Penalty Decomposition Methods for Rank Minimization. Zhaosong Lu, Yong Zhang. Abstract: In this paper we consider general rank minimization problems with rank appearing in either objective function or constraint. [sent-1, score-0.796]

2 We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. [sent-2, score-0.276]

3 As a consequence, we establish that a class of rank minimization problems have closed form solutions. [sent-3, score-0.605]

4 Using this result, we then propose penalty decomposition methods for general rank minimization problems. [sent-4, score-0.762]

5 Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. [sent-6, score-0.52]

6 Recently, some image recovery and machine learning problems have been formulated as (1) or (2) (see, for example, [27, 31]). [sent-13, score-0.17]

7 In addition, the problem of finding the nearest low-rank correlation matrix is of the form (1), which has important applications in finance (see, for example, [4, 29, 36, 38, 25, 30, 12]). [sent-14, score-0.281]

8 (e.g., MAXCUT), one novel method is to first solve the semidefinite programming (SDP) relaxation of (1) and then obtain an approximate solution of (1) by applying some heuristics to the solution of the SDP (see, for example, [11]). [sent-18, score-0.28]

9 In addition, the nuclear norm relaxation approach has been proposed for problems (1) or (2). [sent-20, score-0.374]

10 In their approach, a convex relaxation is applied to (1) or (2) by replacing the rank of X with the nuclear norm of X, and numerous efficient methods can then be applied to solve the resulting convex problems. [sent-30, score-0.751]

11 Additionally, for some application problems, the nuclear norm stays constant over the feasible region. [sent-34, score-0.293]

12 For example, for the nearest low-rank correlation matrix problem (see Subsection 3. [sent-35, score-0.281]

13 2), any feasible point is a symmetric positive semidefinite matrix with all diagonal entries equal to one. [sent-36, score-0.133]

14 For such problems, the nuclear norm relaxation approach is clearly inappropriate. [sent-37, score-0.322]

15 In this approach, problem (1) is cast into an NLP problem by replacing the constraint rank(X) ≤ r by X = UV, where U ∈ ℝ^{m×r} and V ∈ ℝ^{r×n}, and then numerous optimization methods can be applied to solve the resulting NLP. [sent-39, score-0.155]
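
As a rough illustration of this factorization idea (not the authors' PD method), the sketch below parameterizes a rank-r matrix as X = UV and fits observed entries by plain gradient descent; the data M, the observation mask, the rank r, the step size, and the iteration count are all made-up illustrative choices.

```python
import numpy as np

# Sketch of the factorization reformulation rank(X) <= r  <=>  X = U V with
# U in R^{m x r}, V in R^{r x n}, fitted to observed entries by gradient
# descent on 0.5 * ||mask * (U V - M)||_F^2.  All data and parameters below
# are made up for illustration.
rng = np.random.default_rng(0)
m, n, r = 30, 20, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # low-rank target
mask = rng.random((m, n)) < 0.5                                 # observed entries

U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((r, n))
step = 0.01
for _ in range(2000):
    R = mask * (U @ V - M)        # residual on observed entries only
    U, V = U - step * (R @ V.T), V - step * (U.T @ R)

print("relative error on observed entries:",
      np.linalg.norm(mask * (U @ V - M)) / np.linalg.norm(mask * M))
```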

16 In this paper we consider general rank minimization problems (1) and (2). [sent-42, score-0.484]

17 We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. [sent-43, score-0.276]

18 As a consequence, we establish that a class of rank minimization problems have closed form solutions. [sent-44, score-0.605]

19 Using this result, we then propose penalty decomposition methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. [sent-45, score-1.002]

20 Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. [sent-47, score-0.52]

21 In Section 2, we first establish some technical results on a class of rank minimization problems and then use them to develop the penalty decomposition methods for solving problems (1) and (2). [sent-52, score-1.037]

22 In Section 3, we conduct numerical experiments to test the performance of our penalty decomposition methods for solving matrix completion and nearest low-rank correlation matrix problems. [sent-53, score-1.005]

23 The Frobenius norm of a real matrix X is defined as ‖X‖_F := √(Tr(XX^T)), where Tr(·) denotes the trace of a matrix, and the nuclear norm of X, denoted by ‖X‖_*, is defined as the sum of all singular values of X. [sent-60, score-0.546]
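
A quick numpy check of these two definitions (the test matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))

fro_via_trace = np.sqrt(np.trace(X @ X.T))          # ||X||_F = sqrt(Tr(X X^T))
nuclear = np.linalg.svd(X, compute_uv=False).sum()  # ||X||_* = sum of singular values

print(np.isclose(fro_via_trace, np.linalg.norm(X, "fro")))  # True
print(np.isclose(nuclear, np.linalg.norm(X, "nuc")))        # True
```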

24 For a real symmetric matrix X, λ(X) denotes the vector of all eigenvalues of X arranged in nondecreasing order, and Λ(X) is the diagonal matrix whose ith diagonal entry is λ_i(X) for all i. [sent-63, score-0.244]

25 2 Penalty decomposition methods: In this section, we first establish some technical results on a class of rank minimization problems. [sent-69, score-0.697]

26 Then we propose penalty decomposition (PD) methods for solving problems (1) and (2) by using these technical results. [sent-70, score-0.507]

27 2.1 Technical results on special rank minimization: In this subsection we first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. [sent-72, score-0.837]

28 As a consequence, we establish a result that a class of rank minimization problems have closed form solutions, which will be used to develop penalty decomposition methods in Subsection 2.2. [sent-73, score-0.935]

29 A norm ‖·‖ is a unitarily invariant norm on ℝ^{m×n} if ‖UXV‖ = ‖X‖ for all U ∈ U^m, V ∈ U^n, X ∈ ℝ^{m×n}. [sent-78, score-0.557]

30 More generally, a function F : ℝ^{m×n} → ℝ is a unitarily invariant function if F(UXV) = F(X) for all U ∈ U^m, V ∈ U^n, X ∈ ℝ^{m×n}. [sent-79, score-0.385]

31 A set X ⊆ ℝ^{m×n} is a unitarily invariant set if {UXV : U ∈ U^m, V ∈ U^n, X ∈ X} = X. [sent-80, score-0.385]

32 Similarly, a function F : S^n → ℝ is a unitary similarity invariant function if F(UXU^T) = F(X) for all U ∈ U^n, X ∈ S^n. [sent-81, score-0.173]

33 A set X ⊆ S^n is a unitary similarity invariant set if {UXU^T : U ∈ U^n, X ∈ X} = X. [sent-82, score-0.173]
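
These invariance properties are easy to check numerically; the sketch below verifies that the Frobenius and nuclear norms are unitarily invariant, using random orthogonal factors obtained from QR decompositions (all data are arbitrary).

```python
import numpy as np

# Numerical check that the Frobenius and nuclear norms are unitarily invariant:
# ||U X V|| = ||X|| for orthogonal U (m x m) and V (n x n).
rng = np.random.default_rng(0)
m, n = 6, 4
X = rng.standard_normal((m, n))
U, _ = np.linalg.qr(rng.standard_normal((m, m)))  # random orthogonal factor
V, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal factor

for norm in ("fro", "nuc"):
    print(norm, np.isclose(np.linalg.norm(U @ X @ V, norm), np.linalg.norm(X, norm)))
```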

34 The following result establishes that a class of matrix optimization problems over a subset of ℝ^{m×n} can be solved as lower dimensional vector optimization problems. [sent-83, score-0.276]

35 1 Let ‖·‖ be a unitarily invariant norm on ℝ^{m×n}, and let F : ℝ^{m×n} → ℝ be a unitarily invariant function. [sent-85, score-0.856]

36 Suppose that X ⊆ ℝ^{m×n} is a unitarily invariant set. [sent-86, score-0.385]

37 Suppose that U Σ(A) V^T is the singular value decomposition of A. [sent-88, score-0.236]

38 Then, X* = U D(x*) V^T is an optimal solution of the problem min F(X) + φ(‖X − A‖) (3) s.t. X ∈ X, [sent-89, score-0.137]

39 where x* ∈ ℝ^q is an optimal solution of the problem min F(D(x)) + φ(‖D(x) − Σ(A)‖) s.t. D(x) ∈ X. [sent-91, score-0.137]

40 1, we next state that a class of rank minimization problems on a subset of ℝ^{m×n} can be solved as lower dimensional vector minimization problems. [sent-95, score-0.662]

41 Suppose that X ⊆ ℝ^{m×n} is a unitarily invariant set, and U Σ(A) V^T is the singular value decomposition of A. [sent-98, score-0.621]

42 Suppose that X ⊆ ℝ^{m×n} is a unitarily invariant set, and U Σ(A) V^T is the singular value decomposition of A. [sent-102, score-0.621]

43 When X is simple enough, problems (5) and (7) have closed form solutions. [sent-105, score-0.127]

44 In this case, it is not hard to observe that problems (6) and (8) have closed form solutions (see [20]). [sent-108, score-0.127]

45 3 that problems (5) and (7) also have closed form solutions. [sent-111, score-0.127]
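
For instance, when the set X imposes only a rank constraint, the minimizer of ‖X − A‖_F over {X : rank(X) ≤ r} has the familiar truncated-SVD closed form; the sketch below illustrates this (the matrix A and the rank r are arbitrary illustrative choices).

```python
import numpy as np

def best_rank_r(A, r):
    """Closed-form solution of min ||X - A||_F subject to rank(X) <= r:
    keep the r largest singular values of A and zero out the rest."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[r:] = 0.0
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
X = best_rank_r(A, r=2)
print(np.linalg.matrix_rank(X))        # 2
print(np.linalg.norm(X - A, "fro"))    # best rank-2 approximation error
```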

46 The following results are heavily used in [6, 22, 34] for developing algorithms for solving the nuclear norm relaxation of matrix completion problems. [sent-112, score-0.615]

47 Suppose that U Σ(A) V^T is the singular value decomposition of A. [sent-117, score-0.236]

48 Then, X* = U D(x*) V^T is an optimal solution of the problem min ν‖X‖_* + (1/2)‖X − A‖_F^2, where x* ∈ ℝ^q is an optimal solution of the problem min ν‖x‖_1 + (1/2)‖x − σ(A)‖^2. [sent-118, score-0.274]
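
This closed form is the singular value soft-thresholding operator used in the nuclear norm solvers cited above; a minimal numpy sketch (ν and A are arbitrary):

```python
import numpy as np

def svd_soft_threshold(A, nu):
    """Closed-form minimizer of  nu*||X||_* + 0.5*||X - A||_F^2 :
    soft-threshold the singular values of A at level nu."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - nu, 0.0)) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))
X = svd_soft_threshold(A, nu=0.8)
print(np.linalg.svd(X, compute_uv=False))  # singular values of A shrunk by 0.8, floored at 0
```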

49 Suppose that U Σ(A) V^T is the singular value decomposition of A. [sent-121, score-0.236]

50 Clearly, the above results can be generalized to solve a class of matrix optimization problems over a subset of S^n. [sent-123, score-0.24]

51 2.2 Penalty decomposition methods for solving (1) and (2): In this subsection, we consider the rank minimization problems (1) and (2). [sent-126, score-0.686]

52 In particular, we first propose a penalty decomposition (PD) method for solving problem (1), and then extend it to solve problem (2) at the end of this subsection. [sent-127, score-0.48]

53 Assumption 1 Problems (1) and (2) are feasible, and moreover, at least one feasible solution, denoted by X^feas, is known. [sent-129, score-0.21]

54 Given a penalty parameter ϱ > 0, the associated quadratic penalty function for (9) is defined as Q_ϱ(X, Y) := f(X) + (ϱ/2)‖X − Y‖_F^2 (10). [sent-131, score-0.364]

55 We now propose a PD method for solving problem (9) (or, equivalently, (1)) in which each penalty subproblem is approximately solved by a block coordinate descent (BCD) method. [sent-132, score-0.419]

56 Penalty decomposition method for (9) (asymmetric matrices): Let ϱ_0 > 0 and σ > 1 be given. [sent-133, score-0.184]

57 Choose an arbitrary Y_0^0 ∈ Y and a constant Υ ≥ max{f(X^feas), min_{X∈X} Q_{ϱ_0}(X, Y_0^0)}. [sent-134, score-0.135]

58 1) Set l = 0 and apply the BCD method to find an approximate solution (X^k, Y^k) ∈ X × Y for the penalty subproblem min{Q_{ϱ_k}(X, Y) : X ∈ X, Y ∈ Y} (11) by performing steps 1a)-1d): 1a) Solve X^k_{l+1} ∈ Arg min_{X∈X} Q_{ϱ_k}(X, Y^k_l). [sent-136, score-0.444]

59 3) If min_{X∈X} Q_{ϱ_{k+1}}(X, Y^k) > Υ, set Y_0^{k+1} := X^feas. [sent-140, score-0.216]

60 The inner (BCD) stopping criterion (12) is normalized by max(|Q_{ϱ_k}(X^k_l, Y^k_l)|, 1); moreover, we can terminate the outer iterations of the above method once max_{ij} |X^k_{ij} − Y^k_{ij}| ≤ ε_O (13) for some ε_O > 0. [sent-147, score-0.164]
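
A schematic implementation of the outer PD loop with an inner BCD loop is sketched below. It is only a sketch of the scheme just described: the subproblem maps x_step and y_step are user-supplied placeholders, the inner stopping test is a simplified stand-in for criterion (12), and the safeguard based on Υ and X^feas is omitted; the default values of ϱ_0, σ, and the tolerances are illustrative.

```python
import numpy as np

def penalty_decomposition(x_step, y_step, Y0, rho0=1.0, sigma=np.sqrt(10.0),
                          inner_tol=1e-4, outer_tol=1e-3, max_outer=50, max_inner=100):
    """Schematic PD method for problem (9).  For a given penalty parameter rho,
    x_step(Y, rho) should return argmin_{X in X} f(X) + (rho/2)*||X - Y||_F^2 and
    y_step(X, rho) should return argmin_{Y in Y} (rho/2)*||X - Y||_F^2; both are
    user-supplied closed-form maps (placeholders here).  Each penalty subproblem
    is approximately solved by block coordinate descent, and rho is increased by
    the factor sigma between subproblems."""
    Y, rho = Y0, rho0
    for _ in range(max_outer):
        for _ in range(max_inner):                 # BCD loop for subproblem (11)
            X = x_step(Y, rho)                     # step 1a)
            Y_new = y_step(X, rho)                 # step 1b)
            done = np.linalg.norm(Y_new - Y) <= inner_tol * max(np.linalg.norm(Y), 1.0)
            Y = Y_new
            if done:
                break
        if np.max(np.abs(X - Y)) <= outer_tol:     # outer termination test (13)
            break
        rho *= sigma                               # increase the penalty parameter
    return X, Y
```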

61 To enhance the quality of approximate solutions, one may execute the BCD method multiple times starting from a suitable perturbation of the current approximate solution. [sent-149, score-0.149]

62 In detail, at the kth outer iteration, let (X^k, Y^k) be a current approximate solution of (11) obtained by the BCD method, and let r_k = rank(Y^k). [sent-150, score-0.155]

63 Before starting the (k + 1)th outer iteration, one can apply the BCD method again starting from Y_0^k ∈ Arg min{‖Y − Y^k‖_F : rank(Y) ≤ r_k − 1} (namely, a rank-one perturbation of Y^k) and obtain a new approximate solution (X̃^k, Ỹ^k) of (11). [sent-152, score-0.234]

64 Otherwise, one can terminate the kth outer iteration and start the next outer iteration. [sent-154, score-0.195]

65 3, the subproblem in step 1b) can be reduced to the problem in the form of (8), which has a closed form solution when Ω is simple enough. [sent-156, score-0.22]

66 Under some suitable assumptions, we have established that any accumulation point of the sequence generated by our method when applied to problem (1) is a stationary point of a nonlinear reformulation of the problem. [sent-158, score-0.209]

67 Given a penalty parameter ϱ > 0, the associated quadratic penalty function for (14) is defined as P_ϱ(X, Y) := f(X) + ν rank(Y) + (ϱ/2)‖X − Y‖_F^2 (15). [sent-161, score-0.364]

68 Then we can easily adapt the PD method for solving (9) to solve (14) (or, equivalently, (2)) by setting the constant Υ ≥ max{f(X^feas) + ν rank(X^feas), min_{X∈X} P_{ϱ_0}(X, Y_0^0)}. [sent-162, score-0.42]

69 2, the BCD subproblem in step 1b) when applied to minimize the penalty function (15) can be reduced to the problem in the form of (6), which has a closed form solution when Ω is simple enough. [sent-165, score-0.402]
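
When Ω is the whole space, that closed form amounts to hard-thresholding the singular values: the minimizer of ν rank(Y) + (ϱ/2)‖Y − X‖_F^2 keeps a singular value σ_i of X only when (ϱ/2)σ_i^2 > ν. A minimal sketch (with arbitrary X, ν, ϱ):

```python
import numpy as np

def svd_hard_threshold(X, nu, rho):
    """Closed-form minimizer of  nu*rank(Y) + (rho/2)*||Y - X||_F^2  (no further
    constraint on Y): keep a singular value s of X only when (rho/2)*s^2 > nu,
    i.e. hard-threshold the singular values of X at sqrt(2*nu/rho)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[s <= np.sqrt(2.0 * nu / rho)] = 0.0
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 5))
Y = svd_hard_threshold(X, nu=1.0, rho=1.0)
print(np.linalg.matrix_rank(Y), np.linalg.svd(Y, compute_uv=False))
```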

70 To enhance the quality of approximate solutions, one may apply a similar strategy as described for the previous PD method by executing the BCD method multiple times starting from a suitable perturbation of the current approximate solution. [sent-168, score-0.185]

71 3 Numerical results: In this section, we conduct numerical experiments to test the performance of our penalty decomposition (PD) methods proposed in Section 2 by applying them to solve matrix completion and nearest low-rank correlation matrix problems. [sent-173, score-1.011]

72 3.1 Matrix completion problem: In this subsection, we apply our PD method proposed in Section 2 to the matrix completion problem, which has numerous applications in control and systems theory, image recovery and data mining (see, for example, [33, 24, 9, 16]). [sent-180, score-0.568]

73 Recently, numerous methods have been proposed to solve the nuclear norm relaxation or variants of (16) (see, for example, [18, 6, 22, 8, 13, 14, 21, 23, 32, 17, 37, 35]). [sent-184, score-0.439]

74 It is not hard to see that problem (16) is a special case of the general rank minimization problem (2) with f(X) ≡ 0, ν = 1, Ω = ℝ^{m×n}, and X = {X ∈ ℝ^{m×n} : X_{ij} = M_{ij}, (i, j) ∈ Θ}. [sent-185, score-0.155]
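
To make the specialization concrete, the self-contained sketch below mirrors the PD/BCD structure shown earlier for problem (16): the X-step enforces the observed entries X_ij = M_ij, and the Y-step is the singular value hard-thresholding closed form with ν = 1. The synthetic data, starting penalty ϱ_0, tolerances, and iteration limits are illustrative and untuned, and the safeguards of the full method are omitted, so this illustrates the structure of the method rather than the reported performance.

```python
import numpy as np

# Synthetic instance of (16): a rank-3 matrix observed on a random index set Theta.
rng = np.random.default_rng(0)
m, n, true_rank = 40, 30, 3
M = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))
observed = rng.random((m, n)) < 0.5

def hard_threshold(X, nu, rho):
    # Y-step: closed-form minimizer of nu*rank(Y) + (rho/2)*||Y - X||_F^2.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[s <= np.sqrt(2.0 * nu / rho)] = 0.0
    return U @ np.diag(s) @ Vt

Y = np.where(observed, M, 0.0)                   # starting point agreeing with the data
nu, rho, sigma = 1.0, 0.01, np.sqrt(10.0)        # illustrative, untuned parameters
for _ in range(40):                              # outer PD loop
    for _ in range(200):                         # inner BCD loop
        X = np.where(observed, M, Y)             # X-step: enforce X_ij = M_ij on Theta
        Y_new = hard_threshold(X, nu, rho)       # Y-step
        done = np.linalg.norm(Y_new - Y) <= 1e-4 * max(np.linalg.norm(Y), 1.0)
        Y = Y_new
        if done:
            break
    if np.max(np.abs(X - Y)) <= 1e-3:            # outer termination test
        break
    rho *= sigma
print("rank(Y) =", np.linalg.matrix_rank(Y),
      " relative error =", np.linalg.norm(Y - M) / np.linalg.norm(M))
```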

75 Next we conduct numerical experiments to test the performance of our PD method for solving the matrix completion problem (16) on real data. [sent-189, score-0.43]

76 In our experiment, we aim to test the performance of our PD method for solving a grayscale image inpainting problem [2]. [sent-190, score-0.278]

77 For an image inpainting problem, our goal is to fill the missing pixel values of the image at given pixel locations. [sent-192, score-0.301]

78 As shown in [33, 24], this problem can be solved as a matrix completion problem if the image is of low rank. [sent-194, score-0.384]

79 In our test, the original 512 × 512 grayscale image is shown in Figure 1(a). [sent-195, score-0.123]

80 To obtain the data for problem (16), we first apply the singular value decomposition to the original image and truncate the resulting decomposition to get an image of rank 40 shown in Figure 1(e). [sent-196, score-0.87]
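
The data-generation step can be mimicked as follows; the random array here is only a stand-in for the actual 512 × 512 grayscale image, and the 50% masking ratio follows Figures 1(b)-(c).

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((512, 512))          # stand-in for the 512 x 512 grayscale image

# Truncate the SVD to obtain the rank-40 image used as ground truth (Figure 1(e)).
U, s, Vt = np.linalg.svd(img, full_matrices=False)
img_rank40 = U[:, :40] @ np.diag(s[:40]) @ Vt[:40, :]

# Mask 50% of the pixels to create the inpainting / matrix completion data
# (as in Figures 1(b)-(c)); the observed entries define the set Theta in (16).
mask = rng.random(img.shape) < 0.5
data = np.where(mask, img_rank40, 0.0)
print(np.linalg.matrix_rank(img_rank40), round(mask.mean(), 3))
```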

81 We now apply our PD method to solve problem (16) with the data given in Figures 1(b), 1(c) and 1(d), and the resulting recovered images are presented in Figures 1(f), 1(g) and 1(h), respectively. [sent-199, score-0.149]

82 3.2 Nearest low-rank correlation matrix problem: In this subsection, we apply our PD method proposed in Section 2 to find the nearest low-rank correlation matrix, which has important applications in finance (see, for example, [4, 29, 36, 38, 30]). [sent-206, score-0.42]

83 diag(X) = e, rank(X) ≤ r, X ⪰ 0, for some correlation matrix C ∈ S^n_+ and some integer r ∈ [1, n], where diag(X) denotes the vector consisting of the diagonal entries of X and e is the all-ones vector. [sent-209, score-0.193]
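
Within the PD framework, a natural way to handle the rank and positive semidefiniteness constraints of (17) (the exact splitting used by the authors may differ) is a Y-step that projects onto symmetric positive semidefinite matrices of rank at most r, computed from an eigenvalue decomposition by keeping the r largest eigenvalues clipped at zero, while the X-step takes care of diag(X) = e. A sketch of that projection (input matrix and r are arbitrary):

```python
import numpy as np

def project_psd_rank(X, r):
    """Nearest (in Frobenius norm) symmetric positive semidefinite matrix of rank
    at most r: symmetrize, then keep the r largest eigenvalues clipped at zero."""
    S = 0.5 * (X + X.T)
    w, Q = np.linalg.eigh(S)              # eigenvalues in nondecreasing order
    keep = np.zeros_like(w)
    keep[-r:] = np.maximum(w[-r:], 0.0)   # r largest eigenvalues, clipped at 0
    return Q @ np.diag(keep) @ Q.T

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))
Y = project_psd_rank(A, r=3)
print(np.linalg.matrix_rank(Y), bool(np.all(np.linalg.eigvalsh(Y) >= -1e-10)))
```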

84 Figure 1: (a) original image; (b) 50% masked original; (c) 50% masked rank 40; (d) ... [sent-211, score-0.541]

85 Next we conduct numerical experiments to test the performance of our method for solving (17) on three classes of benchmark testing problems. [sent-216, score-0.191]

86 Then we apply our PD method and the method named Major developed in [25] to solve problem (17) on the instances generated above. [sent-226, score-0.132]

87 4 Concluding remarks: In this paper we proposed penalty decomposition (PD) methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. [sent-238, score-1.002]

88 In the longer version of the paper [20], we have shown that under some suitable assumptions any accumulation point of the sequence generated by our method when applied to the rank constrained minimization problem is a stationary point of a nonlinear reformulation of the problem. [sent-239, score-0.684]

89 A singular value thresholding algorithm for matrix completion. [sent-341, score-0.178]

90 A rank minimization heuristic with application to minimum order system approximation. [sent-366, score-0.432]

91 A sequential semismooth Newton method for the nearest low-rank correlation matrix problem. [sent-403, score-0.317]

92 Interior-point method for nuclear norm approximation with application to system identification. [sent-408, score-0.286]

93 An implementable proximal point algorithmic framework for nuclear norm minimization. [sent-418, score-0.25]

94 Fixed point and Bregman iterative methods for matrix rank minimization. [sent-450, score-0.402]

95 A sequential factorization method for recovering shape and motion from image streams. [sent-464, score-0.16]

96 Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. [sent-487, score-0.34]

97 On the simultaneous calibration of multifactor lognormal interest rate models to Black volatilities and to the correlation matrix. [sent-492, score-0.142]

98 An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. [sent-520, score-0.25]

99 Shape and motion from image streams under orthography: a factorization method. [sent-528, score-0.124]

100 Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. [sent-543, score-0.276]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('pd', 0.471), ('rank', 0.312), ('unitarily', 0.27), ('bcd', 0.24), ('penalty', 0.182), ('nuclear', 0.164), ('completion', 0.149), ('decomposition', 0.148), ('feas', 0.135), ('ylk', 0.135), ('subsection', 0.129), ('minimization', 0.12), ('invariant', 0.115), ('fraser', 0.108), ('xlk', 0.108), ('correlation', 0.103), ('matrix', 0.09), ('subproblem', 0.089), ('singular', 0.088), ('nearest', 0.088), ('image', 0.087), ('norm', 0.086), ('longcorr', 0.081), ('min', 0.081), ('closed', 0.075), ('relaxation', 0.072), ('masked', 0.071), ('technical', 0.071), ('outer', 0.067), ('inpainting', 0.065), ('simon', 0.063), ('terminate', 0.061), ('solve', 0.06), ('conduct', 0.06), ('yl', 0.058), ('unitary', 0.058), ('solved', 0.058), ('numerous', 0.057), ('solution', 0.056), ('xv', 0.056), ('nlp', 0.056), ('fazel', 0.056), ('corollary', 0.056), ('xl', 0.055), ('solving', 0.054), ('brigo', 0.054), ('burnaby', 0.054), ('grubi', 0.054), ('maxcut', 0.054), ('monteiro', 0.054), ('obj', 0.054), ('accumulation', 0.054), ('lu', 0.053), ('recovered', 0.053), ('problems', 0.052), ('cij', 0.052), ('mathematics', 0.049), ('major', 0.049), ('nonconvex', 0.048), ('suitably', 0.048), ('reformulation', 0.048), ('zhaosong', 0.047), ('semide', 0.046), ('establish', 0.046), ('figures', 0.044), ('email', 0.044), ('iter', 0.044), ('pricing', 0.044), ('nance', 0.044), ('feasible', 0.043), ('report', 0.043), ('longer', 0.043), ('perturbation', 0.043), ('xij', 0.043), ('numerical', 0.041), ('descend', 0.041), ('philadelphia', 0.039), ('calibration', 0.039), ('optimization', 0.038), ('factorization', 0.037), ('stationary', 0.037), ('method', 0.036), ('sdp', 0.036), ('illinois', 0.036), ('grayscale', 0.036), ('quality', 0.036), ('bc', 0.034), ('mij', 0.034), ('suitable', 0.034), ('matrices', 0.034), ('nondecreasing', 0.033), ('recht', 0.033), ('equivalently', 0.032), ('concluding', 0.032), ('denoted', 0.032), ('rk', 0.032), ('arranged', 0.031), ('formulated', 0.031), ('pixel', 0.031), ('reformulated', 0.031)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999952 211 nips-2011-Penalty Decomposition Methods for Rank Minimization

Author: Yong Zhang, Zhaosong Lu

Abstract: In this paper we consider general rank minimization problems with rank appearing in either objective function or constraint. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems. The convergence results of the PD methods have been shown in the longer version of the paper [19]. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed. 1

2 0.1557294 165 nips-2011-Matrix Completion for Multi-label Image Classification

Author: Ricardo S. Cabral, Fernando Torre, Joao P. Costeira, Alexandre Bernardino

Abstract: Recently, image categorization has been an active research topic due to the urgent need to retrieve and browse digital images via semantic keywords. This paper formulates image categorization as a multi-label classification problem using recent advances in matrix completion. Under this setting, classification of testing data is posed as a problem of completing unknown label entries on a data matrix that concatenates training and testing features with training labels. We propose two convex algorithms for matrix completion based on a Rank Minimization criterion specifically tailored to visual data, and prove its convergence properties. A major advantage of our approach w.r.t. standard discriminative classification methods for image categorization is its robustness to outliers, background noise and partial occlusions both in the feature and label space. Experimental validation on several datasets shows how our method outperforms state-of-the-art algorithms, while effectively capturing semantic concepts of classes. 1

3 0.15321119 270 nips-2011-Statistical Performance of Convex Tensor Decomposition

Author: Ryota Tomioka, Taiji Suzuki, Kohei Hayashi, Hisashi Kashima

Abstract: We analyze the statistical performance of a recently proposed convex tensor decomposition algorithm. Conventionally tensor decomposition has been formulated as non-convex optimization problems, which hindered the analysis of their performance. We show under some conditions that the mean squared error of the convex method scales linearly with the quantity we call the normalized rank of the true tensor. The current analysis naturally extends the analysis of convex low-rank matrix estimation to tensors. Furthermore, we show through numerical experiments that our theory can precisely predict the scaling behaviour in practice.

4 0.10691441 289 nips-2011-Trace Lasso: a trace norm regularization for correlated designs

Author: Edouard Grave, Guillaume R. Obozinski, Francis R. Bach

Abstract: Using the 1 -norm to regularize the estimation of the parameter vector of a linear model leads to an unstable estimator when covariates are highly correlated. In this paper, we introduce a new penalty function which takes into account the correlation of the design matrix to stabilize the estimation. This norm, called the trace Lasso, uses the trace norm of the selected covariates, which is a convex surrogate of their rank, as the criterion of model complexity. We analyze the properties of our norm, describe an optimization algorithm based on reweighted least-squares, and illustrate the behavior of this norm on synthetic data, showing that it is more adapted to strong correlations than competing methods such as the elastic net. 1

5 0.10395811 259 nips-2011-Sparse Estimation with Structured Dictionaries

Author: David P. Wipf

Abstract: In the vast majority of recent work on sparse estimation algorithms, performance has been evaluated using ideal or quasi-ideal dictionaries (e.g., random Gaussian or Fourier) characterized by unit ℓ2 norm, incoherent columns or features. But in reality, these types of dictionaries represent only a subset of the dictionaries that are actually used in practice (largely restricted to idealized compressive sensing applications). In contrast, herein sparse estimation is considered in the context of structured dictionaries possibly exhibiting high coherence between arbitrary groups of columns and/or rows. Sparse penalized regression models are analyzed with the purpose of finding, to the extent possible, regimes of dictionary invariant performance. In particular, a Type II Bayesian estimator with a dictionarydependent sparsity penalty is shown to have a number of desirable invariance properties leading to provable advantages over more conventional penalties such as the ℓ1 norm, especially in areas where existing theoretical recovery guarantees no longer hold. This can translate into improved performance in applications such as model selection with correlated features, source localization, and compressive sensing with constrained measurement directions. 1

6 0.10318264 63 nips-2011-Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

7 0.10056312 230 nips-2011-RTRMC: A Riemannian trust-region method for low-rank matrix completion

8 0.090468191 257 nips-2011-SpaRCS: Recovering low-rank and sparse matrices from compressive measurements

9 0.09044797 297 nips-2011-Universal low-rank matrix recovery from Pauli measurements

10 0.087658606 142 nips-2011-Large-Scale Sparse Principal Component Analysis with Application to Text Data

11 0.080713741 108 nips-2011-Greedy Algorithms for Structurally Constrained High Dimensional Problems

12 0.080707647 118 nips-2011-High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity

13 0.075319186 176 nips-2011-Multi-View Learning of Word Embeddings via CCA

14 0.06825728 144 nips-2011-Learning Auto-regressive Models from Sequence and Non-sequence Data

15 0.068247147 260 nips-2011-Sparse Features for PCA-Like Linear Regression

16 0.068142824 141 nips-2011-Large-Scale Category Structure Aware Image Categorization

17 0.065760262 262 nips-2011-Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation

18 0.06449689 39 nips-2011-Approximating Semidefinite Programs in Sublinear Time

19 0.064253002 182 nips-2011-Nearest Neighbor based Greedy Coordinate Descent

20 0.062999323 5 nips-2011-A Denoising View of Matrix Completion


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.179), (1, 0.019), (2, -0.088), (3, -0.128), (4, -0.094), (5, 0.095), (6, -0.001), (7, 0.103), (8, 0.072), (9, 0.038), (10, 0.123), (11, 0.034), (12, -0.073), (13, 0.013), (14, 0.038), (15, -0.09), (16, -0.07), (17, 0.078), (18, -0.017), (19, -0.014), (20, -0.032), (21, -0.036), (22, 0.067), (23, -0.152), (24, -0.052), (25, 0.108), (26, -0.012), (27, -0.149), (28, 0.018), (29, -0.039), (30, 0.005), (31, -0.008), (32, -0.114), (33, 0.049), (34, -0.063), (35, 0.053), (36, 0.02), (37, -0.018), (38, -0.165), (39, -0.041), (40, 0.042), (41, -0.091), (42, 0.016), (43, -0.042), (44, -0.029), (45, 0.037), (46, -0.012), (47, 0.034), (48, 0.048), (49, -0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96585315 211 nips-2011-Penalty Decomposition Methods for Rank Minimization

Author: Yong Zhang, Zhaosong Lu

Abstract: In this paper we consider general rank minimization problems with rank appearing in either objective function or constraint. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems. The convergence results of the PD methods have been shown in the longer version of the paper [19]. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed. 1

2 0.68262738 230 nips-2011-RTRMC: A Riemannian trust-region method for low-rank matrix completion

Author: Nicolas Boumal, Pierre-antoine Absil

Abstract: We consider large matrices of low rank. We address the problem of recovering such matrices when most of the entries are unknown. Matrix completion finds applications in recommender systems. In this setting, the rows of the matrix may correspond to items and the columns may correspond to users. The known entries are the ratings given by users to some items. The aim is to predict the unobserved ratings. This problem is commonly stated in a constrained optimization framework. We follow an approach that exploits the geometry of the low-rank constraint to recast the problem as an unconstrained optimization problem on the Grassmann manifold. We then apply first- and second-order Riemannian trust-region methods to solve it. The cost of each iteration is linear in the number of known entries. Our methods, RTRMC 1 and 2, outperform state-of-the-art algorithms on a wide range of problem instances. 1

3 0.67011207 165 nips-2011-Matrix Completion for Multi-label Image Classification

Author: Ricardo S. Cabral, Fernando Torre, Joao P. Costeira, Alexandre Bernardino

Abstract: Recently, image categorization has been an active research topic due to the urgent need to retrieve and browse digital images via semantic keywords. This paper formulates image categorization as a multi-label classification problem using recent advances in matrix completion. Under this setting, classification of testing data is posed as a problem of completing unknown label entries on a data matrix that concatenates training and testing features with training labels. We propose two convex algorithms for matrix completion based on a Rank Minimization criterion specifically tailored to visual data, and prove its convergence properties. A major advantage of our approach w.r.t. standard discriminative classification methods for image categorization is its robustness to outliers, background noise and partial occlusions both in the feature and label space. Experimental validation on several datasets shows how our method outperforms state-of-the-art algorithms, while effectively capturing semantic concepts of classes. 1

4 0.65939438 108 nips-2011-Greedy Algorithms for Structurally Constrained High Dimensional Problems

Author: Ambuj Tewari, Pradeep K. Ravikumar, Inderjit S. Dhillon

Abstract: A hallmark of modern machine learning is its ability to deal with high dimensional problems by exploiting structural assumptions that limit the degrees of freedom in the underlying model. A deep understanding of the capabilities and limits of high dimensional learning methods under specific assumptions such as sparsity, group sparsity, and low rank has been attsined. Efforts [1,2] are now underway to distill this valuable experience by proposing general unified frameworks that can achieve the twio goals of summarizing previous analyses and enabling their application to notions of structure hitherto unexplored. Inspired by these developments, we propose and analyze a general computational scheme based on a greedy strategy to solve convex optimization problems that arise when dealing with structurally constrained high-dimensional problems. Our framework not only unifies existing greedy algorithms by recovering them as special cases but also yields novel ones. Finally, we extend our results to infinite dimensional settings by using interesting connections between smoothness of norms and behavior of martingales in Banach spaces.

5 0.62802875 161 nips-2011-Linearized Alternating Direction Method with Adaptive Penalty for Low-Rank Representation

Author: Zhouchen Lin, Risheng Liu, Zhixun Su

Abstract: Many machine learning and signal processing problems can be formulated as linearly constrained convex programs, which could be efficiently solved by the alternating direction method (ADM). However, usually the subproblems in ADM are easily solvable only when the linear mappings in the constraints are identities. To address this issue, we propose a linearized ADM (LADM) method by linearizing the quadratic penalty term and adding a proximal term when solving the subproblems. For fast convergence, we also allow the penalty to change adaptively according a novel update rule. We prove the global convergence of LADM with adaptive penalty (LADMAP). As an example, we apply LADMAP to solve lowrank representation (LRR), which is an important subspace clustering technique yet suffers from high computation cost. By combining LADMAP with a skinny SVD representation technique, we are able to reduce the complexity O(n3 ) of the original ADM based method to O(rn2 ), where r and n are the rank and size of the representation matrix, respectively, hence making LRR possible for large scale applications. Numerical experiments verify that for LRR our LADMAP based methods are much faster than state-of-the-art algorithms. 1

6 0.62310284 297 nips-2011-Universal low-rank matrix recovery from Pauli measurements

7 0.61989236 270 nips-2011-Statistical Performance of Convex Tensor Decomposition

8 0.61669606 142 nips-2011-Large-Scale Sparse Principal Component Analysis with Application to Text Data

9 0.58642185 257 nips-2011-SpaRCS: Recovering low-rank and sparse matrices from compressive measurements

10 0.57199377 260 nips-2011-Sparse Features for PCA-Like Linear Regression

11 0.56725639 73 nips-2011-Divide-and-Conquer Matrix Factorization

12 0.53881878 51 nips-2011-Clustered Multi-Task Learning Via Alternating Structure Optimization

13 0.50316405 63 nips-2011-Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

14 0.48533016 172 nips-2011-Minimax Localization of Structural Information in Large Noisy Matrices

15 0.48401564 5 nips-2011-A Denoising View of Matrix Completion

16 0.47289151 176 nips-2011-Multi-View Learning of Word Embeddings via CCA

17 0.46784645 43 nips-2011-Bayesian Partitioning of Large-Scale Distance Data

18 0.45926508 78 nips-2011-Efficient Methods for Overlapping Group Lasso

19 0.45739755 164 nips-2011-Manifold Precis: An Annealing Technique for Diverse Sampling of Manifolds

20 0.44016093 262 nips-2011-Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.492), (4, 0.037), (20, 0.039), (26, 0.013), (31, 0.032), (43, 0.042), (45, 0.112), (57, 0.013), (74, 0.043), (83, 0.047), (99, 0.037)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.91706091 207 nips-2011-Optimal learning rates for least squares SVMs using Gaussian kernels

Author: Mona Eberts, Ingo Steinwart

Abstract: We prove a new oracle inequality for support vector machines with Gaussian RBF kernels solving the regularized least squares regression problem. To this end, we apply the modulus of smoothness. With the help of the new oracle inequality we then derive learning rates that can also be achieved by a simple data-dependent parameter selection method. Finally, it turns out that our learning rates are asymptotically optimal for regression functions satisfying certain standard smoothness conditions. 1

same-paper 2 0.87854731 211 nips-2011-Penalty Decomposition Methods for Rank Minimization

Author: Yong Zhang, Zhaosong Lu

Abstract: In this paper we consider general rank minimization problems with rank appearing in either objective function or constraint. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems. The convergence results of the PD methods have been shown in the longer version of the paper [19]. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed. 1

3 0.79622477 250 nips-2011-Shallow vs. Deep Sum-Product Networks

Author: Olivier Delalleau, Yoshua Bengio

Abstract: We investigate the representational power of sum-product networks (computation networks analogous to neural networks, but whose individual units compute either products or weighted sums), through a theoretical analysis that compares deep (multiple hidden layers) vs. shallow (one hidden layer) architectures. We prove there exist families of functions that can be represented much more efficiently with a deep network than with a shallow one, i.e. with substantially fewer hidden units. Such results were not available until now, and contribute to motivate recent research involving learning of deep sum-product networks, and more generally motivate research in Deep Learning. 1 Introduction and prior work Many learning algorithms are based on searching a family of functions so as to identify one member of said family which minimizes a training criterion. The choice of this family of functions and how members of that family are parameterized can be a crucial one. Although there is no universally optimal choice of parameterization or family of functions (or “architecture”), as demonstrated by the no-free-lunch results [37], it may be the case that some architectures are appropriate (or inappropriate) for a large class of learning tasks and data distributions, such as those related to Artificial Intelligence (AI) tasks [4]. Different families of functions have different characteristics that can be appropriate or not depending on the learning task of interest. One of the characteristics that has spurred much interest and research in recent years is depth of the architecture. In the case of a multi-layer neural network, depth corresponds to the number of (hidden and output) layers. A fixedkernel Support Vector Machine is considered to have depth 2 [4] and boosted decision trees to have depth 3 [7]. Here we use the word circuit or network to talk about a directed acyclic graph, where each node is associated with some output value which can be computed based on the values associated with its predecessor nodes. The arguments of the learned function are set at the input nodes of the circuit (which have no predecessor) and the outputs of the function are read off the output nodes of the circuit. Different families of functions correspond to different circuits and allowed choices of computations in each node. Learning can be performed by changing the computation associated with a node, or rewiring the circuit (possibly changing the number of nodes). The depth of the circuit is the length of the longest path in the graph from an input node to an output node. Deep Learning algorithms [3] are tailored to learning circuits with variable depth, typically greater than depth 2. They are based on the idea of multiple levels of representation, with the intuition that the raw input can be represented at different levels of abstraction, with more abstract features of the input or more abstract explanatory factors represented by deeper circuits. These algorithms are often based on unsupervised learning, opening the door to semi-supervised learning and efficient 1 use of large quantities of unlabeled data [3]. Analogies with the structure of the cerebral cortex (in particular the visual cortex) [31] and similarities between features learned with some Deep Learning algorithms and those hypothesized in the visual cortex [17] further motivate investigations into deep architectures. 
It has been suggested that deep architectures are more powerful in the sense of being able to more efficiently represent highly-varying functions [4, 3]. In this paper, we measure “efficiency” in terms of the number of computational units in the network. An efficient representation is important mainly because: (i) it uses less memory and is faster to compute, and (ii) given a fixed amount of training samples and computational power, better generalization is expected. The first successful algorithms for training deep architectures appeared in 2006, with efficient training procedures for Deep Belief Networks [14] and deep auto-encoders [13, 27, 6], both exploiting the general idea of greedy layer-wise pre-training [6]. Since then, these ideas have been investigated further and applied in many settings, demonstrating state-of-the-art learning performance in object recognition [16, 28, 18, 15] and segmentation [20], audio classification [19, 10], natural language processing [9, 36, 21, 32], collaborative filtering [30], modeling textures [24], modeling motion [34, 33], information retrieval [29, 26], and semi-supervised learning [36, 22]. Poon and Domingos [25] introduced deep sum-product networks as a method to compute partition functions of tractable graphical models. These networks are analogous to traditional artificial neural networks but with nodes that compute either products or weighted sums of their inputs. Analogously to neural networks, we define “hidden” nodes as those nodes that are neither input nodes nor output nodes. If the nodes are organized in layers, we define the “hidden” layers to be those that are neither the input layer nor the output layer. Poon and Domingos [25] report experiments with networks much deeper (30+ hidden layers) than those typically used until now, e.g. in Deep Belief Networks [14, 3], where the number of hidden layers is usually on the order of three to five. Whether such deep architectures have theoretical advantages compared to so-called “shallow” architectures (i.e. those with a single hidden layer) remains an open question. After all, in the case of a sum-product network, the output value can always be written as a sum of products of input variables (possibly raised to some power by allowing multiple connections from the same input), and consequently it is easily rewritten as a shallow network with a sum output unit and product hidden units. The argument supported by our theoretical analysis is that a deep architecture is able to compute some functions much more efficiently than a shallow one. Until recently, very few theoretical results supported the idea that deep architectures could present an advantage in terms of representing some functions more efficiently. Most related results originate from the analysis of boolean circuits (see e.g. [2] for a review). Well-known results include the proof that solving the n-bit parity task with a depth-2 circuit requires an exponential number of gates [1, 38], and more generally that there exist functions computable with a polynomial-size depthk circuit that would require exponential size when restricted to depth k − 1 [11]. Another recent result on boolean circuits by Braverman [8] offers proof of a longstanding conjecture, showing that bounded-depth boolean circuits are unable to distinguish some (non-uniform) input distributions from the uniform distribution (i.e. they are “fooled” by such input distributions). 
In particular, Braverman’s result suggests that shallow circuits can in general be fooled more easily than deep ones, i.e., that they would have more difficulty efficiently representing high-order dependencies (those involving many input variables). It is not obvious that circuit complexity results (that typically consider only boolean or at least discrete nodes) are directly applicable in the context of typical machine learning algorithms such as neural networks (that compute continuous representations of their input). Orponen [23] surveys theoretical results in computational complexity that are relevant to learning algorithms. For instance, H˚ stad and Goldmann [12] extended some results to the case of networks of linear threshold units a with positivity constraints on the weights. Bengio et al. [5, 7] investigate, respectively, complexity issues in networks of Gaussian radial basis functions and decision trees, showing intrinsic limitations of these architectures e.g. on tasks similar to the parity problem. Utgoff and Stracuzzi [35] informally discuss the advantages of depth in boolean circuit in the context of learning architectures. Bengio [3] suggests that some polynomials could be represented more efficiently by deep sumproduct networks, but without providing any formal statement or proofs. This work partly addresses this void by demonstrating families of circuits for which a deep architecture can be exponentially more efficient than a shallow one in the context of real-valued polynomials. Note that we do not address in this paper the problem of learning these parameters: even if an efficient deep representation exists for the function we seek to approximate, in general there is no 2 guarantee for standard optimization algorithms to easily converge to this representation. This paper focuses on the representational power of deep sum-product circuits compared to shallow ones, and studies it by considering particular families of target functions (to be represented by the learner). We first formally define sum-product networks. We consider two families of functions represented by deep sum-product networks (families F and G). For each family, we establish a lower bound on the minimal number of hidden units a depth-2 sum-product network would require to represent a function of this family, showing it is much less efficient than the deep representation. 2 Sum-product networks Definition 1. A sum-product network is a network composed of units that either compute the product of their inputs or a weighted sum of their inputs (where weights are strictly positive). Here, we restrict our definition of the generic term “sum-product network” to networks whose summation units have positive incoming weights1 , while others are called “negative-weight” networks. Definition 2. A “negative-weight“ sum-product network may contain summation units whose weights are non-positive (i.e. less than or equal to zero). Finally, we formally define what we mean by deep vs. shallow networks in the rest of the paper. Definition 3. A “shallow“ sum-product network contains a single hidden layer (i.e. a total of three layers when counting the input and output layers, and a depth equal to two). Definition 4. A “deep“ sum-product network contains more than one hidden layer (i.e. a total of at least four layers, and a depth at least three). 
The family F 3 3.1 Definition The first family of functions we study, denoted by F, is made of functions built from deep sumproduct networks that alternate layers of product and sum units with two inputs each (details are provided below). The basic idea we use here is that composing layers (i.e. using a deep architecture) is equivalent to using a factorized representation of the polynomial function computed by the network. Such a factorized representation can be exponentially more compact than its expansion as a sum of products (which can be associated to a shallow network with product units in its hidden layer and a sum unit as output). This is what we formally show in what follows. + ℓ2 = λ11ℓ1 + µ11ℓ1 = x1x2 + x3x4 = f (x1, x2, x3, x4) 2 1 1 λ11 = 1 µ11 = 1 × ℓ1 = x1x2 1 x1 x2 × ℓ1 = x3x4 2 x3 x4 Figure 1: Sum-product network computing the function f ∈ F such that i = λ11 = µ11 = 1. Let n = 4i , with i a positive integer value. Denote by ℓ0 the input layer containing scalar variables {x1 , . . . , xn }, such that ℓ0 = xj for 1 ≤ j ≤ n. Now define f ∈ F as any function computed by a j sum-product network (deep for i ≥ 2) composed of alternating product and sum layers: • ℓ2k+1 = ℓ2k · ℓ2k for 0 ≤ k ≤ i − 1 and 1 ≤ j ≤ 22(i−k)−1 2j−1 2j j • ℓ2k = λjk ℓ2k−1 + µjk ℓ2k−1 for 1 ≤ k ≤ i and 1 ≤ j ≤ 22(i−k) j 2j 2j−1 where the weights λjk and µjk of the summation units are strictly positive. The output of the network is given by f (x1 , . . . , xn ) = ℓ2i ∈ R, the unique unit in the last layer. 1 The corresponding (shallow) network for i = 1 and additive weights set to one is shown in Figure 1 1 This condition is required by some of the proofs presented here. 3 (this architecture is also the basic building block of bigger networks for i > 1). Note that both the input size n = 4i and the network’s depth 2i increase with parameter i. 3.2 Theoretical results The main result of this section is presented below in Corollary 1, providing a lower bound on the minimum number of hidden units required by a shallow sum-product network to represent a function f ∈ F. The high-level proof sketch consists in the following steps: (1) Count the number of unique products found in the polynomial representation of f (Lemma 1 and Proposition 1). (2) Show that the only possible architecture for a shallow sum-product network to compute f is to have a hidden layer made of product units, with a sum unit as output (Lemmas 2 to 5). (3) Conclude that the number of hidden units must be at least the number of unique products computed in step 3.2 (Lemma 6 and Corollary 1). Lemma 1. Any element ℓk can be written as a (positively) weighted sum of products of input varij ables, such that each input variable xt is used in exactly one unit of ℓk . Moreover, the number mk of products found in the sum computed by ℓk does not depend on j and obeys the following recurrence j rule for k ≥ 0: if k + 1 is odd, then mk+1 = m2 , otherwise mk+1 = 2mk . k Proof. We prove the lemma by induction on k. It is obviously true for k = 0 since ℓ0 = xj . j Assuming this is true for some k ≥ 0, we consider two cases: k+1 k • If k + 1 is odd, then ℓj = ℓk 2j−1 · ℓ2j . By the inductive hypothesis, it is the product of two (positively) weighted sums of products of input variables, and no input variable can k appear in both ℓk 2j−1 and ℓ2j , so the result is also a (positively) weighted sum of products k of input variables. 
Additionally, if the number of products in ℓk 2j−1 and ℓ2j is mk , then 2 mk+1 = mk , since all products involved in the multiplication of the two units are different (since they use disjoint subsets of input variables), and the sums have positive weights. Finally, by the induction assumption, an input variable appears in exactly one unit of ℓk . This unit is an input to a single unit of ℓk+1 , that will thus be the only unit of ℓk+1 where this input variable appears. k • If k + 1 is even, then ℓk+1 = λjk ℓk 2j−1 + µjk ℓ2j . Again, from the induction assumption, it j must be a (positively) weighted sum of products of input variables, but with mk+1 = 2mk such products. As in the previous case, an input variable will appear in the single unit of ℓk+1 that has as input the single unit of ℓk in which this variable must appear. 2i Proposition 1. The number of products in the sum computed in the output unit l1 of a network √ n−1 . computing a function in F is m2i = 2 Proof. We first prove by induction on k ≥ 1 that for odd k, mk = 22 k 22 1+1 2 2 k+1 2 −2 , and for even k, . This is obviously true for k = 1 since 2 = 2 = 1, and all units in ℓ1 are mk = 2 single products of the form xr xs . Assuming this is true for some k ≥ 1, then: −1 0 −2 • if k + 1 is odd, then from Lemma 1 and the induction assumption, we have: mk+1 = m2 = k 2 k 22 2 −1 k +1 = 22 2 • if k + 1 is even, then instead we have: mk+1 = 2mk = 2 · 22 k+1 2 −2 −2 = 22 = 22 (k+1)+1 2 (k+1) 2 −2 −1 which shows the desired result for k + 1, and thus concludes the induction proof. Applying this result with k = 2i (which is even) yields 2i m2i = 22 2 −1 √ =2 4 22i −1 √ =2 n−1 . 2i Lemma 2. The products computed in the output unit l1 can be split in two groups, one with products containing only variables x1 , . . . , x n and one containing only variables x n +1 , . . . , xn . 2 2 Proof. This is obvious since the last unit is a “sum“ unit that adds two terms whose inputs are these two groups of variables (see e.g. Fig. 1). 2i Lemma 3. The products computed in the output unit l1 involve more than one input variable. k Proof. It is straightforward to show by induction on k ≥ 1 that the products computed by lj all involve more than one input variable, thus it is true in particular for the output layer (k = 2i). Lemma 4. Any shallow sum-product network computing f ∈ F must have a “sum” unit as output. Proof. By contradiction, suppose the output unit of such a shallow sum-product network is multiplicative. This unit must have more than one input, because in the case that it has only one input, the output would be either a (weighted) sum of input variables (which would violate Lemma 3), or a single product of input variables (which would violate Proposition 1), depending on the type (sum or product) of the single input hidden unit. Thus the last unit must compute a product of two or more hidden units. It can be re-written as a product of two factors, where each factor corresponds to either one hidden unit, or a product of multiple hidden units (it does not matter here which specific factorization is chosen among all possible ones). Regardless of the type (sum or product) of the hidden units involved, those two factors can thus be written as weighted sums of products of variables xt (with positive weights, and input variables potentially raised to powers above one). From Lemma 1, both x1 and xn must be present in the final output, and thus they must appear in at least one of these two factors. 
Without loss of generality, assume x1 appears in the first factor. Variables x n +1 , . . . , xn then cannot be present in the second factor, since otherwise one product in the output 2 would contain both x1 and one of these variables (this product cannot cancel out since weights must be positive), violating Lemma 2. But with a similar reasoning, since as a result xn must appear in the first factor, variables x1 , . . . , x n cannot be present in the second factor either. Consequently, no 2 input variable can be present in the second factor, leading to the desired contradiction. Lemma 5. Any shallow sum-product network computing f ∈ F must have only multiplicative units in its hidden layer. Proof. By contradiction, suppose there exists a “sum“ unit in the hidden layer, written s = t∈S αt xt with S the set of input indices appearing in this sum, and αt > 0 for all t ∈ S. Since according to Lemma 4 the output unit must also be a sum (and have positive weights according to Definition 1), then the final output will also contain terms of the form βt xt for t ∈ S, with βt > 0. This violates Lemma 3, establishing the contradiction. Lemma 6. Any shallow negative-weight sum-product network (see Definition 2) computing f ∈ F √ must have at least 2 n−1 hidden units, if its output unit is a sum and its hidden units are products. Proof. Such a network computes a weighted sum of its hidden units, where each hidden unit is a γ product of input variables, i.e. its output can be written as Σj wj Πt xt jt with wj ∈ R and γjt ∈ {0, 1}. In order to compute a function in F, this shallow network thus needs a number of hidden units at least equal to the number of unique products in that function. From Proposition 1, this √ number is equal to 2 n−1 . √ Corollary 1. Any shallow sum-product network computing f ∈ F must have at least 2 units. n−1 hidden Proof. This is a direct corollary of Lemmas 4 (showing the output unit is a sum), 5 (showing that hidden units are products), and 6 (showing the desired result for any shallow network with this specific structure – regardless of the sign of weights). 5 3.3 Discussion Corollary 1 above shows that in order to compute some function in F with n inputs, the number of √ √ units in a shallow network has to be at least 2 n−1 , (i.e. grows exponentially in n). On another hand, the total number of units in the deep (for i > 1) network computing the same function, as described in Section 3.1, is equal to 1 + 2 + 4 + 8 + . . . + 22i−1 (since all units are binary), which is √ also equal to 22i − 1 = n − 1 (i.e. grows only quadratically in n). It shows that some deep sumproduct network with n inputs and depth O(log n) can represent with O(n) units what would √ require O(2 n ) units for a depth-2 network. Lemma 6 also shows a similar result regardless of the sign of the weights in the summation units of the depth-2 network, but assumes a specific architecture for this network (products in the hidden layer with a sum as output). 4 The family G In this section we present similar results with a different family of functions, denoted by G. Compared to F, one important difference of deep sum-product networks built to define functions in G is that they can vary their input size independently of their depth. Their analysis thus provides additional insight when comparing the representational efficiency of deep vs. shallow sum-product networks in the case of a fixed dataset. 
4 The family G

In this section we present similar results with a different family of functions, denoted by G. Compared to F, one important difference of the deep sum-product networks built to define functions in G is that they can vary their input size independently of their depth. Their analysis thus provides additional insight when comparing the representational efficiency of deep vs. shallow sum-product networks in the case of a fixed dataset.

4.1 Definition

Networks in family G also alternate sum and product layers, but their units have as inputs all units from the previous layer except one. More formally, define the family G = ∪_{n≥2, i≥0} G^i_n of functions represented by sum-product networks, where the sub-family G^i_n is made of all sum-product networks with n input variables and 2i + 2 layers (including the input layer ℓ^0), such that:
1. ℓ^1 contains summation units; further layers alternate multiplicative and summation units.
2. Summation units have positive weights.
3. All layers are of size n, except the last layer ℓ^{2i+1}, which contains a single sum unit that sums all units in the previous layer ℓ^{2i}.
4. In each layer ℓ^k for 1 ≤ k ≤ 2i, each unit ℓ^k_j takes as inputs {ℓ^{k−1}_m | m ≠ j}.
An example of a network belonging to G^1_3 (i.e. with three layers and three input variables) is shown in Figure 2; its output unit computes ℓ^3_1 = x_1^2 + x_2^2 + x_3^2 + 3(x_1 x_2 + x_1 x_3 + x_2 x_3) = g(x_1, x_2, x_3).
[Figure 2: Sum-product network computing a function of G^1_3 (summation units’ weights are all 1’s).]

4.2 Theoretical results

The main result is stated in Proposition 3 below, establishing a lower bound on the number of hidden units of a shallow sum-product network computing g ∈ G. The proof sketch is as follows:
1. We show that the polynomial expansion of g must contain a large set of products (Proposition 2 and Corollary 2).
2. We use both the number of products in that set as well as their degree to establish the desired lower bound (Proposition 3).
We will also need the following lemma, which states that when n − 1 items each belong to at least n − 1 sets among a total of n sets, then we can associate to each item one of the sets it belongs to without using the same set for different items.

Lemma 7. Let S_1, . . . , S_n be n sets (n ≥ 2) containing elements of {P_1, . . . , P_{n−1}}, such that for any q, |{r | P_q ∈ S_r}| ≥ n − 1 (i.e. each element P_q belongs to at least n − 1 sets). Then there exist n − 1 different indices r_1, . . . , r_{n−1} such that P_q ∈ S_{r_q} for 1 ≤ q ≤ n − 1.

Proof. Omitted due to lack of space (very easy to prove by construction).

Proposition 2. For any 0 ≤ j ≤ i and any product of variables P = Π^n_{t=1} x_t^{α_t} such that α_t ∈ N and Σ_t α_t = (n − 1)^j, there exists a unit in ℓ^{2j} whose computed value, when expanded as a weighted sum of products, contains P among these products.

Proof. We prove this proposition by induction on j. First, for j = 0, this is obvious since any P of this form must be made of a single input variable x_t, which appears in ℓ^0_t = x_t. Suppose now the proposition is true for some j < i. Consider a product P = Π^n_{t=1} x_t^{α_t} such that α_t ∈ N and Σ_t α_t = (n − 1)^{j+1}. P can be factored into n − 1 sub-products of degree (n − 1)^j, i.e. written P = P_1 · · · P_{n−1} with P_q = Π^n_{t=1} x_t^{β_{qt}}, β_{qt} ∈ N and Σ_t β_{qt} = (n − 1)^j for all q. By the induction hypothesis, each P_q can be found in at least one unit ℓ^{2j}_{k_q}. As a result, by property 4 (in the definition of family G), each P_q will also appear in the additive layer ℓ^{2j+1}, in at least n − 1 different units (the only sum unit that may not contain P_q is the one that does not have ℓ^{2j}_{k_q} as input). By Lemma 7, we can thus find a set of units ℓ^{2j+1}_{r_q} such that for any 1 ≤ q ≤ n − 1 the product P_q appears in ℓ^{2j+1}_{r_q}, with the indices r_q being different from each other. Let 1 ≤ s ≤ n be such that s ≠ r_q for all q.
Then, from property 4 of family G, the multiplicative unit ℓ^{2(j+1)}_s computes the product Π^{n−1}_{q=1} ℓ^{2j+1}_{r_q}, and as a result, when expanded as a sum of products, it contains in particular P_1 · · · P_{n−1} = P. The proposition is thus true for j + 1 and, by induction, is true for all j ≤ i.

Corollary 2. The output g^i_n of a sum-product network in G^i_n, when expanded as a sum of products, contains all products of variables of the form Π^n_{t=1} x_t^{α_t} such that α_t ∈ N and Σ_t α_t = (n − 1)^i.

Proof. Applying Proposition 2 with j = i, we obtain that all products of this form can be found in the multiplicative units of ℓ^{2i}. Since the output unit ℓ^{2i+1}_1 computes a sum of these multiplicative units (weighted with positive weights), those products are also present in the output.

Proposition 3. A shallow negative-weight sum-product network computing g^i_n ∈ G^i_n must have at least (n − 1)^i hidden units.

Proof. First suppose the output unit of the shallow network is a sum. Then it may be able to compute g^i_n, assuming we allow multiplicative units in the hidden layer to use powers of their inputs in the product they compute (which we allow here for the proof to be more generic). However, it will require at least as many of these units as the number of unique products that can be found in the expansion of g^i_n. In particular, from Corollary 2, it will require at least the number of unique tuples of the form (α_1, . . . , α_n) such that α_t ∈ N and Σ^n_{t=1} α_t = (n − 1)^i. Denoting d_{ni} = (n − 1)^i, this number is known to be equal to the binomial coefficient C(n + d_{ni} − 1, d_{ni}), and it is easy to verify that it is higher than (or equal to) d_{ni} for any n ≥ 2 and i ≥ 0. Now suppose the output unit is multiplicative. Then there can be no multiplicative hidden unit, since otherwise one could factor some input variable x_t out of the computed function output: this is not possible since, by Corollary 2, for any variable x_t there exist products in the output function that do not involve x_t. So all hidden units must be additive, and since the computed function contains products of degree d_{ni}, there must be at least d_{ni} such hidden units.

4.3 Discussion

Proposition 3 shows that in order to compute the same function as g^i_n ∈ G^i_n, the number of units in the shallow network has to grow exponentially in i, i.e. in the network’s depth (while the deep network’s size grows linearly in i). The shallow network also needs to grow polynomially in the number of input variables n (with a degree equal to i), while the deep network grows only linearly in n. This means that some deep sum-product network with n inputs and depth O(i) can represent with O(ni) units what would require O((n − 1)^i) units for a depth-2 network. Note that in the similar results found for family F, the depth-2 network computing the same function as a function in F had to be constrained either to have a specific combination of sum and product units (in Lemma 6) or to have non-negative weights (in Corollary 1). On the contrary, the result presented here for family G holds without requiring any of these assumptions.
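To illustrate the definition of family G and the quantities used in Proposition 3, the sketch below is again an illustrative check rather than part of the paper: the helper name g_network_output, the use of sympy, and the choice of all summation weights equal to 1 are assumptions of this sketch. It builds the deep network of G^i_n, reproduces the Figure 2 output for (n, i) = (3, 1), and compares the number of distinct products in the expansion with the binomial count C(n + d_{ni} − 1, d_{ni}) and the lower bound d_{ni} = (n − 1)^i.

```python
# A minimal sketch, not from the paper: it assumes sympy is available and
# fixes all summation weights to 1 (any positive weights would do).
import sympy as sp

def g_network_output(n, i):
    """Output of the deep network of G^i_n: layers l^1, ..., l^{2i} of size n
    alternating summation (odd k) and product (even k) units, where unit j of
    layer k takes as inputs all units of layer k-1 except unit j, followed by
    a single output sum unit l^{2i+1}."""
    xs = sp.symbols(f"x1:{n + 1}")
    layer = list(xs)                                   # input layer l^0
    for k in range(1, 2 * i + 1):
        if k % 2 == 1:                                 # summation units
            layer = [sum(layer[m] for m in range(n) if m != j) for j in range(n)]
        else:                                          # multiplicative units
            layer = [sp.Mul(*[layer[m] for m in range(n) if m != j]) for j in range(n)]
    return sp.expand(sum(layer))                       # output unit l^{2i+1}

n, i = 3, 1
g = g_network_output(n, i)
x1, x2, x3 = sp.symbols("x1:4")

# Reproduces the output of the Figure 2 example:
assert sp.expand(g - (x1**2 + x2**2 + x3**2 + 3 * (x1*x2 + x1*x3 + x2*x3))) == 0

# Distinct products in the expansion vs. the counts used in Proposition 3:
d = (n - 1) ** i                                       # degree of each product
print(len(g.args), sp.binomial(n + d - 1, d), d)       # prints: 6 6 2
```

For (n, i) = (3, 1) the expansion contains all C(4, 2) = 6 products of degree 2, which is indeed at least d_{ni} = 2; increasing n or i quickly makes this count grow, in line with the lower bound of Proposition 3.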
5 Conclusion

We compared a deep sum-product network and a shallow sum-product network representing the same function, taken from two families of functions F and G. For both families, we have shown that the number of units in the shallow network has to grow exponentially, compared to a linear growth in the deep network, in order to represent the same functions. The deep version thus offers a much more compact representation of the same functions. This work focuses on two specific families of functions: finding more general parameterizations of functions leading to similar results would be an interesting topic for future research. Another open question is whether it is possible to represent such functions only approximately (e.g. up to an error bound ε) with a much smaller shallow network. Results by Braverman [8] on Boolean circuits suggest that results similar to those presented in this paper may still hold, but this topic has yet to be formally investigated in the context of sum-product networks. A related problem is also to look into functions defined only on discrete input variables: our proofs do not trivially extend to this situation because we cannot assume anymore that two polynomials yielding the same output values must have the same expansion coefficients (since the number of input combinations becomes finite).

Acknowledgments

The authors would like to thank Razvan Pascanu and David Warde-Farley for their help in improving this manuscript, as well as the anonymous reviewers for their careful reviews. This work was partially funded by NSERC, CIFAR, and the Canada Research Chairs.

References

[1] Ajtai, M. (1983). Σ¹₁-formulae on finite structures. Annals of Pure and Applied Logic, 24(1), 1–48.
[2] Allender, E. (1996). Circuit complexity before the dawn of the new millennium. In 16th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pages 1–18. Lecture Notes in Computer Science 1180, Springer Verlag.
[3] Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127. Also published as a book, Now Publishers, 2009.
[4] Bengio, Y. and LeCun, Y. (2007). Scaling learning algorithms towards AI. In L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, editors, Large Scale Kernel Machines. MIT Press.
[5] Bengio, Y., Delalleau, O., and Le Roux, N. (2006). The curse of highly variable functions for local kernel machines. In NIPS’05, pages 107–114. MIT Press, Cambridge, MA.
[6] Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of deep networks. In NIPS 19, pages 153–160. MIT Press.
[7] Bengio, Y., Delalleau, O., and Simard, C. (2010). Decision trees do not generalize to new variations. Computational Intelligence, 26(4), 449–467.
[8] Braverman, M. (2011). Poly-logarithmic independence fools bounded-depth boolean circuits. Communications of the ACM, 54(4), 108–115.
[9] Collobert, R. and Weston, J. (2008). A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML 2008, pages 160–167.
[10] Dahl, G. E., Ranzato, M., Mohamed, A., and Hinton, G. E. (2010). Phone recognition with the mean-covariance restricted Boltzmann machine. In Advances in Neural Information Processing Systems (NIPS).
[11] Håstad, J. (1986). Almost optimal lower bounds for small depth circuits. In Proceedings of the 18th Annual ACM Symposium on Theory of Computing, pages 6–20, Berkeley, California. ACM Press.
[12] Håstad, J. and Goldmann, M. (1991). On the power of small-depth threshold circuits. Computational Complexity, 1, 113–129.
[13] Hinton, G. E. and Salakhutdinov, R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507.
[14] Hinton, G. E., Osindero, S., and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.
[15] Kavukcuoglu, K., Sermanet, P., Boureau, Y.-L., Gregor, K., Mathieu, M., and LeCun, Y. (2010). Learning convolutional feature hierarchies for visual recognition. In NIPS’10.
[16] Larochelle, H., Erhan, D., Courville, A., Bergstra, J., and Bengio, Y. (2007). An empirical evaluation of deep architectures on problems with many factors of variation. In ICML’07, pages 473–480. ACM.
[17] Lee, H., Ekanadham, C., and Ng, A. (2008). Sparse deep belief net model for visual area V2. In NIPS’07, pages 873–880. MIT Press, Cambridge, MA.
[18] Lee, H., Grosse, R., Ranganath, R., and Ng, A. Y. (2009a). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML 2009. Montreal (Qc), Canada.
[19] Lee, H., Pham, P., Largman, Y., and Ng, A. (2009b). Unsupervised feature learning for audio classification using convolutional deep belief networks. In NIPS’09, pages 1096–1104.
[20] Levner, I. (2008). Data Driven Object Segmentation. Ph.D. thesis, Department of Computer Science, University of Alberta.
[21] Mnih, A. and Hinton, G. E. (2009). A scalable hierarchical distributed language model. In NIPS’08, pages 1081–1088.
[22] Mobahi, H., Collobert, R., and Weston, J. (2009). Deep learning from temporal coherence in video. In ICML’2009, pages 737–744.
[23] Orponen, P. (1994). Computational complexity of neural networks: a survey. Nordic Journal of Computing, 1(1), 94–110.
[24] Osindero, S. and Hinton, G. E. (2008). Modeling image patches with a directed hierarchy of Markov random fields. In NIPS’07, pages 1121–1128, Cambridge, MA. MIT Press.
[25] Poon, H. and Domingos, P. (2011). Sum-product networks: A new deep architecture. In UAI’2011, Barcelona, Spain.
[26] Ranzato, M. and Szummer, M. (2008). Semi-supervised learning of compact document representations with deep networks. In ICML.
[27] Ranzato, M., Poultney, C., Chopra, S., and LeCun, Y. (2007). Efficient learning of sparse representations with an energy-based model. In NIPS’06, pages 1137–1144. MIT Press.
[28] Ranzato, M., Boureau, Y.-L., and LeCun, Y. (2008). Sparse feature learning for deep belief networks. In NIPS’07, pages 1185–1192, Cambridge, MA. MIT Press.
[29] Salakhutdinov, R. and Hinton, G. E. (2007). Semantic hashing. In Proceedings of the 2007 Workshop on Information Retrieval and Applications of Graphical Models (SIGIR 2007), Amsterdam. Elsevier.
[30] Salakhutdinov, R., Mnih, A., and Hinton, G. E. (2007). Restricted Boltzmann machines for collaborative filtering. In ICML 2007, pages 791–798, New York, NY, USA.
[31] Serre, T., Kreiman, G., Kouh, M., Cadieu, C., Knoblich, U., and Poggio, T. (2007). A quantitative theory of immediate visual recognition. Progress in Brain Research, Computational Neuroscience: Theoretical Insights into Brain Function, 165, 33–56.
[32] Socher, R., Lin, C., Ng, A. Y., and Manning, C. (2011). Learning continuous phrase representations and syntactic parsing with recursive neural networks. In ICML’2011.
[33] Taylor, G. and Hinton, G. (2009). Factored conditional restricted Boltzmann machines for modeling motion style. In ICML 2009, pages 1025–1032.
[34] Taylor, G., Hinton, G. E., and Roweis, S. (2007). Modeling human motion using binary latent variables. In NIPS’06, pages 1345–1352. MIT Press, Cambridge, MA.
[35] Utgoff, P. E. and Stracuzzi, D. J. (2002). Many-layered learning. Neural Computation, 14, 2497–2539.
[36] Weston, J., Ratle, F., and Collobert, R. (2008). Deep learning via semi-supervised embedding. In ICML 2008, pages 1168–1175, New York, NY, USA.
[37] Wolpert, D. H. (1996). The lack of a priori distinction between learning algorithms. Neural Computation, 8(7), 1341–1390.
[38] Yao, A. (1985). Separating the polynomial-time hierarchy by oracles. In Proceedings of the 26th Annual IEEE Symposium on Foundations of Computer Science, pages 1–10.

4 0.77225375 12 nips-2011-A Two-Stage Weighting Framework for Multi-Source Domain Adaptation

Author: Qian Sun, Rita Chattopadhyay, Sethuraman Panchanathan, Jieping Ye

Abstract: Discriminative learning when training and test data belong to different distributions is a challenging and complex task. Often times we have very few or no labeled data from the test or target distribution but may have plenty of labeled data from multiple related sources with different distributions. The difference in distributions may be both in marginal and conditional probabilities. Most of the existing domain adaptation work focuses on the marginal probability distribution difference between the domains, assuming that the conditional probabilities are similar. However in many real world applications, conditional probability distribution differences are as commonplace as marginal probability differences. In this paper we propose a two-stage domain adaptation methodology which combines weighted data from multiple sources based on marginal probability differences (first stage) as well as conditional probability differences (second stage), with the target domain data. The weights for minimizing the marginal probability differences are estimated independently, while the weights for minimizing conditional probability differences are computed simultaneously by exploiting the potential interaction among multiple sources. We also provide a theoretical analysis on the generalization performance of the proposed multi-source domain adaptation formulation using the weighted Rademacher complexity measure. Empirical comparisons with existing state-of-the-art domain adaptation methods using three real-world datasets demonstrate the effectiveness of the proposed approach. 1

5 0.63298315 96 nips-2011-Fast and Balanced: Efficient Label Tree Learning for Large Scale Object Recognition

Author: Jia Deng, Sanjeev Satheesh, Alexander C. Berg, Fei Li

Abstract: We present a novel approach to efficiently learn a label tree for large scale classification with many classes. The key contribution of the approach is a technique to simultaneously determine the structure of the tree and learn the classifiers for each node in the tree. This approach also allows fine grained control over the efficiency vs accuracy trade-off in designing a label tree, leading to more balanced trees. Experiments are performed on large scale image classification with 10184 classes and 9 million images. We demonstrate significant improvements in test accuracy and efficiency with less training time and more balanced trees compared to the previous state of the art by Bengio et al. 1

6 0.56831646 152 nips-2011-Learning in Hilbert vs. Banach Spaces: A Measure Embedding Viewpoint

7 0.50938213 47 nips-2011-Beyond Spectral Clustering - Tight Relaxations of Balanced Graph Cuts

8 0.50238067 53 nips-2011-Co-Training for Domain Adaptation

9 0.49242738 106 nips-2011-Generalizing from Several Related Classification Tasks to a New Unlabeled Sample

10 0.48854959 256 nips-2011-Solving Decision Problems with Limited Information

11 0.47523111 265 nips-2011-Sparse recovery by thresholded non-negative least squares

12 0.46835434 108 nips-2011-Greedy Algorithms for Structurally Constrained High Dimensional Problems

13 0.46453935 291 nips-2011-Transfer from Multiple MDPs

14 0.45902267 22 nips-2011-Active Ranking using Pairwise Comparisons

15 0.45544869 270 nips-2011-Statistical Performance of Convex Tensor Decomposition

16 0.45189765 287 nips-2011-The Manifold Tangent Classifier

17 0.44706109 233 nips-2011-Rapid Deformable Object Detection using Dual-Tree Branch-and-Bound

18 0.44592455 186 nips-2011-Noise Thresholds for Spectral Clustering

19 0.44402179 73 nips-2011-Divide-and-Conquer Matrix Factorization

20 0.44386825 198 nips-2011-On U-processes and clustering performance