iccv iccv2013 iccv2013-93 knowledge-graph by maker-knowledge-mining

93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso


Source: pdf

Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan

Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that the highly correlated data, which are usually from the same subspace, can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they fall short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data correlation dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. [sent-5, score-0.212]

2 It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. [sent-7, score-0.304]

3 In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. [sent-8, score-0.334]

4 A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. [sent-9, score-0.371]

5 The grouping effect ensures that the highly correlated data, which are usually from the same subspace, can be grouped together. [sent-10, score-0.331]

6 SSC, by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. [sent-12, score-0.153]

7 Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. [sent-15, score-0.195]

8 In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. [sent-16, score-0.19]

9 CASS is a data correlation dependent method which simultaneously performs automatic data selection and groups correlated data together. [sent-17, score-0.206]

10 Introduction. This paper focuses on subspace segmentation, the goal of which is to segment a given data set into clusters, ideally with each cluster corresponding to a subspace. [sent-21, score-0.227]

11 Subspace segmentation has many applications, such as motion segmentation [19], face clustering [12], and image segmentation [9], owing to the fact that real-world data often approximately lie in a mixture of subspaces. [sent-44, score-0.256]

12 Assume that the data are drawn from a union of k subspaces {Si}ik=1 of unknown dimensions, respectively. [sent-46, score-0.162]

13 Related work There has been a large body of research on subspace segmentation [23, 3, 13, 24, 17, 5, 8]. [sent-82, score-0.253]

14 Most recently, the Sparse Subspace Clustering (SSC) [3, 4], Low-Rank Representation (LRR) [13, 12, 2], and Least Squares Regression (LSR) [16] techniques have been proposed for subspace segmentation and attracted much attention. [sent-83, score-0.253]

15 These methods learn an affinity matrix whose entries measure the similarities among the data points and then perform spectral clustering on the affinity matrix to segment data. [sent-84, score-0.387]

16 Ideally, the affinity matrix should be block diagonal (or block sparse in vector form), with nonzero entries corresponding to data point pairs from the same subspace. [sent-85, score-0.484]

17 The constructed affinity matrix is usually not block diagonal even under certain strong assumptions. [sent-89, score-0.293]

18 Then it uses the derived representation coefficients to measure the similarities between data points and constructs the affinity matrix. [sent-97, score-0.221]

19 It is shown that, if the subspaces are independent, the sparse representation is block sparse. [sent-98, score-0.22]

20 However, if the data from the same subspace are highly correlated, SSC tends to select only a few of them and ignore the rest. (Footnote 1: A collection of k linear subspaces {Si}ik=1 are independent if and only if Si ∩ Σj≠i Sj = {0}.) [sent-99, score-0.304]

21 Thus SSC may result in a sparse affinity matrix but lead to unsatisfactory performance. [sent-106, score-0.195]

22 Although LRR is guaranteed to produce a block diagonal solution when the data are noise free and drawn from independent subspaces, real data are usually contaminated by noise or outliers. [sent-114, score-0.291]

23 The most recent work uses Least Squares Regression (LSR) [16] for subspace segmentation: min_W ||X − XW||_F^2 + λ||W||_F^2. [sent-120, score-0.176]
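
As a concrete illustration (our own sketch, not code from the paper), the LSR problem above has the closed-form solution W* = (XᵀX + λI)⁻¹XᵀX, obtained by setting the gradient to zero; the function name `lsr` is ours:

```python
import numpy as np

def lsr(X, lam):
    """Least Squares Regression (LSR) coefficients for subspace segmentation.

    Minimizes ||X - XW||_F^2 + lam * ||W||_F^2 over W; setting the
    gradient to zero gives W = (X^T X + lam*I)^{-1} X^T X.
    """
    n = X.shape[1]
    G = X.T @ X
    return np.linalg.solve(G + lam * np.eye(n), G)
```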

24 In fact, for subspace segmentation, both sparsity and grouping effect are very important. [sent-122, score-0.334]

25 Ideally, the affinity matrix should be sparse, with no connection between clusters. [sent-123, score-0.154]

26 On the other hand, the affinity matrix should not be too sparse; i.e., [sent-124, score-0.154]

27 the nonzero connections within a cluster should be sufficient for grouping correlated data from the same subspace. [sent-126, score-0.276]

28 Thus, it is expected that the model can automatically group the correlated data within each cluster (like LRR and LSR) and eliminate the connections between clusters (like SSC). [sent-127, score-0.189]

29 We take advantage of the adaptivity of trace Lasso to regularize the representation coefficient matrix, define an affinity matrix from it, and then segment the data by spectral clustering on the normalized Laplacian. [sent-135, score-0.462]

30 For CASS, we can see that most large representation coefficients cluster on the data points from the same subspace as y. [sent-139, score-0.27]

31 In comparison, the connections within clusters are very sparse by SSC, and the connections between clusters are very dense by LRR and LSR. [sent-140, score-0.165]

32 Contributions. We summarize the contributions of this paper as follows: We propose a new subspace segmentation method, called the Correlation Adaptive Subspace Segmentation (CASS), by using trace Lasso [7]. [sent-143, score-0.443]

33 CASS is the first method that takes the data correlation into account for subspace segmentation. [sent-144, score-0.255]

34 We theoretically prove that trace Lasso has the grouping effect, i.e., the representation coefficients of highly correlated data are approximately equal. [sent-148, score-0.288]

35 A main difference between trace Lasso and the existing norms is that trace Lasso, defined as ||XDiag(w)||∗, involves the data matrix X, which makes it adaptive to the correlation of the data. [sent-156, score-0.533]

36 If the data are highly correlated (e.g., the data points are all the same: X = x1T, XTX = 11T), trace Lasso is equal to the ℓ2 norm ||w||2; if the data are uncorrelated (XTX = I), it is equal to the ℓ1 norm ||w||1. [sent-170, score-0.317]

37 For other cases, trace Lasso interpolates between the ℓ2 norm and the ℓ1 norm. [sent-172, score-0.228]
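
A quick numerical check of the two extreme cases above (an illustration we add; `trace_lasso` is our own helper, not code from the paper):

```python
import numpy as np

def trace_lasso(X, w):
    """Trace Lasso penalty ||X Diag(w)||_* (sum of singular values)."""
    return np.linalg.svd(X @ np.diag(w), compute_uv=False).sum()

rng = np.random.default_rng(0)
w = rng.standard_normal(5)

# Uncorrelated data: orthonormal columns (X^T X = I) -> trace Lasso = ||w||_1.
X_orth, _ = np.linalg.qr(rng.standard_normal((10, 5)))
print(np.isclose(trace_lasso(X_orth, w), np.abs(w).sum()))    # True

# Fully correlated data: identical unit columns (X = x 1^T) -> ||w||_2.
x = rng.standard_normal(10)
x /= np.linalg.norm(x)
X_corr = np.outer(x, np.ones(5))
print(np.isclose(trace_lasso(X_corr, w), np.linalg.norm(w)))  # True
```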

38 We use trace Lasso to adaptively select subsets from all the data, which leads to the Correlation Adaptive Subspace Segmentation (CASS) method. [sent-175, score-0.216]

39 We first consider the subspace segmentation problem with clean data by CASS and then extend it to the noisy case. [sent-176, score-0.279]

40 CASS with clean data. Let X = [x1, · · · , xn] = [X1, · · · , Xk]Γ be a set of data drawn from k subspaces {Si}ik=1, where Xi denotes a collection of ni data points from the i-th subspace Si, n = Σi ni, and Γ is a hypothesized permutation matrix which rearranges the data to the true segmentation. [sent-179, score-0.432]

41 Different from the previous methods SSC, LRR and LSR, CASS uses the trace Lasso as the objective function and solves the following problem: min_w ||XDiag(w)||∗ s.t. y = Xw. (6) [sent-184, score-0.209]
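
A minimal sketch of the clean-data problem (6) above using CVXPY's nuclear-norm atom; `cass_clean` is a hypothetical name and a generic convex solver stands in for the paper's optimizer:

```python
import cvxpy as cp
import numpy as np

def cass_clean(X, y):
    """Solve min_w ||X Diag(w)||_* subject to y = X w (problem (6))."""
    n = X.shape[1]
    w = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.normNuc(X @ cp.diag(w))),
                      [X @ w == y])
    prob.solve()
    return w.value
```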

42 The methods SSC, LRR and LSR show that if the data are sufficiently sampled from independent subspaces, a block diagonal solution can be achieved. [sent-188, score-0.157]

43 The work [16] further shows that it is easy to get a block diagonal solution if the objective function satisfies the Enforced Block Diagonal (EBD) conditions. [sent-189, score-0.164]

44 But the EBD conditions cannot be applied to trace Lasso directly, since trace Lasso is a function involving both the data X and w. [sent-190, score-0.44]

45 Here we extend the EBD conditions [16] to the Enforced Block Sparse (EBS) conditions and show that the obtained solution is block sparse when the objective function satisfies the EBS conditions. [sent-191, score-0.229]

46 Trace Lasso is a special case which satisfies the EBS conditions and thus leads to a block sparse solution. [sent-192, score-0.195]

47 The solution to y = Xw (7) is guaranteed to be block sparse when the subspaces are independent. [sent-200, score-0.312]

48 Theorem 1. Let X = [x1, · · · , xn] = [X1, · · · , Xk]Γ ∈ Rd×n be a data matrix whose column vectors are sufficiently drawn from a union of k independent subspaces {Si}ik=1, xj ≠ 0, j = 1, · · · , n. [sent-201, score-0.215]

49 The EBS conditions greatly extend the family of objective functions that enjoy the block sparse property. [sent-237, score-0.17]

50 It is easy to check that trace Lasso satisfies the EBS conditions. [sent-238, score-0.215]

51 The affinity matrices derived by (a) SSC, (b) LRR, (c) LSR, and (d) CASS on the Extended Yale B database (10 subjects). [sent-246, score-0.175]

52 In a similar way, CASS has the block sparse property. Theorem 2. Let X = [x1, · · · , xn] = [X1, · · · , Xk]Γ ∈ Rd×n be a data matrix whose column vectors are sufficiently drawn from a union of k independent subspaces {Si}ik=1, xj ≠ 0, j = 1, · · · , n. [sent-250, score-0.372]

53 The block sparse property of CASS is the same as those of SSC, LRR and LSR when the data are drawn from independent subspaces. [sent-266, score-0.18]

54 This is also the motivation for using trace Lasso for subspace segmentation. [sent-267, score-0.366]

55 For the noisy case, different from the previous methods, CASS may also lead to a solution which is close to block sparse, and it also has the grouping effect (see Section 2). [sent-268, score-0.224]

56 It is important for subspace segmentation but not the main focus of this paper. [sent-282, score-0.253]

57 In the case of data contaminated with noise, it is difficult to obtain a block sparse solution. [sent-283, score-0.151]

58 Though the representation coefficient derived by SSC tends to be sparse, it is unable to group correlated data together. [sent-284, score-0.179]

59 CASS, by using trace Lasso, takes the correlation of the data into account, which strikes a tradeoff between sparsity and the grouping effect. [sent-286, score-0.396]

60 We can see that the coefficient matrix derived by SSC is so sparse that it is even difficult to identify how many groups there are. [sent-292, score-0.167]

61 Our proposed method CASS, by using trace Lasso, achieves a more accurate coefficient matrix, which is close to block diagonal, and it also groups data within each cluster. [sent-299, score-0.369]

62 Such intuition shows that CASS more accurately reveals the true data structure for subspace segmentation. [sent-300, score-0.202]

63 In this work, we show that trace Lasso also has the grouping effect for correlated data. [sent-307, score-0.378]

64 The grouping effect of CASS may be weaker than those of LRR and LSR, since it also encourages sparsity between clusters (Algorithm 1: Solving Problem (9) by ADM; input: data matrix X, parameter λ). [sent-317, score-0.19]

65 Nevertheless, the grouping effect is sufficient for grouping correlated data together. [sent-332, score-0.212]
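
The paper's Algorithm 1 solves the noisy problem (9) by ADM; below is a hedged ADMM-style sketch under the assumption that problem (9) is min_w (1/2)||y − Xw||₂² + λ||XDiag(w)||∗, with the splitting J = XDiag(w). The update rules, fixed penalty μ, and iteration count are our simplifications, not the paper's exact steps:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def cass_adm(X, y, lam=0.1, mu=1.0, n_iter=200):
    """ADM-style sketch for min_w 0.5*||y - Xw||^2 + lam*||X Diag(w)||_*."""
    d, n = X.shape
    w = np.zeros(n)
    J = np.zeros((d, n))
    L = np.zeros((d, n))           # Lagrange multiplier
    col_sq = (X ** 2).sum(axis=0)  # ||x_i||^2 for each column
    for _ in range(n_iter):
        # J-step: prox of the nuclear norm (singular value thresholding).
        J = svt(X @ np.diag(w) + L / mu, lam / mu)
        # w-step: quadratic; solve (X^T X + mu*D) w = X^T y + mu*c,
        # where D = diag(||x_i||^2) and c_i = x_i^T (J - L/mu)_i.
        B = J - L / mu
        c = (X * B).sum(axis=0)
        w = np.linalg.solve(X.T @ X + mu * np.diag(col_sq), X.T @ y + mu * c)
        # Multiplier update.
        L = L + mu * (X @ np.diag(w) - J)
    return w
```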

66 Construct the affinity matrix by (|W∗| + |W∗T|)/2. [sent-345, score-0.154]

67 The segmentation algorithm. For solving the subspace segmentation problem by trace Lasso, we first solve problem (9) for each data point xi, with Xˆi = [x1, · · · , xi−1, xi+1, · · · , xn] which excludes xi itself, and obtain the corresponding coefficients. [sent-354, score-0.608]

68 The affinity matrix is defined as (|W∗| + |W∗T|)/2; finally, we use the Normalized Cuts (NCuts) [20] to segment the data into k groups. [sent-356, score-0.154]
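
Putting these steps together, a sketch of the full pipeline (per-point coding with the `cass_adm` sketch above, symmetrized affinity, then off-the-shelf spectral clustering as a stand-in for NCuts):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cass_segment(X, k, lam=0.1):
    """Hypothetical end-to-end CASS pipeline: code each point against
    the others, symmetrize the coefficients, and spectrally cluster."""
    n = X.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        Xi = np.delete(X, i, axis=1)          # exclude x_i itself
        wi = cass_adm(Xi, X[:, i], lam=lam)   # coefficients of x_i
        W[np.arange(n) != i, i] = wi
    A = (np.abs(W) + np.abs(W.T)) / 2         # affinity matrix
    return SpectralClustering(n_clusters=k,
                              affinity='precomputed').fit_predict(A)
```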

69 Experiments. In this section, we apply CASS to subspace segmentation on three databases: the Hopkins 155 motion database, the Extended Yale B database [6], and the MNIST database of handwritten digits. [sent-360, score-0.315]

70 CASS is compared with SSC, LRR and LSR, which are representative state-of-the-art methods for subspace segmentation. [sent-361, score-0.176]

71 The derived affinity matrices from all algorithms are also evaluated for the semi-supervised learning task on the Extended Yale B database. [sent-362, score-0.152]

72 The segmentation accuracy/error is used to evaluate the subspace segmentation performance. [sent-365, score-0.33]
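
Segmentation accuracy is conventionally computed under the best one-to-one matching between predicted and ground-truth labels; a sketch using the Hungarian algorithm (the paper does not spell out its evaluation code, so this is our assumption about the metric):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def segmentation_accuracy(labels_true, labels_pred):
    """Clustering accuracy under the best one-to-one label matching."""
    k = max(labels_true.max(), labels_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(labels_true, labels_pred):
        cost[t, p] += 1
    row, col = linear_sum_assignment(-cost)   # maximize matched points
    return cost[row, col].sum() / len(labels_true)
```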

73 Each sequence is a separate data set, and so there are 156 subspace segmentation problems in total. [sent-393, score-0.279]

74 Extended Yale B is challenging for subspace segmentation due to heavy noise. [sent-396, score-0.253]

75 We construct three subspace segmentation tasks based on the face images of the first 5, 8, and 10 subjects of this database. [sent-399, score-0.32]

76 The data are first projected onto a 5×6-, 8×6-, and 10×6-dimensional subspace by PCA, respectively. [sent-400, score-0.202]

77 To further evaluate the effectiveness of CASS for other learning problems, we also use the derived affinity matrix for semi-supervised learning. [sent-402, score-0.187]

78 Our goal is to predict the labels of the test data by Markov random walks [21] on the affinity matrices learnt by kNN, SSC, LRR, LSR and CASS. [sent-408, score-0.168]
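
For intuition, a simple label-propagation sketch on a learned affinity matrix; this is a generic stand-in we add for illustration, not the exact Markov-random-walk method of [21]:

```python
import numpy as np

def propagate_labels(A, Y0, n_iter=50, eps=1e-12):
    """Generic label propagation on an affinity matrix A (n x n).

    Y0: (n, k) array with one-hot rows for labeled points and zero rows
    for unlabeled ones. Labels diffuse through the row-stochastic
    transition matrix while labeled points stay clamped.
    """
    labeled = Y0.sum(axis=1) > 0
    P = A / (A.sum(axis=1, keepdims=True) + eps)  # transition probabilities
    F = Y0.astype(float).copy()
    for _ in range(n_iter):
        F = P @ F
        F[labeled] = Y0[labeled]                  # clamp known labels
    return F.argmax(axis=1)
```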

79 The MNIST database of handwritten digits is also widely used in subspace learning and clustering [11]. [sent-411, score-0.259]

80 Third, the correlation of the data is strong, as the dimension of each affine subspace is no more than three [3, 16]; thus CASS tends to be close to LSR in this case. [sent-428, score-0.255]

81 The affinity matrices derived by (a) SSC, (b) LRR, (c) LSR, and (d) CASS on the MNIST database. [sent-439, score-0.175]

82 For these two clustering tasks, both LRR and LSR perform much better than SSC, which can be attributed to the strong grouping effect of the two methods. [sent-456, score-0.166]

83 CASS not only preserves the grouping effect within clusters but also enhances the sparsity between clusters. [sent-458, score-0.183]

84 It confirms that CASS usually leads to an approximately block diagonal affinity matrix, which results in a more accurate segmentation. [sent-460, score-0.388]

85 This example shows that the affinity matrix by trace Lasso is also effective for semi-supervised learning. [sent-467, score-0.344]

86 The comparison of the derived affinity matrices by these four methods is illustrated in Figure 4. [sent-469, score-0.175]

87 We can see that CASS obtains an affinity matrix which is close to block diagonal while preserving the grouping effect. [sent-470, score-0.391]

88 The main reason may lie in the fact that the handwritten digit data do not fit the subspace structure well. [sent-474, score-0.232]

89 This is also the main challenge for real-world applications of subspace segmentation. [sent-475, score-0.176]

90 Conclusions and Future Work. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using the trace Lasso. [sent-477, score-0.19]

91 The adaptive advantage of CASS comes from the mechanism of trace Lasso, which balances between the ℓ1 norm and the ℓ2 norm. [sent-479, score-0.257]

92 In theory, we show that CASS is able to reveal the true segmentation result when the subspaces are independent. [sent-482, score-0.161]

93 The grouping effect of trace Lasso is first established in this work. [sent-483, score-0.319]

94 It may be better to learn a compact and discriminative dictionary for trace Lasso. [sent-488, score-0.19]

95 Second, trace Lasso may have many other applications. [sent-489, score-0.19]

96 Third, more scalable optimization algorithms should be developed for large scale subspace segmentation. [sent-492, score-0.176]

97 Trace Lasso: a trace norm regularization for correlated designs. [sent-546, score-0.269]

98 Simultaneous tensor subspace selection and clustering: the equivalence of high order SVD and k-means clustering. [sent-577, score-0.176]

99 Latent low-rank representation for subspace segmentation and feature extraction. [sent-597, score-0.253]

100 Robust and efficient subspace segmentation via least squares regression. [sent-615, score-0.272]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('cass', 0.658), ('lsr', 0.28), ('lrr', 0.268), ('ssc', 0.259), ('xdiag', 0.215), ('lasso', 0.214), ('trace', 0.19), ('subspace', 0.176), ('ebs', 0.145), ('affinity', 0.119), ('grouping', 0.098), ('block', 0.095), ('subspaces', 0.084), ('yale', 0.077), ('segmentation', 0.077), ('xw', 0.072), ('xtx', 0.069), ('xizi', 0.063), ('correlated', 0.059), ('ik', 0.053), ('correlation', 0.053), ('adm', 0.052), ('mnist', 0.049), ('si', 0.047), ('mwin', 0.046), ('subjects', 0.046), ('diagonal', 0.044), ('coefficient', 0.042), ('ebd', 0.042), ('sparse', 0.041), ('hopkins', 0.039), ('adaptive', 0.039), ('connections', 0.039), ('interpolates', 0.038), ('xjzj', 0.038), ('clustering', 0.037), ('accuracies', 0.036), ('matrix', 0.035), ('rd', 0.035), ('drawn', 0.034), ('conditions', 0.034), ('noises', 0.033), ('derived', 0.033), ('zi', 0.033), ('xi', 0.031), ('effect', 0.031), ('handwritten', 0.03), ('sparsity', 0.029), ('nonzero', 0.029), ('zk', 0.029), ('extended', 0.028), ('balances', 0.028), ('jt', 0.028), ('coefficients', 0.027), ('data', 0.026), ('wb', 0.026), ('cluster', 0.025), ('diag', 0.025), ('satisfies', 0.025), ('orm', 0.023), ('enforced', 0.023), ('matrices', 0.023), ('isj', 0.022), ('canyi', 0.022), ('xj', 0.021), ('xn', 0.021), ('face', 0.021), ('clusters', 0.021), ('equality', 0.02), ('zj', 0.02), ('permutation', 0.02), ('norm', 0.02), ('squares', 0.019), ('group', 0.019), ('regarded', 0.019), ('solves', 0.019), ('independent', 0.018), ('union', 0.018), ('entry', 0.018), ('approximately', 0.018), ('lemma', 0.018), ('theorem', 0.017), ('wt', 0.017), ('singapore', 0.017), ('regression', 0.017), ('yan', 0.017), ('corruptions', 0.017), ('xk', 0.017), ('yt', 0.016), ('adaptively', 0.016), ('separable', 0.016), ('uncorrelated', 0.016), ('groups', 0.016), ('points', 0.016), ('wi', 0.016), ('tpami', 0.016), ('database', 0.016), ('sign', 0.015), ('contaminated', 0.015), ('ni', 0.015)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999994 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso

Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan

Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that the highly correlated data, which are usually from the same subspace, can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they fall short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data correlation dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.

2 0.31962049 232 iccv-2013-Latent Space Sparse Subspace Clustering

Author: Vishal M. Patel, Hien Van Nguyen, René Vidal

Abstract: We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on the kernel methods are presented. One of the main advantages of our method is that it is computationally efficient as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than the competitive state-of-theart subspace clustering methods.

3 0.31518185 122 iccv-2013-Distributed Low-Rank Subspace Segmentation

Author: Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I. Jordan

Abstract: Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data. Low-Rank Representation (LRR), a convex formulation of the subspace segmentation problem, is provably and empirically accurate on small problems but does not scale to the massive sizes of modern vision datasets. Moreover, past work aimed at scaling up low-rank matrix factorization is not applicable to LRR given its non-decomposable constraints. In this work, we propose a novel divide-and-conquer algorithm for large-scale subspace segmentation that can cope with LRR ’s non-decomposable constraints and maintains LRR ’s strong recovery guarantees. This has immediate implications for the scalability of subspace segmentation, which we demonstrate on a benchmark face recognition dataset and in simulations. We then introduce novel applications of LRR-based subspace segmentation to large-scale semisupervised learning for multimedia event detection, concept detection, and image tagging. In each case, we obtain stateof-the-art results and order-of-magnitude speed ups.

4 0.29619119 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization

Author: Yingya Zhang, Zhenan Sun, Ran He, Tieniu Tan

Abstract: Subspace clustering has important and wide applications in computer vision and pattern recognition. It is a challenging task to learn low-dimensional subspace structures due to the possible errors (e.g., noise and corruptions) existing in high-dimensional data. Recent subspace clustering methods usually assume a sparse representation of corrupted errors and correct the errors iteratively. However large corruptions in real-world applications can not be well addressed by these methods. A novel optimization model for robust subspace clustering is proposed in this paper. The objective function of our model mainly includes two parts. The first part aims to achieve a sparse representation of each high-dimensional data point with other data points. The second part aims to maximize the correntropy between a given data point and its low-dimensional representation with other points. Correntropy is a robust measure so that the influence of large corruptions on subspace clustering can be greatly suppressed. An extension of our method with explicit introduction of representation error terms into the model is also proposed. Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. Experimental results on Hopkins 155 dataset and Extended Yale Database B demonstrate that our method outperforms state-of-the-art subspace clustering methods.

5 0.23962815 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering

Author: Canyi Lu, Jinhui Tang, Min Lin, Liang Lin, Shuicheng Yan, Zhouchen Lin

Abstract: In this paper, we study the robust subspace clustering problem, which aims to cluster the given possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focus on the graph construction by different regularization of the representation coefficient. We instead focus on the robustness of the model to non-Gaussian noises. We propose a new robust clustering method by using the correntropy induced metric, which is robust for handling the non-Gaussian and impulsive noises. Also we further extend the method for handling the data with outlier rows/features. The multiplicative form of half-quadratic optimization is used to optimize the nonconvex correntropy objective function of the proposed models. Extensive experiments on face datasets well demonstrate that the proposed methods are more robust to corruptions and occlusions.

6 0.23467964 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

7 0.13120991 264 iccv-2013-Minimal Basis Facility Location for Subspace Segmentation

8 0.096483648 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

9 0.095152661 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity

10 0.088976666 235 iccv-2013-Learning Coupled Feature Spaces for Cross-Modal Matching

11 0.086312041 400 iccv-2013-Stable Hyper-pooling and Query Expansion for Event Detection

12 0.082509518 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

13 0.078557439 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

14 0.073835395 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

15 0.070854202 361 iccv-2013-Robust Trajectory Clustering for Motion Segmentation

16 0.06733273 167 iccv-2013-Finding Causal Interactions in Video Sequences

17 0.062961318 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications

18 0.060542978 140 iccv-2013-Elastic Net Constraints for Shape Matching

19 0.056740172 290 iccv-2013-New Graph Structured Sparsity Model for Multi-label Image Annotations

20 0.054960884 354 iccv-2013-Robust Dictionary Learning by Error Source Decomposition


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.124), (1, 0.017), (2, -0.062), (3, -0.016), (4, -0.164), (5, 0.127), (6, 0.014), (7, 0.186), (8, 0.18), (9, 0.008), (10, 0.104), (11, 0.02), (12, -0.171), (13, 0.016), (14, -0.128), (15, -0.076), (16, -0.007), (17, -0.05), (18, 0.011), (19, 0.035), (20, -0.019), (21, 0.198), (22, -0.089), (23, -0.13), (24, -0.03), (25, -0.145), (26, -0.077), (27, 0.049), (28, -0.041), (29, -0.044), (30, -0.029), (31, -0.016), (32, 0.067), (33, -0.02), (34, -0.061), (35, 0.035), (36, -0.011), (37, -0.016), (38, -0.001), (39, 0.006), (40, 0.012), (41, -0.024), (42, -0.013), (43, -0.052), (44, 0.021), (45, -0.083), (46, 0.019), (47, -0.039), (48, -0.026), (49, -0.012)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95298845 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso

Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan

Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that the highly correlated data, which are usually from the same subspace, can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they fall short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data correlation dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.

2 0.95169562 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization

Author: Yingya Zhang, Zhenan Sun, Ran He, Tieniu Tan

Abstract: Subspace clustering has important and wide applications in computer vision and pattern recognition. It is a challenging task to learn low-dimensional subspace structures due to the possible errors (e.g., noise and corruptions) existing in high-dimensional data. Recent subspace clustering methods usually assume a sparse representation of corrupted errors and correct the errors iteratively. However large corruptions in real-world applications can not be well addressed by these methods. A novel optimization model for robust subspace clustering is proposed in this paper. The objective function of our model mainly includes two parts. The first part aims to achieve a sparse representation of each high-dimensional data point with other data points. The second part aims to maximize the correntropy between a given data point and its low-dimensional representation with other points. Correntropy is a robust measure so that the influence of large corruptions on subspace clustering can be greatly suppressed. An extension of our method with explicit introduction of representation error terms into the model is also proposed. Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. Experimental results on Hopkins 155 dataset and Extended Yale Database B demonstrate that our method outperforms state-of-the-art subspace clustering methods.

3 0.88180923 232 iccv-2013-Latent Space Sparse Subspace Clustering

Author: Vishal M. Patel, Hien Van Nguyen, René Vidal

Abstract: We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on the kernel methods are presented. One of the main advantages of our method is that it is computationally efficient as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than the competitive state-of-theart subspace clustering methods.

4 0.85630471 122 iccv-2013-Distributed Low-Rank Subspace Segmentation

Author: Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I. Jordan

Abstract: Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data. Low-Rank Representation (LRR), a convex formulation of the subspace segmentation problem, is provably and empirically accurate on small problems but does not scale to the massive sizes of modern vision datasets. Moreover, past work aimed at scaling up low-rank matrix factorization is not applicable to LRR given its non-decomposable constraints. In this work, we propose a novel divide-and-conquer algorithm for large-scale subspace segmentation that can cope with LRR ’s non-decomposable constraints and maintains LRR ’s strong recovery guarantees. This has immediate implications for the scalability of subspace segmentation, which we demonstrate on a benchmark face recognition dataset and in simulations. We then introduce novel applications of LRR-based subspace segmentation to large-scale semisupervised learning for multimedia event detection, concept detection, and image tagging. In each case, we obtain stateof-the-art results and order-of-magnitude speed ups.

5 0.85336375 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering

Author: Canyi Lu, Jinhui Tang, Min Lin, Liang Lin, Shuicheng Yan, Zhouchen Lin

Abstract: In this paper, we study the robust subspace clustering problem, which aims to cluster the given possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focus on the graph construction by different regularization of the representation coefficient. We instead focus on the robustness of the model to non-Gaussian noises. We propose a new robust clustering method by using the correntropy induced metric, which is robust for handling the non-Gaussian and impulsive noises. Also we further extend the method for handling the data with outlier rows/features. The multiplicative form of half-quadratic optimization is used to optimize the nonconvex correntropy objective function of the proposed models. Extensive experiments on face datasets well demonstrate that the proposed methods are more robust to corruptions and occlusions.

6 0.81034327 264 iccv-2013-Minimal Basis Facility Location for Subspace Segmentation

7 0.70496768 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

8 0.67556947 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

9 0.6638965 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity

10 0.53366435 329 iccv-2013-Progressive Multigrid Eigensolvers for Multiscale Spectral Segmentation

11 0.41121718 235 iccv-2013-Learning Coupled Feature Spaces for Cross-Modal Matching

12 0.38875076 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

13 0.38827959 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video

14 0.34216523 450 iccv-2013-What is the Most EfficientWay to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?

15 0.33228031 443 iccv-2013-Video Synopsis by Heterogeneous Multi-source Correlation

16 0.32526055 357 iccv-2013-Robust Matrix Factorization with Unknown Noise

17 0.32495302 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

18 0.32347682 167 iccv-2013-Finding Causal Interactions in Video Sequences

19 0.31772155 29 iccv-2013-A Scalable Unsupervised Feature Merging Approach to Efficient Dimensionality Reduction of High-Dimensional Visual Data

20 0.31545147 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.062), (7, 0.018), (26, 0.075), (27, 0.01), (31, 0.042), (34, 0.016), (40, 0.015), (42, 0.196), (48, 0.012), (64, 0.02), (73, 0.023), (78, 0.017), (89, 0.094), (95, 0.01), (96, 0.25), (97, 0.015), (98, 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.79162115 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso

Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan

Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that the highly correlated data, which are usually from the same subspace, can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they fall short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data correlation dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.

2 0.74313188 173 iccv-2013-Fluttering Pattern Generation Using Modified Legendre Sequence for Coded Exposure Imaging

Author: Hae-Gon Jeon, Joon-Young Lee, Yudeog Han, Seon Joo Kim, In So Kweon

Abstract: Finding a good binary sequence is critical in determining theperformance ofthe coded exposure imaging, butprevious methods mostly rely on a random search for finding the binary codes, which could easily fail to find good long sequences due to the exponentially growing search space. In this paper, we present a new computationally efficient algorithm for generating the binary sequence, which is especially well suited for longer sequences. We show that the concept of the low autocorrelation binary sequence that has been well exploited in the information theory community can be applied for generating the fluttering patterns of the shutter, propose a new measure of a good binary sequence, and present a new algorithm by modifying the Legendre sequence for the coded exposure imaging. Experiments using both synthetic and real data show that our new algorithm consistently generates better binary sequencesfor the coded exposure problem, yielding better deblurring and resolution enhancement results compared to the previous methods for generating the binary codes.

3 0.67699534 22 iccv-2013-A New Adaptive Segmental Matching Measure for Human Activity Recognition

Author: Shahriar Shariat, Vladimir Pavlovic

Abstract: The problem of human activity recognition is a central problem in many real-world applications. In this paper we propose a fast and effective segmental alignmentbased method that is able to classify activities and interactions in complex environments. We empirically show that such model is able to recover the alignment that leads to improved similarity measures within sequence classes and hence, raises the classification performance. We also apply a bounding technique on the histogram distances to reduce the computation of the otherwise exhaustive search.

4 0.67349201 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables

Author: Daozheng Chen, Dhruv Batra, William T. Freeman

Abstract: Latent variables models have been applied to a number of computer vision problems. However, the complexity of the latent space is typically left as a free design choice. A larger latent space results in a more expressive model, but such models are prone to overfitting and are slower to perform inference with. The goal of this paper is to regularize the complexity of the latent space and learn which hidden states are really relevant for prediction. Specifically, we propose using group-sparsity-inducing regularizers such as ?1-?2 to estimate the parameters of Structured SVMs with unstructured latent variables. Our experiments on digit recognition and object detection show that our approach is indeed able to control the complexity of latent space without any significant loss in accuracy of the learnt model.

5 0.67216671 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition

Author: De-An Huang, Yu-Chiang Frank Wang

Abstract: Cross-domain image synthesis and recognition are typically considered as two distinct tasks in the areas of computer vision and pattern recognition. Therefore, it is not clear whether approaches addressing one task can be easily generalized or extended for solving the other. In this paper, we propose a unified model for coupled dictionary and feature space learning. The proposed learning model not only observes a common feature space for associating cross-domain image data for recognition purposes, the derived feature space is able to jointly update the dictionaries in each image domain for improved representation. This is why our method can be applied to both cross-domain image synthesis and recognition problems. Experiments on a variety of synthesis and recognition tasks such as single image super-resolution, cross-view action recognition, and sketchto-photo face recognition would verify the effectiveness of our proposed learning model.

6 0.66669786 213 iccv-2013-Implied Feedback: Learning Nuances of User Behavior in Image Search

7 0.66336775 70 iccv-2013-Cascaded Shape Space Pruning for Robust Facial Landmark Detection

8 0.66286659 422 iccv-2013-Toward Guaranteed Illumination Models for Non-convex Objects

9 0.66145313 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

10 0.66093588 167 iccv-2013-Finding Causal Interactions in Video Sequences

11 0.65730482 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding

12 0.65580052 54 iccv-2013-Attribute Pivots for Guiding Relevance Feedback in Image Search

13 0.65455687 46 iccv-2013-Allocentric Pose Estimation

14 0.65370774 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

15 0.65357423 254 iccv-2013-Live Metric 3D Reconstruction on Mobile Phones

16 0.65248042 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

17 0.65170676 184 iccv-2013-Global Fusion of Relative Motions for Robust, Accurate and Scalable Structure from Motion

18 0.64962965 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

19 0.64910752 231 iccv-2013-Latent Multitask Learning for View-Invariant Action Recognition

20 0.6457957 44 iccv-2013-Adapting Classification Cascades to New Domains