iccv iccv2013 iccv2013-232 knowledge-graph by maker-knowledge-mining

232 iccv-2013-Latent Space Sparse Subspace Clustering


Source: pdf

Author: Vishal M. Patel, Hien Van Nguyen, René Vidal

Abstract: We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on kernel methods are presented. One of the main advantages of our method is that it is computationally efficient, as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than competitive state-of-the-art subspace clustering methods.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. [sent-4, score-0.606]

2 Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. [sent-5, score-0.485]

3 Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. [sent-6, score-0.527]

4 One of the main advantages of our method is that it is computationally efficient as the sparse coefficients are found in the low-dimensional latent space. [sent-8, score-0.383]

5 Various experiments show that the proposed method performs better than competitive state-of-the-art subspace clustering methods. [sent-9, score-0.68]

6 For instance, it is well known that the set of face images under all possible illumination conditions can be well approximated by a 9-dimensional linear subspace [2]. [sent-13, score-0.408]

7 Similarly, trajectories of a rigidly moving object in a video [4] and hand-written digits with different variations [10] also lie in low-dimensional subspaces. [sent-14, score-0.307]

8 Therefore, one can view the collection of data from different classes as samples from a union of low-dimensional subspaces. [sent-15, score-0.166]

9 In subspace clustering, given the data from a union of subspaces, the objective is to find the number of subspaces, their dimensions, the segmentation of the data and a basis for each subspace [25]. [sent-16, score-0.955]

10 Various algorithms have been proposed in the literature for subspace clustering. [sent-17, score-0.408]

11 Some of these algorithms are iterative in nature [11], [31] while the others are based on spectral clustering [3], [9], [29], [5]. [sent-18, score-0.345]

12 Overview of the proposed latent space sparse subspace clustering method. [sent-22, score-0.976]

13 In particular, sparse representation and low-rank approximation-based methods for subspace clustering [15], [7], [5], [6], [26], [20], [16] have gained a lot of traction in recent years. [sent-24, score-0.827]

14 These methods find a sparse and low-rank representation of the data and build a similarity graph on the sparse coefficient matrix for segmenting the data. [sent-25, score-0.366]

15 Computation of sparse and low-rank representations is very computationally demanding, especially when the dimension of the features is high [6]. [sent-29, score-0.18]

16 This is one of the drawbacks of the sparse and low-rank methods. [sent-30, score-0.147]

17 To deal with this problem, dimensionality reduction is generally applied to the data prior to applying these algorithms. [sent-31, score-0.147]

18 However, a well-learned projection matrix can lead to a higher clustering accuracy at a lower dimensionality. [sent-33, score-0.377]

19 Several works have been proposed in the literature that find a sparse representation on a low-dimensional latent space [17], [30], [19]. [sent-34, score-0.296]

20 Motivated by some of the sparsity promoting dimensionality reduction methods, in this paper, we propose a method for simultaneous dimensionality reduction and subspace clustering under the framework of SSC. [sent-36, score-1.169]

21 We learn the transformation of data from the original space onto a low-dimensional space such that its manifold structure is maintained. [sent-37, score-0.158]

22 An efficient algorithm is proposed that simultaneously learns the projection and finds the sparse coefficients in the low-dimensional latent space. [sent-38, score-0.485]

23 Finally, the segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from the sparse coefficients. [sent-39, score-0.527]

24 Figure 1 presents an overview of our subspace clustering method. [sent-41, score-0.68]

25 A simple iterative procedure is introduced for solving the proposed optimization problem. [sent-43, score-0.232]

26 Sections 3 and 4 give the details of our linear and non-linear simultaneous dimensionality reduction and subspace clustering approaches, respectively. [sent-49, score-0.897]

27 Let Y = [y1, ···, yN] ∈ R^{D×N} be a collection of N signals {yi ∈ R^D} (i = 1, ..., N) drawn from a union of n linear subspaces S1 ∪ S2 ∪ ··· ∪ Sn of dimensions {dℓ} (ℓ = 1, ..., n). [sent-53, score-0.283]

28 That is, one can represent yi as yi = Yci with cii = 0, where the coefficient vector ci is sparse. [sent-65, score-0.176]

29 The following ℓ1-minimization problem is solved to obtain the coefficients. [sent-73, score-0.129]

30 min_C ‖C‖1 subject to Y = YC, diag(C) = 0, (2) where C = [c1, ···, cN] ∈ R^{N×N} is the coefficient matrix whose column ci is the sparse representation vector corresponding to yi. [sent-84, score-0.252]

31 Once C is found, spectral clustering methods [18] are applied on the affinity matrix W = |C| + |C|^T to obtain the segmentation of the data Y into Y1, Y2, ···, Yn. [sent-85, score-0.464]
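As a concrete illustration of this pipeline, here is a minimal sketch in which the equality-constrained program (2) is replaced by its common Lasso relaxation (solved one column at a time) and off-the-shelf scikit-learn routines stand in for the paper's solvers; it is a sketch of the generic SSC recipe, not the authors' implementation.

```python
# Minimal SSC-style pipeline: code each column of Y as a sparse combination
# of the other columns (Lasso relaxation of problem (2), diag(C) = 0), then
# run spectral clustering on the symmetrized affinity W = |C| + |C|^T.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc_sketch(Y, n_clusters, lam=0.01):
    D, N = Y.shape
    C = np.zeros((N, N))
    for i in range(N):
        idx = [j for j in range(N) if j != i]        # enforce diag(C) = 0
        lasso = Lasso(alpha=lam, max_iter=5000)
        lasso.fit(Y[:, idx], Y[:, i])
        C[idx, i] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                       # affinity matrix
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity='precomputed').fit_predict(W)
    return labels, C
```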

32 In practice, the data may lie in a union of affine subspaces. [sent-98, score-0.173]

33 In this case, the following problem can be solved to obtain the sparse coefficients: min_C ‖C‖1 subject to Y = YC, C^T 1 = 1, diag(C) = 0, where the additional constraint accounts for the affine structure. [sent-99, score-0.276]

34 Latent Space SSC (LS3C) Different from the traditional SSC, we develop an algorithm that embeds signals into a low-dimensional space and simultaneously finds the sparse codes in that space. [sent-106, score-0.254]

35 Let P ∈ R^{t×D} be a matrix representing a linear transformation that maps signals from the original space R^D to a latent output space of dimension t. [sent-107, score-0.256]

36 We can learn the mapping and find the sparse codes simultaneously by minimizing a cost function: [P*, C*] = argmin_{P,C} J(P, C, Y) subject to PP^T = I, diag(C) = 0, (5) where J(P, C, Y) couples a sparse self-expression term on the projected data PY with a data-fidelity term keeping Y well approximated by P^T PY, plus an ℓ1 penalty on C. [sent-108, score-0.18]

37 Note that the above formulation can be extended so that it can deal with data that lie in a union of affine subspaces. [sent-120, score-0.173]

38 Intuitively, the above proposition says that the projection can be written as a linear combination of the data samples. [sent-126, score-0.193]
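A minimal sketch of the resulting alternating minimization, under an assumed form of the cost, J(P, C, Y) = ‖Y − P^T PY‖²_F + λ1‖PY − PYC‖²_F + λ2‖C‖₁ with PP^T = I and diag(C) = 0; the exact weighting and update rules of the paper may differ, and the Lasso C-step is again a relaxation of the ℓ1 problem.

```python
import numpy as np
from sklearn.linear_model import Lasso

def ls3c_sketch(Y, t, lam1=1.0, lam2=0.01, n_iter=10):
    # Alternating minimization for the assumed LS3C-style cost above.
    D, N = Y.shape
    C = np.zeros((N, N))
    P = None
    for _ in range(n_iter):
        # P-step: with C fixed, both quadratic terms are traces of the form
        # trace(P M P^T) under PP^T = I, so the optimal rows of P are the
        # top-t eigenvectors of M = Y Y^T - lam1 * (Y(I - C))(Y(I - C))^T.
        # This mirrors the trace formulation around (13).
        R = Y @ (np.eye(N) - C)
        M = Y @ Y.T - lam1 * R @ R.T
        _, eigvecs = np.linalg.eigh(M)           # eigenvalues ascending
        P = eigvecs[:, -t:].T                    # t x D with PP^T = I
        # C-step: sparse self-expression in the latent space, with
        # diag(C) = 0 enforced by excluding point i from its dictionary.
        Z = P @ Y                                # t x N latent data
        for i in range(N):
            idx = [j for j in range(N) if j != i]
            C[idx, i] = Lasso(alpha=lam2, max_iter=2000).fit(
                Z[:, idx], Z[:, i]).coef_
    return P, C
```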

39 Using the eigendecomposition K = VSV^T, we get K^T Q^T K = V S^{1/2} M M^T S^{1/2} V^T, where M = S^{1/2} V^T Ψ; as a result, (13) can be rewritten as a trace involving M. [sent-166, score-0.133]

40 Non-Linear Latent Space SSC (NLS3C) In [6], it was shown that the performance of the SSC method depends on the principal angles between subspaces and the distribution of the data in each subspace. [sent-205, score-0.265]

41 The smallest principal angle θij between two subspaces Si and Sj is defined by cos(θij) = max over ai ∈ Si, aj ∈ Sj of ⟨ai, aj⟩ / (‖ai‖2 ‖aj‖2). [sent-206, score-0.295]
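A small utility for this quantity (SciPy also ships scipy.linalg.subspace_angles for the full set of principal angles; the sketch below computes just the smallest one from orthonormal bases):

```python
import numpy as np

def smallest_principal_angle(A, B):
    # Smallest principal angle (radians) between span(A) and span(B):
    # cos(theta) is the largest singular value of Ua^T Ub, where Ua and Ub
    # are orthonormal bases of the two subspaces.
    Ua, _ = np.linalg.qr(A)
    Ub, _ = np.linalg.qr(B)
    s = np.linalg.svd(Ua.T @ Ub, compute_uv=False)
    return float(np.arccos(np.clip(s[0], -1.0, 1.0)))
```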

42 In particular, when the smallest principal angle between subspaces and the number of data points in each subspace are small, the SSC clustering error increases. [sent-211, score-1.099]

43 By mapping the data onto a high-dimensional feature space and then projecting back onto a low-dimensional space, one may be able to deal with this small-principal-angle problem. [sent-212, score-0.123]

44 Some commonly used kernels include polynomial kernels κ(x, y) = (⟨x, y⟩ + c)^b and Gaussian kernels κ(x, y) = exp(−‖x − y‖²/σ²). [sent-219, score-0.119]
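For reference, a sketch of these two kernel families; the parameter names c, b, and σ are illustrative defaults rather than values taken from the paper:

```python
import numpy as np

def polynomial_kernel(X, Z, c=1.0, b=2):
    # kappa(x, z) = (<x, z> + c)^b, for columns of X (D x N1) and Z (D x N2).
    return (X.T @ Z + c) ** b

def gaussian_kernel(X, Z, sigma=1.0):
    # kappa(x, z) = exp(-||x - z||^2 / sigma^2), computed without loops.
    sq = (np.sum(X**2, axis=0)[:, None] + np.sum(Z**2, axis=0)[None, :]
          - 2.0 * (X.T @ Z))
    return np.exp(-np.maximum(sq, 0.0) / sigma**2)
```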

45 Both the linear and non-linear methods for finding the sparse coefficient matrix in the latent space along with the projection matrix are summarized in Algorithm 1. [sent-249, score-0.473]

46 Similar to the SSC method, once the sparse coefficient matrix C is found, spectral clustering is applied on the affinity matrix W = |C| + |C|^T to obtain the segmentation of the data in the low-dimensional latent space. [sent-252, score-0.795]

47 The resulting latent space SSC (LS3C) and non-linear latent space SSC (NLS3C) methods are summarized in Algorithm 2. [sent-253, score-0.298]

48 We compare our method with several state-of-the-art subspace clustering algorithms such as SSC [6], Low-Rank Representation (LRR) [15], Low-Rank Subspace Clustering (LRSC) [7], Local Subspace Affinity (LSA) [29] and Spectral Curvature Clustering (SCC) [3]. [sent-260, score-0.68]

49 Subspace clustering error is used to measure the performance of different algorithms. [sent-265, score-0.316]

50 It is defined as: subspace clustering error = (# of misclassified points / total # of points) × 100. [sent-266, score-0.724]
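Because cluster indices are only defined up to a permutation, evaluating this error requires matching predicted clusters to ground-truth labels first; a standard way to do so, shown below with the Hungarian algorithm, is an illustration rather than the paper's evaluation script:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_error(true_labels, pred_labels):
    # Subspace clustering error (%) after optimally matching predicted
    # cluster indices to ground-truth labels (labels assumed in 0..k-1).
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    n = len(true_labels)
    k = int(max(true_labels.max(), pred_labels.max())) + 1
    agree = np.zeros((k, k), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        agree[t, p] += 1
    row, col = linear_sum_assignment(-agree)     # maximize agreement
    matched = agree[row, col].sum()
    return 100.0 * (n - matched) / n
```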

51 Synthetic Data In this section, we generate synthetic data to study the performance of LS3C and NLS3C when the number of data points in each subspace and the smallest principal angle between subspaces are small. [sent-269, score-0.749]

52 We consider n = 3 subspaces of dimension d = 3 embedded in a D = 50 dimensional space. [sent-271, score-0.16]

53 Also, the subspaces are generated such that θ12 = θ23 = θ. [sent-273, score-0.16]

54 Furthermore, we generate the same number of points, Ng , in each subspace at random and change the value of Ng. [sent-274, score-0.438]

55 For a fixed value of d, we change the minimum angle between subspaces, θ, as well as the number of points in each subspace Ng. [sent-275, score-0.488]
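One way to generate such data (an illustrative construction, not necessarily the paper's generator): build three d-dimensional subspaces of R^D so that each basis direction of S2 is the corresponding direction of S1 rotated by θ into an orthogonal complement, and S3 repeats the rotation from S2, which gives θ12 = θ23 = θ.

```python
import numpy as np

def make_union_of_subspaces(D=50, d=3, theta=np.pi / 6, Ng=20, seed=0):
    # Build mutually orthogonal D x d blocks U, V, W, then rotate:
    # S1 = span(U), S2 = span(cos(theta) U + sin(theta) V),
    # S3 = span(cos(theta) U2 + sin(theta) W), with W orthogonal to U2.
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((D, 3 * d)))
    U, V, W = Q[:, :d], Q[:, d:2 * d], Q[:, 2 * d:]
    U2 = np.cos(theta) * U + np.sin(theta) * V
    U3 = np.cos(theta) * U2 + np.sin(theta) * W
    # Sample Ng points per subspace with Gaussian coefficients.
    Y = np.hstack([B @ rng.standard_normal((d, Ng)) for B in (U, U2, U3)])
    labels = np.repeat([0, 1, 2], Ng)
    return Y, labels
```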

56 For each pair (θ, Ng), we compute the subspace clustering error. [sent-276, score-0.68]

57 Since the performance of the SSC-based methods depends on how well the sparse coefficients are found, we also calculate the subspace-sparse recovery error. [sent-277, score-0.898]

58 For the data points, we also compute the subspace-sparse recovery error (ESR), which measures how much of each point's ℓ1 coefficient mass is assigned to points from other subspaces. [sent-278, score-0.33]

59 Here ci^T = [ci1^T, ci2^T, ci3^T] represents the sparse coefficients of yi ∈ Sqi, and cij corresponds to the points in Sj. [sent-286, score-0.272]
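Assuming the standard definition used with SSC (the fraction of each point's ℓ1 mass falling outside its own subspace, averaged over all points), ESR can be computed directly from C and the ground-truth labels:

```python
import numpy as np

def sparse_recovery_error(C, labels):
    # For each point i, the l1 mass of column C[:, i] placed on points from
    # other subspaces, divided by the total l1 mass, averaged over points.
    labels = np.asarray(labels)
    N = C.shape[1]
    errs = []
    for i in range(N):
        ci = np.abs(C[:, i])
        total = ci.sum()
        if total == 0:
            continue                      # skip degenerate all-zero columns
        errs.append(ci[labels != labels[i]].sum() / total)
    return float(np.mean(errs))
```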

60 We vary the smallest principal angle between subspaces and the number of points in each subspace as θ ∈ [6, 60] and Ng ∈ [d + 1, 20d], respectively. [sent-287, score-0.783]

61 For each pair (θ, Ng), we calculate the average subspace clustering error as well as the average ESR over 20 trials. [sent-288, score-0.724]

62 When θ and Ng decrease, both the sparse recovery and clustering errors of all the methods increase. [sent-291, score-0.573]

63 Also, the clustering error is highly dependent on the sparse recovery error and both errors follow the same pattern. [sent-292, score-0.661]

64 In other words, clustering results are highly dependent on how well the sparse coefficients are recovered. [sent-293, score-0.51]

65 This can be explained by the fact that our method finds the projection directly from data and preserves the sparse structure of data in the latent space. [sent-296, score-0.364]

66 It has been shown that the trajectories of a general rigid motion under affine projection span a 4-dimensional linear subspace [4]. [sent-307, score-0.673]

67 In other words, feature trajectories of n rigid motions lie in a union of n 4-dimensional subspaces of R^{2F}. [sent-308, score-0.462]

68 Hence, the problem of clustering the trajectories according to the different motions is equivalent to the problem of clustering affine subspaces. [sent-309, score-0.703]

69 We apply our joint dimensionality reduction and subspace clustering framework to the Hopkins155 motion segmentation database [24]. [sent-311, score-0.945]

70 On average, each sequence of 2 motions has 266 feature trajectories and 30 frames and each sequence of 3 motions has 398 feature trajectories and 29 frames. [sent-314, score-0.26]

71 For the methods other than LS3C and NLS3C, the data is first projected onto a 4n-dimensional subspace using PCA [6]. [sent-318, score-0.451]
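A minimal version of that preprocessing step (whether the trajectory matrix is mean-centered before the SVD is an implementation detail assumed here):

```python
import numpy as np

def pca_project(Y, dim):
    # Project the 2F x N trajectory matrix onto its top `dim` principal
    # directions; dim = 4n for n rigid motions, following [6].
    Yc = Y - Y.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Yc, full_matrices=False)
    return U[:, :dim].T @ Yc
```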

72 Non-linear LS3C also obtains small clustering errors compared to the other competitive subspace clustering methods. [sent-321, score-1.001]

73 Subspace clustering errors and subspace-sparse recovery errors for three randomly generated disjoint subspaces with different smallest principal angles and different numbers of points. [sent-330, score-0.801]

74 The average subspace clustering and subspace-sparse recovery errors of the SSC method are shown in (a) and (d), respectively. [sent-331, score-0.834]

75 The average subspace clustering and subspace-sparse recovery errors of the LS3C method are shown in (b) and (e), respectively. [sent-332, score-0.834]

76 The average subspace clustering and subspace-sparse recovery errors of the NLS3C method are shown in (c) and (f), respectively. [sent-333, score-0.834]

77 Random projections have been used for dimensionality reduction in many sparsity-based algorithms [28], [5], and they have been shown to preserve the sparsity of data provided certain conditions are met [1]. [sent-361, score-0.233]
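For comparison with a learned projection, a random projection is a one-liner; the Gaussian matrix and 1/√t scaling below are one standard choice, not something specified in this summary:

```python
import numpy as np

def random_projection(Y, t, seed=0):
    # Map D-dimensional columns of Y to t dimensions with an i.i.d. Gaussian
    # matrix, which preserves sparsity/geometry with high probability when
    # t is large enough [1].
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((t, Y.shape[0])) / np.sqrt(t)
    return P @ Y
```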

78 The average clustering error results are summarized in Table 2. [sent-372, score-0.316]

79 It is also interesting to note that the performance of both SSC and LRR varies depending on the projection matrix used for dimensionality reduction. [sent-374, score-0.191]

80 Rotated MNIST Dataset The rotated MNIST benchmark [13] contains grayscale images of hand-written digits of size 28 × 28 pixels. [sent-380, score-0.251]

81 In the first dataset, called the mnist-rot, digits are rotated by random angles generated uniformly between 0 and 2π radians. [sent-390, score-0.312]

82 The mnist-back-rand dataset is created by inserting random backgrounds in the original MNIST digit images. [sent-392, score-0.144]

83 We evaluate the clustering performance of LS3C and NLS3C as well as SSC and LRR on this challenging dataset. [sent-398, score-0.272]

84 It was shown in [10] that handwritten digits with some variations lie on 12-dimensional subspaces. [sent-399, score-0.265]

85 Hence, n hand-written digits can be modeled as data points lying close to a union of 12-dimensional subspaces. [sent-400, score-0.361]

86 We use these samples for clustering and repeat the process 120 times so that all the samples from the training and the validation sets are used for clustering. [sent-403, score-0.346]

87 We report the average clustering performances of different methods in Table 3. [sent-404, score-0.272]

88 This is the case because these methods cannot find the sparse and low-rank representation of the samples when the data contains random rotations. [sent-410, score-0.214]

89 Average clustering errors on the rotated MNIST datasets: (a) mnist-rot, (b) mnist-rot-back-image, (c) mnist-back-rand. [sent-426, score-0.392]

90 The two most computationally heavy steps of SSC and our method are the computation of the sparse coefficients and the spectral clustering. [sent-427, score-0.344]

91 Since the sparse coefficients are found in the low-dimensional space, their computation is more efficient in our case than in SSC. [sent-428, score-0.476]

92 On average, our method takes about 13 seconds to cluster 100 digits of size 28 × 28, whereas SSC takes about 14 seconds. [sent-430, score-0.18]

93 Conclusion We have proposed a simultaneous dimensionality reduction and sparse coding method in the low-dimensional latent space for SSC. [sent-436, score-0.513]

94 Furthermore, the method is made non-linear so that it can deal with the small principal angle problem. [sent-438, score-0.12]

95 Through extensive clustering experiments on several datasets, it was shown that the proposed LS3C and NLS3C methods are robust and can perform significantly better than many state-of-the-art subspace clustering methods. [sent-439, score-0.952]

96 A closed form solution to robust subspace estimation and clustering. [sent-488, score-0.408]

97 Latent low-rank representation for subspace segmentation and feature extraction. [sent-546, score-0.459]

98 Sparse embedding: A framework for sparsity promoting dimensionality reduction. [sent-570, score-0.187]

99 Double sparsity: Learning sparse dictionaries for sparse signal approximation. [sent-582, score-0.294]

100 On the dimensionality reduction for sparse representation based face recognition. [sent-640, score-0.318]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ssc', 0.436), ('subspace', 0.408), ('clustering', 0.272), ('digits', 0.18), ('tk', 0.161), ('subspaces', 0.16), ('sparse', 0.147), ('mnist', 0.139), ('lrr', 0.119), ('latent', 0.112), ('recovery', 0.105), ('ppt', 0.102), ('proposition', 0.093), ('trace', 0.093), ('coefficients', 0.091), ('union', 0.088), ('dimensionality', 0.086), ('reduction', 0.085), ('vidal', 0.083), ('mtm', 0.082), ('ng', 0.081), ('trajectories', 0.079), ('principle', 0.074), ('conference', 0.074), ('spectral', 0.073), ('rotated', 0.071), ('projection', 0.07), ('rd', 0.068), ('yc', 0.065), ('cii', 0.062), ('dimensionally', 0.062), ('dtianoahn', 0.062), ('gmoe', 0.062), ('hien', 0.062), ('ktqtk', 0.062), ('umiacs', 0.062), ('vsvt', 0.062), ('sparsity', 0.062), ('smallest', 0.061), ('diag', 0.061), ('yi', 0.057), ('mmin', 0.055), ('scc', 0.055), ('esr', 0.055), ('yci', 0.055), ('motions', 0.051), ('segmentation', 0.051), ('lrsc', 0.051), ('patel', 0.051), ('polynomial', 0.049), ('errors', 0.049), ('lie', 0.048), ('simultaneous', 0.046), ('pattern', 0.046), ('md', 0.046), ('angle', 0.046), ('transformations', 0.045), ('error', 0.044), ('admm', 0.044), ('motion', 0.043), ('onto', 0.043), ('rn', 0.042), ('international', 0.041), ('lowdimensional', 0.041), ('inserting', 0.041), ('ieee', 0.041), ('sj', 0.041), ('kernel', 0.04), ('eigen', 0.04), ('promoting', 0.039), ('solved', 0.038), ('backgrounds', 0.037), ('coefficient', 0.037), ('samples', 0.037), ('space', 0.037), ('affine', 0.037), ('nguyen', 0.037), ('handwritten', 0.037), ('rigid', 0.036), ('elhamifar', 0.036), ('digit', 0.036), ('finds', 0.035), ('matrix', 0.035), ('kernels', 0.035), ('signals', 0.035), ('points', 0.034), ('affinity', 0.033), ('equality', 0.033), ('subject', 0.033), ('computationally', 0.033), ('pages', 0.032), ('angles', 0.031), ('written', 0.03), ('degenerate', 0.03), ('learns', 0.03), ('random', 0.03), ('furthermore', 0.03), ('circular', 0.029), ('curvature', 0.029), ('lying', 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 232 iccv-2013-Latent Space Sparse Subspace Clustering

Author: Vishal M. Patel, Hien Van Nguyen, René Vidal

Abstract: We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on kernel methods are presented. One of the main advantages of our method is that it is computationally efficient, as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than competitive state-of-the-art subspace clustering methods.

2 0.53268826 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization

Author: Yingya Zhang, Zhenan Sun, Ran He, Tieniu Tan

Abstract: Subspace clustering has important and wide applications in computer vision and pattern recognition. It is a challenging task to learn low-dimensional subspace structures due to the possible errors (e.g., noise and corruptions) existing in high-dimensional data. Recent subspace clustering methods usually assume a sparse representation of corrupted errors and correct the errors iteratively. However, large corruptions in real-world applications cannot be well addressed by these methods. A novel optimization model for robust subspace clustering is proposed in this paper. The objective function of our model mainly includes two parts. The first part aims to achieve a sparse representation of each high-dimensional data point with other data points. The second part aims to maximize the correntropy between a given data point and its low-dimensional representation with other points. Correntropy is a robust measure, so the influence of large corruptions on subspace clustering can be greatly suppressed. An extension of our method with explicit introduction of representation error terms into the model is also proposed. Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. Experimental results on the Hopkins 155 dataset and the Extended Yale Database B demonstrate that our method outperforms state-of-the-art subspace clustering methods.

3 0.39075002 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

Author: Zhuwen Li, Jiaming Guo, Loong-Fah Cheong, Steven Zhiying Zhou

Abstract: This paper addresses real-world challenges in the motion segmentation problem, including perspective effects, missing data, and an unknown number of motions. It first formulates the 3-D motion segmentation from two perspective views as a subspace clustering problem, utilizing the epipolar constraint of an image pair. It then combines the point correspondence information across multiple image frames via a collaborative clustering step, in which tight integration is achieved via a mixed norm optimization scheme. For model selection, we propose an over-segment and merge approach, where the merging step is based on the property of the ℓ1-norm of the mutual sparse representation of two over-segmented groups. The resulting algorithm can deal with incomplete trajectories and perspective effects substantially better than state-of-the-art two-frame and multi-frame methods. Experiments on a 62-clip dataset show the significant superiority of the proposed idea in both segmentation accuracy and model selection.

4 0.31962049 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso

Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan

Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that the highly correlated data, which are usually from the same subspace, can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they are short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data-correlation-dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.

5 0.26632968 122 iccv-2013-Distributed Low-Rank Subspace Segmentation

Author: Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I. Jordan

Abstract: Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data. Low-Rank Representation (LRR), a convex formulation of the subspace segmentation problem, is provably and empirically accurate on small problems but does not scale to the massive sizes of modern vision datasets. Moreover, past work aimed at scaling up low-rank matrix factorization is not applicable to LRR given its non-decomposable constraints. In this work, we propose a novel divide-and-conquer algorithm for large-scale subspace segmentation that can cope with LRR's non-decomposable constraints and maintains LRR's strong recovery guarantees. This has immediate implications for the scalability of subspace segmentation, which we demonstrate on a benchmark face recognition dataset and in simulations. We then introduce novel applications of LRR-based subspace segmentation to large-scale semi-supervised learning for multimedia event detection, concept detection, and image tagging. In each case, we obtain state-of-the-art results and order-of-magnitude speed-ups.

6 0.23575053 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering

7 0.22419168 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

8 0.22088823 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

9 0.20636593 264 iccv-2013-Minimal Basis Facility Location for Subspace Segmentation

10 0.19280797 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity

11 0.19106951 361 iccv-2013-Robust Trajectory Clustering for Motion Segmentation

12 0.18325683 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

13 0.16879499 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

14 0.15151769 400 iccv-2013-Stable Hyper-pooling and Query Expansion for Event Detection

15 0.13545506 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video

16 0.12540516 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications

17 0.12098096 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables

18 0.112985 303 iccv-2013-Orderless Tracking through Model-Averaged Posterior Estimation

19 0.10991157 29 iccv-2013-A Scalable Unsupervised Feature Merging Approach to Efficient Dimensionality Reduction of High-Dimensional Visual Data

20 0.10905728 392 iccv-2013-Similarity Metric Learning for Face Recognition


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.234), (1, 0.021), (2, -0.102), (3, 0.014), (4, -0.281), (5, 0.207), (6, 0.022), (7, 0.255), (8, 0.251), (9, 0.039), (10, 0.154), (11, -0.011), (12, -0.218), (13, 0.005), (14, -0.14), (15, -0.095), (16, 0.011), (17, -0.041), (18, 0.024), (19, 0.032), (20, -0.054), (21, 0.222), (22, -0.037), (23, -0.095), (24, -0.05), (25, -0.149), (26, -0.068), (27, 0.029), (28, -0.017), (29, -0.012), (30, -0.059), (31, 0.006), (32, 0.001), (33, -0.038), (34, -0.054), (35, -0.01), (36, -0.002), (37, -0.032), (38, -0.006), (39, -0.028), (40, -0.013), (41, -0.041), (42, 0.02), (43, -0.064), (44, -0.009), (45, -0.076), (46, 0.037), (47, 0.02), (48, -0.039), (49, 0.001)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.97839516 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization

Author: Yingya Zhang, Zhenan Sun, Ran He, Tieniu Tan

Abstract: Subspace clustering has important and wide applications in computer vision and pattern recognition. It is a challenging task to learn low-dimensional subspace structures due to the possible errors (e.g., noise and corruptions) existing in high-dimensional data. Recent subspace clustering methods usually assume a sparse representation of corrupted errors and correct the errors iteratively. However, large corruptions in real-world applications cannot be well addressed by these methods. A novel optimization model for robust subspace clustering is proposed in this paper. The objective function of our model mainly includes two parts. The first part aims to achieve a sparse representation of each high-dimensional data point with other data points. The second part aims to maximize the correntropy between a given data point and its low-dimensional representation with other points. Correntropy is a robust measure, so the influence of large corruptions on subspace clustering can be greatly suppressed. An extension of our method with explicit introduction of representation error terms into the model is also proposed. Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. Experimental results on the Hopkins 155 dataset and the Extended Yale Database B demonstrate that our method outperforms state-of-the-art subspace clustering methods.

same-paper 2 0.97039396 232 iccv-2013-Latent Space Sparse Subspace Clustering

Author: Vishal M. Patel, Hien Van Nguyen, René Vidal

Abstract: We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on kernel methods are presented. One of the main advantages of our method is that it is computationally efficient, as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than competitive state-of-the-art subspace clustering methods.

3 0.94719696 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso

Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan

Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that the highly correlated data, which are usually from the same subspace, can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they are short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data-correlation-dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.

4 0.89322704 264 iccv-2013-Minimal Basis Facility Location for Subspace Segmentation

Author: Choon-Meng Lee, Loong-Fah Cheong

Abstract: In contrast to the current motion segmentation paradigm that assumes independence between the motion subspaces, we approach the motion segmentation problem by seeking the parsimonious basis set that can represent the data. Our formulation explicitly looks for the overlap between subspaces in order to achieve a minimal basis representation. This parsimonious basis set is important for the performance of our model selection scheme because the sharing of basis results in savings of model complexity cost. We propose the use of an affinity-propagation-based method to determine the number of motions. The key lies in the incorporation of a global cost model into the factor graph, serving the role of model complexity. The introduction of this global cost model requires additional message updates in the factor graph. We derive an efficient update for the new messages associated with this global cost model. An important step in the use of affinity propagation is the subspace hypothesis generation. We use the row-sparse convex proxy solution as an initialization strategy. We further encourage the selection of subspace hypotheses with shared basis by integrating a discount scheme that lowers the factor graph facility cost based on shared basis. We verified the model selection and classification performance of our proposed method on both the original Hopkins 155 dataset and the more balanced Hopkins 380 dataset.

5 0.85881478 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering

Author: Canyi Lu, Jinhui Tang, Min Lin, Liang Lin, Shuicheng Yan, Zhouchen Lin

Abstract: In this paper, we study the robust subspace clustering problem, which aims to cluster the given possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focuses on graph construction via different regularizations of the representation coefficient. We instead focus on the robustness of the model to non-Gaussian noises. We propose a new robust clustering method by using the correntropy-induced metric, which is robust for handling non-Gaussian and impulsive noises. We also extend the method to handle data with outlier rows/features. The multiplicative form of half-quadratic optimization is used to optimize the non-convex correntropy objective function of the proposed models. Extensive experiments on face datasets demonstrate that the proposed methods are more robust to corruptions and occlusions.

6 0.85677606 122 iccv-2013-Distributed Low-Rank Subspace Segmentation

7 0.82556462 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

8 0.78984213 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

9 0.73111993 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity

10 0.57128602 329 iccv-2013-Progressive Multigrid Eigensolvers for Multiscale Spectral Segmentation

11 0.54306436 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video

12 0.50494695 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

13 0.48622549 361 iccv-2013-Robust Trajectory Clustering for Motion Segmentation

14 0.47199264 301 iccv-2013-Optimal Orthogonal Basis and Image Assimilation: Motion Modeling

15 0.46829376 235 iccv-2013-Learning Coupled Feature Spaces for Cross-Modal Matching

16 0.4649415 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

17 0.43940037 29 iccv-2013-A Scalable Unsupervised Feature Merging Approach to Efficient Dimensionality Reduction of High-Dimensional Visual Data

18 0.42846456 450 iccv-2013-What is the Most EfficientWay to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?

19 0.41011605 443 iccv-2013-Video Synopsis by Heterogeneous Multi-source Correlation

20 0.40408251 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.066), (7, 0.038), (19, 0.011), (26, 0.055), (27, 0.025), (31, 0.052), (42, 0.16), (48, 0.026), (49, 0.212), (64, 0.03), (73, 0.034), (89, 0.158), (96, 0.015), (97, 0.016), (98, 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.8184405 232 iccv-2013-Latent Space Sparse Subspace Clustering

Author: Vishal M. Patel, Hien Van Nguyen, René Vidal

Abstract: We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on kernel methods are presented. One of the main advantages of our method is that it is computationally efficient, as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than competitive state-of-the-art subspace clustering methods.

2 0.79616189 130 iccv-2013-Dynamic Structured Model Selection

Author: David Weiss, Benjamin Sapp, Ben Taskar

Abstract: In many cases, the predictive power of structured models for complex vision tasks is limited by a trade-off between the expressiveness and the computational tractability of the model. However, choosing this trade-off statically a priori is suboptimal, as images and videos in different settings vary tremendously in complexity. On the other hand, choosing the trade-off dynamically requires knowledge about the accuracy of different structured models on any given example. In this work, we propose a novel two-tier architecture that provides dynamic speed/accuracy trade-offs through a simple type of introspection. Our approach, which we call dynamic structured model selection (DMS), leverages typically intractable features in structured learning problems in order to automatically determine which of several models should be used at test-time in order to maximize accuracy under a fixed budgetary constraint. We demonstrate DMS on two sequential modeling vision tasks, and we establish a new state-of-the-art in human pose estimation in video with an implementation that is roughly 23× faster than the standard implementation.

3 0.74955255 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

Author: Hongteng Xu, Hongyuan Zha

Abstract: Data sparsity has been a thorny issue for manifold-based image synthesis, and in this paper we address this critical problem by leveraging ideas from transfer learning. Specifically, we propose methods based on generating auxiliary data in the form of synthetic samples using transformations of the original sparse samples. To incorporate the auxiliary data, we propose a weighted data synthesis method, which adaptively selects from the generated samples for inclusion during the manifold learning process via a weighted iterative algorithm. To demonstrate the feasibility of the proposed method, we apply it to the problem of face image synthesis from sparse samples. Compared with existing methods, the proposed method shows encouraging results with good performance improvements.

4 0.74704725 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct a new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.

5 0.74683189 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

Author: Xudong Cao, David Wipf, Fang Wen, Genquan Duan, Jian Sun

Abstract: Face verification involves determining whether a pair of facial images belongs to the same or different subjects. This problem can prove to be quite challenging in many important applications where labeled training data is scarce, e.g., family album photo organization software. Herein we propose a principled transfer learning approach for merging plentiful source-domain data with limited samples from some target domain of interest to create a classifier that ideally performs nearly as well as if rich target-domain data were present. Based upon a surprisingly simple generative Bayesian model, our approach combines a KL-divergencebased regularizer/prior with a robust likelihood function leading to a scalable implementation via the EM algorithm. As justification for our design choices, we later use principles from convex analysis to recast our algorithm as an equivalent structured rank minimization problem leading to a number of interesting insights related to solution structure and feature-transform invariance. These insights help to both explain the effectiveness of our algorithm as well as elucidate a wide variety of related Bayesian approaches. Experimental testing with challenging datasets validate the utility of the proposed algorithm.

6 0.74661809 106 iccv-2013-Deep Learning Identity-Preserving Face Space

7 0.74643505 392 iccv-2013-Similarity Metric Learning for Face Recognition

8 0.74614447 362 iccv-2013-Robust Tucker Tensor Decomposition for Effective Image Representation

9 0.74608082 45 iccv-2013-Affine-Constrained Group Sparse Coding and Its Application to Image-Based Classifications

10 0.7440837 44 iccv-2013-Adapting Classification Cascades to New Domains

11 0.74386716 277 iccv-2013-Multi-channel Correlation Filters

12 0.74354875 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso

13 0.74287856 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding

14 0.74286115 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization

15 0.7428143 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

16 0.74187368 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

17 0.7408852 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

18 0.74069971 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering

19 0.74054646 149 iccv-2013-Exemplar-Based Graph Matching for Robust Facial Landmark Localization

20 0.74011809 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables