iccv iccv2013 iccv2013-360 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Yingya Zhang, Zhenan Sun, Ran He, Tieniu Tan
Abstract: Subspace clustering has important and wide applications in computer vision and pattern recognition. It is a challenging task to learn low-dimensional subspace structures due to the possible errors (e.g., noise and corruptions) existing in high-dimensional data. Recent subspace clustering methods usually assume a sparse representation of corrupted errors and correct the errors iteratively. However, large corruptions in real-world applications cannot be well addressed by these methods. A novel optimization model for robust subspace clustering is proposed in this paper. The objective function of our model mainly includes two parts. The first part aims to achieve a sparse representation of each high-dimensional data point with other data points. The second part aims to maximize the correntropy between a given data point and its low-dimensional representation with other points. Correntropy is a robust measure so that the influence of large corruptions on subspace clustering can be greatly suppressed. An extension of our method with explicit introduction of representation error terms into the model is also proposed. Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. Experimental results on the Hopkins 155 dataset and the Extended Yale Database B demonstrate that our method outperforms state-of-the-art subspace clustering methods.
Reference: text
sentIndex sentText sentNum sentScore
1 It is a challenging task to learn low-dimensional subspace structures due to the possible errors (e.g., noise and corruptions) existing in high-dimensional data. [sent-2, score-0.664]
2 Recent subspace clustering methods usually assume a sparse representation of corrupted errors and correct the errors iteratively. [sent-5, score-1.302]
3 However, large corruptions in real-world applications cannot be well addressed by these methods. [sent-6, score-0.36]
4 A novel optimization model for robust subspace clustering is proposed in this paper. [sent-7, score-0.889]
5 The first part aims to achieve a sparse representation of each high-dimensional data point with other data points. [sent-9, score-0.38]
6 The second part aims to maximize the correntropy between a given data point and its low-dimensional representation with other points. [sent-10, score-0.497]
7 Correntropy is a robust measure so that the influence of large corruptions on subspace clustering can be greatly suppressed. [sent-11, score-1.248]
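For reference, the correntropy between two vectors x and y (in the sense of [16]) is estimated with a Gaussian kernel as

    \hat{C}_\sigma(x, y) = \frac{1}{m} \sum_{j=1}^{m} k_\sigma(x_j - y_j), \qquad k_\sigma(t) = \exp\!\big(-t^2 / (2\sigma^2)\big),

so large residuals x_j - y_j are exponentially down-weighted rather than squared, which is where the robustness to large corruptions comes from.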
8 An extension of our method with explicit introduction of representation error terms into the model is also proposed. [sent-12, score-0.07]
9 Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. [sent-13, score-0.959]
10 Experimental results on the Hopkins 155 dataset and the Extended Yale Database B demonstrate that our method outperforms state-of-the-art subspace clustering methods. [sent-14, score-0.851]
11 Introduction. It is desirable to achieve a low-dimensional representation of the complex and redundant high-dimensional data in the era of big data. [sent-16, score-0.266]
12 The conventional solution is to project the data points onto a single low-dimensional subspace [2] [8] [9]. [sent-17, score-0.683]
13 However, the data points may be drawn from a union of multiple subspaces in practical applications. [sent-18, score-0.529]
14 Therefore the problem of subspace clustering (or segmentation) is proposed to divide high-dimensional data points into multiple subspaces, and find a low-dimensional subspace [sent-19, score-1.706]
15 into which each group of data points can fit simultaneously [24]. [sent-22, score-0.104]
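Formally (following the survey [24]), given points X = [x_1, ..., x_N] \subset \mathbb{R}^D drawn from a union of subspaces \cup_{i=1}^{k} S_i with \dim(S_i) = d_i \ll D, subspace clustering asks for the number of subspaces, their dimensions and bases, and the assignment of each point to the subspace it was drawn from.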
16 Subspace clustering has attracted great attention due to its promising applications in computer vision and machine learning. [sent-23, score-0.395]
17 The large number of subspace clustering methods proposed in the literature can be classified into four categories: iterative methods, algebraic methods, statistical methods, and spectral clustering-based methods [24]. [sent-24, score-1.129]
18 Iterative methods, such as K-subspaces [23], first assign data points to multiple pre-defined subspaces, then update the subspaces and reassign each data point to the nearest subspace. [sent-25, score-0.457]
19 Repeating these two steps until convergence yields the segmentation result. [sent-26, score-0.039]
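A minimal sketch of this alternation in Python with NumPy (the random initialization, the common subspace dimension d, and the function and parameter names are illustrative assumptions, not details of [23]):

    import numpy as np

    def k_subspaces(X, k, d, n_iter=50, seed=0):
        # X: D x N data matrix; k subspaces of assumed common dimension d.
        rng = np.random.default_rng(seed)
        N = X.shape[1]
        labels = rng.integers(k, size=N)          # random initial assignment
        for _ in range(n_iter):
            bases = []
            for j in range(k):
                Xj = X[:, labels == j]
                if Xj.shape[1] < d:               # guard against empty clusters
                    Xj = X[:, rng.choice(N, size=d, replace=False)]
                U, _, _ = np.linalg.svd(Xj, full_matrices=False)
                bases.append(U[:, :d])            # fit subspace basis by SVD
            # projection residual of every point onto every subspace
            res = np.stack([np.linalg.norm(X - B @ (B.T @ X), axis=0)
                            for B in bases])
            new_labels = res.argmin(axis=0)       # reassign to nearest subspace
            if np.array_equal(new_labels, labels):
                break                             # converged
            labels = new_labels
        return labels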
20 The disadvantage of these methods is that they need to know the number of subspaces and their dimensions in advance. [sent-27, score-0.343]
21 Generalized Principal Component Analysis (GPCA) [25] uses an algebraic way to model and segment the data. [sent-28, score-0.083]
22 This method fits the data with a polynomial, and the gradient of the polynomial at a point gives the normal vector of the subspace to which the point belongs. [sent-29, score-0.24]
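In the simplest (hyperplane) case, a union of n hyperplanes S_i = {x : b_i^\top x = 0} is the zero set of the degree-n polynomial

    p_n(x) = \prod_{i=1}^{n} b_i^\top x,

and for a point x lying on the i-th hyperplane only, \nabla p_n(x) is proportional to b_i, the normal of that hyperplane; subspaces of general dimension require a collection of such polynomials.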
23 However, this method is sensitive to noise and outliers, and as the data dimension increases, its computational complexity grows exponentially. [sent-30, score-0.169]
24 Statistical approaches, such as Mixture of Probabilistic PCA (MPPCA) [21] and Multi-Stage Learning (MSL) [20], assume that the data are drawn from a mixture of probabilistic distributions. [sent-31, score-0.209]
25 Then the Expectation Maximization (EM) technique is used to estimate the subspaces and cluster data iteratively. [sent-32, score-0.321]
26 Although these methods have achieved good performance under constrained conditions, they are sensitive to noise and outliers in real-world applications. [sent-33, score-0.135]
27 Recently, spectral clustering-based methods, which assume that a data point can be represented as a combination of other data points in the same subspace, have drawn much attention. [sent-34, score-0.348]
28 Then the representation coefficients are used to construct an affinity matrix, and spectral clustering algorithms are applied to obtain the segmentation. [sent-35, score-0.577]
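A schematic of this pipeline in Python (the symmetrization W = |C| + |C|^T is one common choice; the function and parameter names are illustrative):

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def segment_from_coefficients(C, n_clusters):
        # C: N x N representation coefficient matrix (zero diagonal assumed)
        W = np.abs(C) + np.abs(C).T               # symmetric affinity matrix
        model = SpectralClustering(n_clusters=n_clusters,
                                   affinity='precomputed')
        return model.fit_predict(W)               # one cluster label per point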
29 Elhamifar and Vidal [5] introduced the sparse representation technique from the Compressed Sensing (CS) literature to subspace clustering and proposed the Sparse Subspace Clustering (SSC) algorithm. [sent-36, score-1.129]
30 SSC aims to find the sparsest representation of each data point by using the l1 norm regularization on the coefficient matrix. [sent-37, score-0.501]
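Concretely, in the noiseless case SSC solves, for each point x_i,

    \min_{c_i} \|c_i\|_1 \quad \text{s.t.} \quad x_i = X c_i, \; c_{ii} = 0,

where the constraint c_{ii} = 0 rules out the trivial self-representation, and the solutions are stacked into the coefficient matrix C = [c_1, ..., c_N].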
31 Low-Rank Representation (LRR) [14] [13], in another way, seeks to find the lowest-rank representation of all data points by using the trace norm, which can capture the global structures of the data. [sent-38, score-0.258]
32 Lu et al. first proved the Enforced Block Diagonal (EBD) conditions, and then proposed the Least Squares Regression (LSR) method based on l2-norm regularization. [sent-40, score-0.096]
33 The main difference among these methods is the regularization of the coefficient matrix. [sent-41, score-0.109]
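In the noiseless setting these methods fit one template that differs only in the matrix norm:

    \min_{C} \|C\|_p \quad \text{s.t.} \quad X = XC \ (\text{plus } \mathrm{diag}(C) = 0 \text{ for SSC}),

with \|\cdot\|_p the entrywise \ell_1 norm for SSC, the trace (nuclear) norm for LRR, and the Frobenius norm for LSR.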
34 However, the main challenge of subspace clustering is to handle the errors (e.g., [sent-42, score-0.969]
35 random noise and large corruptions) existing in the data [13], which may lead to poor subspace clustering results due to the large weight of errors in optimization. [sent-44, score-1.06]
36 Despite the variety of regularizations, most of these methods assume that the errors have a sparse representation and correct the errors iteratively. [sent-45, score-0.348]
37 However, large corruptions in real-world problems cannot be well addressed by these error correction methods. [sent-46, score-0.406]
38 This paper aims to propose a novel subspace clustering method that is robust against large corruptions. [sent-47, score-0.999]
39 Our basic idea is to minimize the influence of erroneous data points on subspace clustering based on a robust measure of the similarity between data points and their subspace representations, correntropy [16]. [sent-48, score-1.993]
40 The optimization model of the proposed robust subspace clustering method aims to achieve sparse representation coefficients and minimal reconstruction errors simultaneously. [sent-49, score-1.306]
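Read together with the abstract, this suggests an objective of roughly the following shape (a hedged paraphrase, not the paper's verbatim model; \lambda and the constraint set are assumptions):

    \max_{c_i} \sum_{m=1}^{D} k_\sigma\big( x_{mi} - (X c_i)_m \big) - \lambda \|c_i\|_1 \quad \text{s.t.} \quad c_{ii} = 0,

i.e., the correntropy between each point and its reconstruction from the other points, traded off against sparsity of the coefficients.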
41 An efficient iterative solution to the proposed problem based on half-quadratic (HQ) optimization is provided. [sent-50, score-0.083]
42 In each iteration, the complex optimization problems are simplified to quadratic problems that have a closed-form solution in half-quadratic optimization. [sent-51, score-0.065]
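A minimal sketch of one such HQ loop, assuming the Gaussian (Welsch) loss induced by correntropy and, for illustration only, an l2 penalty in place of the paper's l1 term so that each inner step is a genuine closed-form weighted least squares (with an l1 term the inner step becomes a weighted lasso; names are illustrative):

    import numpy as np

    def hq_coefficients(X, i, sigma=1.0, lam=0.1, n_iter=20):
        # Represent column i of X with the remaining columns under a
        # correntropy-induced loss, via half-quadratic alternation.
        D = np.delete(X, i, axis=1)               # dictionary of other points
        x = X[:, i]
        c = np.zeros(D.shape[1])
        for _ in range(n_iter):
            r = x - D @ c                          # per-coordinate residuals
            w = np.exp(-r**2 / (2 * sigma**2))     # HQ auxiliary weights
            A = D.T @ (w[:, None] * D) + lam * np.eye(D.shape[1])
            c = np.linalg.solve(A, D.T @ (w * x))  # closed-form quadratic step
        return c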
43 The performance of our method is evaluated and compared with state-of-the-art subspace clustering methods on motion segmentation and face clustering problems. [sent-52, score-1.195]
44 Related work. To better illustrate the main idea of our method in the context of Sparse Subspace Clustering (SSC) [5], the algorithm of SSC is described as follows. [sent-54, score-0.92]
wordName wordTfidf (topN-words)
[('subspace', 0.546), ('corruptions', 0.309), ('clustering', 0.305), ('subspaces', 0.243), ('ssc', 0.24), ('correntropy', 0.231), ('errors', 0.118), ('aims', 0.11), ('drawn', 0.085), ('algebraic', 0.083), ('aff', 0.083), ('nwh', 0.083), ('rnd', 0.083), ('tieniu', 0.083), ('reassign', 0.083), ('ebd', 0.077), ('lsr', 0.077), ('xig', 0.077), ('polynomial', 0.074), ('spectral', 0.073), ('hq', 0.072), ('representational', 0.072), ('tnt', 0.072), ('representation', 0.07), ('sparse', 0.069), ('tering', 0.069), ('ilarity', 0.069), ('rhe', 0.069), ('nof', 0.064), ('sparsest', 0.064), ('gpca', 0.064), ('nlpr', 0.064), ('ofmultiple', 0.064), ('norm', 0.062), ('lrr', 0.06), ('points', 0.059), ('hopkins', 0.058), ('yale', 0.057), ('coefficient', 0.056), ('oofn', 0.056), ('elhamifar', 0.054), ('regularization', 0.053), ('outliers', 0.051), ('disadvantage', 0.051), ('addressed', 0.051), ('vidal', 0.05), ('coefficients', 0.05), ('iterative', 0.05), ('influence', 0.05), ('era', 0.049), ('dimensions', 0.049), ('beijing', 0.048), ('attention', 0.047), ('repeating', 0.047), ('noise', 0.046), ('din', 0.046), ('correction', 0.046), ('data', 0.045), ('automation', 0.044), ('mixture', 0.044), ('dk', 0.044), ('seeks', 0.044), ('compressed', 0.043), ('correct', 0.043), ('attracted', 0.043), ('enforced', 0.042), ('sensing', 0.042), ('ine', 0.042), ('academy', 0.042), ('point', 0.041), ('trace', 0.04), ('grows', 0.04), ('chinese', 0.039), ('fits', 0.039), ('ran', 0.039), ('segmentation', 0.039), ('conditions', 0.038), ('expectation', 0.038), ('sensitive', 0.038), ('robust', 0.038), ('minimization', 0.037), ('notations', 0.037), ('maximization', 0.037), ('literature', 0.037), ('tan', 0.036), ('redundant', 0.036), ('laboratory', 0.036), ('perception', 0.036), ('ik', 0.036), ('statistical', 0.035), ('probabilistic', 0.035), ('proved', 0.034), ('affinity', 0.034), ('big', 0.033), ('corrupted', 0.033), ('union', 0.033), ('solution', 0.033), ('desirable', 0.033), ('technique', 0.033), ('simplified', 0.032)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999988 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization
Author: Yingya Zhang, Zhenan Sun, Ran He, Tieniu Tan
Abstract: Subspace clustering has important and wide applications in computer vision and pattern recognition. It is a challenging task to learn low-dimensional subspace structures due to the possible errors (e.g., noise and corruptions) existing in high-dimensional data. Recent subspace clustering methods usually assume a sparse representation of corrupted errors and correct the errors iteratively. However, large corruptions in real-world applications cannot be well addressed by these methods. A novel optimization model for robust subspace clustering is proposed in this paper. The objective function of our model mainly includes two parts. The first part aims to achieve a sparse representation of each high-dimensional data point with other data points. The second part aims to maximize the correntropy between a given data point and its low-dimensional representation with other points. Correntropy is a robust measure so that the influence of large corruptions on subspace clustering can be greatly suppressed. An extension of our method with explicit introduction of representation error terms into the model is also proposed. Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. Experimental results on the Hopkins 155 dataset and the Extended Yale Database B demonstrate that our method outperforms state-of-the-art subspace clustering methods.
2 0.53268826 232 iccv-2013-Latent Space Sparse Subspace Clustering
Author: Vishal M. Patel, Hien Van Nguyen, René Vidal
Abstract: We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on the kernel methods are presented. One of the main advantages of our method is that it is computationally efficient as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than the competitive state-of-the-art subspace clustering methods.
3 0.40745902 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering
Author: Canyi Lu, Jinhui Tang, Min Lin, Liang Lin, Shuicheng Yan, Zhouchen Lin
Abstract: In this paper, we study the robust subspace clustering problem, which aims to cluster the given possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focus on the graph construction by different regularization of the representation coefficient. We instead focus on the robustness of the model to non-Gaussian noises. We propose a new robust clustering method by using the correntropy induced metric, which is robust for handling the non-Gaussian and impulsive noises. Also we further extend the method for handling the data with outlier rows/features. The multiplicative form of half-quadratic optimization is used to optimize the nonconvex correntropy objective function of the proposed models. Extensive experiments on face datasets well demonstrate that the proposed methods are more robust to corruptions and occlusions.
4 0.29784995 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering
Author: Zhuwen Li, Jiaming Guo, Loong-Fah Cheong, Steven Zhiying Zhou
Abstract: This paper addresses real-world challenges in the motion segmentation problem, including perspective effects, missing data, and an unknown number of motions. It first formulates the 3-D motion segmentation from two perspective views as a subspace clustering problem, utilizing the epipolar constraint of an image pair. It then combines the point correspondence information across multiple image frames via a collaborative clustering step, in which tight integration is achieved via a mixed norm optimization scheme. For model selection, we propose an over-segment and merge approach, where the merging step is based on the property of the ℓ1-norm of the mutual sparse representation of two over-segmented groups. The resulting algorithm can deal with incomplete trajectories and perspective effects substantially better than state-of-the-art two-frame and multi-frame methods. Experiments on a 62-clip dataset show the significant superiority of the proposed idea in both segmentation accuracy and model selection.
5 0.29619119 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso
Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan
Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that the highly correlated data which are usually from the same subspace can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they fall short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data-correlation-dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.
6 0.23389523 122 iccv-2013-Distributed Low-Rank Subspace Segmentation
7 0.23287445 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing
8 0.22311342 264 iccv-2013-Minimal Basis Facility Location for Subspace Segmentation
9 0.22112918 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity
10 0.21892276 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold
11 0.19174148 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment
12 0.15839036 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation
13 0.142425 361 iccv-2013-Robust Trajectory Clustering for Motion Segmentation
14 0.13557179 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video
15 0.11805753 392 iccv-2013-Similarity Metric Learning for Face Recognition
16 0.11720092 235 iccv-2013-Learning Coupled Feature Spaces for Cross-Modal Matching
17 0.096428037 354 iccv-2013-Robust Dictionary Learning by Error Source Decomposition
18 0.095085301 450 iccv-2013-What is the Most EfficientWay to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?
19 0.087737851 357 iccv-2013-Robust Matrix Factorization with Unknown Noise
20 0.086453132 400 iccv-2013-Stable Hyper-pooling and Query Expansion for Event Detection
topicId topicWeight
[(0, 0.162), (1, -0.001), (2, -0.094), (3, -0.019), (4, -0.25), (5, 0.195), (6, 0.041), (7, 0.248), (8, 0.255), (9, 0.05), (10, 0.168), (11, 0.015), (12, -0.242), (13, 0.023), (14, -0.152), (15, -0.127), (16, 0.02), (17, -0.072), (18, 0.029), (19, 0.029), (20, -0.053), (21, 0.233), (22, -0.07), (23, -0.153), (24, -0.039), (25, -0.188), (26, -0.107), (27, 0.002), (28, -0.075), (29, -0.051), (30, -0.057), (31, -0.024), (32, 0.056), (33, -0.036), (34, -0.087), (35, 0.012), (36, 0.005), (37, -0.047), (38, -0.028), (39, -0.011), (40, -0.039), (41, -0.031), (42, 0.011), (43, -0.049), (44, 0.01), (45, -0.072), (46, 0.02), (47, -0.019), (48, -0.004), (49, -0.019)]
simIndex simValue paperId paperTitle
same-paper 1 0.98190629 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization
Author: Yingya Zhang, Zhenan Sun, Ran He, Tieniu Tan
Abstract: Subspace clustering has important and wide applications in computer vision and pattern recognition. It is a challenging task to learn low-dimensional subspace structures due to the possible errors (e.g., noise and corruptions) existing in high-dimensional data. Recent subspace clustering methods usually assume a sparse representation of corrupted errors and correct the errors iteratively. However, large corruptions in real-world applications cannot be well addressed by these methods. A novel optimization model for robust subspace clustering is proposed in this paper. The objective function of our model mainly includes two parts. The first part aims to achieve a sparse representation of each high-dimensional data point with other data points. The second part aims to maximize the correntropy between a given data point and its low-dimensional representation with other points. Correntropy is a robust measure so that the influence of large corruptions on subspace clustering can be greatly suppressed. An extension of our method with explicit introduction of representation error terms into the model is also proposed. Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. Experimental results on the Hopkins 155 dataset and the Extended Yale Database B demonstrate that our method outperforms state-of-the-art subspace clustering methods.
2 0.93354481 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso
Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan
Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that the highly correlated data which are usually from the same subspace can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they fall short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data-correlation-dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.
3 0.90450317 232 iccv-2013-Latent Space Sparse Subspace Clustering
Author: Vishal M. Patel, Hien Van Nguyen, René Vidal
Abstract: We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on the kernel methods are presented. One of the main advantages of our method is that it is computationally efficient as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than the competitive state-of-the-art subspace clustering methods.
4 0.8582496 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering
Author: Canyi Lu, Jinhui Tang, Min Lin, Liang Lin, Shuicheng Yan, Zhouchen Lin
Abstract: In this paper, we study the robust subspace clustering problem, which aims to cluster the given possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focus on the graph construction by different regularization of the representation coefficient. We instead focus on the robustness of the model to non-Gaussian noises. We propose a new robust clustering method by using the correntropy induced metric, which is robust for handling the non-Gaussian and impulsive noises. Also we further extend the method for handling the data with outlier rows/features. The multiplicative form of half-quadratic optimization is used to optimize the nonconvex correntropy objective function of the proposed models. Extensive experiments on face datasets well demonstrate that the proposed methods are more robust to corruptions and occlusions.
5 0.83586329 122 iccv-2013-Distributed Low-Rank Subspace Segmentation
Author: Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I. Jordan
Abstract: Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data. Low-Rank Representation (LRR), a convex formulation of the subspace segmentation problem, is provably and empirically accurate on small problems but does not scale to the massive sizes of modern vision datasets. Moreover, past work aimed at scaling up low-rank matrix factorization is not applicable to LRR given its non-decomposable constraints. In this work, we propose a novel divide-and-conquer algorithm for large-scale subspace segmentation that can cope with LRR ’s non-decomposable constraints and maintains LRR ’s strong recovery guarantees. This has immediate implications for the scalability of subspace segmentation, which we demonstrate on a benchmark face recognition dataset and in simulations. We then introduce novel applications of LRR-based subspace segmentation to large-scale semisupervised learning for multimedia event detection, concept detection, and image tagging. In each case, we obtain stateof-the-art results and order-of-magnitude speed ups.
6 0.82953596 264 iccv-2013-Minimal Basis Facility Location for Subspace Segmentation
7 0.7607767 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold
8 0.7012828 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering
9 0.68809706 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity
10 0.49659157 329 iccv-2013-Progressive Multigrid Eigensolvers for Multiscale Spectral Segmentation
11 0.45751047 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing
12 0.4547765 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video
13 0.39445516 235 iccv-2013-Learning Coupled Feature Spaces for Cross-Modal Matching
15 0.36629704 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation
16 0.35939106 361 iccv-2013-Robust Trajectory Clustering for Motion Segmentation
17 0.35579148 357 iccv-2013-Robust Matrix Factorization with Unknown Noise
18 0.3535448 301 iccv-2013-Optimal Orthogonal Basis and Image Assimilation: Motion Modeling
19 0.32821012 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment
topicId topicWeight
[(2, 0.095), (7, 0.053), (26, 0.059), (31, 0.056), (42, 0.198), (49, 0.037), (64, 0.028), (73, 0.042), (89, 0.113), (90, 0.165), (92, 0.041)]
simIndex simValue paperId paperTitle
same-paper 1 0.81252927 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization
Author: Yingya Zhang, Zhenan Sun, Ran He, Tieniu Tan
Abstract: Subspace clustering has important and wide applications in computer vision and pattern recognition. It is a challenging task to learn low-dimensional subspace structures due to the possible errors (e.g., noise and corruptions) existing in high-dimensional data. Recent subspace clustering methods usually assume a sparse representation of corrupted errors and correct the errors iteratively. However, large corruptions in real-world applications cannot be well addressed by these methods. A novel optimization model for robust subspace clustering is proposed in this paper. The objective function of our model mainly includes two parts. The first part aims to achieve a sparse representation of each high-dimensional data point with other data points. The second part aims to maximize the correntropy between a given data point and its low-dimensional representation with other points. Correntropy is a robust measure so that the influence of large corruptions on subspace clustering can be greatly suppressed. An extension of our method with explicit introduction of representation error terms into the model is also proposed. Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. Experimental results on the Hopkins 155 dataset and the Extended Yale Database B demonstrate that our method outperforms state-of-the-art subspace clustering methods.
2 0.80682468 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization
Author: Hua Wang, Feiping Nie, Weidong Cai, Heng Huang
Abstract: Representing the raw input of a data set by a set of relevant codes is crucial to many computer vision applications. Due to the intrinsic sparse property of real-world data, dictionary learning, in which the linear decomposition of a data point uses a set of learned dictionary bases, i.e., codes, has demonstrated state-of-the-art performance. However, traditional dictionary learning methods suffer from three weaknesses: sensitivity to noisy and outlier samples, difficulty to determine the optimal dictionary size, and incapability to incorporate supervision information. In this paper, we address these weaknesses by learning a Semi-Supervised Robust Dictionary (SSR-D). Specifically, we use the ℓ2,0+ norm as the loss function to improve the robustness against outliers, and develop a new structured sparse regularization to incorporate the supervision information in dictionary learning, without incurring additional parameters. Moreover, the optimal dictionary size is automatically learned from the input data. Minimizing the derived objective function is challenging because it involves many non-smooth ℓ2,0+-norm terms. We present an efficient algorithm to solve the problem with a rigorous proof of the convergence of the algorithm. Extensive experiments are presented to show the superior performance of the proposed method.
3 0.79073805 94 iccv-2013-Correntropy Induced L2 Graph for Robust Subspace Clustering
Author: Canyi Lu, Jinhui Tang, Min Lin, Liang Lin, Shuicheng Yan, Zhouchen Lin
Abstract: In this paper, we study the robust subspace clustering problem, which aims to cluster the given possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focus on the graph construction by different regularization of the representation coefficient. We instead focus on the robustness of the model to non-Gaussian noises. We propose a new robust clustering method by using the correntropy induced metric, which is robust for handling the non-Gaussian and impulsive noises. Also we further extend the method for handling the data with outlier rows/features. The multiplicative form of half-quadratic optimization is used to optimize the nonconvex correntropy objective function of the proposed models. Extensive experiments on face datasets well demonstrate that the proposed methods are more robust to corruptions and occlusions.
4 0.79060471 15 iccv-2013-A Generalized Low-Rank Appearance Model for Spatio-temporally Correlated Rain Streaks
Author: Yi-Lei Chen, Chiou-Ting Hsu
Abstract: In this paper, we propose a novel low-rank appearance model for removing rain streaks. Different from previous work, our method needs neither rain pixel detection nor time-consuming dictionary learning stage. Instead, as rain streaks usually reveal similar and repeated patterns on imaging scene, we propose and generalize a low-rank model from matrix to tensor structure in order to capture the spatio-temporally correlated rain streaks. With the appearance model, we thus remove rain streaks from image/video (and also other high-order image structure) in a unified way. Our experimental results demonstrate competitive (or even better) visual quality and efficient run-time in comparison with state of the art.
5 0.7903924 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso
Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan
Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that the highly correlated data which are usually from the same subspace can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they fall short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data-correlation-dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.
6 0.78628796 54 iccv-2013-Attribute Pivots for Guiding Relevance Feedback in Image Search
7 0.78593665 14 iccv-2013-A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding
8 0.78564513 213 iccv-2013-Implied Feedback: Learning Nuances of User Behavior in Image Search
9 0.78499305 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples
10 0.78298891 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person
11 0.78155601 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration
12 0.78106385 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition
13 0.78104502 231 iccv-2013-Latent Multitask Learning for View-Invariant Action Recognition
14 0.7806381 140 iccv-2013-Elastic Net Constraints for Shape Matching
15 0.77985424 44 iccv-2013-Adapting Classification Cascades to New Domains
16 0.77717012 392 iccv-2013-Similarity Metric Learning for Face Recognition
17 0.7767446 232 iccv-2013-Latent Space Sparse Subspace Clustering
18 0.77655405 277 iccv-2013-Multi-channel Correlation Filters
19 0.77500302 52 iccv-2013-Attribute Adaptation for Personalized Image Search
20 0.77472347 46 iccv-2013-Allocentric Pose Estimation