cvpr cvpr2013 cvpr2013-215 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Shaokang Chen, Conrad Sanderson, Mehrtash T. Harandi, Brian C. Lovell
Abstract: Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to different undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
Reference: text
sentIndex sentText sentNum sentScore
1 However, this may result in the two closest clusters representing different characteristics of an object, due to different undesirable environmental conditions (such as variations in illumination and pose). [sent-4, score-0.278]
2 To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. [sent-5, score-0.35]
3 We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. [sent-6, score-0.541]
4 We then extract local linear subspaces from a gallery image set via sparse representation. [sent-7, score-0.579]
5 For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. [sent-8, score-0.696]
6 We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. [sent-9, score-0.415]
7 Single-model methods can be further divided into two groups: single linear subspace methods and affine hull methods. [sent-17, score-0.507]
8 Single linear subspace methods [14, 29] use principal angles to measure the difference between two subspaces. [sent-18, score-0.408]
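As a concrete illustration of principal angles, here is a minimal sketch using SciPy's subspace_angles (the matrices, dimensions and random data are illustrative assumptions, not values from the paper):

```python
import numpy as np
from scipy.linalg import subspace_angles

# Principal angles between the column spans of two random matrices.
A = np.random.randn(100, 5)              # a 5-dimensional subspace of R^100
B = np.random.randn(100, 5)
theta = subspace_angles(A, B)            # angles, largest first
print(np.linalg.norm(np.sin(theta)))     # l2 norm of the sines (projection distance)
```

The l2 norm of the sines is the projection-style quantity that reappears as the residual distance in Section 3.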
9 As the similarity of data structures is used for comparing sets, subspace approaches can be robust to noise and to relatively small numbers of samples [29, 28]. [sent-19, score-0.398]
10 However, subspace methods consider the structure of all data samples without selecting optimal subsets for classification. [sent-20, score-0.398]
11 Furthermore, deterioration in discrimination performance can occur if the nearest points between two hulls are outliers or noisy. [sent-24, score-0.264]
12 These clusters may not be optimal for discrimination, as undesirable environmental conditions (such as variations in illumination and pose) may result in the two closest clusters representing two different characteristics of an object. [sent-30, score-0.368]
13 The clusters in the first set represent various poses, while the clusters in the second set represent varying illumination (where the illumination is different to the illumination present in the first set). [sent-33, score-0.393]
14 As the two sets of clusters capture two different variations, matching two image sets based on cluster matching may result in a non-frontal face (e.g. …). [sent-34, score-0.244]
15 The proposed approach first uses sparse approximation to extract local linear subspaces from the first set. [sent-39, score-0.516]
16 Each local linear subspace is then represented as a reference point on a Grassmann manifold. [sent-40, score-0.443]
17 For each reference point, we approximate its closest point on the manifold from a group of points of the second set. [sent-41, score-0.295]
18 We prove that by minimising the joint sparse representation error, we are approaching the nearest point to the reference point on the Grassmann manifold. [sent-43, score-0.49]
19 To our knowledge, this is the first paper to show the link between joint sparse approximation and Grassmann manifolds, and the proposed method is the first that adaptively constructs the closest subspace to a reference subspace from the samples of a set. [sent-49, score-1.072]
20 We then define a Frobenius norm distance between subspaces over a Grassmann manifold in Section 3. [sent-52, score-0.529]
21 More rigorous treatment of sparse representation can be found in [5, 6], while manifolds and related topics are covered in [1, 7, 11]. [sent-57, score-0.247]
22 The red dots show that images in set C are adaptively clustered such that the nearest Grassmann point can be constructed corresponding to the reference green points on the manifold. [sent-85, score-0.392]
23 In this way, the corresponding nearest clusters in set A and C capture similar variations. [sent-86, score-0.261]
24 A Grassmann manifold G_{D,m} is the set of m-dimensional linear subspaces of R^D. [sent-136, score-0.524]
25 Two matrices A and B represent the same point if the subspaces spanned by their column vectors are the same. [sent-139, score-0.379]
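The following toy sketch (illustrative sizes; not from the paper) checks this identification numerically: multiplying a matrix by any invertible matrix on the right preserves its column span, so both matrices map to the same Grassmann point and all principal angles between them are zero.

```python
import numpy as np

A = np.random.randn(100, 5)
R = np.linalg.qr(np.random.randn(5, 5))[0]     # random orthogonal 5x5 matrix
B = A @ R                                      # same column span as A
Ua, _ = np.linalg.qr(A)                        # orthonormal basis of span(A)
Ub, _ = np.linalg.qr(B)                        # orthonormal basis of span(B)
cosines = np.linalg.svd(Ua.T @ Ub, compute_uv=False)
print(np.allclose(cosines, 1.0))               # True: all principal angles are zero
```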
26 Residual Distance on Grassmann Manifold Following the form of JSR, we define a Frobenius norm distance, named the residual distance, between two subspaces over a Grassmann manifold. [sent-155, score-0.38]
27 For two subspaces Sa and Sb, the distance between them is defined as the summation of the distances from the unit vectors of the orthonormal basis of Sa to the subspace Sb. [sent-156, score-1.565]
28 This residual distance is the l2 norm of the sines of the principal angles given in Eqn. (6), and is proved to be a form of projection distance over Grassmann manifolds [10]. [sent-160, score-0.256]
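A minimal sketch of this residual distance via orthonormal bases and an SVD (the function name and interface are our own; it evaluates the l2 norm of the sines of the principal angles, i.e. the projection distance):

```python
import numpy as np

def residual_distance(A, B):
    """Residual (projection) distance between span(A) and span(B)."""
    Ua, _ = np.linalg.qr(A)                            # orthonormal basis of S_a
    Ub, _ = np.linalg.qr(B)                            # orthonormal basis of S_b
    c = np.linalg.svd(Ua.T @ Ub, compute_uv=False)     # cosines of principal angles
    c = np.clip(c, 0.0, 1.0)
    return float(np.linalg.norm(np.sqrt(1.0 - c**2)))  # l2 norm of the sines
```

This helper is reused in the later sketches.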
29 Sparse Approximated Nearest Subspaces We now propose an approach to find the nearest subspace over Grassmann manifolds by minimising the residual distance. [sent-162, score-0.69]
30 For each local linear subspace from a gallery image set, the approximated nearest subspace is adaptively constructed from the samples of the query image set. [sent-169, score-1.261]
31 Joint sparse representation is applied to approximate the nearest subspace. [sent-170, score-0.301]
32 The average distance of all the closest subspace pairs is considered as the distance between two sets. [sent-173, score-0.487]
33 In principle, C_{Na}^m subspaces of rank m can be formed from the available Na sample images. [sent-186, score-0.353]
34 The collection of all these subspaces, denoted S_m^a, is called the m-order subspace set of Ia. [sent-187, score-0.696]
35 We note that not all of the subspaces can precisely represent the variations of the object and hence only some of the subspaces should be used for classification. [sent-188, score-0.763]
36 Single measurement vector (SMV) sparse representation is applied to create and select local linear subspaces that can accurately represent real samples from the image set. [sent-189, score-0.575]
37 This is in contrast to affine hull based methods [4, 12], where the nearest points are synthetic samples generated through linear combination of real samples. [sent-190, score-0.428]
38 The distance between the sample point x_k^a and the subspace s_k^a can be calculated as the reconstruction error r_k^a = ‖x_k^a − s_k^a w‖_2, where w is the reconstruction coefficient vector. [sent-210, score-0.633]
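As a small sketch (names are ours), r_k^a can be computed by least-squares reconstruction of the sample from the columns spanning the subspace:

```python
import numpy as np

def reconstruction_error(x, S):
    """Residual of reconstructing sample x from the columns of S."""
    w, *_ = np.linalg.lstsq(S, x, rcond=None)   # best coefficient vector w
    return float(np.linalg.norm(x - S @ w))     # r = ||x - S w||_2
```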
39 From a manifold point of view, each subspace of order m can be represented as a point on the Grassmann manifold G_{D,m}. [sent-214, score-0.663]
40 For each image x_i^a ∈ Ia, the subspace s_k^a constructed by SMV sparse representation is represented as a point on the manifold. [sent-215, score-0.689]
41 Based on the reconstruction error, SMV sparse representation can select representative subspaces s_k^a that linearly represent real samples x_k^a with an error smaller than a threshold ε. [sent-217, score-0.724]
42 This type of subspace extraction is motivated by [8], where SMV sparse representation is used to cluster linear subspaces. [sent-226, score-0.51]
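A hedged sketch of this extraction step, using orthogonal matching pursuit from scikit-learn as the SMV sparse solver; the solver choice, the eps threshold and the function name are our assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def extract_local_subspace(X, k, m, eps):
    """Represent sample X[:, k] over the remaining samples with an m-sparse
    code; keep the selected samples as a local linear subspace if the
    reconstruction error is below eps (otherwise discard)."""
    D = np.delete(X, k, axis=1)                 # dictionary: all other samples
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=m, fit_intercept=False)
    omp.fit(D, X[:, k])
    residual = np.linalg.norm(X[:, k] - D @ omp.coef_)
    if residual < eps:
        return D[:, np.flatnonzero(omp.coef_)]  # samples spanning s_k^a
    return None
```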
43 Nearest Subspace Approximation After extracting local linear subspaces, traditional multi-model approaches use fixed subspaces (clusters) of each set for classification. [sent-229, score-0.418]
44 In contrast, we propose to adaptively cluster the query image set via considering the clusters from a gallery image set. [sent-230, score-0.3]
45 To match image sets Ia and Ib, we first extract the local linear subspace set S_m^a from Ia. [sent-231, score-0.429]
46 For each subspace s_k^a, we first generate its orthonormal basis U_a. [sent-243, score-0.432]
47 We then treat all the samples in Ib as elements of a dictionary and apply joint sparse representation to find the optimal solution via Eqn. (10). [sent-244, score-0.285]
48 Given the orthonormal basis U_a from s_k^a, we find m samples from the matrix Xb, representing image set Ib, that give the minimal sparse representation error:

    min_W ‖U_a − X_b W‖_F   s.t.   W has at most m nonzero rows.   (10)

[sent-246, score-0.274]
49 (Note that a rotated basis U_a R_a may have a slightly different solution to U_a, due to the limitation of the approximated solution for joint sparse representation.) [sent-251, score-0.246]
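A greedy simultaneous OMP sketch for Eqn. (10): it selects m columns of Xb (atoms) whose span jointly reconstructs all basis vectors in Ua. Greedy SOMP is one standard approximation for joint sparse representation; the normalisation and refitting details here are our assumptions:

```python
import numpy as np

def somp(Ua, Xb, m):
    """Return the indices of m selected samples of Xb and the residual E_k."""
    residual, support = Ua.copy(), []
    Xn = Xb / (np.linalg.norm(Xb, axis=0, keepdims=True) + 1e-12)
    for _ in range(m):
        scores = np.linalg.norm(Xn.T @ residual, axis=1)  # joint correlations
        scores[support] = -np.inf                         # exclude chosen atoms
        support.append(int(np.argmax(scores)))
        sub = Xb[:, support]
        W, *_ = np.linalg.lstsq(sub, Ua, rcond=None)      # refit on the support
        residual = Ua - sub @ W
    return support, float(np.linalg.norm(residual))       # E_k = ||Ua - Xb W||_F
```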
50 Thus the reconstruction error can be used as a measure of distance, D(s_k^a, s_k^b) = E_k, between two subspaces on the Grassmann manifold. [sent-266, score-0.424]
51 By minimising the error, the nearest subspace over Grassmann manifolds is approached. [sent-267, score-0.69]
52 Distance Calculation We have shown above how to approximate the nearest subspace s_k^b from S_m^b, given a specific subspace s_k^a from the m-order subspace set S_m^a. [sent-270, score-1.357]
53–54 As we generate Nc local linear subspaces from Ia and find the corresponding nearest subspaces from Ib, the distance between two image sets Ia and Ib is defined as the average distance of the nearest subspace pairs: D(Ia, Ib) = (1/Nc) Σ_k D(s_k^a, s_k^b). [sent-272, score-0.39] [sent-274, score-0.605]
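Putting the pieces together, a sketch of the resulting set-to-set distance (reusing the hypothetical somp helper above; the Nc local subspaces of Ia are assumed to be extracted already):

```python
import numpy as np

def set_distance(subspaces_a, Xb, m):
    """Average nearest-subspace residual over the local subspaces of I_a."""
    errors = []
    for S in subspaces_a:              # each S spans one local subspace s_k^a
        Ua, _ = np.linalg.qr(S)        # orthonormal basis of s_k^a
        _, Ek = somp(Ua, Xb, m)        # residual to its nearest subspace in I_b
        errors.append(Ek)
    return sum(errors) / len(errors)   # D(I_a, I_b)
```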
55 Thus, the complexity of SANS is O(Ncmndm), where Nc is the number of local linear subspaces generated. [sent-282, score-0.39]
56 Experiments The proposed approach was first evaluated on synthetic data to investigate the accuracy of nearest subspace approximation, followed by a performance comparison against previous state-of-the-art methods on three image set recognition tasks: face, gesture and object recognition. [sent-285, score-0.397]
57 Synthetic Data We randomly generated m sample points in R^n (n = 100) to construct a reference subspace Sref with rank m. [sent-288, score-0.418]
58 The proposed nearest subspace approximation (NSA) approach is used to find m samples from the dictionary to construct the approximated nearest subspace Sapp and is compared with the actual nearest subspace Sact found by a brute force method. [sent-290, score-1.798]
59 The relative difference ratio r = |D(Sref, Sapp) − D(Sref, Sact)| / D(Sref, Sact) and the percentage of Sapp being among the top k nearest subspaces of Sref are used as the measures of performance. [sent-291, score-0.524]
60 Accuracy of the proposed nearest subspace approximation (NSA) on synthetic data. [sent-293, score-0.547]
61 m is the number of samples used to construct the reference subspace Sref. [sent-294, score-0.435]
62 The total number of subspaces for each search is C_N^m. [sent-296, score-0.353]
63 ‘mean rank’ is the average ranking of the approximated subspace Sapp among all subspaces. [sent-297, score-0.417]
64 The percentage that Sapp is in the top k nearest subspaces of Sref is shown for k = 0.01 × C_N^m and k = 0.05 × C_N^m. [sent-300, score-0.524]
65 Sact is the actual nearest subspace of Sref found by a brute-force method. [sent-302, score-0.549]
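A sketch of this synthetic check, exhaustively scoring all C_N^m candidate subspaces to obtain Sact (toy sizes assumed; reuses the residual_distance sketch from Section 3):

```python
import itertools
import numpy as np

def brute_force_nearest(Uref, X, m):
    """Exhaustive search over all C(N, m) column subsets of dictionary X."""
    best_d, best_idx = np.inf, None
    for idx in itertools.combinations(range(X.shape[1]), m):
        d = residual_distance(Uref, X[:, list(idx)])
        if d < best_d:
            best_d, best_idx = d, idx
    return best_d, best_idx

# Relative difference ratio against the NSA result: r = |d_app - d_act| / d_act
```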
66 Table 1 shows how close the approximated nearest subspace is to the actual nearest subspace. [sent-304, score-0.759]
67 This is expected: as the dictionary size increases, the total number of subspaces grows exponentially, while the ratio increases only slightly. [sent-309, score-0.412]
68 Evaluating the performance in terms of the ranking of the approximated nearest subspace, most of the approximated subspaces are within the top 1% closest subspaces, and almost all are within the top 5% closest subspaces. [sent-310, score-1.598]
69 The proposed approach finds the actual nearest subspace in up to 32% of cases when the dictionary size is small. [sent-311, score-0.583]
70 In the worst case, the actual nearest subspace is still found in at least 7% of cases when the total number of subspaces is huge (> 160,000). [sent-312, score-0.877]
71 In contrast, the brute force method to find the actual nearest subspace is hundreds or even thousands of times slower. [sent-314, score-0.549]
72 AHISD, CHISD and SANP are nearest point based methods, which find the closest points between two hulls. [sent-417, score-0.295]
73 MSM and MDA are subspace based methods which model image sets as linear subspaces. [sent-418, score-0.429]
74 To avoid the effect of duplicate samples on local linear subspace extraction, due to a limitation of sparse representation, we remove duplicate samples in each image set individually. [sent-434, score-0.647]
75 We compare the proposed method with component techniques, including Joint Sparse Representation (JSR), Grassmann Manifolds (GM) and local linear subspace (LLS) extraction. [sent-483, score-0.38]
76 By applying local linear subspace extraction, the performance of both JSR and GM is improved. [sent-486, score-0.38]
77 AHISD, CHISD and SANP are all based on the nearest point distance between subspaces, which is inevitably sensitive to the illumination variations. [sent-493, score-0.31]
78 If two image sets are taken in different illumination conditions, the distance between points on two subspaces will be rather large, leading to a deterioration in classification performance. [sent-494, score-0.583]
79 While MDA clusters images to construct local linear models and learns a more discriminant embedding space, the distances between local models/subspaces are based on the Euclidean distance between the center points of models. [sent-495, score-0.251]
80 In contrast, MSM, JSR, GM and the proposed SANS exploit structural similarities between subspaces (eg. [sent-497, score-0.353]
81 Correct classification rates (%):

            PM [19]   TCCA [13]   DCCA [14]   proposed SANS
  Set 1        89         81          63            90
  Set 2        86         81          61            89
  Set 3        89         78          65            91
  Set 4        87         86          69            89
  Average      88         82          65            90

It is known that images taken under varying illumination conditions lie close to a low-dimensional linear subspace [20]. [sent-504, score-0.38]
82 In other words, if there are several images of a person’s face taken under varying illumination conditions, the subspace constructed from these images can be used to represent many possible illumination conditions. [sent-506, score-0.569]
83 The Grassmann manifold approach treats all samples lying in the same subspace as one point on a Grassmann manifold, suggesting that illumination variations do not affect the point. [sent-507, score-0.686]
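A toy sketch of this invariance, with a per-image brightness gain as a crude stand-in for an illumination change (reuses the residual_distance sketch):

```python
import numpy as np

X = np.random.randn(100, 8)                       # 8 images as columns
gains = np.diag(np.random.uniform(0.5, 2.0, 8))   # per-image brightness gains
# Rescaled images span the same subspace, hence the same Grassmann point:
print(np.isclose(residual_distance(X, X @ gains), 0.0))
```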
84 The robustness of SANS also comes from being able to exploit the variations present in the training data by local linear subspace (LLS) extraction and the adaptively constructed nearest subspaces. [sent-508, score-0.703]
85 Multiple local linear subspaces can be extracted from a gallery image set that represent variations of a subject. [sent-509, score-0.543]
86 For a given local linear subspace, SANS finds the closest subspace from the subspace set of the query image set, which represents a similar variation. [sent-510, score-0.83]
87 Fig. 2 shows the sample images of an extracted local linear subspace as well as the sample images of the constructed nearest subspaces. [sent-512, score-0.579]
88 (b) Sample images of the constructed nearest subspace from a query image set of the same class. [sent-519, score-0.589]
89 (c) Sample images of the constructed nearest subspace from a query image set of a different class. [sent-520, score-0.589]
90 Main Findings and Future Directions We have proposed a novel approach to approximate nearest subspaces over Grassmann manifolds. [sent-539, score-0.524]
91 Single measurement vector sparse representation is then employed to create local linear subspaces from a gallery image set, followed by joint sparse representation to approximate the corresponding nearest subspaces from the probe image set. [sent-541, score-1.311]
92 We have shown that by minimising the joint sparse reconstruction error, the nearest subspace on a Grassmann manifold is approached. [sent-542, score-0.87]
93 The average distance of the nearest subspace pairs is defined as a new distance between two image sets. [sent-543, score-0.598]
94 In contrast to single linear subspace methods, the proposed Sparse Approximated Nearest Subspaces (SANS) method extracts multiple local linear subspaces using a subset of samples. [sent-544, score-0.77]
95 Distinct from multi-model based methods, SANS utilises the subspaces (clusters) of one image set to adaptively cluster the samples of another image set by constructing the corresponding closest subspaces, without complete pairwise local subspace comparisons. [sent-546, score-1.231]
96 Comparative evaluations on synthetic data show that the proposed method can approximate the nearest subspaces with small errors. [sent-547, score-0.524]
97 The experiments also indicate that subspace structural similarity based methods generally perform better than nearest point based methods for image sets with variations in illumination. [sent-549, score-0.646]
98 Future avenues of research include random rotation of orthogonal basis for more robust nearest subspace approximation and learning more discriminative embedding spaces for manifolds [10, 27]. [sent-550, score-0.702]
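A purely hypothetical sketch of the random-rotation idea mentioned above (averaging the joint-sparse residual over random rotations of the basis; this is our illustration, not the paper's method):

```python
import numpy as np

def rotation_averaged_error(Ua, Xb, m, trials=5, seed=0):
    """Average E_k from the somp sketch over random rotations Ua @ R."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        R, _ = np.linalg.qr(rng.standard_normal((Ua.shape[1], Ua.shape[1])))
        errs.append(somp(Ua @ R, Xb, m)[1])   # residual for the rotated basis
    return float(np.mean(errs))
```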
99 The proposed nearest subspace approximation can also be extended to use other multi-model approaches for local model extraction, such as Manifold to Manifold Distance (MMD) [28], Manifold Discriminant Analysis (MDA) [27] or Local Linear Embedding (LLE) with k−means clustering [9]. [sent-551, score-0.547]
100 Principal angles between subspaces in an A-based scalar product: Algorithms and perturbation estimates. [sent-661, score-0.381]
wordName wordTfidf (topN-words)
[('grassmann', 0.393), ('sans', 0.354), ('subspaces', 0.353), ('subspace', 0.343), ('jsr', 0.216), ('nearest', 0.171), ('sanp', 0.162), ('ahisd', 0.144), ('mda', 0.14), ('manifold', 0.134), ('chisd', 0.128), ('manifolds', 0.117), ('sref', 0.112), ('smv', 0.108), ('ua', 0.104), ('ska', 0.103), ('gallery', 0.096), ('sparse', 0.093), ('clusters', 0.09), ('msm', 0.08), ('hull', 0.076), ('approximated', 0.074), ('illumination', 0.071), ('adaptively', 0.067), ('atoms', 0.067), ('mwin', 0.064), ('ib', 0.06), ('closest', 0.06), ('minimising', 0.059), ('xia', 0.059), ('ia', 0.059), ('dictionary', 0.059), ('variations', 0.057), ('gm', 0.056), ('face', 0.056), ('samples', 0.055), ('qld', 0.054), ('sact', 0.054), ('skb', 0.054), ('xka', 0.054), ('gesture', 0.054), ('sanderson', 0.051), ('affine', 0.051), ('orthonormal', 0.051), ('sets', 0.049), ('query', 0.047), ('raw', 0.047), ('discriminant', 0.044), ('distance', 0.042), ('lls', 0.042), ('joint', 0.041), ('harandi', 0.04), ('sapp', 0.039), ('subcategory', 0.038), ('basis', 0.038), ('points', 0.038), ('representation', 0.037), ('reference', 0.037), ('linear', 0.037), ('brisbane', 0.036), ('dcca', 0.036), ('equalisation', 0.036), ('equalised', 0.036), ('nmb', 0.036), ('nsa', 0.036), ('queensland', 0.036), ('rka', 0.036), ('smb', 0.036), ('tcca', 0.036), ('tropp', 0.036), ('xana', 0.036), ('canonical', 0.035), ('dw', 0.035), ('brute', 0.035), ('approximation', 0.033), ('sa', 0.032), ('normalised', 0.032), ('mmv', 0.032), ('duplication', 0.032), ('sb', 0.032), ('action', 0.032), ('australia', 0.03), ('nc', 0.03), ('deterioration', 0.03), ('reconstruction', 0.029), ('optimisation', 0.029), ('constructed', 0.028), ('angles', 0.028), ('multimodel', 0.028), ('pages', 0.027), ('residual', 0.027), ('signal', 0.027), ('mka', 0.027), ('resemblance', 0.027), ('resized', 0.026), ('frobenius', 0.026), ('point', 0.026), ('hulls', 0.025), ('ieee', 0.025), ('clustered', 0.025)]
simIndex simValue paperId paperTitle
same-paper 1 1.000001 215 cvpr-2013-Improved Image Set Classification via Joint Sparse Approximated Nearest Subspaces
Author: Shaokang Chen, Conrad Sanderson, Mehrtash T. Harandi, Brian C. Lovell
Abstract: Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to different undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
2 0.31298789 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features
Author: Raviteja Vemulapalli, Jaishanker K. Pillai, Rama Chellappa
Abstract: In computer vision applications, features often lie on Riemannian manifolds with known geometry. Popular learning algorithms such as discriminant analysis, partial least squares, support vector machines, etc., are not directly applicable to such features due to the non-Euclidean nature of the underlying spaces. Hence, classification is often performed in an extrinsic manner by mapping the manifolds to Euclidean spaces using kernels. However, for kernel based approaches, poor choice of kernel often results in reduced performance. In this paper, we address the issue of kernel selection for the classification of features that lie on Riemannian manifolds using the kernel learning approach. We propose two criteria for jointly learning the kernel and the classifier using a single optimization problem. Specifically, for the SVM classifier, we formulate the problem of learning a good kernel-classifier combination as a convex optimization problem and solve it efficiently following the multiple kernel learning approach. Experimental results on image set-based classification and activity recognition clearly demonstrate the superiority of the proposed approach over existing methods for classification of manifold features.
3 0.25692981 135 cvpr-2013-Discriminative Subspace Clustering
Author: Vasileios Zografos, Liam Ellis, Rudolf Mester
Abstract: We present a novel method for clustering data drawn from a union of arbitrary dimensional subspaces, called Discriminative Subspace Clustering (DiSC). DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined together in an ensemble, in order to obtain the final clustering result. We have tested our method with 4 challenging datasets and compared against 8 state-of-the-art methods from literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, and also of low computational complexity.
4 0.24762967 250 cvpr-2013-Learning Cross-Domain Information Transfer for Location Recognition and Clustering
Author: Raghuraman Gopalan
Abstract: Estimating geographic location from images is a challenging problem that is receiving recent attention. In contrast to many existing methods that primarily model discriminative information corresponding to different locations, we propose joint learning of information that images across locations share and vary upon. Starting with generative and discriminative subspaces pertaining to domains, which are obtained by a hierarchical grouping of images from adjacent locations, we present a top-down approach that first models cross-domain information transfer by utilizing the geometry of these subspaces, and then encodes the model results onto individual images to infer their location. We report competitive results for location recognition and clustering on two public datasets, im2GPS and San Francisco, and empirically validate the utility of various design choices involved in the approach.
5 0.24186374 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds
Author: Bo Wang, Zhuowen Tu
Abstract: With the increasing availability of high dimensional data and demand in sophisticated data analysis algorithms, manifold learning becomes a critical technique to perform dimensionality reduction, unraveling the intrinsic data structure. The real-world data however often come with noises and outliers; seldom, all the data live in a single linear subspace. Inspired by the recent advances in sparse subspace learning and diffusion-based approaches, we propose a new manifold denoising algorithm in which data neighborhoods are adaptively inferred via sparse subspace reconstruction; we then derive a new formulation to perform denoising to the original data. Experiments carried out on both toy and real applications demonstrate the effectiveness of our method; it is insensitive to parameter tuning and we show significant improvement over the competing algorithms.
6 0.17061642 253 cvpr-2013-Learning Multiple Non-linear Sub-spaces Using K-RBMs
7 0.17006567 109 cvpr-2013-Dense Non-rigid Point-Matching Using Random Projections
8 0.16716838 233 cvpr-2013-Joint Sparsity-Based Representation and Analysis of Unconstrained Activities
9 0.14233468 259 cvpr-2013-Learning a Manifold as an Atlas
11 0.11977829 64 cvpr-2013-Blessing of Dimensionality: High-Dimensional Feature and Its Efficient Compression for Face Verification
12 0.11524765 379 cvpr-2013-Scalable Sparse Subspace Clustering
13 0.11434311 367 cvpr-2013-Rolling Riemannian Manifolds to Solve the Multi-class Classification Problem
14 0.10562129 399 cvpr-2013-Single-Sample Face Recognition with Image Corruption and Misalignment via Sparse Illumination Transfer
15 0.10370193 46 cvpr-2013-Articulated and Restricted Motion Subspaces and Their Signatures
16 0.10158674 433 cvpr-2013-Top-Down Segmentation of Non-rigid Visual Objects Using Derivative-Based Search on Sparse Manifolds
17 0.10029266 54 cvpr-2013-BRDF Slices: Accurate Adaptive Anisotropic Appearance Acquisition
18 0.10024526 306 cvpr-2013-Non-rigid Structure from Motion with Diffusion Maps Prior
19 0.098012783 319 cvpr-2013-Optimized Product Quantization for Approximate Nearest Neighbor Search
20 0.09750884 419 cvpr-2013-Subspace Interpolation via Dictionary Learning for Unsupervised Domain Adaptation
topicId topicWeight
[(0, 0.174), (1, -0.039), (2, -0.123), (3, 0.093), (4, -0.024), (5, -0.057), (6, -0.062), (7, -0.177), (8, -0.039), (9, -0.129), (10, 0.035), (11, -0.032), (12, -0.134), (13, -0.118), (14, -0.06), (15, 0.042), (16, -0.159), (17, -0.03), (18, -0.218), (19, -0.03), (20, 0.192), (21, 0.026), (22, 0.113), (23, 0.007), (24, -0.051), (25, -0.15), (26, -0.004), (27, -0.088), (28, 0.005), (29, 0.025), (30, -0.015), (31, 0.13), (32, 0.056), (33, -0.051), (34, -0.042), (35, -0.025), (36, 0.034), (37, -0.022), (38, -0.067), (39, 0.085), (40, -0.032), (41, 0.023), (42, -0.049), (43, -0.022), (44, 0.004), (45, 0.048), (46, 0.006), (47, -0.095), (48, -0.063), (49, 0.002)]
simIndex simValue paperId paperTitle
same-paper 1 0.94587255 215 cvpr-2013-Improved Image Set Classification via Joint Sparse Approximated Nearest Subspaces
Author: Shaokang Chen, Conrad Sanderson, Mehrtash T. Harandi, Brian C. Lovell
Abstract: Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to different undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
2 0.81686002 135 cvpr-2013-Discriminative Subspace Clustering
Author: Vasileios Zografos, Liam Ellis, Rudolf Mester
Abstract: We present a novel method for clustering data drawn from a union of arbitrary dimensional subspaces, called Discriminative Subspace Clustering (DiSC). DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined together in an ensemble, in order to obtain the final clustering result. We have tested our method with 4 challenging datasets and compared against 8 state-of-the-art methods from literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, and also of low computational complexity.
3 0.80286276 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds
Author: Bo Wang, Zhuowen Tu
Abstract: With the increasing availability of high dimensional data and demand in sophisticated data analysis algorithms, manifold learning becomes a critical technique to perform dimensionality reduction, unraveling the intrinsic data structure. The real-world data however often come with noises and outliers; seldom, all the data live in a single linear subspace. Inspired by the recent advances in sparse subspace learning and diffusion-based approaches, we propose a new manifold denoising algorithm in which data neighborhoods are adaptively inferred via sparse subspace reconstruction; we then derive a new formulation to perform denoising to the original data. Experiments carried out on both toy and real applications demonstrate the effectiveness of our method; it is insensitive to parameter tuning and we show significant improvement over the competing algorithms.
4 0.7627883 250 cvpr-2013-Learning Cross-Domain Information Transfer for Location Recognition and Clustering
Author: Raghuraman Gopalan
Abstract: Estimating geographic location from images is a challenging problem that is receiving recent attention. In contrast to many existing methods that primarily model discriminative information corresponding to different locations, we propose joint learning of information that images across locations share and vary upon. Starting with generative and discriminative subspaces pertaining to domains, which are obtained by a hierarchical grouping of images from adjacent locations, we present a top-down approach that first models cross-domain information transfer by utilizing the geometry of these subspaces, and then encodes the model results onto individual images to infer their location. We report competitive results for location recognition and clustering on two public datasets, im2GPS and San Francisco, and empirically validate the utility of various design choices involved in the approach.
5 0.71866471 259 cvpr-2013-Learning a Manifold as an Atlas
Author: Nikolaos Pitelis, Chris Russell, Lourdes Agapito
Abstract: In this work, we return to the underlying mathematical definition of a manifold and directly characterise learning a manifold as finding an atlas, or a set of overlapping charts, that accurately describe local structure. We formulate the problem of learning the manifold as an optimisation that simultaneously refines the continuous parameters defining the charts, and the discrete assignment of points to charts. In contrast to existing methods, this direct formulation of a manifold does not require “unwrapping” the manifold into a lower dimensional space and allows us to learn closed manifolds of interest to vision, such as those corresponding to gait cycles or camera pose. We report state-of-the-art results for manifold based nearest neighbour classification on vision datasets, and show how the same techniques can be applied to the 3D reconstruction of human motion from a single image.
6 0.69699258 379 cvpr-2013-Scalable Sparse Subspace Clustering
7 0.69437176 109 cvpr-2013-Dense Non-rigid Point-Matching Using Random Projections
8 0.63204563 191 cvpr-2013-Graph-Laplacian PCA: Closed-Form Solution and Robustness
9 0.61277092 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features
10 0.59517014 253 cvpr-2013-Learning Multiple Non-linear Sub-spaces Using K-RBMs
12 0.57646358 276 cvpr-2013-MKPLS: Manifold Kernel Partial Least Squares for Lipreading and Speaker Identification
13 0.53893667 367 cvpr-2013-Rolling Riemannian Manifolds to Solve the Multi-class Classification Problem
14 0.4618842 238 cvpr-2013-Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices
15 0.45479468 433 cvpr-2013-Top-Down Segmentation of Non-rigid Visual Objects Using Derivative-Based Search on Sparse Manifolds
16 0.42697057 64 cvpr-2013-Blessing of Dimensionality: High-Dimensional Feature and Its Efficient Compression for Face Verification
17 0.4230977 93 cvpr-2013-Constraints as Features
18 0.41625363 46 cvpr-2013-Articulated and Restricted Motion Subspaces and Their Signatures
19 0.41184476 358 cvpr-2013-Robust Canonical Time Warping for the Alignment of Grossly Corrupted Sequences
20 0.40234017 54 cvpr-2013-BRDF Slices: Accurate Adaptive Anisotropic Appearance Acquisition
topicId topicWeight
[(10, 0.087), (16, 0.021), (19, 0.012), (26, 0.048), (28, 0.018), (29, 0.261), (33, 0.254), (67, 0.043), (69, 0.113), (87, 0.046)]
simIndex simValue paperId paperTitle
1 0.842749 391 cvpr-2013-Sensing and Recognizing Surface Textures Using a GelSight Sensor
Author: Rui Li, Edward H. Adelson
Abstract: Sensing surface textures by touch is a valuable capability for robots. Until recently it was difficult to build a compliant sensor with high sensitivity and high resolution. The GelSight sensor is compliant and offers sensitivity and resolution exceeding that of the human fingertips. This opens the possibility of measuring and recognizing highly detailed surface textures. The GelSight sensor, when pressed against a surface, delivers a height map. This can be treated as an image, and processed using the tools of visual texture analysis. We have devised a simple yet effective texture recognition system based on local binary patterns, and enhanced it by the use of a multi-scale pyramid and a Hellinger distance metric. We built a database with 40 classes of tactile textures using materials such as fabric, wood, and sandpaper. Our system can correctly categorize materials from this database with high accuracy. This suggests that the GelSight sensor can be useful for material recognition by robots.
2 0.82565784 140 cvpr-2013-Efficient Color Boundary Detection with Color-Opponent Mechanisms
Author: Kaifu Yang, Shaobing Gao, Chaoyi Li, Yongjie Li
Abstract: Color information plays an important role in better understanding of natural scenes by at least facilitating discriminating boundaries of objects or areas. In this study, we propose a new framework for boundary detection in complex natural scenes based on the color-opponent mechanisms of the visual system. The red-green and blue-yellow color opponent channels in the human visual system are regarded as the building blocks for various color perception tasks such as boundary detection. The proposed framework is a feedforward hierarchical model, which has direct counterpart to the color-opponent mechanisms involved in from the retina to the primary visual cortex (V1). Results show that our simple framework has excellent ability to flexibly capture both the structured chromatic and achromatic boundaries in complex scenes.
3 0.79971904 418 cvpr-2013-Submodular Salient Region Detection
Author: Zhuolin Jiang, Larry S. Davis
Abstract: The problem of salient region detection is formulated as the well-studied facility location problem from operations research. High-level priors are combined with low-level features to detect salient regions. Salient region detection is achieved by maximizing a submodular objective function, which maximizes the total similarities (i.e., total profits) between the hypothesized salient region centers (i.e., facility locations) and their region elements (i.e., clients), and penalizes the number of potential salient regions (i.e., the number of open facilities). The similarities are efficiently computed by finding a closed-form harmonic solution on the constructed graph for an input image. The saliency of a selected region is modeled in terms of appearance and spatial location. By exploiting the submodularity properties of the objective function, a highly efficient greedy-based optimization algorithm can be employed. This algorithm is guaranteed to be at least a (e − 1)/e ≈ 0.632-approximation to the optimum. Experimental results demonstrate that our approach outperforms several recently proposed saliency detection approaches.
same-paper 4 0.79683673 215 cvpr-2013-Improved Image Set Classification via Joint Sparse Approximated Nearest Subspaces
Author: Shaokang Chen, Conrad Sanderson, Mehrtash T. Harandi, Brian C. Lovell
Abstract: Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to different undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
5 0.7912721 35 cvpr-2013-Adaptive Compressed Tomography Sensing
Author: Oren Barkan, Jonathan Weill, Amir Averbuch, Shai Dekel
Abstract: One of the main challenges in Computed Tomography (CT) is how to balance between the amount of radiation the patient is exposed to during scan time and the quality of the CT image. We propose a mathematical model for adaptive CT acquisition whose goal is to reduce dosage levels while maintaining high image quality at the same time. The adaptive algorithm iterates between selective limited acquisition and improved reconstruction, with the goal of applying only the dose level required for sufficient image quality. The theoretical foundation of the algorithm is nonlinear Ridgelet approximation and a discrete form of Ridgelet analysis is used to compute the selective acquisition steps that best capture the image edges. We show experimental results where for the same number of line projections, the adaptive model produces higher image quality, when compared with standard limited angle, non-adaptive acquisition algorithms.
6 0.75523496 338 cvpr-2013-Probabilistic Elastic Matching for Pose Variant Face Verification
7 0.75179791 231 cvpr-2013-Joint Detection, Tracking and Mapping by Semantic Bundle Adjustment
8 0.74988312 86 cvpr-2013-Composite Statistical Inference for Semantic Segmentation
9 0.7495966 172 cvpr-2013-Finding Group Interactions in Social Clutter
10 0.74428272 371 cvpr-2013-SCaLE: Supervised and Cascaded Laplacian Eigenmaps for Visual Object Recognition Based on Nearest Neighbors
11 0.74411803 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns
12 0.74132878 292 cvpr-2013-Multi-agent Event Detection: Localization and Role Assignment
13 0.74047613 1 cvpr-2013-3D-Based Reasoning with Blocks, Support, and Stability
14 0.73956358 392 cvpr-2013-Separable Dictionary Learning
15 0.73690981 61 cvpr-2013-Beyond Point Clouds: Scene Understanding by Reasoning Geometry and Physics
16 0.7360521 70 cvpr-2013-Bottom-Up Segmentation for Top-Down Detection
17 0.73340261 135 cvpr-2013-Discriminative Subspace Clustering
18 0.73175013 445 cvpr-2013-Understanding Bayesian Rooms Using Composite 3D Object Models
19 0.72989178 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases
20 0.72981864 372 cvpr-2013-SLAM++: Simultaneous Localisation and Mapping at the Level of Objects