cvpr cvpr2013 cvpr2013-109 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Raffay Hamid, Dennis Decoste, Chih-Jen Lin
Abstract: We present a robust and efficient technique for matching dense sets of points undergoing non-rigid spatial transformations. Our main intuition is that the subset of points that can be matched with high confidence should be used to guide the matching procedure for the rest. We propose a novel algorithm that incorporates these high-confidence matches as a spatial prior to learn a discriminative subspace that simultaneously encodes both the feature similarity as well as their spatial arrangement. Conventional subspace learning usually requires spectral decomposition of the pair-wise distance matrix across the point-sets, which can become inefficient even for moderately sized problems. To this end, we propose the use of random projections for approximate subspace learning, which can provide significant time improvements at the cost of minimal precision loss. This efficiency gain allows us to iteratively find and remove high-confidence matches from the point sets, resulting in high recall. To show the effectiveness of our approach, we present a systematic set of experiments and results for the problem of dense non-rigid image-feature matching.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We present a robust and efficient technique for matching dense sets of points undergoing non-rigid spatial transformations. [sent-4, score-0.359]
2 Our main intuition is that the subset of points that can be matched with high confidence should be used to guide the matching procedure for the rest. [sent-5, score-0.292]
3 We propose a novel algorithm that incorporates these high-confidence matches as a spatial prior to learn a discriminative subspace that simultaneously encodes both the feature similarity as well as their spatial arrangement. [sent-6, score-0.68]
4 Conventional subspace learning usually requires spectral decomposition of the pair-wise distance matrix across the point-sets, which can become inefficient even for moderately sized problems. [sent-7, score-0.578]
5 To this end, we propose the use of random projections for approximate subspace learning, which can provide significant time improvements at the cost of minimal precision loss. [sent-8, score-0.713]
6 This problem is particularly challenging as point-sets become more dense, and their spatial transformations become more nonrigid. [sent-13, score-0.158]
7 Some of these challenges can be addressed by trying to maintain the spatial arrangements of corresponding points during matching. [sent-15, score-0.186]
8 Most of the previous approaches that take the spatial arrangement of points into account are computationally expensive, and are therefore not feasible for dense matching [2] [28] [16] [4]. [sent-16, score-0.339]
9 Recently, Torki and Elgammal [26] proposed an efficient method for matching points in a lower-dimensional subspace that simultaneously encodes spatial consistency and feature similarity [26] [27]. [sent-17, score-0.321]
10 We use random projections to learn S approximately but efficiently. [sent-22, score-0.184]
11 We project feature-points to S, and use bipartite graph matching to find their matches. [sent-23, score-0.13]
12 We select points with high confidence matches, and use them as a spatial prior to learn a subspace that reduces the confusion among the remaining set of points. [sent-24, score-0.558]
13 However, their approach relies on exact spectral decomposition for subspace learning, which limits its efficiency improvements. [sent-26, score-0.339]
14 In addition, the method is not robust to large amounts of non-rigid distortion or feature noise. [sent-27, score-0.198]
15 Our approach has two key elements: • Iterative matching with spatial priors: We propose to use the subset of high confidence matches as spatial priors, to learn a subspace that reduces the confusion among the remaining set of points. [sent-29, score-0.62]
16 • Approximate subspace learning with random projections: Instead of using exact spectral decomposition, we propose the use of random projections [31] for approximate subspace learning. [sent-32, score-1.07]
17 This significantly reduces the computational complexity of subspace learning at the cost of minimal precision loss, and makes it feasible to tackle matching problems that are prohibitively expensive for existing approaches. [sent-33, score-0.592]
18 To show the competence of our framework, we present a comparative analysis of how different approaches perform for varying levels of feature noise and non-rigid distortion. [sent-34, score-0.149]
19 We start by formalizing an approach for point matching using subspace learning in Section 2, followed by the details of our approach of iterative subspace learning using spatial priors, presented in Section 3. [sent-37, score-0.966]
20 In Section 4 we explain how to efficiently learn approximate subspaces using random projections as opposed to exact spectral decomposition. [sent-38, score-0.409]
21 Point Matching Using Subspace Learning For dense point matching problems, it is important that not only the feature similarity of matched points is maximized, but also that their spatial arrangements are maintained. [sent-41, score-0.452]
22 To this end, subspace learning approaches try to find a lower dimensional manifold that maintains the across-set feature similarity as well as the within-set spatial arrangement of points. [sent-42, score-0.641]
23 The intuition here is that a matching problem based on the similarities in the learned subspace will implicitly also take into account the points' spatial arrangements. [sent-43, score-0.217]
24 The matching problem can then be expressed as a classic bipartite matching problem in the learned subspace, which can be solved using various methods [10] [23], as sketched below. [sent-44, score-0.154]
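To make this concrete, here is a minimal Python sketch (not from the paper) of one standard way to solve the bipartite problem: build a cost matrix from pair-wise distances between the embedded point-sets and run the Hungarian algorithm via SciPy. This is only one of the "various methods" referenced above; the function and its parameters are illustrative.

```python
# One illustrative solver for the bipartite assignment in the learned
# subspace: Hungarian algorithm on a pair-wise distance cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def bipartite_match(Y1, Y2):
    cost = cdist(Y1, Y2)                     # distances in the learned subspace
    rows, cols = linear_sum_assignment(cost) # minimum-cost one-to-one assignment
    return list(zip(rows, cols))             # (i, j) correspondences
```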
25 We now formally define the subspace learning problem that forms the basis of our approach described in the following sections. [sent-45, score-0.378]
26 Preliminaries We follow Torki and Elgammal’s formulation of the subspace learning problem [26]. [sent-48, score-0.378]
27 Consider two sets of feature points X1 and X2 from two images, where Xk = {(x1k, f1k), · · · , (xNkk, fNkk)} for k ∈ {1, 2}, and Nk denotes the number of points in the kth point-set, while N denotes the total number of points in both point-sets. [sent-49, score-0.264]
28 Each point has a spatial coordinate xik ∈ R2 and a feature descriptor fik ∈ RD, where D is the dimensionality of the descriptor. [sent-51, score-0.203]
29 The spatial arrangement of points Xk is encoded in a spatial affinity matrix denoted by Sik,j = Ks (xik , xjk). [sent-52, score-0.39]
30 Here, Ks(·, ·) is a spatial kernel that measures the spatial proximity of points i and j in set k. [sent-53, score-0.236]
31 Similarly, the feature similarity of point pairs across X1 and X2 is encoded in a feature affinity matrix Uip,jq = Kf (fi1, fj2), where Kf(·, ·) is an affinity kernel that measures the similarity of feature i in set p to feature j in set q. [sent-54, score-0.265]
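A minimal sketch of these two affinity matrices, assuming Gaussian kernels for both Ks and Kf (the paper does not pin the kernels down here; the bandwidths sigma_s and sigma_u match the σs and σu used in the experiments):

```python
# Spatial affinity S within one point-set and feature affinity U across
# the two sets, both with assumed Gaussian kernels.
import numpy as np
from scipy.spatial.distance import cdist

def spatial_affinity(xs, sigma_s):
    """S[i, j] = K_s(x_i, x_j) for one point-set; xs has shape (N, 2)."""
    d2 = cdist(xs, xs, metric="sqeuclidean")
    return np.exp(-d2 / (2.0 * sigma_s ** 2))

def feature_affinity(f1, f2, sigma_u):
    """U[i, j] = K_f(f_i^1, f_j^2) across sets; f1: (N1, D), f2: (N2, D)."""
    d2 = cdist(f1, f2, metric="sqeuclidean")
    return np.exp(-d2 / (2.0 * sigma_u ** 2))
```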
32 Here yik ∈ Rd denotes the projected coordinates of point xik, and d is the subspace dimensionality. [sent-61, score-0.444]
33 Intuitively, this objective brings the subspace coordinates yik and yjk of any two points xik and xjk close to each other if their spatial affinity Sik,j is high, and minimizes the across-set distances weighted by the feature similarity. [sent-72, score-0.819]
34 The matrix A is symmetric by definition, since the diagonal blocks are symmetric, and Up,q = (Uq,p)T. Equation 2 is equivalent to the Laplacian embedding problem for the point-set defined by the matrix A [1]. [sent-84, score-0.14]
35 Stacking the subspace coordinates into the N × d matrix Y: Y = [y11, · · · , yN11, y12, · · · , yN22, · · · , y1K, · · · , yNKK]T (5) Equation 4 is a generalized Eigenvector problem [1]: Ly = λDy (6) The optimal solution of Equation 4 can therefore be obtained by the smallest d generalized Eigenvectors of L. [sent-114, score-0.52]
36 The N subspace-embedded points Y are stacked as rows of d-dimensional vectors, such that the embedding of the first point-set occupies the first N1 rows, followed by N2 rows for the second point-set. [sent-115, score-0.13]
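A hedged sketch of this exact embedding step (Equations 5-6): assemble the joint affinity A from the spatial blocks and the feature block, form L = D - A, and take the smallest non-trivial generalized eigenvectors of Ly = λDy. The unweighted block assembly and the skipping of the trivial constant eigenvector are assumptions about details the text does not spell out.

```python
# Exact Laplacian embedding of the joint point-set, assuming equal block
# weights. D must be positive definite (no isolated points).
import numpy as np
from scipy.linalg import eigh

def laplacian_embedding(S1, S2, U, d):
    N1 = S1.shape[0]
    N = N1 + S2.shape[0]
    A = np.zeros((N, N))
    A[:N1, :N1] = S1                   # within-set spatial affinities
    A[N1:, N1:] = S2
    A[:N1, N1:] = U                    # across-set feature affinities
    A[N1:, :N1] = U.T                  # A is symmetric by construction
    Dg = np.diag(A.sum(axis=1))
    L = Dg - A
    vals, vecs = eigh(L, Dg)           # generalized problem L y = lambda D y
    Y = vecs[:, 1:d + 1]               # skip trivial eigenvector, keep next d
    return Y[:N1], Y[N1:]              # embeddings Y1, Y2
```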
37 Bipartite Matching in the Learned Subspace Once the lower-dimensional subspace has been learned, we can compute the affinity matrix G between the projected point-sets Y1 and Y2. [sent-118, score-0.487]
38 Following the Scott and Longuet-Higgins correspondence algorithm [23], the matching problem can be expressed as finding a permutation matrix C that rearranges the rows of G to maximize its trace. [sent-119, score-0.251]
39 When the exact permutation constraint on C is relaxed to an orthonormal matrix constraint, the permuted version of matrix G with maximized trace can be computed as: G∗ = UIVT (7) where the SVD decomposition of G is UΣVT. [sent-120, score-0.311]
40 The value of Gi∗,j can be interpreted as the confidence of the matching between points i and j. [sent-122, score-0.194]
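A sketch of this relaxed step: build G from the embedded sets, replace its singular values with ones (G* = U I VT), and read off mutual-best matches above a confidence threshold. The Gaussian bandwidth sigma, the threshold tau, and the mutual-best rule are assumptions standing in for details given in Algorithm 1.

```python
# SVD-based soft correspondence in the learned subspace (Scott and
# Longuet-Higgins style). sigma and tau are assumed parameters.
import numpy as np
from scipy.spatial.distance import cdist

def soft_correspondence(Y1, Y2, sigma=1.0, tau=0.7):
    G = np.exp(-cdist(Y1, Y2, "sqeuclidean") / (2 * sigma ** 2))
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    G_star = U @ Vt                          # singular values replaced by 1
    matches = []
    for i in range(G_star.shape[0]):
        j = int(np.argmax(G_star[i]))
        # keep (i, j) only if it dominates both its row and column
        if G_star[i, j] >= tau and i == int(np.argmax(G_star[:, j])):
            matches.append((i, j, float(G_star[i, j])))
    return matches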
41 The overall subspace learning algorithm is summarized in Algorithm 1. [sent-123, score-0.378]
42 Subspace Learning with Spatial Priors The space complexity of Algorithm 1 is O(N2), which is significantly less than the O(N4) complexity for most of the previous point matching approaches that also incorporate spatial constraints [2] [28] [16] [4]. [sent-125, score-0.168]
43 To motivate the usage of spatial priors in subspace learning, we analyze how the gain in space complexity offered by Algorithm 1 affects its matching performance, especially when there is significant confusion between feature points. [sent-126, score-0.571]
44 This ensures that there are many local minima in the matching space of the points from the two images. [sent-131, score-0.164]
45 We vary either σs or σu, while keeping the threshold for selecting confident matches at a fixed value. [sent-132, score-0.132]
46 Notice, however, that considering precision alone, this approach can successfully find a subset of highly confident matches, which are very likely to be correct. [sent-136, score-0.185]
47 Based on these observations, we propose to use the maximally confident matches in order to limit the search-space of the points that initially do not have any sufficiently confident correspondence. [sent-137, score-0.264]
48 We use this spatial prior to iteratively learn a more discriminative subspace which enables us to find and remove strong matches, until all the sufficiently strong matches are found. [sent-138, score-0.528]
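A high-level sketch of this iterative scheme, in the spirit of Algorithm 2. Here `learn_subspace` is a hypothetical helper standing in for the prior-augmented embedding described in the next subsection, `soft_correspondence` is the sketch above, and the stopping rule (no new confident matches) is an assumption.

```python
# Iterative matching with spatial priors (schematic; helper functions
# learn_subspace and soft_correspondence stand in for the steps above).
def iterative_matching(X1, F1, X2, F2, tau=0.7, max_iters=10):
    anchors = []                            # confident (p, q, conf) found so far
    remaining1 = list(range(len(X1)))
    remaining2 = list(range(len(X2)))
    for _ in range(max_iters):
        # hypothetical: embed the remaining points, guided by the anchors
        Y1, Y2 = learn_subspace(X1, F1, X2, F2,
                                remaining1, remaining2, anchors)
        new = [(remaining1[i], remaining2[j], c)
               for i, j, c in soft_correspondence(Y1, Y2, tau=tau)]
        if not new:
            break                           # no sufficiently confident match left
        anchors.extend(new)                 # confident matches become priors
        got1 = {p for p, _, _ in new}
        got2 = {q for _, q, _ in new}
        remaining1 = [p for p in remaining1 if p not in got1]
        remaining2 = [q for q in remaining2 if q not in got2]
    return anchors
```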
49 Incorporating Spatial Priors The main intuition behind how we incorporate spatial priors in subspace learning is as follows. [sent-141, score-0.59]
50 Suppose we know the ground truth mappings between a subset of points P ⊂ X1, each with a mapping to another subset of points Q ⊂ X2. [sent-144, score-0.209]
51 This operation can be interpreted as a projection of points from their respective 2-D image coordinate spaces to a shared m-dimensional space, where each dimension corresponds to how close a point is to one of the m anchor-points (see Figure 3). [sent-149, score-0.211]
52 With both point-sets {X1 \ P} and {X2 \ Q} in a shared projected space, we compute their pair-wise L2 distances to construct an (N1 − m) × (N2 − m) matrix D. [sent-150, score-0.14]
53 To convert D from a distance to a similarity matrix, we apply a decaying exponential kernel to construct a similarity matrix H. [sent-151, score-0.152]
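A compact sketch of this anchor-point projection: each unmatched point is described by its distances to the m matched anchors in its own image, the two sets are compared in that shared m-dimensional space, and a decaying exponential turns distances into similarities. The bandwidth sigma_h is an assumed parameter.

```python
# Spatial-prior similarity H between the unmatched points of the two sets.
import numpy as np
from scipy.spatial.distance import cdist

def prior_similarity(X1_rest, P_anchors, X2_rest, Q_anchors, sigma_h=1.0):
    Z1 = cdist(X1_rest, P_anchors)      # (N1 - m) x m anchor coordinates
    Z2 = cdist(X2_rest, Q_anchors)      # (N2 - m) x m anchor coordinates
    D = cdist(Z1, Z2)                   # pair-wise L2 in the shared space
    return np.exp(-D / sigma_h)         # similarity matrix H
```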
54 Figure 4: The matching result for the problem given in Figure 2 using iterative spatial priors (Algorithm 2). [sent-160, score-0.293]
55 The matching result for the problem given in Figure 2 using our iterative spatial priors scheme (Algorithm 2) is presented in Figure 4. [sent-166, score-0.293]
56 As shown, we are able to achieve significantly higher precision and recall rates compared to Algorithm 1 for any value of σs or σu. [sent-167, score-0.153]
57 Random Projections for Subspace Learning We now turn our attention to the efficiency aspects of our iterative subspace learning approach for the dense point matching problem. [sent-169, score-0.56]
58 Conventional subspace learning usually requires spectral decomposition of the point-set's pairwise distance matrix, which for larger matrices can be computationally expensive. [sent-170, score-0.54]
59 Repeating this procedure multiple times in Algorithm 2 while using an exact matrix decomposition approach would make it infeasible. [sent-171, score-0.176]
60 To this end, we propose to use the method of random projections [31] as an approximate way to perform spectral partitioning of matrix L. [sent-172, score-0.38]
61 Moreover, the Eigenvectors we require are the ones least impacted by the errors introduced by using an approximate subspace instead of an exact one [14]. [sent-174, score-0.441]
62 From Generalized to Regular Eigenvectors To use random projections for subspace learning, we first need to convert the generalized Eigenvector problem of Equation 6 to a regular Eigenvector problem. [sent-177, score-0.641]
63 Step 2 - Equation 9 can be re-written as: D−1/2AD−1/2D1/2x = λ2D1/2x (12) Denoting D1/2x = y (13) and B = D−1/2AD−1/2 (14) Equation 12 becomes By = λ2y (15) Using Equation 13, we can find the largest k generalized Eigenvectors of A from the largest k regular Eigenvectors of B. [sent-183, score-0.215]
64 In step 1 we have already shown that the largest k generalized Eigenvectors of A correspond to the smallest k generalized Eigenvectors of L. [sent-184, score-0.213]
65 Combining steps 1 and 2 lets us find the smallest k generalized Eigenvectors of L by finding the largest k regular Eigenvectors of B. [sent-185, score-0.218]
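A sketch of this conversion (Equations 12-15): form B = D^(-1/2) A D^(-1/2), solve the regular eigenproblem on B, then map its eigenvectors y back to generalized eigenvectors x = D^(-1/2) y of A. The diagonal-degree construction of D is the only assumption.

```python
# Convert the generalized eigenproblem A x = lambda D x into a regular
# one B y = lambda y, with y = D^{1/2} x.
import numpy as np

def to_regular_problem(A):
    d = A.sum(axis=1)                        # degree matrix D = diag(d)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    B = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # D^{-1/2} A D^{-1/2}
    return B, d_inv_sqrt

def recover_generalized(eigvecs_B, d_inv_sqrt):
    return eigvecs_B * d_inv_sqrt[:, None]   # x = D^{-1/2} y
```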
66 Approximate Subspace Learning Having converted our generalized Eigenvector problem in L and D into a regular Eigenvector one in B, we now explain how to find the top k approximate Eigenvectors of B using random projections [14]. [sent-188, score-0.399]
67 Given a matrix B, a target rank k, and an oversampling parameter p, we seek to construct a matrix Q with orthonormal columns such that ||B − QQTB|| is small. [sent-189, score-0.14]
68 Given such a matrix Q, we seek to find an approximate decomposition of B such that B ≈ UΣVT, where U and V are the Eigenvectors for the row and column spaces of B respectively, while Σ holds the corresponding Eigenvalues. [sent-198, score-0.209]
69 One could therefore generate a linearly independent subspace Y of rank k that spans the range of matrix B, by simply stacking k randomly generated vectors into Ω and multiplying them by B. [sent-206, score-0.438]
70 This allows one to generate a linearly independent subspace in a single sweep of B, while fully exploiting the multi-core processing power of modern machines using BLAS 3. [sent-207, score-0.339]
71 The overall scheme to find the top k approximate Eigenvectors of B using random projections is listed in Algorithm 3. [sent-210, score-0.281]
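A minimal sketch in the spirit of that scheme (following the randomized range-finding approach of Halko et al. [14]): sample the range of B with a Gaussian test matrix, orthonormalize in a single pass, and solve a small projected eigenproblem. The oversampling default p = 10 is an assumption; the paper's exact Algorithm 3 may differ in detail.

```python
# Randomized approximate top-k eigendecomposition of a symmetric B.
import numpy as np

def randomized_eigh(B, k, p=10, seed=0):
    n = B.shape[0]
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k + p))  # random test matrix
    Y = B @ Omega                            # one sweep of B (BLAS-3 friendly)
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis for range(B)
    T = Q.T @ B @ Q                          # small (k+p) x (k+p) projection
    vals, vecs = np.linalg.eigh(T)
    idx = np.argsort(vals)[::-1][:k]         # largest k eigenpairs
    return vals[idx], Q @ vecs[:, idx]       # approximate eigenpairs of B
```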
72 While there are public data sets available for image feature matching problems, they either tackle dense but affine transformations [20], or sparse but nonrigid transformations [9]. [sent-213, score-0.399]
73 We therefore decided to simulate nonrigid transformations on our test images to obtain ground-truth feature mappings, and systematically study the performance of different algorithms for dense non-rigid matching. [sent-215, score-0.233]
74 We add random amounts of perturbations to the grid-points. [sent-219, score-0.17]
75 We also control how much rotation and feature noise we add to make the matching task more or less difficult. [sent-222, score-0.202]
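A sketch of this synthetic-distortion protocol: place a regular grid over the image and jitter each grid point by a random perturbation whose scale controls the amount of non-rigid distortion. The grid size and noise scale below are assumptions for illustration.

```python
# Generate a perturbed grid of points; larger `scale` means more distortion.
import numpy as np

def perturbed_grid(width, height, n=20, scale=5.0, seed=0):
    rng = np.random.default_rng(seed)
    xs, ys = np.meshgrid(np.linspace(0, width, n), np.linspace(0, height, n))
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
    return grid + rng.normal(0.0, scale, size=grid.shape)
```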
76 The precision and recall curves for this set of experiments are shown in Figure 6. [sent-230, score-0.153]
77 The average time taken, precision and recall rates for these algorithms for a fixed noise level (14) are given in Table 1. [sent-231, score-0.2]
78 While the greedy algorithm takes less than a second to complete, its precision rate degrades quite steeply with noise. [sent-233, score-0.175]
79 The best performance in terms of both precision and recall is achieved by ISP; however, it takes more than twice as much time as Torki and Elgammal [26] does. [sent-235, score-0.153]
80 Non-Rigid Distortion Analysis To analyze the performance of the considered algorithms for different amounts of non-rigid transformations, we generate test images with varying distortion. Figure 5 (panels: Input Image, Less Distortion, More Distortion): Given an image, we define a grid of points over it. [sent-239, score-0.156]
81 We can control the amount of added distortion by varying the random amounts of perturbations added to each of these grid-points. [sent-240, score-0.288]
82 Figure 6: Average precision and recall curves for 10 trials of varying amounts of feature noise. [sent-241, score-0.195]
83 The amount of feature noise for this experiment was set at 15 times the norm of average feature values. [sent-244, score-0.148]
84 The precision and recall curves for this experiment are shown in Figure 7. [sent-245, score-0.153]
85 Torki and Elgammal [26] achieve high precision; however, their recall gradually degrades with increasing amounts of nonrigid transformation. [sent-247, score-0.344]
86 Related Work Feature matching is a well-explored problem, where points undergoing perspective [29] [15] as well as non-rigid geometric transformations have been studied [17]. [sent-259, score-0.305]
87 In one class of approaches, feature matching is framed as a graph isomorphism problem between two weighted or unweighted graphs in order to enforce edge compatibility [24] [30]; another class uses spectral decomposition (e.g., SVD [12] [13]) to find manifold subspaces that minimize distances between corresponding points [2] [27]. [sent-331, score-0.152]
88 Several approaches use higher order spatial consistency constraints [7]; however, such constraints are not necessarily always helpful [2]. [sent-332, score-0.2]
89 Conventionally, work in manifold learning has focused on finding exact subspaces, which can be computationally quite costly [6]. [sent-333, score-0.138]
90 Figure 7: Distortion Analysis: average precision and recall curves for 10 trials of varying amounts of non-rigid image distortion. [sent-342, score-0.312]
91 Table 1: Average time taken (T), precision (P), and recall (R) of different algorithms for ten runs of tests at a noise level of 14 times the norm of average feature values. [sent-357, score-0.142]
92 More recent methods instead find approximate subspaces in return for significant efficiency gains at only minimal precision loss [3] [31]. [sent-363, score-0.187]
93 This work shows how approximate subspace learning techniques could be useful for the feature-matching problem. [sent-365, score-0.442]
94 Conclusions and Future Work We presented a novel method for matching dense point-sets undergoing non-rigid transformation. [sent-367, score-0.203]
95 Our approach iteratively incorporates high-confidence matches as a spatial prior to learn discriminative subspaces that encode both the feature similarity and their spatial arrangement. [sent-368, score-0.402]
96 We proposed the use of random projections for approximate subspace learning that provides significant time improvements at the cost of minimal precision loss. [sent-369, score-0.752]
97 Currently, for each iteration of our algorithm we find a low-dimensional subspace independently of our previously computed subspaces. [sent-371, score-0.372]
98 Going forward, we plan to use the previously computed subspace as a warm-start for the next subspace to gain more efficiency. [sent-372, score-0.678]
99 Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. [sent-455, score-0.134]
100 An eigen decomposition approach to weighted graph matching problems. [sent-545, score-0.187]
wordName wordTfidf (topN-words)
[('fp', 0.351), ('subspace', 0.339), ('torki', 0.336), ('fn', 0.258), ('tp', 0.216), ('elgammal', 0.186), ('eigenvectors', 0.168), ('isp', 0.15), ('projections', 0.141), ('xik', 0.124), ('precision', 0.096), ('xjk', 0.096), ('eigenvector', 0.088), ('matching', 0.088), ('rp', 0.086), ('priors', 0.083), ('distortion', 0.082), ('spatial', 0.08), ('amounts', 0.08), ('transformations', 0.078), ('matches', 0.076), ('points', 0.076), ('kf', 0.073), ('generalized', 0.073), ('matrix', 0.07), ('decomposition', 0.068), ('yik', 0.068), ('nonrigid', 0.067), ('bipartite', 0.066), ('approximate', 0.064), ('undergoing', 0.063), ('spectral', 0.062), ('subspaces', 0.061), ('dirmeesnpseicotnivael', 0.058), ('eho', 0.058), ('ffoorrmm', 0.058), ('uivt', 0.058), ('yjk', 0.058), ('yjq', 0.058), ('ynkk', 0.058), ('ks', 0.058), ('recall', 0.057), ('mappings', 0.057), ('confident', 0.056), ('equation', 0.056), ('svd', 0.054), ('dense', 0.052), ('ebay', 0.052), ('matched', 0.049), ('intuition', 0.049), ('noise', 0.047), ('perturbations', 0.047), ('bandwidths', 0.045), ('taiwan', 0.045), ('regular', 0.045), ('degrades', 0.044), ('spaces', 0.044), ('arrangement', 0.043), ('golub', 0.043), ('fik', 0.043), ('trials', 0.043), ('random', 0.043), ('iterative', 0.042), ('similarity', 0.041), ('affinity', 0.041), ('duchenne', 0.04), ('learning', 0.039), ('atr', 0.039), ('exact', 0.038), ('pages', 0.038), ('projected', 0.037), ('feature', 0.036), ('varying', 0.036), ('smallest', 0.035), ('greedy', 0.035), ('sk', 0.035), ('correspondence', 0.034), ('find', 0.033), ('confusion', 0.033), ('maximized', 0.033), ('shared', 0.033), ('xk', 0.033), ('largest', 0.032), ('permutation', 0.032), ('matrices', 0.032), ('laplacian', 0.031), ('rotation', 0.031), ('graph', 0.031), ('arrangements', 0.03), ('maintains', 0.03), ('perturbation', 0.03), ('minimal', 0.03), ('runs', 0.03), ('comparative', 0.03), ('confidence', 0.03), ('stacking', 0.029), ('mc', 0.029), ('norm', 0.029), ('incorporates', 0.028), ('rows', 0.027)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000007 109 cvpr-2013-Dense Non-rigid Point-Matching Using Random Projections
Author: Raffay Hamid, Dennis Decoste, Chih-Jen Lin
Abstract: We present a robust and efficient technique for matching dense sets of points undergoing non-rigid spatial transformations. Our main intuition is that the subset of points that can be matched with high confidence should be used to guide the matching procedure for the rest. We propose a novel algorithm that incorporates these high-confidence matches as a spatial prior to learn a discriminative subspace that simultaneously encodes both the feature similarity as well as their spatial arrangement. Conventional subspace learning usually requires spectral decomposition of the pair-wise distance matrix across the point-sets, which can become inefficient even for moderately sized problems. To this end, we propose the use of random projections for approximate subspace learning, which can provide significant time improvements at the cost of minimal precision loss. This efficiency gain allows us to iteratively find and remove high-confidence matches from the point sets, resulting in high recall. To show the effectiveness of our approach, we present a systematic set of experiments and results for the problem of dense non-rigid image-feature matching.
2 0.1803131 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds
Author: Bo Wang, Zhuowen Tu
Abstract: With the increasing availability of high dimensional data and demand in sophisticated data analysis algorithms, manifold learning becomes a critical technique to perform dimensionality reduction, unraveling the intrinsic data structure. The real-world data however often come with noises and outliers; seldom, all the data live in a single linear subspace. Inspired by the recent advances in sparse subspace learning and diffusion-based approaches, we propose a new manifold denoising algorithm in which data neighborhoods are adaptively inferred via sparse subspace reconstruction; we then derive a new formulation to perform denoising to the original data. Experiments carried out on both toy and real applications demonstrate the effectiveness of our method; it is insensitive to parameter tuning and we show significant improvement over the competing algorithms.
3 0.17472953 135 cvpr-2013-Discriminative Subspace Clustering
Author: Vasileios Zografos, Liam Ellis, Rudolf Mester
Abstract: We present a novel method for clustering data drawn from a union of arbitrary dimensional subspaces, called Discriminative Subspace Clustering (DiSC). DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined together in an ensemble, in order to obtain the final clustering result. We have tested our method with 4 challenging datasets and compared against 8 state-of-the-art methods from literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, and also of low computational complexity.
4 0.17006567 215 cvpr-2013-Improved Image Set Classification via Joint Sparse Approximated Nearest Subspaces
Author: Shaokang Chen, Conrad Sanderson, Mehrtash T. Harandi, Brian C. Lovell
Abstract: Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters to represent different characteristics of an object, due to different undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
5 0.12172864 379 cvpr-2013-Scalable Sparse Subspace Clustering
Author: Xi Peng, Lei Zhang, Zhang Yi
Abstract: In this paper, we address two problems in Sparse Subspace Clustering algorithm (SSC), i.e., scalability issue and out-of-sample problem. SSC constructs a sparse similarity graph for spectral clustering by using ℓ1-minimization based coefficients, has achieved state-of-the-art results for image clustering and motion segmentation. However, the time complexity of SSC is proportion to the cubic of problem size such that it is inefficient to apply SSC into large scale setting. Moreover, SSC does not handle with out-of-sample data that are not used to construct the similarity graph. For each new datum, SSC needs recalculating the cluster membership of the whole data set, which makes SSC is not competitive in fast online clustering. To address the problems, this paper proposes out-of-sample extension of SSC, named as Scalable Sparse Subspace Clustering (SSSC), which makes SSC feasible to cluster large scale data sets. The solution of SSSC adopts a ”sampling, clustering, coding, and classifying” strategy. Extensive experimental results on several popular data sets demonstrate the effectiveness and efficiency of our method comparing with the state-of-the-art algorithms.
6 0.11806628 437 cvpr-2013-Towards Fast and Accurate Segmentation
7 0.11296041 46 cvpr-2013-Articulated and Restricted Motion Subspaces and Their Signatures
8 0.10819165 64 cvpr-2013-Blessing of Dimensionality: High-Dimensional Feature and Its Efficient Compression for Face Verification
10 0.096496366 250 cvpr-2013-Learning Cross-Domain Information Transfer for Location Recognition and Clustering
11 0.092648536 360 cvpr-2013-Robust Estimation of Nonrigid Transformation for Point Set Registration
12 0.087971441 107 cvpr-2013-Deformable Spatial Pyramid Matching for Fast Dense Correspondences
13 0.081107341 319 cvpr-2013-Optimized Product Quantization for Approximate Nearest Neighbor Search
14 0.079771876 234 cvpr-2013-Joint Spectral Correspondence for Disparate Image Matching
15 0.079698652 388 cvpr-2013-Semi-supervised Learning of Feature Hierarchies for Object Detection in a Video
16 0.079666682 253 cvpr-2013-Learning Multiple Non-linear Sub-spaces Using K-RBMs
17 0.079350576 113 cvpr-2013-Dense Variational Reconstruction of Non-rigid Surfaces from Monocular Video
18 0.07921946 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features
19 0.078556754 82 cvpr-2013-Class Generative Models Based on Feature Regression for Pose Estimation of Object Categories
20 0.077696979 93 cvpr-2013-Constraints as Features
topicId topicWeight
[(0, 0.189), (1, 0.024), (2, -0.043), (3, 0.061), (4, 0.045), (5, -0.025), (6, -0.029), (7, -0.133), (8, -0.056), (9, -0.078), (10, 0.029), (11, 0.02), (12, -0.056), (13, -0.104), (14, -0.021), (15, -0.026), (16, -0.086), (17, -0.023), (18, -0.041), (19, -0.012), (20, 0.1), (21, 0.027), (22, 0.051), (23, -0.019), (24, 0.028), (25, -0.062), (26, -0.035), (27, -0.115), (28, 0.011), (29, -0.009), (30, 0.01), (31, 0.137), (32, 0.048), (33, -0.097), (34, -0.043), (35, 0.067), (36, -0.021), (37, 0.051), (38, -0.052), (39, 0.032), (40, -0.022), (41, 0.045), (42, -0.012), (43, 0.017), (44, 0.074), (45, -0.051), (46, 0.006), (47, -0.06), (48, -0.074), (49, -0.006)]
simIndex simValue paperId paperTitle
same-paper 1 0.95215917 109 cvpr-2013-Dense Non-rigid Point-Matching Using Random Projections
Author: Raffay Hamid, Dennis Decoste, Chih-Jen Lin
Abstract: We present a robust and efficient technique for matching dense sets of points undergoing non-rigid spatial transformations. Our main intuition is that the subset of points that can be matched with high confidence should be used to guide the matching procedure for the rest. We propose a novel algorithm that incorporates these high-confidence matches as a spatial prior to learn a discriminative subspace that simultaneously encodes both the feature similarity as well as their spatial arrangement. Conventional subspace learning usually requires spectral decomposition of the pair-wise distance matrix across the point-sets, which can become inefficient even for moderately sized problems. To this end, we propose the use of random projections for approximate subspace learning, which can provide significant time improvements at the cost of minimal precision loss. This efficiency gain allows us to iteratively find and remove high-confidence matches from the point sets, resulting in high recall. To show the effectiveness of our approach, we present a systematic set of experiments and results for the problem of dense non-rigid image-feature matching.
2 0.84732729 135 cvpr-2013-Discriminative Subspace Clustering
Author: Vasileios Zografos, Liam Ellis, Rudolf Mester
Abstract: We present a novel method for clustering data drawn from a union of arbitrary dimensional subspaces, called Discriminative Subspace Clustering (DiSC). DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined together in an ensemble, in order to obtain the final clustering result. We have tested our method with 4 challenging datasets and compared against 8 state-of-the-art methods from literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, and also of low computational complexity.
3 0.83931667 215 cvpr-2013-Improved Image Set Classification via Joint Sparse Approximated Nearest Subspaces
Author: Shaokang Chen, Conrad Sanderson, Mehrtash T. Harandi, Brian C. Lovell
Abstract: Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters to represent different characteristics of an object, due to different undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
4 0.83462703 379 cvpr-2013-Scalable Sparse Subspace Clustering
Author: Xi Peng, Lei Zhang, Zhang Yi
Abstract: In this paper, we address two problems in Sparse Subspace Clustering algorithm (SSC), i.e., scalability issue and out-of-sample problem. SSC constructs a sparse similarity graph for spectral clustering by using ℓ1-minimization based coefficients, has achieved state-of-the-art results for image clustering and motion segmentation. However, the time complexity of SSC is proportion to the cubic of problem size such that it is inefficient to apply SSC into large scale setting. Moreover, SSC does not handle with out-of-sample data that are not used to construct the similarity graph. For each new datum, SSC needs recalculating the cluster membership of the whole data set, which makes SSC is not competitive in fast online clustering. To address the problems, this paper proposes out-of-sample extension of SSC, named as Scalable Sparse Subspace Clustering (SSSC), which makes SSC feasible to cluster large scale data sets. The solution of SSSC adopts a ”sampling, clustering, coding, and classifying” strategy. Extensive experimental results on several popular data sets demonstrate the effectiveness and efficiency of our method comparing with the state-of-the-art algorithms.
5 0.80524641 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds
Author: Bo Wang, Zhuowen Tu
Abstract: With the increasing availability of high dimensional data and demand in sophisticated data analysis algorithms, manifold learning becomes a critical technique to perform dimensionality reduction, unraveling the intrinsic data structure. The real-world data however often come with noises and outliers; seldom, all the data live in a single linear subspace. Inspired by the recent advances in sparse subspace learning and diffusion-based approaches, we propose a new manifold denoising algorithm in which data neighborhoods are adaptively inferred via sparse subspace reconstruction; we then derive a new formulation to perform denoising to the original data. Experiments carried out on both toy and real applications demonstrate the effectiveness of our method; it is insensitive to parameter tuning and we show significant improvement over the competing algorithms.
6 0.77766782 191 cvpr-2013-Graph-Laplacian PCA: Closed-Form Solution and Robustness
7 0.70242852 250 cvpr-2013-Learning Cross-Domain Information Transfer for Location Recognition and Clustering
8 0.64552021 93 cvpr-2013-Constraints as Features
9 0.62912792 9 cvpr-2013-A Fast Semidefinite Approach to Solving Binary Quadratic Problems
11 0.58477765 106 cvpr-2013-Deformable Graph Matching
12 0.57922471 259 cvpr-2013-Learning a Manifold as an Atlas
13 0.57485831 35 cvpr-2013-Adaptive Compressed Tomography Sensing
14 0.57459736 429 cvpr-2013-The Generalized Laplacian Distance and Its Applications for Visual Matching
15 0.55868644 46 cvpr-2013-Articulated and Restricted Motion Subspaces and Their Signatures
16 0.54089153 184 cvpr-2013-Gauging Association Patterns of Chromosome Territories via Chromatic Median
17 0.53609508 253 cvpr-2013-Learning Multiple Non-linear Sub-spaces Using K-RBMs
18 0.53446078 23 cvpr-2013-A Practical Rank-Constrained Eight-Point Algorithm for Fundamental Matrix Estimation
19 0.53390878 270 cvpr-2013-Local Fisher Discriminant Analysis for Pedestrian Re-identification
20 0.52380598 64 cvpr-2013-Blessing of Dimensionality: High-Dimensional Feature and Its Efficient Compression for Face Verification
topicId topicWeight
[(10, 0.11), (16, 0.035), (26, 0.059), (28, 0.018), (33, 0.26), (67, 0.089), (69, 0.075), (83, 0.206), (87, 0.065)]
simIndex simValue paperId paperTitle
1 0.89385951 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems
Author: Amit Agrawal, Srikumar Ramalingam
Abstract: Imaging systems consisting of a camera looking at multiple spherical mirrors (reflection) or multiple refractive spheres (refraction) have been used for wide-angle imaging applications. We describe such setups as multi-axial imaging systems, since a single sphere results in an axial system. Assuming an internally calibrated camera, calibration of such multi-axial systems involves estimating the sphere radii and locations in the camera coordinate system. However, previous calibration approaches require manual intervention or constrained setups. We present a fully automatic approach using a single photo of a 2D calibration grid. The pose of the calibration grid is assumed to be unknown and is also recovered. Our approach can handle unconstrained setups, where the mirrors/refractive balls can be arranged in any fashion, not necessarily on a grid. The axial nature of rays allows us to compute the axis of each sphere separately. We then show that by choosing rays from two or more spheres, the unknown pose of the calibration grid can be obtained linearly and independently of sphere radii and locations. Knowing the pose, we derive analytical solutions for obtaining the sphere radius and location. This leads to an interesting result that 6-DOF pose estimation of a multi-axial camera can be done without the knowledge of full calibration. Simulations and real experiments demonstrate the applicability of our algorithm.
2 0.86988121 432 cvpr-2013-Three-Dimensional Bilateral Symmetry Plane Estimation in the Phase Domain
Author: Ramakrishna Kakarala, Prabhu Kaliamoorthi, Vittal Premachandran
Abstract: We show that bilateral symmetry plane estimation for three-dimensional (3-D) shapes may be carried out accurately, and efficiently, in the spherical harmonic domain. Our methods are valuable for applications where spherical harmonic expansion is already employed, such as 3-D shape registration, morphometry, and retrieval. We show that the presence of bilateral symmetry in the 3-D shape is equivalent to a linear phase structure in the corresponding spherical harmonic coefficients, and provide algorithms for estimating the orientation of the symmetry plane. The benefit of using spherical harmonic phase is that symmetry estimation reduces to matching a compact set of descriptors, without the need to solve a correspondence problem. Our methods work on point clouds as well as large-scale mesh models of 3-D shapes.
same-paper 3 0.85561705 109 cvpr-2013-Dense Non-rigid Point-Matching Using Random Projections
Author: Raffay Hamid, Dennis Decoste, Chih-Jen Lin
Abstract: We present a robust and efficient technique for matching dense sets of points undergoing non-rigid spatial transformations. Our main intuition is that the subset of points that can be matched with high confidence should be used to guide the matching procedure for the rest. We propose a novel algorithm that incorporates these high-confidence matches as a spatial prior to learn a discriminative subspace that simultaneously encodes both the feature similarity as well as their spatial arrangement. Conventional subspace learning usually requires spectral decomposition of the pair-wise distance matrix across the point-sets, which can become inefficient even for moderately sized problems. To this end, we propose the use of random projections for approximate subspace learning, which can provide significant time improvements at the cost of minimal precision loss. This efficiency gain allows us to iteratively find and remove high-confidence matches from the point sets, resulting in high recall. To show the effectiveness of our approach, we present a systematic set of experiments and results for the problem of dense non-rigid image-feature matching.
Author: Thomas Berg, Peter N. Belhumeur
Abstract: From a set of images in a particular domain, labeled with part locations and class, we present a method to automatically learn a large and diverse set of highly discriminative intermediate features that we call Part-based One-vs-One Features (POOFs). Each of these features specializes in discrimination between two particular classes based on the appearance at a particular part. We demonstrate the particular usefulness of these features for fine-grained visual categorization with new state-of-the-art results on bird species identification using the Caltech UCSD Birds (CUB) dataset and parity with the best existing results in face verification on the Labeled Faces in the Wild (LFW) dataset. Finally, we demonstrate the particular advantage of POOFs when training data is scarce.
5 0.82702512 465 cvpr-2013-What Object Motion Reveals about Shape with Unknown BRDF and Lighting
Author: Manmohan Chandraker, Dikpal Reddy, Yizhou Wang, Ravi Ramamoorthi
Abstract: We present a theory that addresses the problem of determining shape from the (small or differential) motion of an object with unknown isotropic reflectance, under arbitrary unknown distant illumination, for both orthographic and perspective projection. Our theory imposes fundamental limits on the hardness of surface reconstruction, independent of the method involved. Under orthographic projection, we prove that three differential motions suffice to yield an invariant that relates shape to image derivatives, regardless of BRDF and illumination. Under perspective projection, we show that four differential motions suffice to yield depth and a linear constraint on the surface gradient, with unknown BRDF and lighting. Further, we delineate the topological classes up to which reconstruction may be achieved using the invariants. Finally, we derive a general stratification that relates hardness of shape recovery to scene complexity. Qualitatively, our invariants are homogeneous partial differential equations for simple lighting and inhomogeneous for complex illumination. Quantitatively, our framework shows that the minimal number of motions required to resolve shape is greater for more complex scenes. Prior works that assume brightness constancy, Lambertian BRDF or a known directional light source follow as special cases of our stratification. We illustrate with synthetic and real data how potential reconstruction methods may exploit our framework.
6 0.82594734 111 cvpr-2013-Dense Reconstruction Using 3D Object Shape Priors
7 0.82496005 248 cvpr-2013-Learning Collections of Part Models for Object Recognition
8 0.82117838 119 cvpr-2013-Detecting and Aligning Faces by Image Retrieval
9 0.82047617 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases
10 0.81960219 4 cvpr-2013-3D Visual Proxemics: Recognizing Human Interactions in 3D from a Single Image
11 0.819507 104 cvpr-2013-Deep Convolutional Network Cascade for Facial Point Detection
12 0.81906605 221 cvpr-2013-Incorporating Structural Alternatives and Sharing into Hierarchy for Multiclass Object Recognition and Detection
13 0.81797546 70 cvpr-2013-Bottom-Up Segmentation for Top-Down Detection
14 0.81794536 325 cvpr-2013-Part Discovery from Partial Correspondence
15 0.81720811 416 cvpr-2013-Studying Relationships between Human Gaze, Description, and Computer Vision
16 0.81710458 94 cvpr-2013-Context-Aware Modeling and Recognition of Activities in Video
17 0.81670725 414 cvpr-2013-Structure Preserving Object Tracking
18 0.81657261 225 cvpr-2013-Integrating Grammar and Segmentation for Human Pose Estimation
19 0.81653619 254 cvpr-2013-Learning SURF Cascade for Fast and Accurate Object Detection
20 0.81498325 372 cvpr-2013-SLAM++: Simultaneous Localisation and Mapping at the Level of Objects