cvpr2013-238 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi
Abstract: Symmetric Positive Definite (SPD) matrices have become popular to encode image information. Accounting for the geometry of the Riemannian manifold of SPD matrices has proven key to the success of many algorithms. However, most existing methods only approximate the true shape of the manifold locally by its tangent plane. In this paper, inspired by kernel methods, we propose to map SPD matrices to a high dimensional Hilbert space where Euclidean geometry applies. To encode the geometry of the manifold in the mapping, we introduce a family of provably positive definite kernels on the Riemannian manifold of SPD matrices. These kernels are derived from the Gaussian kernel, but exploit different metrics on the manifold. This lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM and kernel PCA, to the Riemannian manifold of SPD matrices. We demonstrate the benefits of our approach on the problems of pedestrian detection, object categorization, texture analysis, 2D motion segmentation and Diffusion Tensor Imaging (DTI) segmentation.
Reference: text
sentIndex sentText sentNum sentScore
1 Accounting for the geometry of the Riemannian manifold of SPD matrices has proven key to the success of many algorithms. [sent-5, score-0.327]
2 However, most existing methods only approximate the true shape of the manifold locally by its tangent plane. [sent-6, score-0.313]
3 In this paper, inspired by kernel methods, we propose to map SPD matrices to a high dimensional Hilbert space where Euclidean geometry applies. [sent-7, score-0.397]
4 To encode the geometry of the manifold in the mapping, we introduce a family of provably positive definite kernels on the Riemannian manifold of SPD matrices. [sent-8, score-1.005]
5 This lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM and kernel PCA, to the Riemannian manifold of SPD matrices. [sent-10, score-0.442]
6 Symmetric positive definite (SPD) matrices are another class of entities lying on a Riemannian manifold. [sent-15, score-0.447]
7 Examples of SPD matrices in computer vision include covariance region descriptors [19], diffusion tensors [13] and structure tensors [8]. [sent-16, score-0.273]
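A minimal sketch of the first example above, a covariance region descriptor computed from per-pixel features; the particular feature set used here (pixel coordinates, intensity, first-order gradients) and the regularization constant are illustrative assumptions, not necessarily the exact choices of [19]:

```python
# Sketch: covariance region descriptor from per-pixel feature vectors.
# For a region with feature vectors z_p, the descriptor is their d x d
# covariance matrix, which is SPD after a small regularization.
import numpy as np

def covariance_descriptor(gray):
    """gray: 2-D float array (an image region). Returns a 5x5 SPD matrix."""
    h, w = gray.shape
    y, x = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(gray)                  # first-order derivatives
    feats = np.stack([x.ravel(), y.ravel(),     # pixel coordinates
                      gray.ravel(),             # intensity
                      gx.ravel(), gy.ravel()])  # gradients
    C = np.cov(feats)                           # 5x5 covariance
    return C + 1e-6 * np.eye(C.shape[0])        # regularize to keep it SPD

region = np.random.rand(32, 32)                 # stand-in for an image patch
X = covariance_descriptor(region)
```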
8 The most common approach consists in computing the tangent space to the manifold at the mean of the data points to obtain a Euclidean approximation of the manifold [20]. [sent-22, score-0.508]
9 The logarithmic and exponential maps are then iteratively used to map points from the manifold to the tangent space, and vice-versa. [sent-23, score-0.348]
10 Unfortunately, the resulting algorithms suffer from two drawbacks: The iterative use of the logarithmic and exponential maps makes them computationally expensive, and, more importantly, they only approximate true distances on the manifold by Euclidean distances on the tangent space. [sent-24, score-0.406]
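For concreteness, a minimal sketch of these two maps under the affine-invariant metric, one common choice on Sym_d^+; the metric choice and helper names are assumptions here:

```python
# Sketch: exponential and logarithmic maps on Sym_d^+ under the
# affine-invariant metric: Log_P(X) = P^(1/2) logm(P^(-1/2) X P^(-1/2)) P^(1/2).
import numpy as np
from scipy.linalg import expm, logm

def _spd_sqrt(P):
    # symmetric eigendecomposition keeps everything real for SPD input
    w, U = np.linalg.eigh(P)
    return (U * np.sqrt(w)) @ U.T

def log_map(P, X):
    """Map the SPD matrix X to the tangent space at P."""
    s = _spd_sqrt(P)
    s_inv = np.linalg.inv(s)
    return s @ np.real(logm(s_inv @ X @ s_inv)) @ s

def exp_map(P, V):
    """Map the symmetric tangent vector V at P back onto the manifold."""
    s = _spd_sqrt(P)
    s_inv = np.linalg.inv(s)
    return s @ np.real(expm(s_inv @ V @ s_inv)) @ s
```

The per-point matrix square roots, inverses, and matrix logarithms in these maps are exactly the iterative cost the sentence above refers to.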
11 To overcome this limitation, one could follow the idea of kernel methods and embed the manifold in a high-dimensional Reproducing Kernel Hilbert Space (RKHS), to which many Euclidean algorithms can be generalized. [sent-25, score-0.481]
12 In Rn, kernel methods have proven effective for many computer vision tasks. [sent-26, score-0.282]
13 The mapping to a RKHS relies on a kernel function, which, according to Mercer’s theorem, must be positive definite. [sent-27, score-0.341]
14 The Gaussian kernel is perhaps the most popular example of such positive definite kernels on Rn. [sent-28, score-0.738]
15 It would therefore seem natural to adapt this kernel to account for the geometry of Riemannian manifolds by replacing the Euclidean distance in the Gaussian kernel with the geodesic distance on the manifold. [sent-29, score-0.739]
16 However, a kernel derived in this manner is not positive definite in general. [sent-30, score-0.648]
17 In this paper, we aim to generalize the successful and powerful kernel methods to manifold-valued data. [sent-31, score-0.245]
18 We present a family of provably positive definite kernels on Symd+ derived by accounting for the non-linear geometry of the manifold. [sent-33, score-0.662]
19 More specifically, we propose a theoretical framework to analyze the positive definiteness of the Gaussian kernel generated by a distance function on any non-linear manifold. [sent-34, score-0.517]
20 Using this framework, we show that a family of metrics on Symd+ define valid positive definite Gaussian kernels when replacing the Euclidean distance with the distance corresponding to these metrics. [sent-35, score-0.624]
21 We demonstrate the benefits of our manifold-based kernel by exploiting it in four different algorithms. [sent-37, score-0.265]
22 Our experiments show that the resulting manifold kernel methods outperform the corresponding Euclidean kernel methods, as well as the manifold methods that use tangent space approximations. [sent-38, score-0.998]
23 This algorithm has the drawbacks of approximating the manifold by tangent spaces and not scaling with the number of training samples due to the iterative use of exponential and logarithmic maps. [sent-46, score-0.384]
24 Making use of our positive definite kernels yields more efficient and accurate classification algorithms on non-linear manifolds. [sent-47, score-0.517]
25 In the first case, the kernel, derived from the affine-invariant distance, is not positive definite in general [10]. [sent-52, score-0.403]
26 In the second case, the kernel uses the Stein divergence as the distance measure, which is not a true geodesic distance, and is positive definite only for some values of the Gaussian bandwidth parameter σ [9]. [sent-53, score-0.761]
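For reference, a sketch of the Stein divergence as it is usually defined, S(X, Y) = log det((X + Y)/2) − ½ log det(X) − ½ log det(Y); as noted above, exp(−σS) is positive definite only for restricted values of σ:

```python
# Sketch: the Stein (S-)divergence on SPD matrices. It is a symmetric
# divergence, not a geodesic distance.
import numpy as np

def stein_divergence(X, Y):
    """S(X, Y) = log det((X + Y) / 2) - 0.5 * (log det X + log det Y)."""
    _, ld_mid = np.linalg.slogdet(0.5 * (X + Y))
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)
```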
27 For all kernel methods, the optimal choice of σ largely depends on the data distribution and hence constraints on σ are not desirable. [sent-54, score-0.245]
28 Beyond satisfying Mercer’s theorem to generate a valid RKHS, positive definiteness of the kernel is a required condition for the convergence of many kernel-based algorithms. [sent-56, score-0.86]
29 For instance, the Support Vector Machine (SVM) learning problem is convex only when the kernel is positive definite [14]. [sent-57, score-0.63]
30 Similarly, positive definiteness of all participating kernels is required to guarantee convexity in Multiple Kernel Learning (MKL) [22]. [sent-58, score-0.355]
31 Although theories have been proposed to exploit non-positive definite kernels [12, 23], they have not achieved widespread success. [sent-59, score-0.397]
32 Many of these methods first enforce positive definiteness of the kernel by flipping or shifting its negative eigenvalues [23]. [sent-60, score-0.532]
33 Recently, mean-shift clustering with a positive definite heat kernel on Riemannian manifolds was introduced [4]. [sent-62, score-0.742]
34 However, due to the mathematical complexity of the kernel function, computing it is not tractable and hence only an approximation of the true kernel was used in the algorithm. [sent-63, score-0.512]
35 Here, we introduce a family of provably positive definite kernels on Symd+, and show their benefits in various kernelbased algorithms and on several computer vision tasks. [sent-64, score-0.623]
36 Background In this section, we introduce some notions of Riemannian geometry on the manifold of SPD matrices, and discuss the use of kernel methods on non-linear manifolds. [sent-66, score-0.491]
37 The tangent space at a point p on the manifold, TpM, is a vector space that consists of the tangent vectors of all possible curves passing through p. [sent-70, score-0.247]
38 A Riemannian manifold is a differentiable manifold equipped with a smoothly varying inner product on each tangent space. [sent-71, score-0.535]
39 The family of inner products on all tangent spaces is known as the Riemannian metric of the manifold. [sent-72, score-0.249]
40 The geodesic distance between two points on the manifold is defined as the length of the shortest curve connecting the two points. [sent-74, score-0.306]
41 Although a number of metrics on Symd+ have been recently proposed to capture its non-linearity, not all of them arise from a smoothly varying inner product on tangent spaces and thus define a true geodesic distance. [sent-78, score-0.283]
42 The fundamental idea of kernel methods is to map the input data to a high (possibly infinite) dimensional feature space to obtain a richer representation of the data distribution. [sent-86, score-0.304]
43 This concept can be generalized to non-linear manifolds as follows: Each point x on a non-linear manifold M is mapped to a feature vector φ(x) in a Hilbert space H, the Cauchy completion of the space spanned by real-valued functions defined on M. [sent-87, score-0.321]
44 A kernel function k : (M × M) → R is used to define the inner product on H. [sent-88, score-0.245]
45 According to Mercer’s theorem, however, only positive definite kernels define a valid RKHS. [sent-90, score-0.493]
46 To overcome this, most existing methods map the points on the manifold to the tangent space at one point (usually the mean point), thus obtaining a Euclidean representation of the manifold-valued data. [sent-92, score-0.311]
47 As a consequence, there are two advantages in using kernel functions to embed a manifold in an RKHS. [sent-95, score-0.442]
48 Second, as evidenced by the theory of kernel methods on Rn, it yields a much richer representation of the original data distribution. [sent-97, score-0.269]
49 These benefits, however, depend on the condition that the kernel be positive definite. [sent-98, score-0.341]
50 Positive Definite Kernels on Manifolds In this section, we first present a general theory to analyze the positive definiteness of Gaussian kernels defined on manifolds and then introduce a family of provably positive definite kernels on Symd+. [sent-101, score-1.019]
51 The Gaussian Kernel on a Metric Space The Gaussian radial basis function (RBF) has proven very effective in Euclidean space as a positive definite kernel for kernel-based algorithms. [sent-104, score-0.932]
52 In Rn, the Gaussian kernel can be expressed as kG(xi, xj) := exp(−‖xi − xj‖²/2σ²).
53 To define a kernel on a Riemannian manifold, we would like to replace the Euclidean distance by a more accurate geodesic distance on the manifold. [sent-111, score-0.379]
54 However, not all geodesic distances yield positive definite kernels. [sent-112, score-0.487]
55 We now state our main theorem, which gives necessary and sufficient conditions for obtaining a positive definite Gaussian kernel from a distance function. [sent-113, score-0.655]
56 Then, k : (M × M) → R defined by k(xi, xj) := exp(−d²(xi, xj)/2σ²) is a positive definite kernel for all σ > 0 if and only if there exists an inner product space V and a function ψ : M → V such that d(xi, xj) = ‖ψ(xi) − ψ(xj)‖V. [sent-117, score-0.452]
57 We start with the definition of positive and negative definite functions [3]. [sent-123, score-0.425]
58 (The defining inequality must hold) for all m ∈ N, x1, …, xm ∈ X and c1, …, cm ∈ R, with the additional constraint ∑_{i=1}^m ci = 0 in the negative definite case. [sent-143, score-0.329]
59 Theorem 4.3 implies that positive definiteness of the Gaussian kernel induced by a distance is equivalent to negative definiteness of the squared distance function. [sent-156, score-0.247]
60 Therefore, to prove the positive definiteness of k in Theorem 4.1, it suffices to show that the squared distance function is negative definite. [sent-157, score-0.247]
61 While Theorem 4.1 applies to the metrics claimed to generate positive definite Gaussian kernels, examples of non-positive definite Gaussian kernels exist for other metrics. [sent-168, score-0.846]
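One can probe (though never prove) positive definiteness empirically by checking the smallest eigenvalue of a kernel matrix built from a candidate distance; a sketch, with `dist` standing in for any metric of interest:

```python
# Sketch: empirical check of the Gaussian kernel exp(-d^2 / 2 sigma^2)
# built from a candidate distance function.
import numpy as np

def min_kernel_eigenvalue(points, dist, sigma=1.0):
    """Smallest eigenvalue of the Gaussian kernel matrix on `points`.

    A clearly negative result refutes positive definiteness for this sigma;
    a nonnegative result on one sample proves nothing by itself.
    """
    n = len(points)
    K = np.array([[np.exp(-dist(points[i], points[j]) ** 2 / (2 * sigma ** 2))
                   for j in range(n)] for i in range(n)])
    return np.linalg.eigvalsh(K).min()
```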
62 Kernels on Symd+ We now discuss the different metrics on Symd+ that can be used to define positive definite Gaussian kernels. [sent-233, score-0.449]
63 Furthermore, it yields a positive definite Gaussian kernel, as stated in the following corollary to Theorem 4.1. [sent-251, score-0.685]
64 Then, kR is a positive definite kernel for all σ ∈ R. [sent-258, score-0.63]
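A sketch of the corollary’s kernel kR with the log-Euclidean distance d(X, Y) = ‖log X − log Y‖_F; the 2σ² scaling follows the Gaussian kernel above, and implementation details should be treated as assumptions:

```python
# Sketch: log-Euclidean Gaussian kernel matrix on a set of SPD matrices,
# k_R(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2 sigma^2)).
import numpy as np
from scipy.linalg import logm

def log_euclidean_gaussian_kernel(mats, sigma=1.0):
    logs = [np.real(logm(M)) for M in mats]   # each matrix log computed once
    n = len(logs)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            d2 = np.linalg.norm(logs[i] - logs[j], 'fro') ** 2
            K[i, j] = K[j, i] = np.exp(-d2 / (2 * sigma ** 2))
    return K
```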
65 Note that only some of them were derived by considering the Riemannian geometry of the manifold and hence define true geodesic distances. [sent-264, score-0.352]
66 From Theorem 4.1, it directly follows that the Cholesky and power-Euclidean metrics also define positive definite Gaussian kernels for all values of σ. [sent-266, score-0.557]
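Sketches of these two distances, following their usual definitions; the exact normalization of the power-Euclidean distance is an assumption here:

```python
# Sketch: Cholesky distance ||chol(X) - chol(Y)||_F and power-Euclidean
# distance (1/alpha) * ||X^alpha - Y^alpha||_F on SPD matrices.
import numpy as np

def cholesky_distance(X, Y):
    return np.linalg.norm(np.linalg.cholesky(X) - np.linalg.cholesky(Y), 'fro')

def _spd_power(X, alpha):
    w, U = np.linalg.eigh(X)                 # eigenvalue power of an SPD matrix
    return (U * w ** alpha) @ U.T

def power_euclidean_distance(X, Y, alpha=0.5):
    return np.linalg.norm(_spd_power(X, alpha) - _spd_power(Y, alpha), 'fro') / alpha
```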
67 Note that some metrics may yield a positive definite Gaussian kernel only for some values of σ. [sent-267, score-0.694]
68 Kernel-based Algorithms on Symd+ A major advantage of being able to compute positive definite kernels on a Riemannian manifold is that it directly allows us to make use of algorithms developed for Rn, while still accounting for the geometry of the manifold. [sent-272, score-0.754]
69 Although we use φ(X) for explanation purposes, following the kernel trick it never needs to be explicitly computed. [sent-281, score-0.245]
70 Kernel Support Vector Machines on Symd+ We first consider the case of using kernel SVM for binary classification on a manifold. [sent-284, score-0.245]
71 Hence, it only requires evaluating the kernel at the support vectors. [sent-288, score-0.245]
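A sketch of kernel SVM on Sym_d^+ with a precomputed log-Euclidean Gaussian kernel; scikit-learn’s SVC accepts kernel='precomputed', and the random SPD data and labels below are placeholders for real descriptors:

```python
# Sketch: binary kernel SVM on SPD matrices via a precomputed kernel.
# Since ||log X - log Y||_F equals the Euclidean norm of vec(log X) - vec(log Y),
# we can vectorize the matrix logs once and reuse a standard Gaussian kernel.
import numpy as np
from scipy.linalg import logm
from sklearn.svm import SVC

def random_spd(d, rng):                      # placeholder SPD data
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

rng = np.random.default_rng(0)
train = [random_spd(3, rng) for _ in range(40)]
test = [random_spd(3, rng) for _ in range(10)]
y = rng.integers(2, size=40)                 # placeholder labels

def log_vecs(mats):                          # psi(X) = vec(logm(X))
    return np.stack([np.real(logm(M)).ravel() for M in mats])

def gauss(A, B, sigma=1.0):                  # pairwise Gaussian kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

LA, LB = log_vecs(train), log_vecs(test)
clf = SVC(kernel='precomputed').fit(gauss(LA, LA), y)
pred = clf.predict(gauss(LB, LA))            # (n_test, n_train) kernel block
```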
72 MKL combines multiple kernels, each computed from a different descriptor (e.g., image features), to obtain a kernel that optimally separates the two classes for a given classifier. [sent-297, score-0.245]
73 Let K(j) be the kernel matrix generated by gj and k, i.e., K(j)pq = k(gj(xp), gj(xq)), for feature functions {gj}j=1,…,N. [sent-304, score-0.291]
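A sketch of the kernel combination step only; a full MKL solver would learn the weights βj jointly with the classifier, whereas uniform weights are used here as a simplified stand-in (nonnegative combinations of positive definite kernels remain positive definite):

```python
# Sketch: combining per-descriptor kernel matrices K(1), ..., K(N).
import numpy as np

def combine_kernels(K_list, betas=None):
    """Nonnegative combination of kernel matrices (stays positive definite)."""
    if betas is None:                        # simplified stand-in: uniform
        betas = np.full(len(K_list), 1.0 / len(K_list))
    assert np.all(np.asarray(betas) >= 0)
    return sum(b * K for b, K in zip(betas, K_list))
```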
74 Kernel PCA on Symd+ We now describe the key concepts of kernel PCA on Symd+. [sent-318, score-0.245]
75 Since it works in feature space, kernel PCA may, however, extract a number of dimensions that exceeds the dimensionality of the input space. [sent-320, score-0.266]
76 The covariance matrix of the transformed set {φ(Xi)} is then computed, which really amounts to computing the kernel matrix of the original data using the function k. [sent-322, score-0.245]
77 An l-dimensional representation of the data is obtained by computing the eigenvectors of the kernel matrix. [sent-323, score-0.245]
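A sketch of kernel PCA with a precomputed kernel matrix (scikit-learn’s KernelPCA supports this); the stand-in K below would be replaced by the Riemannian kernel matrix from the earlier sketch:

```python
# Sketch: kernel PCA on a precomputed (positive semi-definite) kernel matrix.
import numpy as np
from sklearn.decomposition import KernelPCA

P = np.random.default_rng(0).random((20, 4))       # stand-in data
d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2)                                    # PSD Gaussian kernel matrix

kpca = KernelPCA(n_components=5, kernel='precomputed')
Z = kpca.fit_transform(K)                          # 20 x 5 Euclidean embedding
```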
78 Kernel k-means on Symd+ For clustering problems, we propose to make use of kernel k-means on Symd+. [sent-328, score-0.273]
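A sketch of kernel k-means using only the kernel matrix, based on the RKHS distance ‖φ(x) − μc‖² = Kxx − (2/|c|)∑_{y∈c} Kxy + (1/|c|²)∑_{y,z∈c} Kyz; implementation details such as initialization and empty-cluster handling are assumptions:

```python
# Sketch: kernel k-means driven entirely by the kernel matrix K.
import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(n_clusters, size=n)      # random initial assignment
    for _ in range(n_iter):
        D = np.empty((n, n_clusters))
        for c in range(n_clusters):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:                      # re-seed an empty cluster
                idx = rng.integers(n, size=1)
            # squared RKHS distance of every point to the cluster centroid
            D[:, c] = (np.diag(K) - 2 * K[:, idx].mean(1)
                       + K[np.ix_(idx, idx)].mean())
        new = D.argmin(1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```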
79 Applications and Experiments We now present our experimental evaluation of the kernel methods on Symd+ described in Section 5. [sent-335, score-0.245]
80 In the remainder of this section, we use Riemannian kernel and Euclidean kernel to refer to the kernel kR defined in the corollary to Theorem 4.1 above and to the standard Gaussian kernel on Rn, respectively. [sent-336, score-0.735]
81 Pedestrian Detection We first demonstrate the use of our Riemannian kernel for the task of pedestrian detection with kernel SVM and MKL on Symd+. [sent-340, score-0.52]
82 We then use the MKL formulation described above to learn the final classifier, where each kernel is defined on one of the 100 selected subwindows. [sent-380, score-0.245]
83 Its training set consists of 2,416 positive windows and 1,280 person-free negative images, and its test set of 1,237 positive windows and 453 negative images. [sent-383, score-0.328]
84 For kernel k-means on Sym5+, distances in the RKHS were used. [sent-407, score-0.263]
85 We compare both k-means and kernel k-means on Sym5+ with different metrics that generate positive definite Gaussian kernels (see Table 1). [sent-410, score-0.802]
86 Manifold kernel k-means with the log-Euclidean metric performs significantly better than all other methods in all test cases. [sent-415, score-0.275]
87 Texture Recognition We then utilized our Riemannian kernel to demonstrate the effectiveness of manifold kernel PCA on texture recognition. [sent-420, score-0.707]
88 The better recognition accuracy indicates that kernel PCA with the Riemannian kernel more effectively captures the information of the manifold-valued descriptors than the Euclidean kernel. [sent-433, score-0.517]
89 Recognition accuracies on the Brodatz dataset with k-NN in an l-dimensional Euclidean space obtained by kernel PCA. [sent-439, score-0.265]
90 We utilized kernel k-means on Sym3+ with our Riemannian kernel to segment a real DTI image of the human brain. [sent-444, score-0.49]
91 Note that, up to some noise due to the lack of spatial smoothing, Riemannian kernel k-means was able to correctly segment the corpus callosum from the rest of the image. [sent-450, score-0.275]
92 Segmentation can be performed by clustering these matrices using kernel k-means on Sym3+. [sent-457, score-0.335]
93 Figure 3 compares the results of kernel k-means with our Riemannian kernel to those of [8], obtained by first performing LLE, LE, or HLLE on Sym3+ and then clustering in the low-dimensional space. [sent-459, score-0.245]
94 Figure 2: DTI segmentation (panels: Ellipsoids, Fractional Anisotropy, Riemannian kernel, Euclidean kernel). [sent-460, score-0.49]
95 Segmentation of the corpus callosum with kernel k-means on Sym3+. [sent-461, score-0.275]
96 Conclusion In this paper, we have introduced a family of provably positive definite kernels on the Riemannian manifold of SPD matrices. [sent-465, score-0.777]
97 We have shown that such kernels could be used to design Riemannian extensions of existing kernelbased algorithms, such as SVM and kernel k-means. [sent-466, score-0.376]
98 Although developed for the Riemannian manifold of SPD matrices, the theory of this paper could apply to other non-linear manifolds, provided that their metrics define negative definite squared distances. [sent-468, score-0.61]
99 We also plan to investigate the positive definiteness of non-Gaussian kernels. [sent-470, score-0.247]
100 Comparison of the segmentations obtained with kernel k-means with our Riemannian kernel (KKM), LLE, LE and HLLE on Sym3+. [sent-519, score-0.49]
wordName wordTfidf (topN-words)
[('symd', 0.602), ('riemannian', 0.348), ('definite', 0.289), ('spd', 0.254), ('kernel', 0.245), ('manifold', 0.197), ('definiteness', 0.151), ('theorem', 0.123), ('xj', 0.12), ('kernels', 0.108), ('positive', 0.096), ('tangent', 0.094), ('mkl', 0.088), ('xi', 0.087), ('dti', 0.087), ('manifolds', 0.084), ('geodesic', 0.084), ('euclidean', 0.081), ('rkhs', 0.075), ('metrics', 0.064), ('matrices', 0.062), ('logitboost', 0.058), ('iy', 0.057), ('hilbert', 0.052), ('covariance', 0.051), ('tensors', 0.05), ('inner', 0.047), ('rn', 0.047), ('gj', 0.046), ('hlle', 0.045), ('kkm', 0.045), ('provably', 0.045), ('family', 0.042), ('tensor', 0.042), ('negative', 0.04), ('dimensional', 0.039), ('proven', 0.037), ('logeuclidean', 0.037), ('spaces', 0.036), ('gaussian', 0.036), ('lle', 0.034), ('pca', 0.034), ('nonempty', 0.033), ('pennec', 0.033), ('accounting', 0.033), ('diffusion', 0.033), ('logarithmic', 0.032), ('log', 0.031), ('geometry', 0.031), ('stein', 0.031), ('corollary', 0.031), ('metric', 0.03), ('callosum', 0.03), ('cicjf', 0.03), ('iyy', 0.03), ('pedestrian', 0.03), ('mercer', 0.029), ('subwindows', 0.029), ('clustering', 0.028), ('yi', 0.028), ('windows', 0.028), ('descriptors', 0.027), ('brodatz', 0.027), ('arsigny', 0.027), ('symmetric', 0.026), ('distance', 0.025), ('exponential', 0.025), ('fillard', 0.025), ('yields', 0.024), ('kernelbased', 0.023), ('karcher', 0.022), ('harandi', 0.022), ('anisotropy', 0.022), ('ix', 0.022), ('continuously', 0.022), ('true', 0.022), ('exp', 0.021), ('canberra', 0.021), ('sanderson', 0.021), ('dimensionality', 0.021), ('im', 0.02), ('descriptor', 0.02), ('svm', 0.02), ('space', 0.02), ('texture', 0.02), ('squared', 0.02), ('induced', 0.02), ('chapter', 0.02), ('benefits', 0.02), ('curves', 0.019), ('fractional', 0.019), ('reproduced', 0.019), ('kr', 0.019), ('reader', 0.018), ('inria', 0.018), ('derived', 0.018), ('distances', 0.018), ('notions', 0.018), ('porikli', 0.018), ('een', 0.018)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000001 238 cvpr-2013-Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices
Author: Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi
Abstract: Symmetric Positive Definite (SPD) matrices have become popular to encode image information. Accounting for the geometry of the Riemannian manifold of SPD matrices has proven key to the success of many algorithms. However, most existing methods only approximate the true shape of the manifold locally by its tangent plane. In this paper, inspired by kernel methods, we propose to map SPD matrices to a high dimensional Hilbert space where Euclidean geometry applies. To encode the geometry of the manifold in the mapping, we introduce a family of provably positive definite kernels on the Riemannian manifold of SPD matrices. These kernels are derived from the Gaussian kernel, but exploit different metrics on the manifold. This lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM and kernel PCA, to the Riemannian manifold of SPD matrices. We demonstrate the benefits of our approach on the problems of pedestrian detection, object categorization, texture analysis, 2D motion segmentation and Diffusion Tensor Imaging (DTI) segmentation.
2 0.38713229 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features
Author: Raviteja Vemulapalli, Jaishanker K. Pillai, Rama Chellappa
Abstract: In computer vision applications, features often lie on Riemannian manifolds with known geometry. Popular learning algorithms such as discriminant analysis, partial least squares, support vector machines, etc., are not directly applicable to such features due to the non-Euclidean nature of the underlying spaces. Hence, classification is often performed in an extrinsic manner by mapping the manifolds to Euclidean spaces using kernels. However, for kernel based approaches, poor choice of kernel often results in reduced performance. In this paper, we address the issue of kernel selection for the classification of features that lie on Riemannian manifolds using the kernel learning approach. We propose two criteria for jointly learning the kernel and the classifier using a single optimization problem. Specifically, for the SVM classifier, we formulate the problem of learning a good kernel-classifier combination as a convex optimization problem and solve it efficiently following the multiple kernel learning approach. Experimental results on image set-based classification and activity recognition clearly demonstrate the superiority of the proposed approach over existing methods for classification of manifold features.
3 0.33935282 367 cvpr-2013-Rolling Riemannian Manifolds to Solve the Multi-class Classification Problem
Author: Rui Caseiro, Pedro Martins, João F. Henriques, Fátima Silva Leite, Jorge Batista
Abstract: In the past few years there has been a growing interest on geometric frameworks to learn supervised classification models on Riemannian manifolds [31, 27]. A popular framework, valid over any Riemannian manifold, was proposed in [31] for binary classification. Once moving from binary to multi-class classification this paradigm is not valid anymore, due to the spread of multiple positive classes on the manifold [27]. It is then natural to ask whether the multi-class paradigm could be extended to operate on a large class of Riemannian manifolds. We propose a mathematically well-founded classification paradigm that allows to extend the work in [31] to multi-class models, taking into account the structure of the space. The idea is to project all the data from the manifold onto an affine tangent space at a particular point. To mitigate the distortion induced by local diffeomorphisms, we introduce for the first time in the computer vision community a well-founded mathematical concept, so-called Rolling map [21, 16]. The novelty in this alternate school of thought is that the manifold will be firstly rolled (without slipping or twisting) as a rigid body, then the given data is unwrapped onto the affine tangent space, where the classification is performed.
Author: Stefan Harmeling, Michael Hirsch, Bernhard Schölkopf
Abstract: We establish a link between Fourier optics and a recent construction from the machine learning community termed the kernel mean map. Using the Fraunhofer approximation, it identifies the kernel with the squared Fourier transform of the aperture. This allows us to use results about the invertibility of the kernel mean map to provide a statement about the invertibility of Fraunhofer diffraction, showing that imaging processes with arbitrarily small apertures can in principle be invertible, i.e., do not lose information, provided the objects to be imaged satisfy a generic condition. A real world experiment shows that we can super-resolve beyond the Rayleigh limit.
5 0.11491308 306 cvpr-2013-Non-rigid Structure from Motion with Diffusion Maps Prior
Author: Lili Tao, Bogdan J. Matuszewski
Abstract: In this paper, a novel approach based on a non-linear manifold learning technique is proposed to recover 3D nonrigid structures from 2D image sequences captured by a single camera. Most of the existing approaches assume that 3D shapes can be accurately modelled in a linear subspace. These techniques perform well when the deformations are relatively small or simple, but fail when more complex deformations need to be recovered. The non-linear deformations are often observed in highly flexible objects for which the use of the linear model is impractical. A specific type of shape variations might be governed by only a small number of parameters, therefore can be well represented in a low dimensional manifold. We learn a nonlinear shape prior using diffusion maps method. The key contribution in this paper is the introduction of the shape prior that constrain the reconstructed shapes to lie in the learned manifold. The proposed methodology has been validated quantitatively and qualitatively on 2D points sequences projected from the 3D motion capture data and real 2D video sequences. The comparisons of the proposed manifold-based method against several state-of-the-art techniques are shown on different types of deformable objects.
6 0.10246886 259 cvpr-2013-Learning a Manifold as an Atlas
7 0.10024223 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds
8 0.099570319 433 cvpr-2013-Top-Down Segmentation of Non-rigid Visual Objects Using Derivative-Based Search on Sparse Manifolds
9 0.094076402 276 cvpr-2013-MKPLS: Manifold Kernel Partial Least Squares for Lipreading and Speaker Identification
10 0.089565895 201 cvpr-2013-Heterogeneous Visual Features Fusion via Sparse Multimodal Machine
11 0.087064058 175 cvpr-2013-First-Person Activity Recognition: What Are They Doing to Me?
12 0.085264415 421 cvpr-2013-Supervised Kernel Descriptors for Visual Recognition
13 0.08229585 178 cvpr-2013-From Local Similarity to Global Coding: An Application to Image Classification
14 0.073669508 180 cvpr-2013-Fully-Connected CRFs with Non-Parametric Pairwise Potential
15 0.073047981 223 cvpr-2013-Inductive Hashing on Manifolds
16 0.069464944 198 cvpr-2013-Handling Noise in Single Image Deblurring Using Directional Filters
17 0.069144398 270 cvpr-2013-Local Fisher Discriminant Analysis for Pedestrian Re-identification
18 0.068480715 92 cvpr-2013-Constrained Clustering and Its Application to Face Clustering in Videos
19 0.068250358 215 cvpr-2013-Improved Image Set Classification via Joint Sparse Approximated Nearest Subspaces
20 0.067966811 91 cvpr-2013-Consensus of k-NNs for Robust Neighborhood Selection on Graph-Based Manifolds
topicId topicWeight
[(0, 0.141), (1, 0.005), (2, -0.055), (3, 0.046), (4, 0.006), (5, 0.052), (6, -0.039), (7, -0.139), (8, -0.079), (9, -0.069), (10, 0.023), (11, -0.033), (12, -0.14), (13, -0.153), (14, -0.082), (15, -0.005), (16, -0.18), (17, 0.006), (18, -0.147), (19, 0.002), (20, 0.032), (21, 0.122), (22, 0.093), (23, 0.107), (24, -0.074), (25, 0.048), (26, -0.042), (27, 0.186), (28, -0.072), (29, 0.046), (30, -0.054), (31, -0.193), (32, 0.035), (33, 0.13), (34, -0.008), (35, -0.128), (36, 0.116), (37, 0.039), (38, 0.105), (39, 0.088), (40, 0.063), (41, -0.029), (42, -0.026), (43, -0.013), (44, 0.02), (45, 0.058), (46, 0.041), (47, -0.06), (48, 0.021), (49, 0.069)]
simIndex simValue paperId paperTitle
same-paper 1 0.95704812 238 cvpr-2013-Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices
Author: Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi
Abstract: Symmetric Positive Definite (SPD) matrices have become popular to encode image information. Accounting for the geometry of the Riemannian manifold of SPD matrices has proven key to the success of many algorithms. However, most existing methods only approximate the true shape of the manifold locally by its tangent plane. In this paper, inspired by kernel methods, we propose to map SPD matrices to a high dimensional Hilbert space where Euclidean geometry applies. To encode the geometry of the manifold in the mapping, we introduce a family of provably positive definite kernels on the Riemannian manifold of SPD matrices. These kernels are derived from the Gaussian kernel, but exploit different metrics on the manifold. This lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM and kernel PCA, to the Riemannian manifold of SPD matrices. We demonstrate the benefits of our approach on the problems of pedestrian detection, object categorization, texture analysis, 2D motion segmentation and Diffusion Tensor Imaging (DTI) segmentation.
2 0.95273668 367 cvpr-2013-Rolling Riemannian Manifolds to Solve the Multi-class Classification Problem
Author: Rui Caseiro, Pedro Martins, João F. Henriques, Fátima Silva Leite, Jorge Batista
Abstract: In the past few years there has been a growing interest on geometric frameworks to learn supervised classification models on Riemannian manifolds [31, 27]. A popular framework, valid over any Riemannian manifold, was proposed in [31] for binary classification. Once moving from binary to multi-class classification this paradigm is not valid anymore, due to the spread of multiple positive classes on the manifold [27]. It is then natural to ask whether the multi-class paradigm could be extended to operate on a large class of Riemannian manifolds. We propose a mathematically well-founded classification paradigm that allows to extend the work in [31] to multi-class models, taking into account the structure of the space. The idea is to project all the data from the manifold onto an affine tangent space at a particular point. To mitigate the distortion induced by local diffeomorphisms, we introduce for the first time in the computer vision community a well-founded mathematical concept, so-called Rolling map [21, 16]. The novelty in this alternate school of thought is that the manifold will be firstly rolled (without slipping or twisting) as a rigid body, then the given data is unwrapped onto the affine tangent space, where the classification is performed.
3 0.87324309 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features
Author: Raviteja Vemulapalli, Jaishanker K. Pillai, Rama Chellappa
Abstract: In computer vision applications, features often lie on Riemannian manifolds with known geometry. Popular learning algorithms such as discriminant analysis, partial least squares, support vector machines, etc., are not directly applicable to such features due to the non-Euclidean nature of the underlying spaces. Hence, classification is often performed in an extrinsic manner by mapping the manifolds to Euclidean spaces using kernels. However, for kernel based approaches, poor choice of kernel often results in reduced performance. In this paper, we address the issue of kernel selection for the classification of features that lie on Riemannian manifolds using the kernel learning approach. We propose two criteria for jointly learning the kernel and the classifier using a single optimization problem. Specifically, for the SVM classifier, we formulate the problem of learning a good kernel-classifier combination as a convex optimization problem and solve it efficiently following the multiple kernel learning approach. Experimental results on image set-based classification and activity recognition clearly demonstrate the superiority of the proposed approach over existing methods for classification of manifold features.
4 0.80504382 276 cvpr-2013-MKPLS: Manifold Kernel Partial Least Squares for Lipreading and Speaker Identification
Author: Amr Bakry, Ahmed Elgammal
Abstract: Visual speech recognition is a challenging problem, due to confusion between visual speech features. The speaker identification problem is usually coupled with speech recognition. Moreover, speaker identification is important to several applications, such as automatic access control, biometrics, authentication, and personal privacy issues. In this paper, we propose a novel approach for lipreading and speaker identification. We propose a new approach for manifold parameterization in a low-dimensional latent space, where each manifold is represented as a point in that space. We initially parameterize each instance manifold using a nonlinear mapping from a unified manifold representation. We then factorize the parameter space using Kernel Partial Least Squares (KPLS) to achieve a low-dimension manifold latent space. We use two-way projections to achieve two manifold latent spaces, one for the speech content and one for the speaker. We apply our approach on two public databases: AVLetters and OuluVS. We show the results for three different settings of lipreading: speaker independent, speaker dependent, and speaker semi-dependent. Our approach outperforms for the speaker semi-dependent setting by at least 15% of the baseline, and competes in the other two settings.
5 0.71504694 259 cvpr-2013-Learning a Manifold as an Atlas
Author: Nikolaos Pitelis, Chris Russell, Lourdes Agapito
Abstract: In this work, we return to the underlying mathematical definition of a manifold and directly characterise learning a manifold as finding an atlas, or a set of overlapping charts, that accurately describe local structure. We formulate the problem of learning the manifold as an optimisation that simultaneously refines the continuous parameters defining the charts, and the discrete assignment of points to charts. In contrast to existing methods, this direct formulation of a manifold does not require “unwrapping” the manifold into a lower dimensional space and allows us to learn closed manifolds of interest to vision, such as those corresponding to gait cycles or camera pose. We report state-of-the-art results for manifold based nearest neighbour classification on vision datasets, and show how the same techniques can be applied to the 3D reconstruction of human motion from a single image.
7 0.57943726 433 cvpr-2013-Top-Down Segmentation of Non-rigid Visual Objects Using Derivative-Based Search on Sparse Manifolds
8 0.44739476 178 cvpr-2013-From Local Similarity to Global Coding: An Application to Image Classification
9 0.42213532 421 cvpr-2013-Supervised Kernel Descriptors for Visual Recognition
10 0.42082074 239 cvpr-2013-Kernel Null Space Methods for Novelty Detection
12 0.40868852 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds
13 0.40056372 90 cvpr-2013-Computing Diffeomorphic Paths for Large Motion Interpolation
14 0.39006773 270 cvpr-2013-Local Fisher Discriminant Analysis for Pedestrian Re-identification
15 0.38476726 215 cvpr-2013-Improved Image Set Classification via Joint Sparse Approximated Nearest Subspaces
16 0.37695509 306 cvpr-2013-Non-rigid Structure from Motion with Diffusion Maps Prior
17 0.37188864 223 cvpr-2013-Inductive Hashing on Manifolds
18 0.36792111 201 cvpr-2013-Heterogeneous Visual Features Fusion via Sparse Multimodal Machine
19 0.33476391 91 cvpr-2013-Consensus of k-NNs for Robust Neighborhood Selection on Graph-Based Manifolds
topicId topicWeight
[(1, 0.195), (10, 0.16), (16, 0.018), (26, 0.038), (28, 0.014), (33, 0.251), (67, 0.047), (69, 0.038), (76, 0.013), (80, 0.01), (87, 0.076), (96, 0.016)]
simIndex simValue paperId paperTitle
1 0.86970657 84 cvpr-2013-Cloud Motion as a Calibration Cue
Author: Nathan Jacobs, Mohammad T. Islam, Scott Workman
Abstract: We propose cloud motion as a natural scene cue that enables geometric calibration of static outdoor cameras. This work introduces several new methods that use observations of an outdoor scene over days and weeks to estimate radial distortion, focal length and geo-orientation. Cloud-based cues provide strong constraints and are an important alternative to methods that require specific forms of static scene geometry or clear sky conditions. Our method makes simple assumptions about cloud motion and builds upon previous work on motion-based and line-based calibration. We show results on real scenes that highlight the effectiveness of our proposed methods.
same-paper 2 0.85922015 238 cvpr-2013-Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices
Author: Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi
Abstract: Symmetric Positive Definite (SPD) matrices have become popular to encode image information. Accounting for the geometry of the Riemannian manifold of SPD matrices has proven key to the success of many algorithms. However, most existing methods only approximate the true shape of the manifold locally by its tangent plane. In this paper, inspired by kernel methods, we propose to map SPD matrices to a high dimensional Hilbert space where Euclidean geometry applies. To encode the geometry of the manifold in the mapping, we introduce a family of provably positive definite kernels on the Riemannian manifold of SPD matrices. These kernels are derived from the Gaussian kernel, but exploit different metrics on the manifold. This lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM and kernel PCA, to the Riemannian manifold of SPD matrices. We demonstrate the benefits of our approach on the problems of pedestrian detection, object categorization, texture analysis, 2D motion segmentation and Diffusion Tensor Imaging (DTI) segmentation.
3 0.84539735 374 cvpr-2013-Saliency Aggregation: A Data-Driven Approach
Author: Long Mai, Yuzhen Niu, Feng Liu
Abstract: A variety of methods have been developed for visual saliency analysis. These methods often complement each other. This paper addresses the problem of aggregating various saliency analysis methods such that the aggregation result outperforms each individual one. We have two major observations. First, different methods perform differently in saliency analysis. Second, the performance of a saliency analysis method varies with individual images. Our idea is to use data-driven approaches to saliency aggregation that appropriately consider the performance gaps among individual methods and the performance dependence of each method on individual images. This paper discusses various data-driven approaches and finds that the image-dependent aggregation method works best. Specifically, our method uses a Conditional Random Field (CRF) framework for saliency aggregation that not only models the contribution from individual saliency map but also the interaction between neighboring pixels. To account for the dependence of aggregation on an individual image, our approach selects a subset of images similar to the input image from a training data set and trains the CRF aggregation model only using this subset instead of the whole training set. Our experiments on public saliency benchmarks show that our aggregation method outperforms each individual saliency method and is robust with the selection of aggregated methods.
4 0.84450066 383 cvpr-2013-Seeking the Strongest Rigid Detector
Author: Rodrigo Benenson, Markus Mathias, Tinne Tuytelaars, Luc Van_Gool
Abstract: The current state of the art solutions for object detection describe each class by a set of models trained on discovered sub-classes (so-called “components”), with each model itself composed of collections of interrelated parts (deformable models). These detectors build upon the now classic Histogram of Oriented Gradients+linear SVM combo. In this paper we revisit some of the core assumptions in HOG+SVM and show that by properly designing the feature pooling, feature selection, preprocessing, and training methods, it is possible to reach top quality, at least for pedestrian detections, using a single rigid component. We provide experiments for a large design space, that give insights into the design of classifiers, as well as relevant information for practitioners. Our best detector is fully feed-forward, has a single unified architecture, uses only histograms of oriented gradients and colour information in monocular static images, and improves over 23 other methods on the INRIA, ETH and Caltech-USA datasets, reducing the average miss-rate over HOG+SVM by more than 30%.
5 0.83702612 248 cvpr-2013-Learning Collections of Part Models for Object Recognition
Author: Ian Endres, Kevin J. Shih, Johnston Jiaa, Derek Hoiem
Abstract: We propose a method to learn a diverse collection of discriminative parts from object bounding box annotations. Part detectors can be trained and applied individually, which simplifies learning and extension to new features or categories. We apply the parts to object category detection, pooling part detections within bottom-up proposed regions and using a boosted classifier with proposed sigmoid weak learners for scoring. On PASCAL VOC 2010, we evaluate the part detectors' ability to discriminate and localize annotated keypoints. Our detection system is competitive with the best existing systems, outperforming other HOG-based detectors on the more deformable categories.
6 0.83646637 414 cvpr-2013-Structure Preserving Object Tracking
7 0.83532339 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking
8 0.83339256 408 cvpr-2013-Spatiotemporal Deformable Part Models for Action Detection
9 0.83295369 314 cvpr-2013-Online Object Tracking: A Benchmark
10 0.8324706 225 cvpr-2013-Integrating Grammar and Segmentation for Human Pose Estimation
11 0.83234406 458 cvpr-2013-Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds
12 0.8321597 324 cvpr-2013-Part-Based Visual Tracking with Online Latent Structural Learning
13 0.83145928 131 cvpr-2013-Discriminative Non-blind Deblurring
14 0.8308726 406 cvpr-2013-Spatial Inference Machines
15 0.83036804 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems
16 0.83034819 360 cvpr-2013-Robust Estimation of Nonrigid Transformation for Point Set Registration
17 0.83021373 325 cvpr-2013-Part Discovery from Partial Correspondence
18 0.82952923 462 cvpr-2013-Weakly Supervised Learning of Mid-Level Features with Beta-Bernoulli Process Restricted Boltzmann Machines
19 0.8293978 143 cvpr-2013-Efficient Large-Scale Structured Learning
20 0.82923108 267 cvpr-2013-Least Soft-Threshold Squares Tracking