iccv iccv2013 iccv2013-295 knowledge-graph by maker-knowledge-mining

295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties


Source: pdf

Author: Stefanos Zafeiriou, Irene Kotsia

Abstract: Kernels have been a common tool of machine learning and computer vision applications for modeling nonlinearities and/or the design of robust1 similarity measures between objects. Arguably, the class of positive semidefinite (psd) kernels, widely known as Mercer’s Kernels, constitutes one of the most well-studied cases. For every psd kernel there exists an associated feature map to an arbitrary dimensional Hilbert space H, the so-called feature space. The main reason behind psd kernels’ popularity is the fact that classification/regression techniques (such as Support Vector Machines (SVMs)) and component analysis algorithms (such as Kernel Principal Component Analysis (KPCA)) can be devised in H without an explicit definition of the feature map, only by using the kernel (the so-called kernel trick). Recently, due to the development of very efficient solutions for large scale linear SVMs and for incremental linear component analysis, the research towards finding feature map approximations for classes of kernels has attracted significant interest. In this paper, we attempt the derivation of explicit feature maps of a recently proposed class of kernels, the so-called one-shot similarity kernels. We show that for this class of kernels either there exists an explicit representation in feature space or the kernel can be expressed in such a form that allows for exact incremental learning. We theoretically explore the properties of these kernels and show how these kernels can be used for the development of robust visual tracking, recognition and deformable fitting algorithms. 1Robustness may refer either to the presence of outliers and noise or to the robustness to a class of transformations (e.g., translation). Irene Kotsia: Electronics Laboratory, Department of Physics, University of Patras, Greece; School of Science and Technology, Middlesex University, London. i.kotsia@mdx.ac.uk

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Kernels have been a common tool of machine learning and computer vision applications for modeling nonlinearities and/or the design of robust1 similarity measures between objects. [sent-4, score-0.184]

2 For every psd kernel there exists an associated feature map to an arbitrary dimensional Hilbert space H, the so-called feature space. [sent-6, score-0.484]

3 Recently, due to the development of very efficient solutions for large scale linear SVMs and for incremental linear component analysis, the research towards finding feature map approximations for classes of kernels has attracted significant interest. [sent-8, score-0.544]

4 In this paper, we attempt the derivation of explicit feature maps of a recently proposed class of kernels, the so-called one-shot similarity kernels. [sent-9, score-0.29]

5 We show that for this class of kernels either there exists an explicit representation in feature space or the kernel can be expressed in such a form that allows for exact incremental learning. [sent-10, score-0.975]

6 We theoretically explore the properties of these kernels and show how these kernels can be used for the development of robust visual tracking, recognition and deformable fitting algorithms. [sent-11, score-0.483]

7 Introduction: In kernel learning2 [26], for each positive semi-definite (psd) kernel k there exists an associated feature map φ to an arbitrary dimensional (in some cases infinite) feature space H. [sent-21, score-0.794]

8 In that case, the kernel learning problem, even though being nonlinear in the original space, becomes linear in the new space H. [sent-22, score-0.397]

9 The explicit form of φ is not required to perform all computations, due to the so-called kernel trick (i.e., all computations are expressed through the kernel k). [sent-23, score-0.145]
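
To make the kernel-trick idea above concrete, here is a minimal sketch (Python/NumPy) using a simple homogeneous polynomial kernel as a stand-in — not the paper's one-shot similarity kernel — showing that the explicit feature map and the kernel trick yield the same Gram matrix.

```python
import numpy as np

# Illustrative only: for the homogeneous polynomial kernel k(x, y) = (x^T y)^2
# on 2-D inputs, the explicit feature map is phi(x) = [x1^2, sqrt(2)*x1*x2, x2^2].
def k(x, y):
    return np.dot(x, y) ** 2

def phi(x):
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))

K_trick = np.array([[k(a, b) for b in X] for a in X])            # kernel trick
Phi = np.array([phi(a) for a in X])
K_explicit = Phi @ Phi.T                                          # explicit feature map

assert np.allclose(K_trick, K_explicit)  # same Gram matrix either way
```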

10 Recently, the following reverse problem has attracted a lot of attention: Given a kernel k, one should find an efficient and effective approximation of φ that successfully replaces the kernel [27, 18, 2, 15, 16, 2]. [sent-26, score-0.794]

11 The second concerned the unavailability of both exact (or effective) and efficient incremental versions of Principal Component Analysis (PCA) algorithms with kernels [4, 11, 7] (such versions exist only for the linear case [14, 23]). [sent-29, score-0.462]

12 Indeed, the most well known incremental Kernel PCA (KPCA) algorithms presented in [11, 7, 4] use only approximations. [sent-30, score-0.25]

13 In [4], the authors kernelized an exact algorithm for incremental PCA [14, 23], but, in order to maintain a constant update speed, they constructed a reduced set of expansions of the kernel principal components and of the mean, using pre-images. [sent-32, score-0.749]

14 2By kernel learning we refer to the general framework of classification, regression and component analysis with kernels. [sent-34, score-0.446]

15 As mentioned above, a kernel learning problem becomes linear in the feature space H. [sent-35, score-0.397]

16 Hence, when efficient and low-dimensional approximations exist for φ we can take full advantage of efficient packages for regression and classification [9] but also of exact and low cost incremental PCA algorithms [14, 23]. [sent-37, score-0.413]

17 However, it was recently shown that for some particular classes of kernels such approximations do exist. [sent-39, score-0.277]

18 The main lines of research towards efficient approximation of feature maps include (a) exploiting particular kernel properties to find the approximation (e. [sent-40, score-0.425]

19 , in [27, 18] the authors exploited various properties to propose efficient and effective approximations of large families of additive kernels); (b) the application of random sampling on Fourier features [15, 16, 2] (e. [sent-42, score-0.148]

20 , in [22] methodologies have been proposed for encoding stationary kernels by randomly sampling their Fourier features); (c) the application of the so-called Nystrom method, which is a data-dependent methodology that requires training [29, 28, 21]. [sent-44, score-0.204]
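
As an illustration of strategy (b) above, the following hedged sketch implements Rahimi–Recht-style random Fourier features for the Gaussian kernel; the bandwidth sigma, the number of features D, and the synthetic data are illustrative choices, not values from the cited works.

```python
import numpy as np

# Random Fourier features approximating the Gaussian kernel
# k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
def rff_map(X, D=500, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))   # frequencies ~ N(0, sigma^-2 I)
    b = rng.uniform(0, 2 * np.pi, size=D)            # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
Z = rff_map(X)
K_approx = Z @ Z.T                                   # approximate Gram matrix
K_exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)
print(np.abs(K_approx - K_exact).max())              # error shrinks as D grows
```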

21 In this paper, we study a recently proposed kernel, the so-called one-shot similarity kernel, which was shown to be particularly useful for the recently introduced similarity problems (face and action similarity [32, 30, 12]). [sent-46, score-0.616]

22 In particular, we show that (1) a special form of the kernel has a closed form feature map and (2) the general kernel can be written in a form which allows for efficient incremental solutions. [sent-47, score-1.246]

23 Hence, the proposed form of the one-shot similarity kernel makes it suitable for incremental PCA, which is particularly useful for visual tracking [23]. [sent-48, score-0.961]

24 Summarizing, the contributions of this paper are: • We study the recently proposed class of one-shot similarity kernels and show that there exist closed-form solutions that can be acquired after simple data normalization. [sent-49, score-0.216]

25 For this case we show that (1) the use of the one-shot similarity kernel with SVMs can be re-interpreted as a margin maximization and (2) the one-shot similarity kernels can be used with the recently introduced SVM packages which can train linear SVMs in linear time. [sent-50, score-1.104]

26 We show that the proposed one-shot similarity kernel can be formulated in a form which allows for incremental Principal Component Analysis. [sent-53, score-0.581]

27 We apply the one-shot similarity kernel to object tracking, where state-of-the-art results are achieved. [sent-54, score-0.669]

28 The One-Shot Similarity Kernel: In this section, we will define the one-shot and multiple-shot similarity kernels, having as an example the recently introduced face similarity problem in the wild [30, 12, 31, 32]. [sent-56, score-0.576]

29 Face similarity is conceptually different from standard face recognition, in which the algorithm, given a test facial image, should find the most similar face (or the k most similar faces) from a pre-defined dataset (corresponding to the same identity). [sent-57, score-0.465]

30 Indeed, face similarity tries to determine whether two given facial images belong to the same face or not. [sent-58, score-0.465]

31 Furthermore, there is a subtle, yet crucial difference between the face similarity and verification problems [32, 8, 34, 35]. [sent-59, score-0.341]

32 In face verification the identity being claimed is known, hence person specific models can be learned and used. [sent-60, score-0.185]

33 This is not the case with face similarity, as such models can not be used or trained (the interested reader can refer to [8, 32] and the references therein for more details regarding the face similarity problem). [sent-61, score-0.541]

34 In order to construct the one-shot similarity kernel, background samples are required. [sent-62, score-0.291]

35 For example, in face recognition, as background samples we can consider a set of facial images that do not belong to the list of faces of the system (very similar to the so-called world model in the face verification problem). [sent-68, score-0.473]

36 Their one-shot similarity score is computed by considering the set of background samples A with cardinality NA, which contains samples not belonging to the same class as either x or y, but otherwise not labeled [32]. [sent-71, score-0.353]

37 The one-shot similarity score in the Fisher’s Linear Discriminant Analysis (FLDA) framework can be described as follows. [sent-72, score-0.184]

38 the similarity between x and y via the background samples? [sent-77, score-0.229]
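
A rough sketch of an FLDA-based one-shot similarity score of the kind described above; the regularization constant, the symmetrization, and the exact normalization are illustrative assumptions and may differ from the formulation actually used in [30, 32].

```python
import numpy as np

def oss_score(x, y, A, reg=1e-3):
    """Hedged sketch: fit an LDA-like classifier separating the single exemplar y
    from the background set A, evaluate it on x, and symmetrize."""
    mu_A = A.mean(axis=0)
    Ac = A - mu_A
    Sw = Ac.T @ Ac / len(A) + reg * np.eye(A.shape[1])   # regularized within-class scatter
    Sw_inv = np.linalg.inv(Sw)

    def one_sided(a, b):                                  # score of a w.r.t. model (b, A)
        w = Sw_inv @ (b - mu_A)
        return w @ (a - (b + mu_A) / 2.0)

    return one_sided(x, y) + one_sided(y, x)              # symmetric version

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 16))                             # background samples
x, y = rng.normal(size=16), rng.normal(size=16)
print(oss_score(x, y, A))
```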

39 The above kernel in (3), as proven in [30], is a psd kernel (for further details regarding the one-shot similarity kernel the interested reader may refer to [30, 12, 31, 32]). [sent-80, score-1.581]

40 The kernel k can thus take the following functional form: k(x, y) = f(x)^T g(y) (5), for explicitly defined maps f and g. [sent-85, score-0.448]

41 A nice property of the kernel is: k(x, y) = f(x)^T g(y) = f(y)^T g(x) = g(y)^T f(x) (8). [sent-104, score-0.397]

42 In the next section we are going to exploit this property to formulate an exact and incremental Principal Component Analysis (iPCA). [sent-105, score-0.294]

43 A Special Case of the one-shot similarity kernel: Assuming that all data are normalized such that ||x||_S^2 = x̃^T x̃ = ỹ^T ỹ = 1. [sent-118, score-0.152]

44 Hence, the feature map that can be used has the following closed form: φ(x) = S^{-1/2} x (10). [sent-121, score-0.152]
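
A minimal sketch of the closed-form map φ(x) = S^{-1/2} x, assuming S is a (regularized) scatter matrix estimated from the background set A; the regularizer and the estimator of S are illustrative assumptions.

```python
import numpy as np

def inv_sqrt_map(A, reg=1e-3):
    """Return phi(x) = S^{-1/2} x, with S the regularized scatter of A (assumed)."""
    Ac = A - A.mean(axis=0)
    S = Ac.T @ Ac / len(A) + reg * np.eye(A.shape[1])
    evals, evecs = np.linalg.eigh(S)                  # S is symmetric psd
    S_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return lambda x: S_inv_sqrt @ x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 8))                         # background samples
phi = inv_sqrt_map(A)
x, y = rng.normal(size=8), rng.normal(size=8)
# With this map the kernel reduces to a plain inner product in feature space:
print(phi(x) @ phi(y))
```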

45 We will now study the interpretation of the application of this kernel within the SVMs framework. [sent-122, score-0.397]

46 Typically, w and b are found by solving the Wolfe dual problem, which in the case of the kernel (3) can be written as: max_{0 ≤ α_i ≤ C} α^T 1 − (1/2) α^T K_S α. [sent-131, score-0.397]

47 Thus, when the one-shot similarity kernel is used with SVMs, it attempts to maximize a squared Mahalanobis-type distance margin, which is inversely proportional to w^T S w. [sent-142, score-0.581]
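
For illustration, the dual problem with such a kernel can be handed to any off-the-shelf SVM solver through a precomputed Gram matrix; the sketch below uses scikit-learn's SVC(kernel='precomputed') as a generic stand-in and synthetic data, so K here only mimics the normalized special case k(x, y) = x^T S^{-1} y.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
labels = (X @ rng.normal(size=8) > 0).astype(int)     # synthetic binary labels

# S estimated from separate background data (assumption), regularized for invertibility.
S = np.cov(rng.normal(size=(200, 8)), rowvar=False) + 1e-3 * np.eye(8)
K = X @ np.linalg.inv(S) @ X.T                         # precomputed one-shot-style Gram matrix

clf = SVC(kernel='precomputed', C=1.0).fit(K, labels)  # generic dual solver
print(clf.score(K, labels))                            # training accuracy on the toy data
```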

48 Hence, the one-shot similarity kernel can be interpreted as a type of margin being maximized within the linear SVM framework. [sent-143, score-0.611]

49 In case the matrix S is singular, or in the non-linear case where the one-shot similarity kernel is used in a feature space (i.e. [sent-144, score-0.581]

50 , a kernel is used in the SVM problem (12)), solutions can be provided by using the tools in [36, 13]. [sent-146, score-0.397]

51 Since the kernel has a closed form it can be directly used with the recently proposed linear SVMs, which can be trained in linear time with respect to the number of training samples, and solve the following reformulated optimization problem: min_{w,ξ} (1/2) w^T w + Cξ (13). [sent-147, score-0.609]
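
Once the explicit map is available, the problem reduces to a standard linear SVM on mapped features; the sketch below uses scikit-learn's LinearSVC as a stand-in for the linear-time solvers cited in the paper, with synthetic data and labels.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 8))                         # background samples (assumption)
Ac = A - A.mean(axis=0)
S = Ac.T @ Ac / len(A) + 1e-3 * np.eye(8)
evals, evecs = np.linalg.eigh(S)
S_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T

X = rng.normal(size=(300, 8))
labels = (X[:, 0] > 0).astype(int)                    # synthetic binary labels
X_mapped = X @ S_inv_sqrt.T                           # phi(x) = S^{-1/2} x, applied row-wise

clf = LinearSVC(C=1.0).fit(X_mapped, labels)          # linear SVM on mapped features
print(clf.score(X_mapped, labels))
```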

52 It is worth noting here that the functional form of the similarity kernel in (3) does not allow the use of the fast cutting-plane algorithm for solving (13), as proposed in [10]. [sent-152, score-0.725]

53 Exact Incremental Component Analysis using the one-shot similarity kernel: In this section, we will show how the property in (8) can be harnessed in order to define a special version of KPCA. [sent-154, score-0.581]

54 The proposed KPCA, contrary to the general incremental KPCA approaches [4], does not require the computation of pre-images. [sent-155, score-0.25]

55 Having explicit mappings for Xg and Xf makes feasible the computation of incremental PCA without the use of pre-images. [sent-178, score-0.378]

56 4Centering in the feature space is straightforward by centering the kernel matrix [25]. [sent-190, score-0.397]

57 Additionally, using the kernel property (8) the following properties hold: U_φ^T φ(x) = U_f^T g(x) = U_g^T f(x) (17), and also U_f and U_g are mutually orthogonal: U_f^T U_g = Λ^{-1/2} V^T X_f^T X_g V Λ^{-1/2} = U_φ^T U_φ = I (18). [sent-192, score-0.453]

58 We proceed with showing that by using the explicit definition of Uf and Ug we can define an incremental KPCA without the need of pre-images. [sent-193, score-0.324]

59 Contrary to the direct approach to KPCA, the storage requirements for the incremental update are of fixed complexity. [sent-264, score-0.25]

60 The complexity of the update is also fixed for our kernel. [sent-268, score-0.397]
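
As a rough stand-in for the paper's exact incremental KPCA (which exploits the f/g factorization in the general case), the special-case map φ(x) = S^{-1/2} x lets one run an off-the-shelf incremental PCA with fixed memory per update and no pre-images; IncrementalPCA and the toy stream below are illustrative.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
S_inv_sqrt = np.eye(8)                         # assume precomputed from background data
ipca = IncrementalPCA(n_components=4)

for _ in range(10):                            # stream of mini-batches (e.g., video frames)
    batch = rng.normal(size=(32, 8))
    ipca.partial_fit(batch @ S_inv_sqrt.T)     # map to feature space, then update subspace

print(ipca.components_.shape)                  # (4, 8): learned kernel-subspace basis
```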

61 One of the main applications of iPCA is object tracking [23], in which the object subspace is adaptively learned and online updated. [sent-275, score-0.136]

62 In this paper we combine the proposed kernel with the tracking framework proposed in [23], but instead of PCA we use the KPCA with the proposed kernel. [sent-276, score-0.485]

63 The particle chosen is the one corresponding to an image which can be best reconstructed within a subspace of choice (in our case, our kernel subspace). [sent-279, score-0.478]
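
A hedged sketch of the particle-selection rule just described: among candidate particles, keep the one whose mapped appearance has the lowest reconstruction error in the current subspace; the basis U, the mean, and the identity map used for φ are placeholders.

```python
import numpy as np

def best_particle(particles, U, mean, phi):
    """Pick the particle best reconstructed by the orthonormal subspace basis U."""
    errors = []
    for p in particles:
        z = phi(p) - mean
        recon = U @ (U.T @ z)                      # projection onto the subspace
        errors.append(np.linalg.norm(z - recon))
    return int(np.argmin(errors))                  # lowest reconstruction error wins

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(64, 8)))      # toy 8-D subspace of a 64-D feature space
mean = np.zeros(64)
particles = rng.normal(size=(20, 64))              # candidate image patches (flattened)
print(best_particle(particles, U, mean, phi=lambda x: x))
```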

64 In the tracking framework the set of background samples A needed can be images of objects other than the one we wish to track. [sent-281, score-0.195]

65 Experimental Results: For our experiments, we evaluated the proposed kernel in a number of applications including face recognition, object tracking and deformable model fitting. [sent-285, score-0.661]

66 For face recognition we used a similar framework to [32]. [sent-286, score-0.119]

67 We show that by exploiting the functional form of the one-shot similarity kernel state-of-the-art tracking results can be produced. [sent-318, score-0.762]

68 Finally, we combined the one-shot similarity kernel SVMs with the recently introduced discriminative fitting algorithms, such as the Constrained Local Models (CLMs) [24], and we report state-of-the-art fitting results. [sent-319, score-0.407]

69 Face Recognition: The usefulness of the one-shot similarity kernel for face verification has been shown in [32], using the LFW database [8], hence we do not repeat these experiments. [sent-322, score-0.766]

70 We did perform multi-identity face classification in the LFW image set to show the gain in computational time by using the proposed formulation of the one-shot similarity kernel. [sent-323, score-0.303]

71 As in [32, 33], we fused various features and measured the face recognition performance by varying the number of probes (from 5, 10, 20 and 50 subjects) and performing 20 random repetitions per experiment (for more details regarding the features the interested reader can refer to [32, 33]). [sent-327, score-0.238]

72 We used the one-vs-all linear SVM proposed in [6] with the original form of the one-shot similarity kernel in (3). [sent-328, score-0.623]

73 We also used the proposed form of the kernel (9) with a fast implementation of one-vs-all linear SVM in [10], for comparison reasons. [sent-329, score-0.439]

74 This implementation can be used only for the case of linear kernels (or, as in our case, only when φ(x) is known). [sent-330, score-0.168]

75 The two kernels (the original one-shot similarity kernel and the closed form of the one-shot similarity kernel given in (9)) are denoted as OSK and FOSK, respectively. [sent-368, score-1.28]

76 As we can see, the proposed closed form solution of the one-shot similarity kernel in (9) produces similar results to the original form of the one-shot similarity kernel, but in at least one order of magnitude less time (O(n) versus O(n^2), where n is the number of training samples). [sent-369, score-0.925]

77 Similar gain in computational time was recently reported in [27] using approximations of various additive kernels. [sent-370, score-0.152]

78 Object Tracking: We evaluated the performance of our subspace learning algorithms for the application of appearance-based face tracking. [sent-373, score-0.167]

79 The appearance-based approach to tracking has been one of the de facto choices for tracking objects in image sequences. [sent-374, score-0.176]

80 As discussed, the proposed kernel subspacebased tracking algorithm is closely related to the incremental visual tracker in [23] (abbreviated as IVT). [sent-375, score-0.819]

81 the ℓ1 tracker proposed in [19] and the Multiple Instance Learning (MIL) tracker in [1]. [sent-379, score-0.168]

82 The Mean RMS is summarized in Table 2 (the proposed tracker is under the abbreviation OSK-IVT, while the one-shot similarity kernel using the online kernel learning technique with pre-images [4] is abbreviated as OSK-IVT-K). [sent-419, score-1.169]

83 As can be seen, by exploiting the functional form (8) of the proposed kernel we obtain an adaptive tracking algorithm with the same complexity as IVT while producing state-of-the-art results. [sent-420, score-0.578]

84 Some indicative examples in which the proposed tracker (using the proposed kernel) outperforms the IVT tracker are shown in Fig. [sent-422, score-0.481]

85 Deformable Model Fitting: The state-of-the-art algorithms for deformable model fitting are the recently introduced non-parametric CLMs using non-parametric density estimation. [sent-428, score-0.151]

86 We should note here that the functional form of the one-shot similarity kernel (3) cannot be combined with the linear cutting-plane SVMs. [sent-438, score-0.674]

87 For all one-shot similarity kernels the background samples for each of the points are selected as patches of other points. [sent-439, score-0.459]

88 In face deformable model fitting experiments, results are often reported in a curve of the proportion of the images vs the shape root mean square error (RMSE) between the predicted shape and the ground truth shape. [sent-448, score-0.238]

89 As we can see, the use of one-shot similarity kernels indeed increases the performance over standard SVMs. [sent-454, score-0.352]

90 Furthermore, the use of the closed form of the one-shot similarity kernel (9) with the cutting-plane SVMs boosts the results even further over the original one-shot similarity kernel. [sent-455, score-0.883]

91 Finally, in order to perform fair comparisons, we compared the actual time for solving the SVM optimization problem with the proposed kernel and either the cutting-plane linear algorithm or the standard kernel SVM for the original version of the kernel. [sent-456, score-0.845]

92 Training with the proposed kernel and the original form required 3 hours and around 1 day, respectively. [sent-458, score-0.439]

93 Conclusions In this paper we studied a recently introduced class of kernels, the one-shot similarity kernels. [sent-460, score-0.216]

94 We derived closed form feature maps and proved that they can be used for efficient exact incremental learning. [sent-461, score-0.412]

95 We successfully combined them with typical classification algorithms (SVMs) and incremental learning techniques (iPCA) and applied them in several problems (face recognition, object tracking and deformable model fitting), acquiring state-of-the-art results. [sent-462, score-0.395]

96 Labeled faces in the wild: A database for studying face recognition in unconstrained environments. [sent-520, score-0.166]

97 Efficient online subspace learning with an indefinite kernel for visual tracking and recognition. [sent-579, score-0.533]

98 Using the nystrom method to speed up kernel machines. [sent-651, score-0.439]

99 Effective unconstrained face recognition by combining multiple descriptors and learned background statistics. [sent-671, score-0.164]

100 The discriminant elastic graph matching algorithm applied to frontal face verification. [sent-683, score-0.156]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('kernel', 0.397), ('kpca', 0.333), ('incremental', 0.25), ('lost', 0.204), ('similarity', 0.184), ('ug', 0.178), ('kernels', 0.168), ('uf', 0.166), ('zafeiriou', 0.156), ('svms', 0.149), ('face', 0.119), ('ipca', 0.108), ('kotsia', 0.108), ('xts', 0.108), ('xn', 0.106), ('ivt', 0.092), ('clms', 0.089), ('tx', 0.089), ('tracking', 0.088), ('psd', 0.087), ('tracker', 0.084), ('hg', 0.084), ('hfthg', 0.081), ('irene', 0.081), ('tefas', 0.081), ('uguftx', 0.081), ('xf', 0.079), ('hassner', 0.077), ('approximations', 0.077), ('closed', 0.076), ('xg', 0.075), ('explicit', 0.074), ('oneshot', 0.067), ('wolf', 0.066), ('rms', 0.064), ('ma', 0.064), ('ka', 0.063), ('qg', 0.063), ('fitting', 0.062), ('samples', 0.062), ('pca', 0.06), ('hf', 0.059), ('principal', 0.058), ('wild', 0.057), ('deformable', 0.057), ('particles', 0.057), ('ufugtx', 0.054), ('yts', 0.054), ('tg', 0.054), ('mappings', 0.054), ('pages', 0.053), ('cutting', 0.051), ('functional', 0.051), ('component', 0.049), ('subspace', 0.048), ('qf', 0.048), ('stefanos', 0.048), ('flda', 0.048), ('faces', 0.047), ('reader', 0.046), ('background', 0.045), ('eigenspaces', 0.044), ('exact', 0.044), ('fourier', 0.044), ('additive', 0.043), ('facial', 0.043), ('form', 0.042), ('lfw', 0.042), ('packages', 0.042), ('nystrom', 0.042), ('ieee', 0.041), ('regarding', 0.041), ('abbreviated', 0.04), ('urls', 0.04), ('trackers', 0.038), ('verification', 0.038), ('eigendecomposition', 0.038), ('discriminant', 0.037), ('methodologies', 0.036), ('xi', 0.035), ('svm', 0.034), ('drastic', 0.034), ('particle', 0.033), ('interested', 0.032), ('vt', 0.032), ('recently', 0.032), ('vd', 0.031), ('mil', 0.031), ('margin', 0.03), ('lkopf', 0.03), ('tth', 0.029), ('rmse', 0.029), ('bi', 0.028), ('ts', 0.028), ('tu', 0.028), ('properties', 0.028), ('hence', 0.028), ('london', 0.028), ('machines', 0.028), ('bounding', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999923 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties

Author: Stefanos Zafeiriou, Irene Kotsia

Abstract: Kernels have been a common tool of machine learning and computer vision applications for modeling nonlinearities and/or the design of robust1 similarity measures between objects. Arguably, the class of positive semidefinite (psd) kernels, widely known as Mercer’s Kernels, constitutes one of the most well-studied cases. For every psd kernel there exists an associated feature map to an arbitrary dimensional Hilbert space H, the so-called feature space. The main reason behind psd kernels’ popularity is the fact that classification/regression techniques (such as Support Vector Machines (SVMs)) and component analysis algorithms (such as Kernel Principal Component Analysis (KPCA)) can be devised in H without an explicit definition of the feature map, only by using the kernel (the so-called kernel trick). Recently, due to the development of very efficient solutions for large scale linear SVMs and for incremental linear component analysis, the research towards finding feature map approximations for classes of kernels has attracted significant interest. In this paper, we attempt the derivation of explicit feature maps of a recently proposed class of kernels, the so-called one-shot similarity kernels. We show that for this class of kernels either there exists an explicit representation in feature space or the kernel can be expressed in such a form that allows for exact incremental learning. We theoretically explore the properties of these kernels and show how these kernels can be used for the development of robust visual tracking, recognition and deformable fitting algorithms. 1Robustness may refer either to the presence of outliers and noise or to the robustness to a class of transformations (e.g., translation). Irene Kotsia: Electronics Laboratory, Department of Physics, University of Patras, Greece; School of Science and Technology, Middlesex University, London. i.kotsia@mdx.ac.uk

2 0.21867026 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding

Author: Sadeep Jayasumana, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi

Abstract: We propose a framework for 2D shape analysis using positive definite kernels defined on Kendall’s shape manifold. Different representations of 2D shapes are known to generate different nonlinear spaces. Due to the nonlinearity of these spaces, most existing shape classification algorithms resort to nearest neighbor methods and to learning distances on shape spaces. Here, we propose to map shapes on Kendall’s shape manifold to a high dimensional Hilbert space where Euclidean geometry applies. To this end, we introduce a kernel on this manifold that permits such a mapping, and prove its positive definiteness. This kernel lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM, MKL and kernel PCA, to the shape manifold. We demonstrate the benefits of our approach over the state-of-the-art methods on shape classification, clustering and retrieval.

3 0.15955803 85 iccv-2013-Compositional Models for Video Event Detection: A Multiple Kernel Learning Latent Variable Approach

Author: Arash Vahdat, Kevin Cannons, Greg Mori, Sangmin Oh, Ilseo Kim

Abstract: We present a compositional model for video event detection. A video is modeled using a collection of both global and segment-level features and kernel functions are employed for similarity comparisons. The locations of salient, discriminative video segments are treated as a latent variable, allowing the model to explicitly ignore portions of the video that are unimportant for classification. A novel, multiple kernel learning (MKL) latent support vector machine (SVM) is defined, that is used to combine and re-weight multiple feature types in a principled fashion while simultaneously operating within the latent variable framework. The compositional nature of the proposed model allows it to respond directly to the challenges of temporal clutter and intra-class variation, which are prevalent in unconstrained internet videos. Experimental results on the TRECVID Multimedia Event Detection 2011 (MED11) dataset demonstrate the efficacy of the method.

4 0.15397637 293 iccv-2013-Nonparametric Blind Super-resolution

Author: Tomer Michaeli, Michal Irani

Abstract: Super resolution (SR) algorithms typically assume that the blur kernel is known (either the Point Spread Function ‘PSF’ of the camera, or some default low-pass filter, e.g. a Gaussian). However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. We propose a general framework for “blind” super resolution. In particular, we show that: (i) Unlike the common belief, the PSF of the camera is the wrong blur kernel to use in SR algorithms. (ii) We show how the correct SR blur kernel can be recovered directly from the low-resolution image. This is done by exploiting the inherent recurrence property of small natural image patches (either internally within the same image, or externally in a collection of other natural images). In particular, we show that recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR), can also be used for estimating the optimal blur kernel. This leads to significant improvement in SR results.

5 0.14523397 392 iccv-2013-Similarity Metric Learning for Face Recognition

Author: Qiong Cao, Yiming Ying, Peng Li

Abstract: Recently, there is a considerable amount of efforts devoted to the problem of unconstrained face verification, where the task is to predict whether pairs of images are from the same person or not. This problem is challenging and difficult due to the large variations in face images. In this paper, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification. We formulate its objective function by incorporating the robustness to the large intra-personal variations and the discriminative power of novel similarity metrics. In addition, our formulation is a convex optimization problem which guarantees the existence of its global solution. Experiments show that our proposed method achieves the state-of-the-art results on the challenging Labeled Faces in the Wild (LFW) database [10].

6 0.14424124 35 iccv-2013-Accurate Blur Models vs. Image Priors in Single Image Super-resolution

7 0.13970664 116 iccv-2013-Directed Acyclic Graph Kernels for Action Recognition

8 0.13715139 81 iccv-2013-Combining the Right Features for Complex Event Recognition

9 0.12868318 425 iccv-2013-Tracking via Robust Multi-task Multi-view Joint Sparse Representation

10 0.12537515 298 iccv-2013-Online Robust Non-negative Dictionary Learning for Visual Tracking

11 0.11928023 335 iccv-2013-Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition

12 0.11303271 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

13 0.11102317 359 iccv-2013-Robust Object Tracking with Online Multi-lifespan Dictionary Learning

14 0.11022487 129 iccv-2013-Dynamic Scene Deblurring

15 0.10920681 157 iccv-2013-Fast Face Detector Training Using Tailored Views

16 0.1070068 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

17 0.10661036 227 iccv-2013-Large-Scale Image Annotation by Efficient and Robust Kernel Metric Learning

18 0.10409589 212 iccv-2013-Image Set Classification Using Holistic Multiple Order Statistics Features and Localized Multi-kernel Metric Learning

19 0.096673101 321 iccv-2013-Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model

20 0.096654646 424 iccv-2013-Tracking Revisited Using RGBD Camera: Unified Benchmark and Baselines


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.229), (1, 0.026), (2, -0.058), (3, -0.037), (4, -0.062), (5, -0.026), (6, 0.091), (7, 0.119), (8, 0.006), (9, -0.016), (10, -0.078), (11, -0.165), (12, 0.052), (13, -0.089), (14, -0.019), (15, 0.006), (16, 0.08), (17, -0.018), (18, -0.07), (19, -0.13), (20, -0.028), (21, 0.055), (22, 0.009), (23, -0.026), (24, 0.027), (25, 0.081), (26, 0.02), (27, 0.059), (28, -0.04), (29, 0.138), (30, -0.015), (31, -0.025), (32, -0.102), (33, -0.119), (34, 0.028), (35, 0.071), (36, 0.014), (37, -0.097), (38, -0.01), (39, -0.041), (40, -0.036), (41, -0.074), (42, 0.021), (43, 0.026), (44, 0.058), (45, -0.02), (46, -0.034), (47, -0.054), (48, -0.048), (49, 0.0)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97569871 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties

Author: Stefanos Zafeiriou, Irene Kotsia

Abstract: Kernels have been a common tool of machine learning and computer vision applications for modeling nonlinearities and/or the design of robust1 similarity measures between objects. Arguably, the class of positive semidefinite (psd) kernels, widely known as Mercer’s Kernels, constitutes one of the most well-studied cases. For every psd kernel there exists an associated feature map to an arbitrary dimensional Hilbert space H, the so-called feature space. Tdihme mnsaiionn reason ebreth sipnadc ep s Hd ,ke threne slos’-c c aplolpedul aferiattyu rise the fact that classification/regression techniques (such as Support Vector Machines (SVMs)) and component analysis algorithms (such as Kernel Principal Component Analysis (KPCA)) can be devised in H, without an explicit defisnisiti (oKnP of t)h)e c feature map, only by using athne xkperlniceitl (dtehfeso-called kernel trick). Recently, due to the development of very efficient solutions for large scale linear SVMs and for incremental linear component analysis, the research to- wards finding feature map approximations for classes of kernels has attracted significant interest. In this paper, we attempt the derivation of explicit feature maps of a recently proposed class of kernels, the so-called one-shot similarity kernels. We show that for this class of kernels either there exists an explicit representation in feature space or the kernel can be expressed in such a form that allows for exact incremental learning. We theoretically explore the properties of these kernels and show how these kernels can be used for the development of robust visual tracking, recognition and deformable fitting algorithms. 1Robustness may refer to either the presence of outliers and noise the robustness to a class of transformations (e.g., translation). or to ∗ Irene Kotsia ,†,? ∗Electronics Laboratory, Department of Physics, University of Patras, Greece ?School of Science and Technology, Middlesex University, London i .kot s i @mdx . ac .uk a

2 0.77062756 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding

Author: Sadeep Jayasumana, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi

Abstract: We propose a framework for 2D shape analysis using positive definite kernels defined on Kendall’s shape manifold. Different representations of 2D shapes are known to generate different nonlinear spaces. Due to the nonlinearity of these spaces, most existing shape classification algorithms resort to nearest neighbor methods and to learning distances on shape spaces. Here, we propose to map shapes on Kendall’s shape manifold to a high dimensional Hilbert space where Euclidean geometry applies. To this end, we introduce a kernel on this manifold that permits such a mapping, and prove its positive definiteness. This kernel lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM, MKL and kernel PCA, to the shape manifold. We demonstrate the benefits of our approach over the state-of-the-art methods on shape classification, clustering and retrieval.

3 0.73161578 212 iccv-2013-Image Set Classification Using Holistic Multiple Order Statistics Features and Localized Multi-kernel Metric Learning

Author: Jiwen Lu, Gang Wang, Pierre Moulin

Abstract: This paper presents a new approach for image set classification, where each training and testing example contains a set of image instances of an object captured from varying viewpoints or under varying illuminations. While a number of image set classification methods have been proposed in recent years, most of them model each image set as a single linear subspace or mixture of linear subspaces, which may lose some discriminative information for classification. To address this, we propose exploring multiple order statistics as features of image sets, and develop a localized multikernel metric learning (LMKML) algorithm to effectively combine different order statistics information for classification. Our method achieves the state-of-the-art performance on four widely used databases including the Honda/UCSD, CMU Mobo, and Youtube face datasets, and the ETH-80 object dataset.

4 0.73055756 227 iccv-2013-Large-Scale Image Annotation by Efficient and Robust Kernel Metric Learning

Author: Zheyun Feng, Rong Jin, Anil Jain

Abstract: One of the key challenges in search-based image annotation models is to define an appropriate similarity measure between images. Many kernel distance metric learning (KML) algorithms have been developed in order to capture the nonlinear relationships between visual features and semantics ofthe images. Onefundamental limitation in applying KML to image annotation is that it requires converting image annotations into binary constraints, leading to a significant information loss. In addition, most KML algorithms suffer from high computational cost due to the requirement that the learned matrix has to be positive semi-definitive (PSD). In this paper, we propose a robust kernel metric learning (RKML) algorithm based on the regression technique that is able to directly utilize image annotations. The proposed method is also computationally more efficient because PSD property is automatically ensured by regression. We provide the theoretical guarantee for the proposed algorithm, and verify its efficiency and effectiveness for image annotation by comparing it to state-of-the-art approaches for both distance metric learning and image annotation. ,

5 0.64338976 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

Author: Zhenyu Guo, Z. Jane Wang

Abstract: Digital images nowadays show large appearance variabilities on picture styles, in terms of color tone, contrast, vignetting, and etc. These ‘picture styles’ are directly related to the scene radiance, image pipeline of the camera, and post processing functions (e.g., photography effect filters). Due to the complexity and nonlinearity of these factors, popular gradient-based image descriptors generally are not invariant to different picture styles, which could degrade the performance for object recognition. Given that images shared online or created by individual users are taken with a wide range of devices and may be processed by various post processing functions, to find a robust object recognition system is useful and challenging. In this paper, we investigate the influence of picture styles on object recognition by making a connection between image descriptors and a pixel mapping function g, and accordingly propose an adaptive approach based on a g-incorporated kernel descriptor and multiple kernel learning, without estimating or specifying the image styles used in training and testing. We conduct experiments on the Domain Adaptation data set, the Oxford Flower data set, and several variants of the Flower data set by introducing popular photography effects through post-processing. The results demonstrate that theproposedmethod consistently yields recognition improvements over standard descriptors in all studied cases.

6 0.62862468 392 iccv-2013-Similarity Metric Learning for Face Recognition

7 0.61743903 293 iccv-2013-Nonparametric Blind Super-resolution

8 0.61475611 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition

9 0.59999454 257 iccv-2013-Log-Euclidean Kernels for Sparse Representation and Dictionary Learning

10 0.5837667 29 iccv-2013-A Scalable Unsupervised Feature Merging Approach to Efficient Dimensionality Reduction of High-Dimensional Visual Data

11 0.57916802 35 iccv-2013-Accurate Blur Models vs. Image Priors in Single Image Super-resolution

12 0.56849378 177 iccv-2013-From Point to Set: Extend the Learning of Distance Metrics

13 0.56822115 154 iccv-2013-Face Recognition via Archetype Hull Ranking

14 0.54356605 425 iccv-2013-Tracking via Robust Multi-task Multi-view Joint Sparse Representation

15 0.54174858 117 iccv-2013-Discovering Details and Scene Structure with Hierarchical Iconoid Shift

16 0.52919513 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

17 0.52715468 25 iccv-2013-A Novel Earth Mover's Distance Methodology for Image Matching with Gaussian Mixture Models

18 0.52584404 77 iccv-2013-Codemaps - Segment, Classify and Search Objects Locally

19 0.52253789 129 iccv-2013-Dynamic Scene Deblurring

20 0.52055728 168 iccv-2013-Finding the Best from the Second Bests - Inhibiting Subjective Bias in Evaluation of Visual Tracking Algorithms


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.037), (7, 0.023), (26, 0.407), (31, 0.03), (35, 0.011), (40, 0.014), (42, 0.106), (64, 0.055), (73, 0.016), (89, 0.173), (97, 0.02), (98, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.98033452 405 iccv-2013-Structured Light in Sunlight

Author: Mohit Gupta, Qi Yin, Shree K. Nayar

Abstract: Strong ambient illumination severely degrades the performance of structured light based techniques. This is especially true in outdoor scenarios, where the structured light sources have to compete with sunlight, whose power is often 2-5 orders of magnitude larger than the projected light. In this paper, we propose the concept of light-concentration to overcome strong ambient illumination. Our key observation is that given a fixed light (power) budget, it is always better to allocate it sequentially in several portions of the scene, as compared to spreading it over the entire scene at once. For a desired level of accuracy, we show that by distributing light appropriately, the proposed approach requires 1-2 orders lower acquisition time than existing approaches. Our approach is illumination-adaptive as the optimal light distribution is determined based on a measurement of the ambient illumination level. Since current light sources have a fixed light distribution, we have built a prototype light source that supports flexible light distribution by controlling the scanning speed of a laser scanner. We show several high quality 3D scanning results in a wide range of outdoor scenarios. The proposed approach will benefit 3D vision systems that need to operate outdoors under extreme ambient illumination levels on a limited time and power budget.

2 0.95856714 395 iccv-2013-Slice Sampling Particle Belief Propagation

Author: Oliver Müller, Michael Ying Yang, Bodo Rosenhahn

Abstract: Inference in continuous label Markov random fields is a challenging task. We use particle belief propagation (PBP) for solving the inference problem in continuous label space. Sampling particles from the belief distribution is typically done by using Metropolis-Hastings (MH) Markov chain Monte Carlo (MCMC) methods which involves sampling from a proposal distribution. This proposal distribution has to be carefully designed depending on the particular model and input data to achieve fast convergence. We propose to avoid dependence on a proposal distribution by introducing a slice sampling based PBP algorithm. The proposed approach shows superior convergence performance on an image denoising toy example. Our findings are validated on a challenging relational 2D feature tracking application.

3 0.95720202 51 iccv-2013-Anchored Neighborhood Regression for Fast Example-Based Super-Resolution

Author: Radu Timofte, Vincent De_Smet, Luc Van_Gool

Abstract: Recently there have been significant advances in image upscaling or image super-resolution based on a dictionary of low and high resolution exemplars. The running time of the methods is often ignored despite the fact that it is a critical factor for real applications. This paper proposes fast super-resolution methods while making no compromise on quality. First, we support the use of sparse learned dictionaries in combination with neighbor embedding methods. In this case, the nearest neighbors are computed using the correlation with the dictionary atoms rather than the Euclidean distance. Moreover, we show that most of the current approaches reach top performance for the right parameters. Second, we show that using global collaborative coding has considerable speed advantages, reducing the super-resolution mapping to a precomputed projective matrix. Third, we propose the anchored neighborhood regression. That is to anchor the neighborhood embedding of a low resolution patch to the nearest atom in the dictionary and to precompute the corresponding embedding matrix. These proposals are contrasted with current state-of- the-art methods on standard images. We obtain similar or improved quality and one or two orders of magnitude speed improvements.

4 0.94734395 125 iccv-2013-Drosophila Embryo Stage Annotation Using Label Propagation

Author: Tomáš Kazmar, Evgeny Z. Kvon, Alexander Stark, Christoph H. Lampert

Abstract: In this work we propose a system for automatic classification of Drosophila embryos into developmental stages. While the system is designed to solve an actual problem in biological research, we believe that the principle underlying it is interesting not only for biologists, but also for researchers in computer vision. The main idea is to combine two orthogonal sources of information: one is a classifier trained on strongly invariant features, which makes it applicable to images of very different conditions, but also leads to rather noisy predictions. The other is a label propagation step based on a more powerful similarity measure that however is only consistent within specific subsets of the data at a time. In our biological setup, the information sources are the shape and the staining patterns of embryo images. We show experimentally that while neither of the methods can be used by itself to achieve satisfactory results, their combination achieves prediction quality comparable to human per- formance.

5 0.93798053 282 iccv-2013-Multi-view Object Segmentation in Space and Time

Author: Abdelaziz Djelouah, Jean-Sébastien Franco, Edmond Boyer, François Le_Clerc, Patrick Pérez

Abstract: In this paper, we address the problem of object segmentation in multiple views or videos when two or more viewpoints of the same scene are available. We propose a new approach that propagates segmentation coherence information in both space and time, hence allowing evidences in one image to be shared over the complete set. To this aim the segmentation is cast as a single efficient labeling problem over space and time with graph cuts. In contrast to most existing multi-view segmentation methods that rely on some form of dense reconstruction, ours only requires a sparse 3D sampling to propagate information between viewpoints. The approach is thoroughly evaluated on standard multiview datasets, as well as on videos. With static views, results compete with state of the art methods but they are achieved with significantly fewer viewpoints. With multiple videos, we report results that demonstrate the benefit of segmentation propagation through temporal cues.

6 0.93108296 348 iccv-2013-Refractive Structure-from-Motion on Underwater Images

7 0.9304558 198 iccv-2013-Hierarchical Part Matching for Fine-Grained Visual Categorization

same-paper 8 0.8928597 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties

9 0.86816871 102 iccv-2013-Data-Driven 3D Primitives for Single Image Understanding

10 0.86804557 8 iccv-2013-A Deformable Mixture Parsing Model with Parselets

11 0.79594117 414 iccv-2013-Temporally Consistent Superpixels

12 0.79396784 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions

13 0.7755748 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation

14 0.763327 150 iccv-2013-Exemplar Cut

15 0.75827032 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

16 0.75587779 330 iccv-2013-Proportion Priors for Image Sequence Segmentation

17 0.74608099 432 iccv-2013-Uncertainty-Driven Efficiently-Sampled Sparse Graphical Models for Concurrent Tumor Segmentation and Atlas Registration

18 0.744367 411 iccv-2013-Symbiotic Segmentation and Part Localization for Fine-Grained Categorization

19 0.74363512 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes

20 0.74237227 225 iccv-2013-Joint Segmentation and Pose Tracking of Human in Natural Videos