iccv iccv2013 iccv2013-259 knowledge-graph by maker-knowledge-mining

259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples


Source: pdf

Author: Hongteng Xu, Hongyuan Zha

Abstract: Data sparsity has been a thorny issue for manifold-based image synthesis, and in this paper we address this critical problem by leveraging ideas from transfer learning. Specifically, we propose methods based on generating auxiliary data in the form of synthetic samples using transformations of the original sparse samples. To incorporate the auxiliary data, we propose a weighted data synthesis method, which adaptively selects from the generated samples for inclusion during the manifold learning process via a weighted iterative algorithm. To demonstrate the feasibility of the proposed method, we apply it to the problem of face image synthesis from sparse samples. Compared with existing methods, the proposed method shows encouraging results with good performance improvements.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Specifically, we propose methods based on generating auxiliary data in the form of synthetic samples using transformations of the original sparse samples. [sent-3, score-0.894]

2 To incorporate the auxiliary data, we propose a weighted data synthesis method, which adaptively selects from the generated samples for inclusion during the manifold learning process via a weighted iterative algorithm. [sent-4, score-1.449]

3 To demonstrate the feasibility of the proposed method, we apply it to the problem of face image synthesis from sparse samples. [sent-5, score-0.491]

4 This learning problem can be viewed as estimating a nonlinear function mapping from a learned parameter space to the sample space (e. [sent-11, score-0.234]

5 Many nonlinear synthesis algorithms have been proposed to solve this problem. [sent-15, score-0.435]

6 Particularly, because of their strong capability of extracting low-dimensional structural information from high-dimensional data, manifold based methods are widely applied for synthesis and learning problems. [sent-16, score-0.877]

7 In [5], Locally-Smooth-Manifold-Learning (LSML) is proposed to learn a warping function from a point on a manifold to its neighbors. [sent-26, score-0.478]

8 However, a common issue for all the methods above is that because the manifold is fitted locally by linear models, they can produce good results only if the number of samples is relatively large. [sent-34, score-0.609]

9 In the case of sparse samples, the underlying manifold will be poorly captured because enough neighbors of a sample cannot be found, so a good linear fit is not feasible. [sent-35, score-0.622]

10 In this paper, we prove that it is possible to synthesize some special images based on sparse samples by leveraging the methodology of transfer learning in manifold-based image synthesis. [sent-36, score-0.448]

11 Specifically, we use the idea of leveraging auxiliary data to enhance the learning for the target domain [18, 15, 17, 4, 16, 12]. [sent-37, score-0.522]

12 For example, for the image classification problem in [12], for those classes with very few training samples, new training samples are borrowed from transformed samples generated from other classes. [sent-38, score-0.457]

13 Such an auxiliary data based strategy can also be applied in image synthesis. [sent-40, score-0.371]

14 Given sparse samples, most regions of the manifold are not adequately covered. [sent-41, score-0.492]

15 Fortunately, with the help of certain transformations, we can generate auxiliary data and then obtain a more comprehensive albeit noisier coverage of the manifold. [sent-42, score-0.371]
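The transformation idea can be sketched in a few lines of NumPy. The shift range and the particular transformations below are illustrative choices, not the paper's exact parameters (those are given in Section 4):

```python
import numpy as np

def make_auxiliary(images, max_shift=2):
    """Generate auxiliary samples by shifting, flipping and rotating
    each original image (hypothetical parameters, not the paper's)."""
    aux = []
    for img in images:
        for s in range(-max_shift, max_shift + 1):
            if s != 0:
                aux.append(np.roll(img, s, axis=1))  # circular horizontal shift
        aux.append(np.fliplr(img))                   # horizontal flip
        aux.append(np.rot90(img))                    # 90-degree rotation
    return np.stack(aux)

# Tiny demo: 3 sparse "images" of size 4x4
rng = np.random.default_rng(0)
originals = rng.random((3, 4, 4))
auxiliary = make_auxiliary(originals)
# Each original yields 2*max_shift shifted + 1 flipped + 1 rotated samples
print(auxiliary.shape)  # (18, 4, 4)
```

Each sparse sample thus spawns a small cloud of transformed samples, giving the noisier but denser coverage of the manifold described above.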

16 The key difference between the proposed method and the works mentioned above is that we do not have an external data set; transformations are applied to the original data points in order to generate the auxiliary data. [sent-43, score-0.602]

17 To incorporate the noisy auxiliary data, we develop a new de-noising scheme for the manifold based synthesis methods. [sent-44, score-1.126]

18 Section 2 gives a brief review of manifold learning and data synthesis from dense samples. [sent-49, score-0.610]

19 Section 3 presents the strategy for designing transformations for sparse samples and the proposed learning algorithm. [sent-50, score-0.441]

20 The transformations applied to head pose images and the synthesis results are discussed in section 4. [sent-51, score-0.561]

21 Manifold Learning and Data Synthesis from Dense Samples: The basic assumption of manifold learning is that high-dimensional data can be viewed as a manifold embedded in the sample space. [sent-54, score-1.028]
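The manifold assumption can be illustrated with a toy example: a 1-D latent parameter (e.g. a head-pose angle) mapped nonlinearly into a high-dimensional sample space, so the data lie on a curve embedded in R^10. The map and dimensions below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((10, 3))  # random embedding directions

def embed(theta):
    """Nonlinear map from the latent coordinate to the sample space."""
    feats = np.stack([theta, np.sin(theta), np.cos(theta)], axis=-1)
    return feats @ B.T

dense = embed(np.linspace(0, np.pi, 100))   # dense sampling of the manifold
sparse = embed(np.linspace(0, np.pi, 5))    # sparse sampling of the same one
print(dense.shape, sparse.shape)  # (100, 10) (5, 10)
```

With only the 5 sparse samples, local linear fits of this curve are poor, which is exactly the regime the paper targets.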

22 , a bijection function) between samples and their coordinates in the low-dimensional latent space. [sent-57, score-0.264]

23 In [5], these two problems can be addressed simultaneously by learning a warping function that maps a point on the manifold to its neighbors. [sent-68, score-0.533]

24 We have W(·) between the samples in M in the latent space. [sent-76, score-0.230]

25 So, manifold learning becomes a problem of learning the parameter Θ and the latent coordinates yi. [sent-84, score-0.546]

26 The loss function of manifold learning can be written as follows: minΘ,yi Σi ‖xi − x̂i‖², where x̂i denotes the local linear estimate of xi. [sent-90, score-0.491]

27 (1) measures the error between the point xi on the manifold and its local linear estimate. [sent-102, score-0.554]
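The local linear estimate in this error term can be sketched LLE-style: reconstruct each point as an affine combination of its nearest neighbours. This is a generic sketch of a local linear fit, not the paper's exact parameterisation with Θ:

```python
import numpy as np

def local_linear_estimate(X, i, k=3):
    """Estimate x_i as an affine combination of its k nearest neighbours
    (an LLE-style sketch of the local-linear error term)."""
    d = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(d)[1:k + 1]            # skip the point itself
    G = X[nbrs] - X[i]                       # local displacement vectors
    C = G @ G.T + 1e-6 * np.eye(k)           # regularised local Gram matrix
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                             # affine weights sum to 1
    return w @ X[nbrs]

# On densely sampled data from a line, the local estimate is near-exact
t = np.linspace(0, 1, 50)
X = np.stack([t, 2 * t], axis=1)             # a 1-D manifold in R^2
err = np.linalg.norm(X[10] - local_linear_estimate(X, 10))
print(err < 1e-3)  # True
```

With sparse samples the same estimate degrades, because the "neighbours" are far away and the affine fit no longer tracks the curve.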

28 Given optimal Θ and yi, the new data corresponding to ynew can be synthesized as xnew = f(ynew; Θ). [sent-108, score-0.235]
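A minimal sketch of this synthesis step: given latent coordinates and their samples, map a new latent point to the sample space by weighting its nearest latent neighbours. The inverse-distance weighting here is a stand-in assumption for the learned warping function:

```python
import numpy as np

def synthesize(Y, X, y_new, k=2):
    """Sketch of x_new = f(y_new): weight the k nearest latent
    neighbours by inverse distance and combine their samples."""
    d = np.linalg.norm(Y - y_new, axis=1)
    nbrs = np.argsort(d)[:k]
    w = 1.0 / (d[nbrs] + 1e-8)               # avoid division by zero
    w /= w.sum()
    return w @ X[nbrs]

# Latent coordinates on a line, samples on the mapped curve x = (y, y^2)
Y = np.array([[0.0], [1.0], [2.0], [3.0]])
X = np.hstack([Y, Y ** 2])
x_new = synthesize(Y, X, np.array([1.5]))
print(x_new)  # midway between (1, 1) and (2, 4): [1.5 2.5]
```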

29 In the case of dense samples, the manifold can be recovered with high accuracy. [sent-113, score-0.436]

30 Our main idea to overcome this is to introduce auxiliary data points into the data set, resulting in noisy dense samples. [sent-115, score-0.406]

31 Create Auxiliary Data via Transformations: We propose to create auxiliary data by applying a certain class of transformations to the original sparse samples. [sent-119, score-0.677]

32 In other words, although transformed samples are no longer on the target manifold in general, they may not be very far from it, and can be viewed as "noisy" samples for the recovery of the target manifold. [sent-146, score-1.069]

33 For example, xi and xj are two original samples. [sent-148, score-0.299]

34 It requires that the two synthetic manifolds have some overlap, so that the path is composed of Xi and Xj. [sent-151, score-0.250]

35 Condition 2: For arbitrary xi ∈ X, we apply transformation T to create new samples Xi, whose elements satisfy condition 1. [sent-153, score-0.291]

36 While condition 2 ensures that the samples in Mi are connected with the other synthetic manifolds, so that the global structure of the target manifold can be captured. [sent-156, score-0.793]

37 As a result, the samples of Mi can be borrowed for learning the structure of samples around xi. [sent-157, score-0.228]

38 The blue and green curves are synthetic manifolds created by two transformations. [sent-162, score-0.296]

39 For some samples, their neighbors are close to the other manifold (condition 2), which are labeled by orange circles. [sent-165, score-0.598]

40 The samples are blue "◦"s, whose neighbors can only be found in the set created by the corresponding transformation. [sent-167, score-0.314]

41 All of the synthetic manifolds and the target manifold we want to recover are fused together in the same latent space. [sent-169, score-0.812]

42 The auxiliary data we create are the samples of the synthetic manifolds meeting conditions 1 and 2. [sent-170, score-1.022]

43 By borrowing auxiliary data from the synthetic manifolds, their partial structural information is shared with the unknown target manifold. [sent-171, score-0.824]

44 As a result, we can fit the target manifold with synthetic manifolds, rather than a linear hyperplane. [sent-172, score-0.603]

45 The principle of this method is based on transfer learning, inspired by the work in [12], where the feature space of different classes is shared and their samples can be borrowed by each other. [sent-173, score-0.288]

46 Here, all the manifolds are in the same latent space and parts of their samples can be shared with each other. [sent-174, score-0.382]

47 The difference is that the proposed method does not depend on an external data set; we apply transformations to the original data set to generate the auxiliary data. [sent-175, score-0.602]

48 Another nonlinear manifold learning work appears in [8], which fits the manifold with piecewise polynomial regression, but it still requires dense samples to estimate the model parameters. [sent-176, score-1.181]

49 For sparse images of the horizontal rotation of the head, we use shifting, flipping and rotation to create auxiliary data. [sent-180, score-0.592]

50 We observe that the auxiliary data indeed can help to recover the original manifold. [sent-184, score-0.41]

51 Modifications of the Loss Function: Given auxiliary data {xi, i = N + 1, . . . , N + L}, [sent-193, score-0.371]

52 we get a new data set, where the first N samples are the original samples while the remaining L ones are auxiliary data. [sent-198, score-0.721]

53 The parameters of the transformations and the number of neighbors will be given in Section 4. [sent-201, score-0.252]

54 According to the analysis above, only the samples in the auxiliary data satisfying conditions 1 and 2 can be used. [sent-202, score-0.659]

55 The scheme is described in Algorithm 1, which removes samples not meeting condition 2 from the data set. [sent-205, score-0.390]

56 For samples meeting condition 2, their neighbors are also selected adaptively. [sent-208, score-0.450]
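Algorithm 1 itself is not reproduced on this page; the following is a hypothetical sketch of a condition-2 filter, assuming each auxiliary sample is tagged with the transformation (synthetic manifold) that produced it:

```python
import numpy as np

def filter_condition2(X, manifold_id, k=2):
    """Sketch of an Algorithm-1-style filter: keep a sample only if at
    least one of its k nearest neighbours comes from a different
    synthetic manifold, i.e. the manifolds overlap there (condition 2).
    `manifold_id` labels which transformation produced each sample."""
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]        # k nearest, excluding self
        keep[i] = np.any(manifold_id[nbrs] != manifold_id[i])
    return keep

# Two overlapping manifolds (ids 0 and 1) plus an isolated id-1 cluster
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
ids = np.array([0, 1, 0, 1, 1, 1])
keep = filter_condition2(X, ids, k=2)
print(keep)  # [ True  True  True False False False]
```

The isolated cluster around 5.0 has only same-manifold neighbours, so its samples are dropped, mirroring how samples from disconnected synthetic manifolds are removed.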

57 Weighting the neighbors of samples and the error terms ensures that the influence of samples far from the target manifold is suppressed, so that condition 1 is satisfied. [sent-221, score-1.061]

58 w = [w1, . . . , wN+L]T is the weight vector denoting the auxiliary data meeting condition 2. [sent-242, score-0.585]
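The weighting idea can be illustrated with a Gaussian weight on each sample's distance to the target manifold; the Gaussian form and the bandwidth sigma below are assumptions for illustration, not the paper's actual weight update:

```python
import numpy as np

def condition1_weights(dist_to_manifold, sigma=1.0):
    """Sketch of the weighting in the modified loss: samples far from
    the target manifold get exponentially small weights, suppressing
    their influence (Gaussian form and sigma are our assumption)."""
    return np.exp(-(dist_to_manifold ** 2) / (2 * sigma ** 2))

d = np.array([0.0, 0.5, 3.0])   # original, near-aux, far-aux sample
w = condition1_weights(d)
print(w)  # the far sample's weight is ~0.011, effectively suppressed
```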

59 According to the evaluation of various manifold learning algorithms in [20], the Local-Tangent-Space-Alignment (LTSA) algorithm [24] performs best. [sent-258, score-0.491]

60 Experimental Results: To demonstrate the feasibility, we apply the proposed method to the synthesis of face images [7]. [sent-332, score-0.403]

61 We resize images to 100 × 90 and create auxiliary data by the following three steps. [sent-336, score-0.425]

62 From the 2nd to the 5th column are the synthesis results obtained by LSML [5] with sparse samples, the LLE based method [21], LGGA [9], LSML with dense samples and the proposed method. [sent-345, score-0.630]

63 According to [9], the manifold of the image shifting sequence is a curve. [sent-349, score-0.557]

64 It can be seen from Fig. 3 that a part of the curve can be viewed as a good local fit for the manifold of the rotated head images. [sent-351, score-0.517]

65 After learning the manifold, the location of the new image on the manifold is decided by its coordinates in the tangent space. [sent-368, score-0.594]

66 Given the original sparse samples and corresponding auxiliary data, we compare the proposed method with other competitors, including the LLE based method [21], LGGA [9] and the state-of-the-art method, LSML [5]. [sent-373, score-0.604]

67 Specifically, to show the influence of auxiliary data on the synthesis result, LSML is applied in two cases: purely using the original sparse samples, and combining the sparse samples with auxiliary data. [sent-374, score-1.204]

68 On the other hand, when samples are sparse, the number of samples is too small to avoid over-fitting of the parameters in the learning phase. [sent-380, score-0.263]

69 As a result, LSML leads to an obvious "ghost effect": the synthesis result is similar to that of traditional linear interpolation. [sent-381, score-0.354]

70 Even when LSML is applied with the help of auxiliary data, without adaptive strategies for sample and neighbor selection its performance is still inferior to the proposed method. [sent-382, score-0.415]

71 From Fig. 4, we can find that some LSML results still have the "ghost effect" even with auxiliary data. [sent-384, score-0.371]

72 The reason for this problem is that the outliers in the auxiliary data are not removed in the learning phase. [sent-385, score-0.511]

73 So, in the synthesis phase, it is possible that the neighbors we find include outliers. [sent-386, score-0.449]

74 In such a situation, synthesis results will be corrupted by outliers. [sent-387, score-0.354]

75 The image labeled by the green frame is the synthesis result of LSML. [sent-392, score-0.381]

76 Another problem that may appear in the results of LSML with auxiliary data is "missing rotation". [sent-394, score-0.371]

77 In Fig. 4, the synthesis results of LSML look like repeats of the original image. [sent-396, score-0.393]

78 Because the outliers disobeying condition 2 are not removed, the synthetic manifolds are isolated from each other. [sent-400, score-0.489]

79 As a result, the global structure of the target manifold cannot be learned by LSML. [sent-401, score-0.505]

80 The proposed method, on the other hand, makes the samples sufficient by transformations and removes outliers during the iterations of the learning algorithm. [sent-404, score-0.429]

81 As shown in Figs. 4 and 5, the synthesis results of the proposed method avoid the serious "ghost effect". [sent-406, score-0.354]

82 At the same time, the subtle change of the image is learned by the proposed method, while the synthesis result of LSML is almost the same as the original image. [sent-407, score-0.393]

83 images are original samples and the red curve shows the target manifold. [sent-408, score-0.281]

84 The orange “+”s and corresponding images are synthetic results based on wrong samples (green “+”s). [sent-410, score-0.311]

85 The “ghost effect” is caused by choosing the outliers disobeying condition 1 as samples. [sent-411, score-0.239]

86 The outliers disobeying condition 2 are not removed, which means the structure of the target manifold is not captured. [sent-413, score-0.744]

87 We now discuss a limitation of the proposed method: sometimes the samples cannot be created or borrowed correctly. [sent-414, score-0.281]

88 This is because the samples created by flipping are not perfectly on the original manifold. [sent-417, score-0.759]

89 After learning the manifold and getting the coordinates of all the images in the latent space, we remove one original image and the auxiliary data created from it. [sent-421, score-1.074]

90 The average mean-square-error (MSE) of the synthesis results of 15 people is measured for various methods. [sent-423, score-0.354]
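This leave-one-out evaluation reduces to a per-image MSE; a minimal sketch:

```python
import numpy as np

def mse(synth, truth):
    """Mean-square error between a synthesised image and the held-out
    ground-truth image, as in the leave-one-out evaluation."""
    return np.mean((synth.astype(float) - truth.astype(float)) ** 2)

truth = np.zeros((4, 4))
synth = np.ones((4, 4)) * 0.5
print(mse(synth, truth))  # 0.25
```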

91 According to Table 1, the performance of the LLE based method and LGGA is not satisfactory because of failed synthesis results like those in Fig. [sent-424, score-0.354]

92 Conclusion and Future Work: In this paper, a manifold-based face synthesis method is proposed for the case of sparse samples. [sent-430, score-0.459]

93 By combining a transfer learning strategy with a manifold learning algorithm, the samples are supplemented by their transformed results, which provide additional structural information for manifold recovery. [sent-431, score-1.257]

94 Additionally, the auxiliary data is weighted for outlier detection during the learning phase, which improves learning and final synthesis results. [sent-432, score-0.865]

95 For data sets with certain special properties that can be used to design transformations, the proposed method has the potential to improve the learning result when the number of samples is insufficient. [sent-433, score-0.263]

96 Learning mappings for face synthesis from near infrared to visual light images. [sent-469, score-0.441]

97 Simultaneous learning of nonlinear manifold and dynamical models for highdimensional time series. [sent-517, score-0.629]

98 Dynamic textures synthesis as nonlinear manifold learning and traversing. [sent-534, score-0.926]

99 Face synthesis and recognition from a single image under arbitrary unknown lighting using a spherical harmonic basis morphable model. [sent-595, score-0.39]

100 Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. [sent-607, score-0.298]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('manifold', 0.436), ('synthesis', 0.354), ('auxiliary', 0.336), ('lsml', 0.267), ('samples', 0.173), ('transformations', 0.157), ('manifolds', 0.152), ('lgga', 0.134), ('nnew', 0.134), ('shifting', 0.121), ('xi', 0.118), ('condition', 0.115), ('ghost', 0.103), ('synthetic', 0.098), ('xj', 0.098), ('neighbors', 0.095), ('lle', 0.086), ('synthesize', 0.084), ('nonlinear', 0.081), ('rotation', 0.081), ('disobeying', 0.08), ('ynew', 0.08), ('xnew', 0.071), ('ni', 0.071), ('target', 0.069), ('meeting', 0.067), ('flipping', 0.065), ('mi', 0.064), ('borrowed', 0.062), ('yi', 0.061), ('fi', 0.06), ('latent', 0.057), ('sparse', 0.056), ('learning', 0.055), ('create', 0.054), ('yj', 0.054), ('ltsa', 0.053), ('transfer', 0.053), ('head', 0.05), ('transformed', 0.049), ('synthesized', 0.049), ('face', 0.049), ('gotten', 0.047), ('tm', 0.046), ('created', 0.046), ('phase', 0.046), ('pages', 0.045), ('enlarged', 0.045), ('ij', 0.044), ('neighbor', 0.044), ('yyt', 0.044), ('ino', 0.044), ('aik', 0.044), ('atlanta', 0.044), ('rd', 0.044), ('outliers', 0.044), ('aij', 0.043), ('wi', 0.042), ('regression', 0.042), ('warping', 0.042), ('opinion', 0.041), ('xim', 0.041), ('missing', 0.041), ('removed', 0.041), ('orange', 0.04), ('zij', 0.039), ('original', 0.039), ('tangent', 0.038), ('infrared', 0.038), ('ga', 0.037), ('getting', 0.036), ('morphable', 0.036), ('borrowing', 0.036), ('sample', 0.035), ('data', 0.035), ('georgia', 0.034), ('coordinates', 0.034), ('tech', 0.033), ('mthe', 0.032), ('zha', 0.032), ('mapping', 0.032), ('cybernetics', 0.032), ('feasibility', 0.032), ('vec', 0.032), ('weight', 0.032), ('viewed', 0.031), ('salakhutdinov', 0.031), ('mse', 0.031), ('decided', 0.031), ('synthesizing', 0.031), ('weighted', 0.03), ('pt', 0.03), ('effect', 0.03), ('fj', 0.029), ('highdimensional', 0.029), ('wn', 0.029), ('dynamical', 0.028), ('leveraging', 0.027), ('dimensionality', 0.027), ('labeled', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples


2 0.20084615 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding

Author: Sadeep Jayasumana, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi

Abstract: We propose a framework for 2D shape analysis using positive definite kernels defined on Kendall’s shape manifold. Different representations of 2D shapes are known to generate different nonlinear spaces. Due to the nonlinearity of these spaces, most existing shape classification algorithms resort to nearest neighbor methods and to learning distances on shape spaces. Here, we propose to map shapes on Kendall’s shape manifold to a high dimensional Hilbert space where Euclidean geometry applies. To this end, we introduce a kernel on this manifold that permits such a mapping, and prove its positive definiteness. This kernel lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM, MKL and kernel PCA, to the shape manifold. We demonstrate the benefits of our approach over the state-of-the-art methods on shape classification, clustering and retrieval.

3 0.17927529 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold

Author: Jan Lellmann, Evgeny Strekalovskiy, Sabrina Koetter, Daniel Cremers

Abstract: While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories.

4 0.15073727 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

Author: Ali Elqursh, Ahmed Elgammal

Abstract: The vast majority of work on motion segmentation adopts the affine camera model due to its simplicity. Under the affine model, the motion segmentation problem becomes that of subspace separation. Due to this assumption, such methods are mainly offline and exhibit poor performance when the assumption is not satisfied. This is made evident in state-of-the-art methods that relax this assumption by using piecewise affine spaces and spectral clustering techniques to achieve better results. In this paper, we formulate the problem of motion segmentation as that of manifold separation. We then show how label propagation can be used in an online framework to achieve manifold separation. The performance of our framework is evaluated on a benchmark dataset and achieves competitive performance while being online.

5 0.13726375 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

6 0.12267324 114 iccv-2013-Dictionary Learning and Sparse Coding on Grassmann Manifolds: An Extrinsic Solution

7 0.12213362 178 iccv-2013-From Semi-supervised to Transfer Counting of Crowds

8 0.11901502 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition

9 0.1134679 437 iccv-2013-Unsupervised Random Forest Manifold Alignment for Lipreading

10 0.10599015 100 iccv-2013-Curvature-Aware Regularization on Riemannian Submanifolds

11 0.099749714 6 iccv-2013-A Convex Optimization Framework for Active Learning

12 0.095049404 232 iccv-2013-Latent Space Sparse Subspace Clustering

13 0.089467824 157 iccv-2013-Fast Face Detector Training Using Tailored Views

14 0.088715762 148 iccv-2013-Example-Based Facade Texture Synthesis

15 0.085669778 186 iccv-2013-GrabCut in One Cut

16 0.0843082 354 iccv-2013-Robust Dictionary Learning by Error Source Decomposition

17 0.083928898 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables

18 0.082786307 196 iccv-2013-Hierarchical Data-Driven Descent for Efficient Optimal Deformation Estimation

19 0.079980135 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

20 0.076539822 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.196), (1, 0.006), (2, -0.078), (3, -0.05), (4, -0.101), (5, 0.023), (6, 0.04), (7, 0.035), (8, 0.045), (9, -0.037), (10, -0.032), (11, -0.063), (12, -0.064), (13, -0.033), (14, 0.08), (15, -0.0), (16, -0.037), (17, -0.016), (18, 0.005), (19, -0.034), (20, 0.069), (21, -0.014), (22, 0.079), (23, 0.067), (24, 0.018), (25, 0.059), (26, 0.058), (27, 0.048), (28, 0.031), (29, 0.027), (30, -0.034), (31, 0.05), (32, -0.09), (33, 0.031), (34, 0.077), (35, 0.065), (36, 0.007), (37, 0.066), (38, -0.067), (39, -0.113), (40, 0.04), (41, -0.028), (42, -0.028), (43, -0.172), (44, 0.018), (45, 0.065), (46, 0.08), (47, 0.001), (48, -0.095), (49, -0.124)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93162972 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples


2 0.76835895 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold

Author: Jan Lellmann, Evgeny Strekalovskiy, Sabrina Koetter, Daniel Cremers

Abstract: While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories.

3 0.75688165 178 iccv-2013-From Semi-supervised to Transfer Counting of Crowds

Author: Chen Change Loy, Shaogang Gong, Tao Xiang

Abstract: Regression-based techniques have shown promising results for people counting in crowded scenes. However, most existing techniques require expensive and laborious data annotation for model training. In this study, we propose to address this problem from three perspectives: (1) Instead of exhaustively annotating every single frame, the most informative frames are selected for annotation automatically and actively. (2) Rather than learning from only labelled data, the abundant unlabelled data are exploited. (3) Labelled data from other scenes are employed to further alleviate the burden for data annotation. All three ideas are implemented in a unified active and semi-supervised regression framework with ability to perform transfer learning, by exploiting the underlying geometric structure of crowd patterns via manifold analysis. Extensive experiments validate the effectiveness of our approach.

4 0.69782817 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding

Author: Sadeep Jayasumana, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi

Abstract: We propose a framework for 2D shape analysis using positive definite kernels defined on Kendall’s shape manifold. Different representations of 2D shapes are known to generate different nonlinear spaces. Due to the nonlinearity of these spaces, most existing shape classification algorithms resort to nearest neighbor methods and to learning distances on shape spaces. Here, we propose to map shapes on Kendall’s shape manifold to a high dimensional Hilbert space where Euclidean geometry applies. To this end, we introduce a kernel on this manifold that permits such a mapping, and prove its positive definiteness. This kernel lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM, MKL and kernel PCA, to the shape manifold. We demonstrate the benefits of our approach over the state-of-the-art methods on shape classification, clustering and retrieval.

5 0.61944324 114 iccv-2013-Dictionary Learning and Sparse Coding on Grassmann Manifolds: An Extrinsic Solution

Author: Mehrtash Harandi, Conrad Sanderson, Chunhua Shen, Brian Lovell

Abstract: Recent advances in computer vision and machine learning suggest that a wide range of problems can be addressed more appropriately by considering non-Euclidean geometry. In this paper we explore sparse dictionary learning over the space of linear subspaces, which form Riemannian structures known as Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into the space of symmetric matrices by an isometric mapping, which enables us to devise a closed-form solution for updating a Grassmann dictionary, atom by atom. Furthermore, to handle non-linearity in data, we propose a kernelised version of the dictionary learning algorithm. Experiments on several classification tasks (face recognition, action recognition, dynamic texture classification) show that the proposed approach achieves considerable improvements in discrimination accuracy, in comparison to state-of-the-art methods such as kernelised Affine Hull Method and graphembedding Grassmann discriminant analysis.

6 0.56970298 437 iccv-2013-Unsupervised Random Forest Manifold Alignment for Lipreading

7 0.54062766 100 iccv-2013-Curvature-Aware Regularization on Riemannian Submanifolds

8 0.53429705 227 iccv-2013-Large-Scale Image Annotation by Efficient and Robust Kernel Metric Learning

9 0.53252947 177 iccv-2013-From Point to Set: Extend the Learning of Distance Metrics

10 0.53027016 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

11 0.52759981 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

12 0.52718389 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

13 0.4875735 364 iccv-2013-SGTD: Structure Gradient and Texture Decorrelating Regularization for Image Decomposition

14 0.48531562 443 iccv-2013-Video Synopsis by Heterogeneous Multi-source Correlation

15 0.47134173 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties

16 0.46781471 307 iccv-2013-Parallel Transport of Deformations in Shape Space of Elastic Surfaces

17 0.46764025 257 iccv-2013-Log-Euclidean Kernels for Sparse Representation and Dictionary Learning

18 0.46536246 324 iccv-2013-Potts Model, Parametric Maxflow and K-Submodular Functions

19 0.45916328 61 iccv-2013-Beyond Hard Negative Mining: Efficient Detector Learning via Block-Circulant Decomposition

20 0.4578073 140 iccv-2013-Elastic Net Constraints for Shape Matching


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.056), (7, 0.052), (12, 0.013), (26, 0.095), (27, 0.011), (31, 0.069), (34, 0.016), (38, 0.013), (42, 0.149), (48, 0.018), (63, 0.036), (64, 0.035), (73, 0.054), (76, 0.097), (89, 0.157), (98, 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92150211 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

Author: Hongteng Xu, Hongyuan Zha

Abstract: Data sparsity has been a thorny issue for manifold-based image synthesis, and in this paper we address this critical problem by leveraging ideas from transfer learning. Specifically, we propose methods based on generating auxiliary data in the form of synthetic samples using transformations of the original sparse samples. To incorporate the auxiliary data, we propose a weighted data synthesis method, which adaptively selects from the generated samples for inclusion during the manifold learning process via a weighted iterative algorithm. To demonstrate the feasibility of the proposed method, we apply it to the problem of face image synthesis from sparse samples. Compared with existing methods, the proposed method shows encouraging results with good performance improvements.
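The auxiliary-data idea can be illustrated with a minimal sketch: enlarge a sparse sample set by generating perturbed copies of the originals. The perturbation used here (noise along the data's own principal directions) is a hypothetical stand-in for the paper's transformations, and the adaptive weighting/selection step is omitted entirely:

```python
import numpy as np

def augment_samples(X, n_aug=3, scale=0.05, seed=0):
    """Generate auxiliary samples from a sparse sample matrix X (n x d)
    by small perturbations within the subspace spanned by the data.
    Illustrative stand-in for transformation-based synthetic sampling."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    # principal directions and scales of the sparse sample set
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    out = [X]
    for _ in range(n_aug):
        coeff = rng.standard_normal((X.shape[0], len(s))) * scale * s
        out.append(X + coeff @ Vt)  # perturb each sample in-subspace
    return np.vstack(out)

X = np.random.default_rng(2).standard_normal((10, 40))  # 10 sparse samples
X_aug = augment_samples(X)
print(X_aug.shape)  # (40, 40): 10 originals plus 3 x 10 synthetic copies
```

With more samples available, a neighborhood-graph-based manifold learner has denser local neighborhoods to work with, which is the motivation the abstract describes.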

2 0.89922869 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal and conditional distributions between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and to construct a new feature representation that is effective and robust under substantial distribution differences. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.
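A minimal way to see what "reducing the difference in marginal distributions" measures is the empirical maximum mean discrepancy (MMD); with a linear kernel it reduces to the squared distance between the domain means. The sketch below is illustrative only and is not JDA itself, which also adapts conditional distributions inside a dimensionality-reduction procedure:

```python
import numpy as np

def marginal_mmd(Xs, Xt):
    """Empirical linear-kernel MMD between source and target marginals:
    squared Euclidean distance between the two domain means."""
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

rng = np.random.default_rng(3)
Xs = rng.standard_normal((200, 5))          # source domain
Xt = rng.standard_normal((200, 5)) + 2.0    # target domain, shifted marginal
print(marginal_mmd(Xs, Xt) > marginal_mmd(Xs, Xs))  # True: shift is detected
```

Methods in this family search for a projection that drives such a discrepancy toward zero while preserving the data variance needed for classification.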

3 0.89642704 260 iccv-2013-Manipulation Pattern Discovery: A Nonparametric Bayesian Approach

Author: Bingbing Ni, Pierre Moulin

Abstract: We aim to discover, in an unsupervised manner, humans' action (motion) patterns when manipulating various objects in scenarios such as assisted living. We are motivated by two key observations. First, large variation exists in motion patterns associated with various types of objects being manipulated, thus manually defining motion primitives is infeasible. Second, some motion patterns are shared among different objects being manipulated while others are object specific. We therefore propose a nonparametric Bayesian method that adopts a hierarchical Dirichlet process prior to learn representative manipulation (motion) patterns in an unsupervised manner. Taking easy-to-obtain object detection score maps and dense motion trajectories as inputs, the proposed probabilistic model can discover motion pattern groups associated with different types of objects being manipulated with a shared manipulation pattern dictionary. The size of the learned dictionary is automatically inferred. Comprehensive experiments on two assisted living benchmarks and a cooking motion dataset demonstrate the superiority of our learned manipulation pattern dictionary in representing manipulation actions for recognition.

4 0.89458394 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

Author: Chenglong Bao, Jian-Feng Cai, Hui Ji

Abstract: In recent years, how to learn a dictionary from input images for sparse modelling has been a very active topic in image processing and recognition. Most existing dictionary learning methods consider an over-complete dictionary, e.g. the K-SVD method. Often they require solving a minimization problem that is very challenging in terms of computational feasibility and efficiency. However, if the correlations among dictionary atoms are not well constrained, the redundancy of the dictionary does not necessarily improve the performance of sparse coding. This paper proposes a fast orthogonal dictionary learning method for sparse image representation. With comparable performance on several image restoration tasks, the proposed method is much more computationally efficient than over-complete dictionary learning methods.
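The computational advantage of an orthogonal dictionary is that the sparse coding step has a closed form: project onto the dictionary and keep the largest-magnitude coefficients, with no iterative pursuit. A minimal sketch of that coding step (illustrative only, not the paper's full learning algorithm):

```python
import numpy as np

def sparse_code_orth(D, x, k):
    """With an orthogonal dictionary D, the best k-sparse code is obtained
    in closed form: hard-threshold the analysis coefficients D.T @ x,
    keeping only the k entries of largest magnitude."""
    c = D.T @ x
    drop = np.argsort(np.abs(c))[:-k]  # indices of all but the k largest
    c[drop] = 0.0
    return c

rng = np.random.default_rng(4)
D = np.linalg.qr(rng.standard_normal((8, 8)))[0]  # random orthogonal dictionary
x = 3.0 * D[:, 0] + 1.5 * D[:, 5]                 # a truly 2-sparse signal
c = sparse_code_orth(D, x, k=2)
print(np.allclose(D @ c, x))  # True: exact recovery of a 2-sparse signal
```

For an over-complete dictionary the same step would require an NP-hard pursuit (or a relaxation such as OMP or L1 minimization), which is exactly the cost the orthogonality constraint removes.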

5 0.89066488 330 iccv-2013-Proportion Priors for Image Sequence Segmentation

Author: Claudia Nieuwenhuis, Evgeny Strekalovskiy, Daniel Cremers

Abstract: We propose a convex multilabel framework for image sequence segmentation which allows us to impose proportion priors on object parts in order to preserve their size ratios across multiple images. The key idea is that for strongly deformable objects such as a gymnast, the size ratio of respective regions (head versus torso, legs versus full body, etc.) is typically preserved. We propose different ways to impose such priors in a Bayesian framework for image segmentation. We show that near-optimal solutions can be computed using convex relaxation techniques. Extensive qualitative and quantitative evaluations demonstrate that the proportion priors allow for highly accurate segmentations, avoiding seeping-out of regions and preserving semantically relevant small-scale structures such as hands or feet. They naturally apply to multiple object instances such as players in sports scenes, and they can relate different objects instead of object parts, e.g. organs in medical imaging. The algorithm is efficient and easily parallelized, leading to proportion-consistent segmentations at runtimes around one second.

6 0.8905952 180 iccv-2013-From Where and How to What We See

7 0.88922137 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition

8 0.88823974 376 iccv-2013-Scene Text Localization and Recognition with Oriented Stroke Detection

9 0.88742131 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

10 0.88593137 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions

11 0.88470089 414 iccv-2013-Temporally Consistent Superpixels

12 0.88443017 23 iccv-2013-A New Image Quality Metric for Image Auto-denoising

13 0.88330442 44 iccv-2013-Adapting Classification Cascades to New Domains

14 0.88287532 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

15 0.8828662 277 iccv-2013-Multi-channel Correlation Filters

16 0.88236415 392 iccv-2013-Similarity Metric Learning for Face Recognition

17 0.88216013 150 iccv-2013-Exemplar Cut

18 0.8820079 349 iccv-2013-Regionlets for Generic Object Detection

19 0.8818537 122 iccv-2013-Distributed Low-Rank Subspace Segmentation

20 0.88178968 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification