iccv iccv2013 iccv2013-435 knowledge-graph by maker-knowledge-mining

435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection


Source: pdf

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. [sent-6, score-0.389]

2 Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. [sent-7, score-0.846]

3 This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. [sent-8, score-0.559]

4 In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. [sent-9, score-1.044]

5 More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. [sent-10, score-0.747]

6 We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset. [sent-11, score-0.466]

7 Introduction Domain shift is a fundamental problem in visual recognition tasks as evidenced by the recent surge of interest in domain adaptation [22, 15, 16]. [sent-13, score-0.586]

8 On the other hand, labeling sufficiently many images from the target domain to train a discriminative classifier specific to this domain is prohibitively time-consuming and impractical in realistic scenarios. [sent-19, score-0.747]

9 To relate the source and target domains, several state-of-the-art methods have proposed to create intermediate representations [15, 16]. [sent-21, score-0.466]

10 However, these representations do not explicitly try to match the probability distributions of the source and target data, which may make them sub-optimal for classification. [sent-22, score-0.577]

11 Sample selection, or re-weighting, approaches [14, 21] explicitly attempt to match the source and target distributions by finding the most appropriate source examples for the target data. [sent-23, score-1.055]

12 However, they fail to account for the fact that the image features themselves may have been distorted by the domain shift, and that some of the image features may be specific to one domain and thus irrelevant for classification in the other one. [sent-24, score-0.559]

13 In light of the above discussion, we propose to tackle the problem of domain shift by extracting the information that is invariant across the source and target domains. [sent-25, score-0.837]

14 To this end, we introduce a Domain Invariant Projection (DIP) approach, which aims to learn a low-dimensional latent space where the source and target distributions are similar. [sent-26, score-0.626]

15 Learning such a projection allows us to account for the potential distortions induced by the domain shift, as well as for the presence of domain-specific image features. [sent-27, score-0.34]

16 Furthermore, since the distributions of the source and target data in the latent space are similar, we expect a classifier trained on the source examples to perform well on the target domain. [sent-28, score-1.104]

17 In this work, we make use of the Maximum Mean Discrepancy (MMD) [17] to measure the dissimilarity between the empirical distributions of the source and target examples. [sent-29, score-0.577]

18 Learning the latent space that minimizes the MMD between the source and target domains can then be formulated as an optimization problem on a Grassmann manifold. [sent-30, score-0.648]

19 This lets us utilize Grassmannian geometry to effectively obtain our domain invariant projection. [sent-31, score-0.363]

20 Although designed to be fully unsupervised, our formalism naturally allows us to exploit label information from either domain during the training process. [sent-32, score-0.294]

21 In short, we introduce the idea of finding a domain invariant representation of the data by matching the source and target distributions in a low-dimensional latent space, and propose an effective algorithm to learn our Domain Invariant Projection. [sent-34, score-0.944]

22 We demonstrate the benefits of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on the standard domain adaptation benchmark dataset [26]. [sent-35, score-0.466]

23 In the former category, modifications of Support Vector Machines (SVM) [12, 3] and other statistical classifiers [10] have been proposed to exploit the availability of labeled and unlabeled data from the target domain. [sent-38, score-0.333]

24 Co-regularization of similar classifiers was also introduced to utilize unlabeled target data during training [9]. [sent-39, score-0.258]

25 For visual recognition, metric learning [26] and transformation learning [23] were shown to be effective at making use of the labeled target examples. [sent-40, score-0.266]

26 Furthermore, semi-supervised methods have also been proposed to tackle the case where multiple source domains are available [11, 20]. [sent-41, score-0.339]

27 While semi-supervised methods are often effective, in many applications, labeled target examples are not available and cannot easily be acquired. [sent-42, score-0.309]

28 To address this issue, unsupervised domain adaptation approaches that rely purely on unlabeled target data have been proposed [28, 7, 8]. [sent-43, score-0.863]

29 Subspace-based approaches [4, 16, 15] model the domain shift by representing the data with multiple subspaces. [sent-45, score-0.346]

30 Rather than limiting the representation to one source and one target subspace, several techniques exploit intermediate subspaces, which link the source data to the target data. [sent-47, score-0.933]

31 This idea was originally introduced in [16], where the subspaces were modeled as points on a Grassmann manifold, and intermediate subspaces were obtained by sampling points along the geodesic between the source and target subspaces. [sent-48, score-0.651]

32 While this formulation nicely characterizes the change between the source and target data, it is not clear why all the subspaces along this path should yield meaningful representations. [sent-50, score-0.534]

33 In contrast, sample re-weighting, or selection, approaches have focused more directly on comparing the distributions of the source and target data. [sent-52, score-0.577]

34 In particular, in [21, 18], the source examples are re-weighted so as to minimize the MMD between the source and target distributions. [sent-53, score-0.69]

35 More recently, an approach to selecting landmarks among the source examples based on MMD was introduced [14]. [sent-54, score-0.289]

36 Despite their success, it is important to note that sample re-weighting and selection methods compare the source and target distributions directly in the original feature space. [sent-56, score-0.608]

37 This space, however, may not be appropriate for this task, since the image features may have been distorted by the domain shift, and since some of the features may only be relevant to one specific domain. [sent-57, score-0.297]

38 In contrast, in this work, we compare the source and target distributions in a low-dimensional latent space where these effects are removed, or reduced. [sent-58, score-0.626]

39 We employ the maximum mean discrepancy [17] between two distributions s and t to measure their dissimilarity. [sent-72, score-0.193]
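As a point of reference, the (biased) empirical MMD² estimate between two sample sets reduces to sums of kernel evaluations. A minimal numpy sketch, assuming a Gaussian kernel k(a, b) = exp(−‖a − b‖²/σ); the function names are illustrative:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """k(a, b) = exp(-||a - b||^2 / sigma) for all pairs of rows of A, B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / sigma)

def mmd2(Xs, Xt, sigma):
    """Biased empirical MMD^2 between samples Xs (n x D) and Xt (m x D)."""
    n, m = len(Xs), len(Xt)
    return (gaussian_kernel(Xs, Xs, sigma).sum() / n**2
            + gaussian_kernel(Xt, Xt, sigma).sum() / m**2
            - 2.0 * gaussian_kernel(Xs, Xt, sigma).sum() / (n * m))
```

In the DIP objective, the same estimate is evaluated on the projected samples W^T x rather than on x directly.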

40 Grassmann Manifolds In our formulation, we model the projection of the source and target data to a low-dimensional space as a point W on a Grassmann manifold G(d, D). [sent-107, score-0.67]

41 The Grassmann manifold G(d, D) consists of the set of all d-dimensional linear subspaces of R^D. [sent-108, score-0.157]

42 Learning the projection W (with W^T W = I_d) then involves non-linear optimization on the Grassmann manifold, which requires some notions of differential geometry reviewed below. [sent-111, score-0.15]

43 In differential geometry, the shortest path between two points on a manifold is a curve called a geodesic. [sent-112, score-0.157]

44 The tangent space at a point on a manifold is a vector space that consists of the tangent vectors of all possible curves passing through this point. [sent-113, score-0.231]

45 In particular, we make use of a conjugate gradient (CG) algorithm on the Grassmann manifold [13]. [sent-117, score-0.207]

46 CG on a Grassmann manifold can be summarized by the following steps: (i) Compute the gradient ∇f_W of the objective function f on the manifold at the current estimate W as ∇f_W = ∂f_W − W W^T ∂f_W, (1) with ∂f_W the matrix of usual partial derivatives. [sent-120, score-0.157]
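A one-line numpy sketch of the tangent-space projection in Eq. 1; the names are illustrative:

```python
import numpy as np

def grassmann_grad(W, dF):
    """Project the Euclidean gradient dF = df/dW onto the tangent
    space of the Grassmann manifold at W (Eq. 1)."""
    return dF - W @ (W.T @ dF)
```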

47 Domain Invariant Projection (DIP) In this section, we introduce our approach to unsupervised domain adaptation. [sent-125, score-0.349]

48 We first derive the optimization problem at the heart of our approach, and then discuss the details of our Grassmann manifold optimization method. [sent-126, score-0.231]

49 Intuitively, with such a representation, a classifier trained on the source domain should perform equally well on the target domain. [sent-130, score-0.697]

50 To achieve invariance, we search for a projection to a low-dimensional subspace where the source and target distributions are similar, or, in other words, a projection that minimizes a distance measure between these two distributions.

51 We search for a D × d projection matrix W, such that the distributions of the source and target samples in the resulting d-dimensional subspace are as similar as possible. [sent-150, score-0.262]

52 Such constraints prevent our model from wrongly matching the two distributions by distorting the data, and make it very unlikely that the resulting subspace only contains the noise of both domains. [sent-176, score-0.181]

53 Therefore, here, we also consider using the polynomial kernel of degree two. [sent-196, score-0.151]

54 The fact that this kernel yields a distribution distance that only compares the first and second moments of the two distributions [17] will be shown to have little impact on our experimental results, thus showing the robustness of our approach to the choice of kernel. [sent-197, score-0.272]

55 Replacing the Gaussian kernel with this polynomial kernel in our objective function yields D²(W^T X_s, W^T X_t) = (1/n²) Σ_{i,j} k(W^T x_i^s, W^T x_j^s) + (1/m²) Σ_{i,j} k(W^T x_i^t, W^T x_j^t) − (2/(nm)) Σ_{i,j} k(W^T x_i^s, W^T x_j^t). (5) [sent-198, score-0.281]

56 In matrix form, this can be written as Tr(K_W L) with L ∈ R^{(n+m)×(n+m)}, where L_ij = 1/n² if i, j ∈ S, L_ij = 1/m² if i, j ∈ T, and L_ij = −1/(nm) otherwise, (6) with S and T the sets of source and target indices, respectively. [sent-208, score-0.435]
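As a concrete illustration of Eq. 6, a minimal numpy sketch that builds the coefficient matrix L and evaluates the squared MMD as a quadratic form; the function name is illustrative:

```python
import numpy as np

def mmd_coefficient_matrix(n, m):
    """Coefficient matrix L of Eq. 6: 1/n^2 for source-source pairs,
    1/m^2 for target-target pairs, -1/(nm) for cross pairs."""
    L = np.full((n + m, n + m), -1.0 / (n * m))
    L[:n, :n] = 1.0 / n**2
    L[n:, n:] = 1.0 / m**2
    return L

# With K the (n+m)x(n+m) kernel matrix on the projected samples
# (source rows first), the squared MMD is np.trace(K @ L),
# or equivalently the cheaper (K * L).sum().
```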

57 We make use of a conjugate gradient method on the manifold to obtain W∗. [sent-218, score-0.207]
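The paper optimizes with full CG, including a line search along geodesics; as a rough first-order stand-in, the sketch below combines the tangent-space projection of Eq. 1 with a QR retraction (a common cheap substitute for stepping along the exact geodesic). The step size, iteration count, and names are assumptions:

```python
import numpy as np

def optimize_projection(grad_f, W0, lr=1e-2, iters=200):
    """Projected-gradient stand-in for CG on the Grassmann manifold.
    grad_f(W) must return the Euclidean gradient of the objective."""
    W = W0
    for _ in range(iters):
        G = grad_f(W)
        G = G - W @ (W.T @ G)            # tangent-space projection (Eq. 1)
        W, _ = np.linalg.qr(W - lr * G)  # retraction back to W^T W = I
    return W
```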

58 1 Encouraging Class Clustering (DIP-CC) In the DIP formulation described above, learning the projection W is done in a fully unsupervised manner. [sent-221, score-0.197]

59 Note, however, that even in the so-called unsupervised setting, domain adaptation methods have access to the labels of the source examples. [sent-222, score-0.765]

60 Intuitively, we are interested in finding a projection that not only minimizes the distance between the distributions of the projected source and target data, but also yields good classification performance. [sent-224, score-0.593]

61 Note that in our formulation, the mean of the projected examples is equivalent to the projection of the mean. [sent-238, score-0.15]
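This equivalence is just linearity of the projection, W^T (1/n Σ_i x_i) = (1/n) Σ_i W^T x_i, which is what makes the class-clustering regularizer cheap to evaluate in the projected space. A quick illustrative numpy check:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                   # 100 samples, D = 50
W = np.linalg.qr(rng.normal(size=(50, 10)))[0]   # orthonormal D x d basis

# mean of the projected examples == projection of the mean
assert np.allclose((X @ W).mean(axis=0), X.mean(axis=0) @ W)
```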

62 The formulations in Eqs. 7 and 8 fall into the unsupervised domain adaptation category, since they do not exploit any labeled target examples. [sent-245, score-0.851]

63 In the unsupervised setting, this classifier is only trained using the source examples. [sent-250, score-0.299]

64 With Semi-Supervised DIP (SS-DIP), the labeled target examples can be taken into account in two different manners. [sent-251, score-0.309]

65 With Eq. 7, since no labels are used when learning W, we only employ the labeled target examples along with the source ones to train the final classifier. [sent-253, score-0.521]

66 With Eq. 8, we utilize the target labels in the regularizer when learning W, as well as when learning the final classifier. [sent-255, score-0.302]

67 This lets us rewrite our constrained optimization problem as an unconstrained problem on the manifold G(d, D). [sent-260, score-0.239]

68 More specifically, manifold optimization methods often have better convergence behavior than iterative projection methods, which can be crucial with a nonlinear objective function [1]. [sent-263, score-0.322]

69 Recall that CG on a Grassmann manifold involves (i) computing the gradient on the manifold ∇f_W, (ii) estimating the search direction H, and (iii) performing a line search along a geodesic. [sent-267, score-0.314]

70 Eq. 1 shows that the gradient on the manifold depends on the partial derivatives of the objective function w.r.t. W. [sent-269, score-0.157]

71 In our experiments, we first applied PCA to the concatenated source and target data, kept all the data variance, and initialized W to the truncated identity matrix. [sent-291, score-0.435]
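A sketch of this initialization as described (PCA on the concatenated data keeping all variance, then W set to the truncated identity in the PCA basis); variable and function names are illustrative:

```python
import numpy as np

def init_projection(Xs, Xt, d):
    """PCA-rotate the pooled data, keep all components, and start
    from the truncated identity W0 = I[:, :d] in the PCA basis."""
    X = np.vstack([Xs, Xt])
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA directions
    X_pca = Xc @ Vt.T                                  # all variance kept
    W0 = np.eye(X_pca.shape[1])[:, :d]
    return X_pca, W0
```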

72 In all our experiments, we set the variance σ of the Gaussian kernel to the median squared distance between all source examples, and the weight λ of the regularizer to 4/σ when using the regularizer. [sent-295, score-0.37]
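The median heuristic for σ described here can be computed directly from the pairwise source distances; a sketch using scipy:

```python
import numpy as np
from scipy.spatial.distance import pdist

def kernel_parameters(Xs):
    """sigma = median squared pairwise distance over the source
    examples; lambda = 4 / sigma, as stated in the experiments."""
    sigma = np.median(pdist(Xs, metric="sqeuclidean"))
    return sigma, 4.0 / sigma
```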

73 Cross-domain WiFi Localization We first evaluated our approach on the task of indoor WiFi localization using the public WiFi dataset published in the 2007 IEEE ICDM Contest for domain adaptation [29]. [sent-298, score-0.683]

74 In our experiments, we used all the source data and 400 randomly sampled target examples. [sent-307, score-0.435]

75 Visual Object Recognition We then evaluated our approach on the task of visual object recognition using the benchmark domain adaptation dataset introduced in [26]. [sent-382, score-0.466]

76 The Amazon domain consists of images acquired in a highly-controlled environment with studio lighting conditions. [sent-384, score-0.298]

77 The DSLR domain consists of high-resolution images of 31 categories that are taken with a digital SLR camera in a home environment under natural lighting. [sent-386, score-0.328]

78 For recognition, we trained an SVM classifier with a polynomial kernel of degree 2 on the projected source examples. [sent-399, score-0.392]
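A hedged scikit-learn sketch of this final classification step; apart from the degree-2 polynomial kernel, the SVM hyperparameters are not specified here, so library defaults are assumed:

```python
import numpy as np
from sklearn.svm import SVC

def train_and_predict(Xs, ys, Xt, W):
    """Train an SVM with a degree-2 polynomial kernel on the projected
    source examples and classify the projected target examples."""
    clf = SVC(kernel="poly", degree=2)
    clf.fit(Xs @ W, ys)
    return clf.predict(Xt @ W)
```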

79 In a first experiment on this dataset, we used the evaluation protocol introduced in [14]: For each source/target pair, all the available examples in both domains are exploited at once, rather than splitting the datasets into multiple training/testing partitions. [sent-401, score-0.284]

80 This protocol was motivated by the fact that, in [14], selecting landmarks requires a sufficient number of source examples to be available. [sent-402, score-0.403]

81 For the same reason, the DSLR dataset is never used as source domain, since it contains too few examples per class. [sent-403, score-0.255]

82 Table 1 shows the recognition accuracies on the target examples for the 9 pairs of source and target domains. [sent-405, score-0.742]

83 Note that, in this case, our class-clustering regularizer is not crucial to achieve good accuracy. (This evaluation protocol was explained to us by the authors of [14].) [sent-407, score-0.193]

84 Recognition accuracies on 6 pairs of source/target domains using the evaluation protocol of [26]. [sent-487, score-0.282]

85 Recognition accuracies on the remaining 6 pairs of source/target domains using the evaluation protocol of [26]. [sent-568, score-0.282]

86 With this protocol, all possible combinations of source and target domains were evaluated. [sent-575, score-0.562]

87 Following the evaluation protocol of [26], we made use of 3 labeled samples per category from the target domain. [sent-579, score-0.41]

88 Here however, the class-clustering regularizer boosts the accuracy more consistently, which suggests the importance of such a regularizer in the presence of small amounts of labeled data. [sent-583, score-0.201]

89 Conclusion and Future Work In this paper, we have introduced an approach to unsupervised domain adaptation that focuses on extracting a domain-invariant representation of the source and target data. [sent-588, score-0.988]

90 To this end, we have proposed to match the source and target distributions in a low-dimensional latent space, rather than in the original feature space. [sent-589, score-0.626]

91 Our experiments have evidenced the importance of exploiting distribution invariance for domain adaptation by revealing that our DIP approach consistently outperformed the state-of-the-art methods in the task of visual object recognition. [sent-590, score-0.502]

92 However, it is unclear how to regularize nonlinear transformations to prevent them from deteriorating the data distribution to the point of making two inherently dissimilar distributions similar. [sent-594, score-0.192]

93 Finally, we also plan to investigate how ideas from the deep learning literature could be employed to obtain domain invariant features. [sent-595, score-0.318]

94 Recognition accuracies on 6 pairs of source/target domains using the semi-supervised evaluation protocol of [26]. [sent-695, score-0.282]

95 Recognition accuracies on the remaining 6 pairs of source/target domains using the semi-supervised evaluation protocol of [26]. [sent-792, score-0.282]

96 Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. [sent-809, score-0.466]

97 Domain adaptation problems: A DASVM classification technique and a circular validation strategy. [sent-840, score-0.204]

98 Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. [sent-883, score-0.349]

99 Online domain adaptation of a pretrained cascade of classifiers. [sent-945, score-0.466]

100 What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. [sent-951, score-0.283]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('mmd', 0.292), ('grassmann', 0.289), ('domain', 0.262), ('dip', 0.259), ('target', 0.223), ('source', 0.212), ('adaptation', 0.204), ('wifi', 0.181), ('manifold', 0.157), ('xsi', 0.148), ('distributions', 0.142), ('nnoo', 0.129), ('tca', 0.127), ('domains', 0.127), ('webcam', 0.117), ('protocol', 0.114), ('gss', 0.103), ('xsj', 0.103), ('amazon', 0.098), ('dslr', 0.088), ('unsupervised', 0.087), ('xtj', 0.085), ('borgwardt', 0.085), ('shift', 0.084), ('gretton', 0.08), ('wtw', 0.08), ('regularizer', 0.079), ('kernel', 0.079), ('projection', 0.078), ('kwl', 0.077), ('methoda', 0.077), ('wtxs', 0.077), ('wtxt', 0.077), ('fw', 0.074), ('rkhs', 0.073), ('polynomial', 0.072), ('gst', 0.069), ('gtt', 0.069), ('subspaces', 0.067), ('caltech', 0.062), ('cg', 0.061), ('gfk', 0.057), ('invariant', 0.056), ('cw', 0.053), ('aaddaappt', 0.052), ('ggffsk', 0.052), ('methodd', 0.052), ('geodesic', 0.051), ('discrepancy', 0.051), ('yields', 0.051), ('surf', 0.05), ('nonlinear', 0.05), ('conjugate', 0.05), ('latent', 0.049), ('rasch', 0.046), ('argwmin', 0.046), ('lets', 0.045), ('ww', 0.045), ('tsang', 0.044), ('aw', 0.043), ('labeled', 0.043), ('examples', 0.043), ('wc', 0.042), ('accuracies', 0.041), ('subspace', 0.039), ('proven', 0.039), ('optimization', 0.037), ('tangent', 0.037), ('environment', 0.036), ('evidenced', 0.036), ('transfer', 0.036), ('kernels', 0.036), ('tj', 0.036), ('indoor', 0.036), ('wt', 0.036), ('unlabeled', 0.035), ('distorted', 0.035), ('jn', 0.035), ('rss', 0.035), ('notions', 0.035), ('xti', 0.035), ('kulis', 0.035), ('princeton', 0.034), ('landmarks', 0.034), ('kh', 0.033), ('smola', 0.033), ('aed', 0.033), ('tw', 0.033), ('exploit', 0.032), ('daum', 0.032), ('dc', 0.032), ('formulation', 0.032), ('iii', 0.031), ('kw', 0.031), ('selection', 0.031), ('intermediate', 0.031), ('blitzer', 0.031), ('home', 0.03), ('category', 0.03), ('projected', 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

2 0.3903988 123 iccv-2013-Domain Adaptive Classification

Author: Fatemeh Mirrashed, Mohammad Rastegari

Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.

3 0.38343155 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.

4 0.27254054 114 iccv-2013-Dictionary Learning and Sparse Coding on Grassmann Manifolds: An Extrinsic Solution

Author: Mehrtash Harandi, Conrad Sanderson, Chunhua Shen, Brian Lovell

Abstract: Recent advances in computer vision and machine learning suggest that a wide range of problems can be addressed more appropriately by considering non-Euclidean geometry. In this paper we explore sparse dictionary learning over the space of linear subspaces, which form Riemannian structures known as Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into the space of symmetric matrices by an isometric mapping, which enables us to devise a closed-form solution for updating a Grassmann dictionary, atom by atom. Furthermore, to handle non-linearity in data, we propose a kernelised version of the dictionary learning algorithm. Experiments on several classification tasks (face recognition, action recognition, dynamic texture classification) show that the proposed approach achieves considerable improvements in discrimination accuracy, in comparison to state-of-the-art methods such as kernelised Affine Hull Method and graphembedding Grassmann discriminant analysis.

5 0.22195143 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

Author: Tatiana Tommasi, Barbara Caputo

Abstract: Over the last years, several authors have signaled that state of the art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it go from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on imageto-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases achieves always very strong performances. We build on this result, and present an NBNN-based domain adaptation algorithm that learns iteratively a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.

6 0.22173578 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information

7 0.21763727 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

8 0.18074638 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

9 0.16589405 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces

10 0.15735763 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding

11 0.15186091 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

12 0.14252676 44 iccv-2013-Adapting Classification Cascades to New Domains

13 0.13726375 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

14 0.1261394 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

15 0.10340563 451 iccv-2013-Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions

16 0.094738804 233 iccv-2013-Latent Task Adaptation with Large-Scale Hierarchies

17 0.094528407 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold

18 0.087873638 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

19 0.079632632 244 iccv-2013-Learning View-Invariant Sparse Representations for Cross-View Action Recognition

20 0.079528324 52 iccv-2013-Attribute Adaptation for Personalized Image Search


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.207), (1, 0.073), (2, -0.09), (3, -0.075), (4, -0.087), (5, 0.043), (6, -0.0), (7, 0.063), (8, 0.105), (9, -0.014), (10, -0.022), (11, -0.217), (12, -0.092), (13, -0.106), (14, 0.225), (15, -0.284), (16, -0.093), (17, -0.058), (18, 0.021), (19, -0.112), (20, 0.252), (21, -0.111), (22, 0.201), (23, 0.14), (24, 0.029), (25, -0.072), (26, -0.136), (27, -0.047), (28, -0.044), (29, -0.004), (30, -0.0), (31, 0.033), (32, -0.051), (33, 0.021), (34, 0.032), (35, 0.049), (36, 0.014), (37, 0.011), (38, 0.013), (39, 0.015), (40, -0.056), (41, 0.007), (42, 0.045), (43, -0.058), (44, -0.053), (45, 0.111), (46, 0.023), (47, 0.023), (48, -0.027), (49, -0.071)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96908146 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

2 0.90554237 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.

3 0.86663306 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information

Author: Andy J. Ma, Pong C. Yuen, Jiawei Li

Abstract: This paper addresses a new person re-identification problem without the label information of persons under non-overlapping target cameras. Given the matched (positive) and unmatched (negative) image pairs from source domain cameras, as well as unmatched (negative) image pairs which can be easily generated from target domain cameras, we propose a Domain Transfer Ranked Support Vector Machines (DTRSVM) method for re-identification under target domain cameras. To overcome the problems introduced due to the absence of matched (positive) image pairs in target domain, we relax the discriminative constraint to a necessary condition only relying on the positive mean in target domain. By estimating the target positive mean using source and target domain data, a new discriminative model with high confidence in target positive mean and low confidence in target negative image pairs is developed. Since the necessary condition may not truly preserve the discriminability, multi-task support vector ranking is proposed to incorporate the training data from source domain with label information. Experimental results show that the proposed DTRSVM outperforms existing methods without using label information in target cameras. And the top 30 rank accuracy can be improved by the proposed method upto 9.40% on publicly available person re-identification datasets.

4 0.86434966 123 iccv-2013-Domain Adaptive Classification

Author: Fatemeh Mirrashed, Mohammad Rastegari

Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.

5 0.84935546 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer visionfor leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robustfor substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.

6 0.83449483 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

7 0.70013678 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces

8 0.57766753 178 iccv-2013-From Semi-supervised to Transfer Counting of Crowds

9 0.56569004 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

10 0.56276989 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

11 0.55383134 44 iccv-2013-Adapting Classification Cascades to New Domains

12 0.53660113 413 iccv-2013-Target-Driven Moire Pattern Synthesis by Phase Modulation

13 0.50112242 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

14 0.47070488 451 iccv-2013-Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions

15 0.46211472 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding

16 0.44852728 114 iccv-2013-Dictionary Learning and Sparse Coding on Grassmann Manifolds: An Extrinsic Solution

17 0.43565419 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

18 0.41048405 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition

19 0.38685712 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

20 0.37883765 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.062), (7, 0.031), (26, 0.055), (31, 0.045), (42, 0.12), (64, 0.034), (73, 0.022), (89, 0.143), (95, 0.012), (98, 0.38)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.78117657 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

2 0.77255404 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

Author: Chen Fang, Ye Xu, Daniel N. Rockmore

Abstract: Many standard computer vision datasets exhibit biases due to a variety of sources including illumination condition, imaging system, and preference of dataset collectors. Biases like these can have downstream effects in the use of vision datasets in the construction of generalizable techniques, especially for the goal of the creation of a classification system capable of generalizing to unseen and novel datasets. In this work we propose Unbiased Metric Learning (UML), a metric learning approach, to achieve this goal. UML operates in the following two steps: (1) By varying hyperparameters, it learns a set of less biased candidate distance metrics on training examples from multiple biased datasets. The key idea is to learn a neighborhood for each example, which consists of not only examples of the same category from the same dataset, but those from other datasets. The learning framework is based on structural SVM. (2) We do model validation on a set of weakly-labeled web images retrieved by issuing class labels as keywords to search engine. The metric with best validationperformance is selected. Although the web images sometimes have noisy labels, they often tend to be less biased, which makes them suitable for the validation set in our task. Cross-dataset image classification experiments are carried out. Results show significant performance improvement on four well-known computer vision datasets.

3 0.74117827 434 iccv-2013-Unifying Nuclear Norm and Bilinear Factorization Approaches for Low-Rank Matrix Decomposition

Author: Ricardo Cabral, Fernando De_La_Torre, João P. Costeira, Alexandre Bernardino

Abstract: Low rank models have been widely usedfor the representation of shape, appearance or motion in computer vision problems. Traditional approaches to fit low rank models make use of an explicit bilinear factorization. These approaches benefit from fast numerical methods for optimization and easy kernelization. However, they suffer from serious local minima problems depending on the loss function and the amount/type of missing data. Recently, these lowrank models have alternatively been formulated as convex problems using the nuclear norm regularizer; unlike factorization methods, their numerical solvers are slow and it is unclear how to kernelize them or to impose a rank a priori. This paper proposes a unified approach to bilinear factorization and nuclear norm regularization, that inherits the benefits of both. We analyze the conditions under which these approaches are equivalent. Moreover, based on this analysis, we propose a new optimization algorithm and a “rank continuation ” strategy that outperform state-of-theart approaches for Robust PCA, Structure from Motion and Photometric Stereo with outliers and missing data.

4 0.7317332 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction

Author: Donghyeon Cho, Minhaeng Lee, Sunyeong Kim, Yu-Wing Tai

Abstract: Light-field imaging systems have got much attention recently as the next generation camera model. A light-field imaging system consists of three parts: data acquisition, manipulation, and application. Given an acquisition system, it is important to understand how a light-field camera converts from its raw image to its resulting refocused image. In this paper, using the Lytro camera as an example, we describe step-by-step procedures to calibrate a raw light-field image. In particular, we are interested in knowing the spatial and angular coordinates of the micro lens array and the resampling process for image reconstruction. Since Lytro uses a hexagonal arrangement of a micro lens image, additional treatments in calibration are required. After calibration, we analyze and compare the performances of several resampling methods for image reconstruction with and without calibration. Finally, a learning based interpolation method is proposed which demonstrates a higher quality image reconstruction than previous interpolation methods including a method used in Lytro software.

5 0.72434402 402 iccv-2013-Street View Motion-from-Structure-from-Motion

Author: Bryan Klingner, David Martin, James Roseborough

Abstract: We describe a structure-from-motion framework that handles “generalized” cameras, such as moving rollingshutter cameras, and works at an unprecedented scale— billions of images covering millions of linear kilometers of roads—by exploiting a good relative pose prior along vehicle paths. We exhibit a planet-scale, appearanceaugmented point cloud constructed with our framework and demonstrate its practical use in correcting the pose of a street-level image collection.

6 0.71489418 19 iccv-2013-A Learning-Based Approach to Reduce JPEG Artifacts in Image Matting

7 0.67252535 33 iccv-2013-A Unified Video Segmentation Benchmark: Annotation, Metrics and Analysis

8 0.61971414 123 iccv-2013-Domain Adaptive Classification

9 0.61494207 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

10 0.59977281 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

11 0.59844768 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

12 0.56681895 1 iccv-2013-3DNN: Viewpoint Invariant 3D Geometry Matching for Scene Understanding

13 0.54891878 222 iccv-2013-Joint Learning of Discriminative Prototypes and Large Margin Nearest Neighbor Classifiers

14 0.54513544 44 iccv-2013-Adapting Classification Cascades to New Domains

15 0.54192883 141 iccv-2013-Enhanced Continuous Tabu Search for Parameter Estimation in Multiview Geometry

16 0.53654927 52 iccv-2013-Attribute Adaptation for Personalized Image Search

17 0.5336411 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

18 0.53356117 183 iccv-2013-Geometric Registration Based on Distortion Estimation

19 0.53288823 43 iccv-2013-Active Visual Recognition with Expertise Estimation in Crowdsourcing

20 0.53269619 392 iccv-2013-Similarity Metric Learning for Face Recognition