iccv iccv2013 iccv2013-438 knowledge-graph by maker-knowledge-mining

438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment


Source: pdf

Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. [sent-2, score-1.325]

2 DA typically aims at making use of information coming from both source and target domains during the learning process to adapt automatically. [sent-17, score-0.861]

3 As illustrated by recent results [7, 8], subspace based domain adaptation seems a promising approach to tackle unsupervised visual DA problems. [sent-20, score-0.766]

4 generate intermediate representations in the form of subspaces along the geodesic path connecting the source subspace and the target subspace on the Grassmann manifold. [sent-22, score-1.386]

5 Then, the source data are projected onto these subspaces and a classifier is learned. [sent-23, score-0.573]

6 propose a geodesic flow kernel which aims to model incremental changes between the source and target domains. [sent-25, score-0.679]

7 In both papers, a set of intermediate subspaces is used to model the shift between the two distributions. [sent-26, score-0.261]

8 In this paper, we also make use of subspaces (composed of 푑 eigenvectors induced by a PCA), one for each domain. [sent-27, score-0.238]

9 However, following the theoretical recommendations of [1], we instead suggest directly reducing the discrepancy between the two domains by moving the source and target subspaces closer. [sent-28, score-0.954]

10 This is achieved by optimizing a mapping function that transforms the source subspace into the target one. [sent-29, score-0.924]

11 From this simple idea, we design a new DA approach based on subspace alignment. [sent-30, score-0.246]

12 This allows us to induce robust classifiers not subject to local perturbations; and (2) by aligning the source and target subspaces, our method is intrinsically regularized: we do not need to tune regularization parameters in the objective, as imposed by many optimization-based DA methods. [sent-32, score-0.742]

13 Our subspace alignment is achieved by optimizing a mapping function which takes the form of a transformation matrix 푀. [sent-33, score-0.371]

14 We show that the optimal solution corresponds in fact to the covariance matrix between the source and target eigenvectors. [sent-34, score-0.651]
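A short derivation, consistent with the statements above (a sketch in the text's notation; the only ingredient is that the PCA bases have orthonormal columns, 푋푆′푋푆 = 퐼푑):

```latex
% Subspace alignment: least-squares objective and its closed-form solution.
% X_S, X_T \in \mathbb{R}^{D \times d} have orthonormal columns (PCA bases).
M^{*} = \arg\min_{M} \lVert X_S M - X_T \rVert_F^{2}
      = (X_S^{\top} X_S)^{-1} X_S^{\top} X_T
      = X_S^{\top} X_T ,
\qquad
X_a = X_S M^{*} = X_S X_S^{\top} X_T .
```

Since 푀∗ = 푋푆′푋푇 collects the inner products between the source and target eigenvectors, it can indeed be read as the covariance between the two bases, and 푋푎 matches the update used in Algorithm 1 below.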

15 From this transformation matrix, we derive a similarity function 푆푖푚(yS , yT) to compare a source data yS with a target example yT. [sent-35, score-0.684]

16 Alternatively, we can also learn a global classifier such as support vector machines (SVM) on the source data after mapping them onto the target subspace. [sent-39, score-0.739]

17 [1], a reduction of the divergence between the two domains is required to adapt well. [sent-41, score-0.306]

18 A usual way to estimate the divergence consists in learning a linear classifier to discriminate between source and target instances, respectively pseudolabeled with 0 and 1. [sent-43, score-0.836]

19 In this paper, we focus on the unsupervised domain adaptation setting that is well suited to vision problems since it does not require any labeling information from the target domain. [sent-57, score-0.845]

20 PCA based DA methods have then been naturally investigated [6, 12, 13] in order to find a common latent space where the difference between the marginal distributions of the two domains is minimized with respect to the Maximum Mean Discrepancy (MMD) divergence. [sent-61, score-0.222]

21 Then, they concatenate source and target data using this feature representation and apply PCA to find a relevant common projection. [sent-65, score-0.649]

22 In [5], Chang transforms the source data into an intermediate representation such that each transformed source sample can be linearly reconstructed by the target samples. [sent-66, score-1.06]

23 This is however a local approach that may fail to capture the global structure information of the source domain. [sent-67, score-0.324]

24 Moreover it is sensitive to noise and outliers of the source domain that have no correspondence in the target one. [sent-68, score-0.809]

25 Recently, subspace based DA has demonstrated good performance in visual DA [7, 8]. [sent-71, score-0.246]

26 These methods share the same principle: first they compute a domain specific d-dimensional subspace for the source data and another one for the target data, independently created by PCA. [sent-72, score-1.081]

27 Then, they project source and target data into intermediate subspaces along the shortest geodesic path connecting the two d-dimensional subspaces on the Grassmann manifold. [sent-73, score-1.107]

28 These approaches are the closest to ours but, as mentioned in the introduction, it is more appropriate to align the two subspaces directly, instead of computing a large number of intermediate subspaces which can potentially be a costly tuning procedure. [sent-75, score-0.402]

29 As a summary, our approach has the following differences with existing methods: We exploit the global covariance statistical structure of the two domains during the adaptation process in contrast to the manifold alignment methods that use local statistical structure of the data [16, 17, 18]. [sent-77, score-0.547]

30 We project the source data onto the source subspace and the target data onto the target subspace in contrast to methods that project source data to the target subspace or target data to the source subspace such as [3]. [sent-78, score-3.638]

31 Our method is totally unsupervised and does not require any target label information like constraints on cross-domain data [10, 14] or correspondences from across datasets [17, 18]. [sent-80, score-0.444]

32 Some of these features can be specific to one domain yet correlated to some other features in the other one, allowing us to use both shared and domain-specific features. [sent-83, score-0.4]

33 As far as we know, this is the first attempt to use a subspace alignment method in the context of domain adaptation. [sent-84, score-0.521]

34 DA using unsupervised subspace alignment: In this section, we introduce our new subspace based DA method. [sent-86, score-0.646]

35 In Section 3.1, we explain how to generate the source and target subspaces of size 푑. [sent-96, score-0.781]

36 In Section 3.2, we describe the alignment step, which consists in learning a transformation matrix 푀 that maps the source subspace to the target one. [sent-98, score-0.904]

37 Subspace generation: Even though both the source and target data lie in the same 퐷-dimensional space, they have been drawn according to different marginal distributions. [sent-104, score-0.678]

38 Consequently, rather than working on the original data themselves, we suggest to handle more robust representations of the source and target domains and to learn the shift between these two domains. [sent-105, score-0.865]

39 First, we transform every source and target data point into a 퐷-dimensional z-normalized vector (i.e., zero mean and unit standard deviation per dimension). [sent-106, score-0.649]

40 Then, using PCA, we select for each domain 푑 eigenvectors corresponding to the 푑 largest eigenvalues. [sent-109, score-0.266]

41 These eigenvectors are used as bases of the source and target subspaces, respectively denoted by 푋푆 and 푋푇 (푋푆, 푋푇 ∈ ℝ퐷×푑). [sent-110, score-0.703]
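A minimal sketch of this subspace generation step (assuming numpy and scikit-learn; the helper names are ours, not the paper's):

```python
import numpy as np
from sklearn.decomposition import PCA


def z_normalize(data, eps=1e-8):
    """Zero mean and unit standard deviation per dimension (z-normalization)."""
    return (data - data.mean(axis=0)) / (data.std(axis=0) + eps)


def subspace(data, d):
    """Return the D x d basis made of the top-d PCA eigenvectors of `data`."""
    pca = PCA(n_components=d).fit(data)
    return pca.components_.T  # columns are eigenvectors, shape (D, d)
```

Applied to the z-normalized source and target sets, this yields the bases 푋푆 and 푋푇.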

42 Domain adaptation with subspace alignment: As presented in Section 2, two main strategies are used in subspace based DA methods. [sent-115, score-0.777]

43 The first one consists in projecting both source and target data to a common shared subspace. [sent-116, score-0.712]

44 Beyond the fact that such a strategy can be costly, projecting the data to an intermediate common shared subspace may lead to information loss in both source and target domains. [sent-119, score-1.015]

45 In our method, we suggest to project each source (yS) and target (yT) data point (where yS, yT ∈ ℝ1×퐷) onto its respective subspace 푋푆 or 푋푇 by the operations yS푋푆 and yT푋푇, respectively. [sent-120, score-0.924]

46 Then, we learn a linear transformation function that aligns the source subspace coordinate system to the target one. [sent-121, score-0.961]

47 This step allows us to directly compare source and target samples in their respective subspaces without unnecessary data projections. [sent-122, score-0.807]

48 To achieve this task, we use a subspace alignment approach. [sent-123, score-0.311]

49 We call 푋푎 the target aligned source coordinate system. [sent-133, score-0.651]

50 It is worth noting that if the source and target domains are the same, then 푋푆 = 푋푇 and 푀∗ is the identity matrix. [sent-134, score-0.793]

51 Matrix 푀 transforms the source subspace coordinate system into the target subspace coordinate system by aligning the source basis vectors with the target ones. [sent-135, score-1.852]

52 If a source basis vector is orthogonal to all target basis vectors, it is ignored. [sent-136, score-0.679]

53 On the other hand, a high weight is given to a source basis vector that is well aligned with the target basis vectors. [sent-137, score-0.679]

54 In order to compare a source data yS with a target data yT, one needs a similarity function 푆푖푚(yS , yT). [sent-138, score-0.675]

55 Projecting yS and yT onto their respective subspaces 푋푆 and 푋푇 and applying the optimal transformation matrix 푀∗, we can define 푆푖푚(yS, yT) as follows: 푆푖푚(yS, yT) = (yS푋푆푀∗)(yT푋푇)′ = yS푋푆푀∗푋푇′yT′ = yS퐴yT′ (4), where 퐴 = 푋푆푋푆′푋푇푋푇′. [sent-139, score-0.281]
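A sketch of this similarity in numpy (continuing the helpers above; yS and yT are taken as 1-D arrays of length 퐷, the text's row vectors):

```python
def similarity(yS, yT, XS, XT):
    """Sim(yS, yT) = yS XS M* XT' yT' with M* = XS' XT (Eq. 4)."""
    M = XS.T @ XT       # closed-form alignment matrix, d x d
    A = XS @ M @ XT.T   # equivalently A = XS XS' XT XT', D x D
    return float(yS @ A @ yT)
```

Since 퐴 is fixed once the subspaces are computed, it can be precomputed and reused for all source-target pairs, e.g., inside a similarity-based nearest-neighbor classifier.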

56 Classifying ImageNet images using Caltech-256 images as the source domain. [sent-142, score-0.324]

57 As we will see in the experimental section, an alternative solution consists in (i) projecting the source data via 푋푎 into the target aligned source subspace and the target data into the target subspace (using 푋푇), and (ii) learning an SVM in this 푑-dimensional space. [sent-148, score-2.124]

58 Data: Source data 푆, Target data 푇, Source labels 퐿푆, Subspace dimension 푑. Result: Predicted target labels 퐿푇. 푋푆 ← 푃퐶퐴(푆, 푑); 푋푇 ← 푃퐶퐴(푇, 푑); 푋푎 ← 푋푆푋푆′푋푇; 푆푎 ← 푆푋푎; 푇푇 ← 푇푋푇; 퐿푇 ← 퐶푙푎푠푠푖푓푖푒푟(푆푎, 푇푇, 퐿푆). (Algorithm 1: Subspace alignment DA algorithm.) [sent-150, score-0.416]
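A runnable sketch of Algorithm 1 (reusing the `subspace` helper above; a 1-NN classifier stands in for the generic Classifier step, SVM being the alternative discussed below):

```python
from sklearn.neighbors import KNeighborsClassifier


def subspace_alignment_da(S, T, LS, d):
    """Algorithm 1: predict target labels from labeled source data only.

    S: n_S x D source data, T: n_T x D target data, LS: source labels.
    """
    XS = subspace(S, d)    # source subspace, D x d
    XT = subspace(T, d)    # target subspace, D x d
    Xa = XS @ XS.T @ XT    # target-aligned source basis, D x d
    Sa = S @ Xa            # source data in the aligned coordinate system
    TT = T @ XT            # target data in its own subspace
    clf = KNeighborsClassifier(n_neighbors=1).fit(Sa, LS)
    return clf.predict(TT)  # predicted target labels L_T
```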

59 For any vector x with ∥x∥ ≤ 퐵, let 푋푑 and 푋푑푛 be the orthogonal projectors onto the subspaces spanned by the first 푑 eigenvectors (expected and empirical, respectively), and let 휆1 > 휆2 > ... denote the corresponding eigenvalues. [sent-163, score-0.275]

60 Let 푋푆푑푛 (resp. 푋푇푑푛) be the d-dimensional projection operator built from the source (resp. target) samples. [sent-185, score-0.379]

61 Assuming the eigenvalue gaps are strictly positive (휆푑푆 > 휆푑푆+1 and 휆푑푇 > 휆푑푇+1), we have with probability at least 1 − 훿: ∥푋푆푑푀푋푇푑′ − 푋푆푑푛푀푛푋푇푑푛′∥ ≤ 8푑^{3/2}퐵(1 + √(ln(2/훿)/2)) × (1/(√푛푆(휆푑푆 − 휆푑푆+1)) + 1/(√푛푇(휆푑푇 − 휆푑푇+1))), where 푀푛 is the solution of the optimization problem of Eq. 2 using source and target samples of sizes 푛푆 and 푛푇 respectively, and 푀 is its expected value. [sent-195, score-0.623]

62 In other words, as long as we select a subspace dimension 푑 for which this deviation stays below a chosen threshold 훾, the empirical solution 푀푛 is a stable approximation of 푀; this yields a maximum dimension 푑푚푎푥. [sent-207, score-0.246]
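A sketch of turning the theorem into a dimension selection rule (the constant mirrors the bound as reconstructed above, so it should be checked against the paper; 퐵, 훿 and 훾 are user-chosen):

```python
import numpy as np


def max_stable_dim(eigvals_S, eigvals_T, nS, nT, B, delta, gamma):
    """Largest d whose stability bound stays below gamma.

    eigvals_S / eigvals_T: PCA eigenvalues sorted in decreasing order.
    """
    d_max = 1
    for d in range(1, min(len(eigvals_S), len(eigvals_T))):
        gap_S = eigvals_S[d - 1] - eigvals_S[d]  # lambda_d - lambda_{d+1}
        gap_T = eigvals_T[d - 1] - eigvals_T[d]
        if gap_S <= 0 or gap_T <= 0:
            break  # no usable eigengap beyond this point
        bound = (8 * d ** 1.5 * B * (1 + np.sqrt(np.log(2 / delta) / 2))
                 * (1 / (np.sqrt(nS) * gap_S) + 1 / (np.sqrt(nT) * gap_T)))
        if bound <= gamma:
            d_max = d
    return d_max
```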

63 Divergence between source and target domains: The pioneering work of Ben-David et al. [sent-211, score-0.793]

64 [1] provides a generalization bound on the target error which depends on the source error and a measure of divergence, called the 퐻Δ퐻 divergence, between the source and target distributions 풟푆 and 풟푇. [sent-212, score-1.299]

65 휖푇(ℎ) ≤ 휖푆(ℎ) + 푑퐻Δ퐻(풟푆, 풟푇) + 휆, (6) where ℎ is a learned hypothesis, 휖푇(ℎ) the generalization target error, 휖푆(ℎ) the generalization source error, and 휆 the error of the ideal joint hypothesis on 푆 and 푇, which is supposed to be a negligible term if adaptation is possible. [sent-213, score-0.843]

66 To estimate 푑퐻Δ퐻(풟푆, 풟푇), a usual way consists in learning a linear classifier ℎ to discriminate between source and target instances, respectively pseudolabeled with 0 and 1. [sent-216, score-0.67]
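A sketch of this estimation (the final 2(1 − 2·err) rescaling is the usual "proxy A-distance" convention from the DA literature, not something spelled out in this text):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score


def hdh_divergence_proxy(S, T):
    """Domain-discrimination estimate of the H-Delta-H divergence.

    Source samples are pseudolabeled 0 and target samples 1; the harder
    they are to separate, the smaller the estimated divergence.
    """
    X = np.vstack([S, T])
    y = np.hstack([np.zeros(len(S)), np.ones(len(T))])
    err = 1.0 - cross_val_score(LinearSVC(), X, y, cv=2).mean()
    return 2.0 * (1.0 - 2.0 * err)  # proxy A-distance convention
```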

67 Based on the recommendations of [2], we propose a discrepancy measure to estimate the local density of a target point w.r.t. the source domain. [sent-222, score-0.426]

68 This discrepancy, called Target Density Around Source (TDAS), counts how many target points can be found on average within an 휖-neighborhood of a source point. [sent-226, score-0.947]
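A sketch of this count (휖 is user-chosen; `source_proj` and `target_proj` stand for the data after whichever projection a method defines):

```python
from sklearn.neighbors import NearestNeighbors


def tdas(source_proj, target_proj, eps):
    """Mean number of target points within an eps-ball of each source point."""
    nn = NearestNeighbors(radius=eps).fit(target_proj)
    hoods = nn.radius_neighbors(source_proj, return_distance=False)
    return sum(len(h) for h in hoods) / len(source_proj)
```

Higher TDAS means the target distribution is denser around the source points, i.e., the two domains have been moved closer.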

69 Experiments: We evaluate our method in the context of object recognition using a standard dataset and protocol for evaluating visual domain adaptation methods, as in [5, 7, 8, 10, 14]. [sent-232, score-0.43]

70 We use each source of images as a domain; consequently, we get four domains (A, C, D and W), leading to 12 DA problems. [sent-241, score-0.494]

71 We follow the standard protocol of [7, 8, 10, 14] for generating the source and target samples. [sent-245, score-0.623]

72 Experimental setup: We compare our subspace DA approach with two other DA methods and three baselines. [sent-258, score-0.246]

73 Each of these methods defines a new representation space, and our goal is to compare the performance of a 1-Nearest-Neighbor (NN) classifier and an SVM classifier on DA problems in the subspace found. [sent-259, score-0.376]

74 We also report results obtained by the following three baselines: Baseline 1: where we use the projection defined by the PCA subspace 푋푆 built from the source domain to project both source and target data and work in the resulting representation. [sent-263, score-1.489]

75 Baseline 2: where we similarly use the projection defined by the PCA subspace 푋푇 built from the target domain. [sent-264, score-0.6]

76 No adaptation (NA): where no projection is made; we use the original input space without learning a new representation (see the sketch after this list). [sent-265, score-0.247]
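A compact sketch of the three baselines (reusing the `subspace` helper above; the function name is ours):

```python
def baseline_projections(S, T, d):
    """Baselines 1, 2 and NA as described above."""
    XS, XT = subspace(S, d), subspace(T, d)
    b1 = (S @ XS, T @ XS)  # Baseline 1: both domains via the source PCA basis
    b2 = (S @ XT, T @ XT)  # Baseline 2: both domains via the target PCA basis
    na = (S, T)            # NA: original input space, no projection
    return b1, b2, na
```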

77 (with C parameter set to the mean similarity value obtained from the training set) in the subspace defined by each method. [sent-268, score-0.246]

78 For each source-target DA problem in the first two series of experiments, we evaluate the accuracy of each method on the target domain over 20 random trials. [sent-269, score-0.526]

79 For each trial, we consider an unsupervised DA setting where we randomly sample labeled data in the source domain as training data and unlabeled data in the target domain as testing examples. [sent-270, score-1.218]

80 In the last series, involving the PASCAL-VOC dataset, we instead evaluate the approaches by measuring the mean average precision over target data using SVM. [sent-271, score-0.366]

81 We have also compared the behavior of the approaches in a semi-supervised scenario by adding 3 labelled target examples to the training set for Office+Caltech10 series and 50 for the PASCAL-VOC series. [sent-272, score-0.34]

82 Afterwards, we consider the subspaces of dimensionality from 푑 = 1 to 푑푚푎푥 and select the best 푑∗ that minimizes the classification error using a 2-fold cross-validation over the labelled source data. [sent-281, score-0.518]
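A sketch of this selection (2-fold cross-validation on the labeled source data only, which keeps the procedure unsupervised with respect to the target; reuses the helpers above):

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier


def select_d(S, T, LS, d_max):
    """Pick d* in 1..d_max maximizing 2-fold CV accuracy on aligned source data."""
    best_d, best_acc = 1, -1.0
    for d in range(1, d_max + 1):
        XS, XT = subspace(S, d), subspace(T, d)
        Sa = S @ (XS @ XS.T @ XT)  # source data in the target-aligned space
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                              Sa, LS, cv=2).mean()
        if acc > best_acc:
            best_d, best_acc = d, acc
    return best_d
```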

83 of Eq. 6, where the idea is to move the domain distributions closer while maintaining good accuracy on the source domain. [sent-283, score-0.51]

84 Evaluating DA with divergence measures: Here, we propose to evaluate the capability of our method to move the domain distributions closer according to the measures presented in Section 3. [sent-290, score-0.31]

85 We can remark that our approach significantly reduces the discrepancy between the source and target domains. [sent-293, score-0.094]

86 (Figure 2 caption) Finding a stable solution and a subspace dimensionality using the consistency theorem. [sent-294, score-0.306]

87 Classification results: Visual domain adaptation performance with Office/Caltech10 datasets: In this experiment we evaluate the different methods using the Office [14]/Caltech10 [8] datasets, which consist of four domains (A, C, D and W). [sent-311, score-0.606]

88 Domain adaptation on ImageNet, LabelMe and Caltech-256 datasets: Results obtained for unsupervised DA using NN classifiers are shown in Table 4. [sent-319, score-0.38]

89 First, we can remark that all the other DA methods achieve poor accuracy when LabelMe images are used as the source domain, while our method seems to adapt the source to the target reasonably well. [sent-320, score-1.032]

90 Classifying PASCAL-VOC-2007 images using classifiers built on ImageNet: In this experiment, we compare the average precision obtained on PASCAL-VOC-2007 by an SVM classifier in both unsupervised and semi-supervised DA settings. [sent-365, score-0.223]

91 We use ImageNet as the source domain and PASCAL-VOC-2007 as the target domain. [sent-366, score-0.809]

92 In the unsupervised DA setting, GFK improves by 7% in mAP over no adaptation while our method improves by 27% in mAP over GFK. [sent-381, score-0.309]

93 Conclusion: We present a new visual domain adaptation method using subspace alignment. [sent-384, score-0.652]

94 In this method, we create subspaces for both source and target domains and learn a linear mapping that aligns the source subspace with the target subspace. [sent-385, score-1.894]

95 This allows us to compare the source domain data directly with the target domain data and to build classifiers on source data and apply them on the target domain. [sent-386, score-1.737]

96 We show that our method outperforms state of the art domain adaptation methods using both SVM and nearest neighbour classifiers. [sent-388, score-0.445]

97 As future work we plan to extend our domain adaptation method to large scale image retrieval and on the fly learning of classifiers. [sent-391, score-0.406]

98 Extracting discriminative concepts for domain adaptation in text mining. [sent-430, score-0.406]

99 What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. [sent-458, score-0.22]

100 A literature review of domain adaptation with unlabeled data. [sent-462, score-0.438]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('da', 0.416), ('source', 0.324), ('ys', 0.316), ('target', 0.299), ('subspace', 0.246), ('adaptation', 0.22), ('yt', 0.215), ('domain', 0.186), ('domains', 0.17), ('subspaces', 0.158), ('tdas', 0.142), ('imagenet', 0.14), ('office', 0.115), ('divergence', 0.101), ('discrepancy', 0.094), ('unsupervised', 0.089), ('gfk', 0.087), ('theorem', 0.081), ('eigenvectors', 0.08), ('labelme', 0.072), ('ln', 0.07), ('classifier', 0.065), ('alignment', 0.065), ('intermediate', 0.057), ('geodesic', 0.056), ('blitzer', 0.056), ('pca', 0.052), ('hyperparameter', 0.052), ('tune', 0.052), ('svm', 0.049), ('cavg', 0.047), ('founded', 0.047), ('methodc', 0.047), ('methodl', 0.047), ('pseudolabeled', 0.047), ('eq', 0.047), ('shift', 0.046), ('ijcai', 0.042), ('classifiers', 0.041), ('series', 0.041), ('gfs', 0.039), ('neighbour', 0.039), ('manifold', 0.038), ('afterwards', 0.037), ('projectors', 0.037), ('dimensionality', 0.036), ('projecting', 0.035), ('nn', 0.035), ('transformation', 0.035), ('deviation', 0.035), ('adapt', 0.035), ('theoretical', 0.034), ('recommendations', 0.033), ('coming', 0.033), ('lemma', 0.033), ('unlabeled', 0.032), ('dslr', 0.032), ('cw', 0.032), ('grassmann', 0.031), ('kwok', 0.03), ('datasets', 0.03), ('transforms', 0.03), ('bound', 0.03), ('llc', 0.029), ('classify', 0.029), ('marginal', 0.029), ('project', 0.029), ('gopalan', 0.029), ('align', 0.029), ('coordinate', 0.028), ('shared', 0.028), ('basis', 0.028), ('insight', 0.028), ('covariance', 0.028), ('material', 0.028), ('built', 0.028), ('suited', 0.027), ('tsang', 0.027), ('preparation', 0.027), ('projection', 0.027), ('aw', 0.026), ('lc', 0.026), ('intrinsically', 0.026), ('data', 0.026), ('orthonormal', 0.025), ('aligns', 0.025), ('dedicated', 0.025), ('remark', 0.025), ('seems', 0.025), ('mapping', 0.025), ('setting', 0.024), ('supplementary', 0.024), ('create', 0.024), ('consistency', 0.024), ('context', 0.024), ('saenko', 0.023), ('pan', 0.023), ('empirical', 0.023), ('caltech', 0.023), ('distributions', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.

2 0.38368991 123 iccv-2013-Domain Adaptive Classification

Author: Fatemeh Mirrashed, Mohammad Rastegari

Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.

3 0.38343155 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

4 0.24053077 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information

Author: Andy J. Ma, Pong C. Yuen, Jiawei Li

Abstract: This paper addresses a new person re-identification problem without the label information of persons under non-overlapping target cameras. Given the matched (positive) and unmatched (negative) image pairs from source domain cameras, as well as unmatched (negative) image pairs which can be easily generated from target domain cameras, we propose a Domain Transfer Ranked Support Vector Machines (DTRSVM) method for re-identification under target domain cameras. To overcome the problems introduced due to the absence of matched (positive) image pairs in target domain, we relax the discriminative constraint to a necessary condition only relying on the positive mean in target domain. By estimating the target positive mean using source and target domain data, a new discriminative model with high confidence in target positive mean and low confidence in target negative image pairs is developed. Since the necessary condition may not truly preserve the discriminability, multi-task support vector ranking is proposed to incorporate the training data from source domain with label information. Experimental results show that the proposed DTRSVM outperforms existing methods without using label information in target cameras. Moreover, the top 30 rank accuracy can be improved by the proposed method by up to 9.40% on publicly available person re-identification datasets.

5 0.22116393 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

Author: Tatiana Tommasi, Barbara Caputo

Abstract: Over the last years, several authors have signaled that state of the art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it go from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases achieves always very strong performances. We build on this result, and present an NBNN-based domain adaptation algorithm that learns iteratively a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.

6 0.21814498 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

7 0.20267405 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces

8 0.19174148 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization

9 0.17270029 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

10 0.16879499 232 iccv-2013-Latent Space Sparse Subspace Clustering

11 0.14179294 44 iccv-2013-Adapting Classification Cascades to New Domains

12 0.1335385 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

13 0.12031479 451 iccv-2013-Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions

14 0.11109443 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity

15 0.10661623 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization

16 0.10657974 233 iccv-2013-Latent Task Adaptation with Large-Scale Hierarchies

17 0.10376409 359 iccv-2013-Robust Object Tracking with Online Multi-lifespan Dictionary Learning

18 0.10326704 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

19 0.10028229 244 iccv-2013-Learning View-Invariant Sparse Representations for Cross-View Action Recognition

20 0.095191993 392 iccv-2013-Similarity Metric Learning for Face Recognition


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.208), (1, 0.08), (2, -0.102), (3, -0.087), (4, -0.115), (5, 0.056), (6, -0.01), (7, 0.123), (8, 0.129), (9, 0.044), (10, 0.026), (11, -0.192), (12, -0.116), (13, -0.08), (14, 0.183), (15, -0.305), (16, -0.096), (17, -0.076), (18, 0.016), (19, -0.081), (20, 0.231), (21, -0.082), (22, 0.126), (23, 0.094), (24, 0.009), (25, -0.123), (26, -0.181), (27, -0.079), (28, -0.055), (29, -0.035), (30, 0.043), (31, 0.003), (32, 0.066), (33, -0.023), (34, -0.024), (35, -0.018), (36, -0.034), (37, -0.013), (38, 0.044), (39, 0.064), (40, -0.057), (41, -0.004), (42, 0.021), (43, 0.02), (44, -0.002), (45, 0.032), (46, 0.017), (47, 0.062), (48, 0.028), (49, -0.002)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98824835 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.

2 0.92590547 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

3 0.89863133 123 iccv-2013-Domain Adaptive Classification

Author: Fatemeh Mirrashed, Mohammad Rastegari

Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.

4 0.89689344 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.

5 0.88698971 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information

Author: Andy J. Ma, Pong C. Yuen, Jiawei Li

Abstract: This paper addresses a new person re-identification problem without the label information of persons under non-overlapping target cameras. Given the matched (positive) and unmatched (negative) image pairs from source domain cameras, as well as unmatched (negative) image pairs which can be easily generated from target domain cameras, we propose a Domain Transfer Ranked Support Vector Machines (DTRSVM) method for re-identification under target domain cameras. To overcome the problems introduced due to the absence of matched (positive) image pairs in target domain, we relax the discriminative constraint to a necessary condition only relying on the positive mean in target domain. By estimating the target positive mean using source and target domain data, a new discriminative model with high confidence in target positive mean and low confidence in target negative image pairs is developed. Since the necessary condition may not truly preserve the discriminability, multi-task support vector ranking is proposed to incorporate the training data from source domain with label information. Experimental results show that the proposed DTRSVM outperforms existing methods without using label information in target cameras. Moreover, the top 30 rank accuracy can be improved by the proposed method by up to 9.40% on publicly available person re-identification datasets.

6 0.86142087 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

7 0.73051691 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces

8 0.591398 44 iccv-2013-Adapting Classification Cascades to New Domains

9 0.5535174 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

10 0.54913181 413 iccv-2013-Target-Driven Moire Pattern Synthesis by Phase Modulation

11 0.53366005 451 iccv-2013-Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions

12 0.51445413 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

13 0.47859725 178 iccv-2013-From Semi-supervised to Transfer Counting of Crowds

14 0.468851 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

15 0.43452558 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition

16 0.40696377 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

17 0.39710698 232 iccv-2013-Latent Space Sparse Subspace Clustering

18 0.39530644 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling

19 0.3936559 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

20 0.39152965 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.056), (7, 0.034), (12, 0.012), (13, 0.019), (26, 0.065), (31, 0.064), (42, 0.183), (64, 0.031), (73, 0.029), (89, 0.173), (96, 0.093), (98, 0.126)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.92549747 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

2 0.91977829 434 iccv-2013-Unifying Nuclear Norm and Bilinear Factorization Approaches for Low-Rank Matrix Decomposition

Author: Ricardo Cabral, Fernando De_La_Torre, João P. Costeira, Alexandre Bernardino

Abstract: Low rank models have been widely used for the representation of shape, appearance or motion in computer vision problems. Traditional approaches to fit low rank models make use of an explicit bilinear factorization. These approaches benefit from fast numerical methods for optimization and easy kernelization. However, they suffer from serious local minima problems depending on the loss function and the amount/type of missing data. Recently, these low-rank models have alternatively been formulated as convex problems using the nuclear norm regularizer; unlike factorization methods, their numerical solvers are slow and it is unclear how to kernelize them or to impose a rank a priori. This paper proposes a unified approach to bilinear factorization and nuclear norm regularization, that inherits the benefits of both. We analyze the conditions under which these approaches are equivalent. Moreover, based on this analysis, we propose a new optimization algorithm and a "rank continuation" strategy that outperform state-of-the-art approaches for Robust PCA, Structure from Motion and Photometric Stereo with outliers and missing data.

3 0.91127151 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso

Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan

Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into their underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires to find an affinity matrix which is close to block diagonal, with nonzero entries corresponding to the data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with less connections between data points from different subspaces. The grouping effect ensures that the highly correlated data which are usually from the same subspace can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks of the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit strong grouping effect, but they are short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data correlation dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.

4 0.91106856 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

Author: Chen Fang, Ye Xu, Daniel N. Rockmore

Abstract: Many standard computer vision datasets exhibit biases due to a variety of sources including illumination condition, imaging system, and preference of dataset collectors. Biases like these can have downstream effects in the use of vision datasets in the construction of generalizable techniques, especially for the goal of the creation of a classification system capable of generalizing to unseen and novel datasets. In this work we propose Unbiased Metric Learning (UML), a metric learning approach, to achieve this goal. UML operates in the following two steps: (1) By varying hyperparameters, it learns a set of less biased candidate distance metrics on training examples from multiple biased datasets. The key idea is to learn a neighborhood for each example, which consists of not only examples of the same category from the same dataset, but those from other datasets. The learning framework is based on structural SVM. (2) We do model validation on a set of weakly-labeled web images retrieved by issuing class labels as keywords to a search engine. The metric with best validation performance is selected. Although the web images sometimes have noisy labels, they often tend to be less biased, which makes them suitable for the validation set in our task. Cross-dataset image classification experiments are carried out. Results show significant performance improvement on four well-known computer vision datasets.

same-paper 5 0.90930033 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.

6 0.89935136 19 iccv-2013-A Learning-Based Approach to Reduce JPEG Artifacts in Image Matting

7 0.89657682 123 iccv-2013-Domain Adaptive Classification

8 0.88984287 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

9 0.88725615 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction

10 0.88725364 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

11 0.88180238 173 iccv-2013-Fluttering Pattern Generation Using Modified Legendre Sequence for Coded Exposure Imaging

12 0.87562191 44 iccv-2013-Adapting Classification Cascades to New Domains

13 0.87509859 402 iccv-2013-Street View Motion-from-Structure-from-Motion

14 0.87358034 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

15 0.87326348 392 iccv-2013-Similarity Metric Learning for Face Recognition

16 0.87297523 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

17 0.87091935 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables

18 0.87000358 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

19 0.86997205 52 iccv-2013-Attribute Adaptation for Personalized Image Search

20 0.86428624 362 iccv-2013-Robust Tucker Tensor Decomposition for Effective Image Representation