iccv iccv2013 iccv2013-123 knowledge-graph by maker-knowledge-mining

123 iccv-2013-Domain Adaptive Classification


Source: pdf

Author: Fatemeh Mirrashed, Mohammad Rastegari

Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. [sent-3, score-1.04]

2 The key insight is finding attributes that are discriminative across categories and predictable across domains. [sent-5, score-0.41]

3 In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios. [sent-7, score-0.615]

4 For these reasons domain adaptation techniques have received considerable attention in machine learning applications. [sent-14, score-0.543]

5 Some previous efforts [22, 3, 8, 7] consider semisupervised domain adaptation where some labeled data from the target domain is available. [sent-15, score-1.273]

6 We focus on the unsupervised scenario, in which no labeled data from the target domain is available. [sent-16, score-0.741]

7 One line of work on unsupervised domain adaptation assumes that there are discriminative "pivot" features that are common to both domains [5, 4]. [sent-18, score-0.867]

8 A recent work [12] considers the labeled source data at the instance level to detect a subset of them (landmarks) that could model the distribution of the data in the target domain well. [sent-20, score-0.918]

9 A drawback of such methods is that they do not use the information from all the samples in the source domain available for training the classifier, as they use only landmark points and prune the rest. [sent-21, score-0.585]

10 Another research theme in domain adaptation is to assume there is an underlying common subspace [19, 1, 13] where the source and target domains have the same (or similar) marginal distributions, and the posterior distributions of the labels are also the same across domains. [sent-22, score-1.375]

11 Hence, in this subspace a classifier trained on the labeled data from the source domain would likely perform well on the target domain. [sent-23, score-1.033]

12 However, transforming data only with the goal of modeling the target domain distribution does not necessarily result in accurate classification. [sent-24, score-0.65]

13 We propose a simple yet effective adaptation approach that directly learns a new feature space from the unlabeled target data. [sent-26, score-0.536]

14 This feature space is optimized for classification in the target domain. [sent-27, score-0.398]

15 Motivated by [20], our new feature space, composed of binary attributes, is spanned by max-margin non-orthogonal hyperplanes learned directly on the target domain. [sent-28, score-0.677]

16 Our new binary features are discriminative and, at the same time, robust to changes in the distribution of data points in the original feature space between the source and target domains. [sent-29, score-0.769]

17 The notion of predictability is based on the idea that subtle variations of the data point positions in the original space should not result in different binary codes. [sent-31, score-0.437]
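
As a concrete reading of this notion, the sketch below estimates how often a hyperplane flips a sample's bit under small jitter of the points. It is an illustration only, not the paper's code; `a` (a hyperplane normal), `X` (a sample matrix), and the noise scale `sigma` are assumed placeholders.

```python
import numpy as np

def bit_flip_rate(a, X, sigma=0.05, seed=0):
    """Fraction of samples whose binary value flips under small Gaussian jitter.

    sigma is an illustrative noise scale standing in for the 'subtle
    variations' of data point positions between domains.
    """
    rng = np.random.default_rng(seed)
    b = np.where(X @ a >= 0, 1, -1)           # bits in the clean space
    X_jittered = X + sigma * rng.standard_normal(X.shape)
    b_jittered = np.where(X_jittered @ a >= 0, 1, -1)
    return float(np.mean(b != b_jittered))    # low rate = predictable bit
```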

18 In (b) we classify the data from the target domain (webcam images) using the classifier trained in (a). [sent-34, score-0.796]

19 In (c) and (d) we want to use the labels roughly predicted by the source classifier in the target domain to find hyperplanes that are discriminative across categories and also have large margins from samples. [sent-35, score-1.193]

20 Figure 1 illustrates the essential idea behind our approach: a hyperplane may separate positive and negative samples but have only a small margin, whereas we binarize data in the target domain with hyperplanes that have a large margin from the samples. [sent-39, score-0.755]

21 In fact, in many cases we even reach the upper-bound accuracy that is obtained when the classifier is trained and tested on the target domain itself. [sent-41, score-0.792]

22 In language processing, Daume et al [6] model the data distribution corresponding to source and target domains as a common shared component and a component that is specific to the individual domains. [sent-48, score-0.862]

23 They used these pivot features to learn an adapted discriminative classifier for the target domain. [sent-50, score-0.534]

24 In visual object recognition, Saenko et al [22] proposed a metric learning approach that uses labeled data in the source and target domains for all or some of the categories. [sent-51, score-0.87]

25 Pan et al [19] devise a dimensionality reduction technique that learns an underlying subspace where the difference between the data distributions of the two domains is reduced. [sent-57, score-0.398]

26 In [12], Gong et al, however, consider only a subset of training data in the source domain for their geodesic flow kernel approach: the ones that are distributed similarly to the target domain. [sent-63, score-0.931]

27 This is a similar problem to domain adaptation where each dataset can be considered as a domain. [sent-65, score-0.543]

28 We are also looking for a set of discriminative binary codes but in our problem data comes from different domains with mismatched distributions in the feature space. [sent-73, score-0.599]

29 We represent this information by a number of hyperplanes in the feature space created using data from the target domain. [sent-77, score-0.6]

30 These attributes must be discriminative across categories and predictable across domains. [sent-79, score-0.41]

31 We use these attributes as feature descriptors and train a classifier on the labeled data in the source domain. [sent-82, score-0.474]

32 When we apply this classifier to the target domain, we achieve a much higher accuracy rate than the baseline classifier for the target data. [sent-83, score-0.772]

33 The baseline is simply a classifier trained on the source data in the original feature space. [sent-84, score-0.428]

34 Each attribute is a hyperplane in feature space; it divides the space into two subspaces. [sent-85, score-0.459]

35 To produce consistent binary codes across domains, each binary value needs to be predictable from instances across domains. [sent-88, score-0.489]

36 Superscripts S and T indicate source and target domains respectively, and superscript T also indicates matrix transpose. [sent-97, score-0.464]

37 w is the K-dimensional normal vector of a classifier that classifies one category from the others in the binary attribute space. [sent-103, score-0.419]

38 Therefore, we need to find K hyperplanes, a_k, in the target domain such that when we use sgn(A^T x_i) as a new feature space, and learn a classifier on source data projected onto this space, we can predict the class labels of the data in the target domain. [sent-106, score-1.273]
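
A minimal sketch of this projection-and-train step, assuming the hyperplane matrix A has already been learned; `X_src`, `y_src`, and `X_tgt` are placeholder arrays, and scikit-learn's LinearSVC stands in for whatever linear classifier is used.

```python
import numpy as np
from sklearn.svm import LinearSVC

def binary_codes(X, A):
    """Binarize samples against the K attribute hyperplanes: sgn(A^T x)."""
    return np.where(X @ A >= 0, 1.0, -1.0)   # X: (n, d), A: (d, K) -> (n, K) codes

def train_on_source_predict_target(X_src, y_src, X_tgt, A):
    """Learn a classifier on source data projected into the attribute space,
    then predict labels for the (unlabeled) target data."""
    clf = LinearSVC().fit(binary_codes(X_src, A), y_src)
    return clf.predict(binary_codes(X_tgt, A))
```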

39 Of course, we do not have the class labels l^T for the data in the target domain. [sent-107, score-0.65]

40 In order to train the classifier and attributes (hyperplanes) in the target domain, we add a constraint to our optimization that forces l^T to be predictable from the source domain's classifier. [sent-108, score-0.808]

41 For a sample, an attribute is a binary value derived from a hyperplane in the raw feature space. [sent-129, score-0.527]

42 If this hyperplane produces different binary values for samples that are nearby to each other, then we say that the values coming from this hyperplane are not predictable. [sent-130, score-0.55]

43 Therefore, this attribute would not be robust against the variations of samples from different domains in the raw feature space. [sent-131, score-0.563]

44 Note that the hyperplanes learned with large margins divide the space while avoiding fragmentation of the sample distributions, thanks to the predictability constraints implemented by max-margin regularization. [sent-135, score-0.55]

45 The binary values obtained by a hyperplane are predictable when the hyperplane has a large margin from the samples. [sent-140, score-0.668]
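
A hedged geometric reading of this claim (ignoring any bias term for simplicity): the predictability of a hyperplane can be scored by its smallest distance to the samples.

```python
import numpy as np

def min_margin(a, X):
    """Smallest distance from any sample in X to the hyperplane with normal a.

    A large value means small perturbations of the points cannot flip the
    binary value sgn(a^T x), i.e. the attribute is predictable.
    """
    return float(np.min(np.abs(X @ a)) / np.linalg.norm(a))
```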

46 The hyperplanes that are shown in orange are our predictable attributes, which enforce the large margins from samples. [sent-147, score-0.439]

47 To enforce the predictability constraint on the binary values of attributes, we regularize our optimization by adding a max-margin constraint on A as follows: min over {A, w^S, w^T, l^T, ξ^S, ξ^T, ξ^A}, subject to the constraints in (2) below. [sent-148, score-0.416]

48 l_i^S (w_S^T sgn(A^T x_i^S)) > 1 − ξ_i^S, l_j^T (w_T^T sgn(A^T x_j^T)) > 1 − ξ_j^T, l_j^T = sgn(w_S^T sgn(A^T x_j^T)), b_kj = sgn(a_k^T x_j^T), b_kj (a_k^T x_j^T) > 1 − ξ_kj^A, (2) where b_kj is the binary value of the kth bit (attribute) of the jth sample in the target domain. [sent-159, score-0.64]

49 In fact, each attribute is a max-margin classifier in feature space, and b_kj is the label of the jth sample when classified by the kth attribute classifier. [sent-160, score-0.636]

50 Then solving for w^T and A is a standard attribute discovery problem in the target domain and can be solved using the DBC method of [20]. [sent-164, score-0.816]

51 An intuitive way to initialize l^T is to learn a classifier on the labeled data in the source domain, x^S and l^S, and then apply it to x^T, the data in the target domain. [sent-168, score-0.67]
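
Putting sentences 48-51 together, a schematic of the alternating scheme might look as follows. This is a reading of the text, not the authors' released code; `learn_attributes` is a hypothetical stand-in for the DBC-style attribute discovery step of [20].

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC

def adapt(X_src, y_src, X_tgt, K, n_iters=5):
    """Alternating scheme implied by the text (a sketch, not the paper's code)."""
    def to_bits(X, A):
        return np.where(X @ A >= 0, 1.0, -1.0)   # sgn(A^T x) as +/-1 codes

    # Step 0: initialize l^T with a classifier trained on raw source features.
    l_tgt = SVC(kernel='rbf').fit(X_src, y_src).predict(X_tgt)
    for _ in range(n_iters):
        # With l^T fixed, learn K max-margin attribute hyperplanes on the target
        # (hypothetical helper standing in for the DBC step of [20]; returns (d, K)).
        A = learn_attributes(X_tgt, l_tgt, K)
        # Train the source classifier w^S in the binary attribute space...
        w_src = LinearSVC().fit(to_bits(X_src, A), y_src)
        # ...and re-estimate l^T = sgn(w_S^T sgn(A^T x^T)), as in Eq. (2).
        l_tgt = w_src.predict(to_bits(X_tgt, A))
    return A, w_src
```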

52 Experiments We first evaluate our method on two benchmark datasets extensively used for domain adaptation in the contexts of object recognition [22, 17, 1, 13, 12] and sentiment analysis [4, 1, 12]. [sent-173, score-0.714]

53 We compare our method to several previously published domain adaptation methods. [sent-174, score-0.543]

54 In many cases we reach the upper bound, i.e., when the classifier is trained and tested on the target domain itself. [sent-177, score-0.765]

55 Furthermore, we test the performance of our method in an inductive setting of unsupervised domain adaptation. [sent-178, score-0.546]

56 In the inductive setting we test our adapted classifier on a set of unseen and unlabeled instances from the target domain, separate from the target domain data used to learn the attribute model. [sent-179, score-1.442]
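
A sketch of this inductive protocol, reusing the `adapt()` sketch above; `X_tgt`, `y_tgt` are placeholder arrays, and the 50/50 split fraction and K=64 are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hold out target samples that the adaptation step never sees (inductive setting).
X_tgt_seen, X_tgt_heldout, _, y_heldout = train_test_split(
    X_tgt, y_tgt, test_size=0.5, random_state=0)  # 0.5 is an illustrative fraction

A, w_src = adapt(X_src, y_src, X_tgt_seen, K=64)   # learn on accessible target data
bits = np.where(X_tgt_heldout @ A >= 0, 1.0, -1.0)
inductive_acc = float(np.mean(w_src.predict(bits) == y_heldout))  # out-of-sample accuracy
```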

57 Finally, we investigate the dataset bias problem, recently studied in [23, 16], and show that our adaptive classification technique can successfully overcome the bias differences in both single-source and multi-source domain scenarios. [sent-180, score-0.787]

58 We report the results of our evaluation on all 12 pairs of source and target domains and compare it with methods as reported in [12] (Table 1). [sent-191, score-0.774]

59 We also report baseline results of no adaptation, where we train a kernel SVM on labeled data from the source domain in the original feature space. [sent-193, score-0.709]

60 For each pair of domains the performance is measured by classification accuracy (the number of correctly classified instances divided by the total number of test instances from the target domain). [sent-195, score-0.408]
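
A few-line sketch of this baseline and its accuracy metric (placeholder arrays; the target labels `y_tgt` are used only for scoring):

```python
from sklearn.svm import SVC

# No-adaptation baseline: kernel SVM on raw source features, tested on the target.
baseline = SVC(kernel='rbf').fit(X_src, y_src)
baseline_acc = float((baseline.predict(X_tgt) == y_tgt).mean())  # correct / total target
```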

61 As explained in [12], due to its small number of samples (157 for all 10 categories), DSLR was not used as a source domain and so the results for other methods have been reported only for 9 out of 12 pairings. [sent-196, score-0.613]

62 To learn each attribute hyperplane we used linear SVM coupled with kernel mapping. [sent-200, score-0.432]
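
One plausible way to realize "linear SVM coupled with kernel mapping" is an explicit approximate feature map followed by a linear SVM; the Nystroem approximation below is an assumption on our part, as the sentence does not specify the mapping. `b_k` denotes the current ±1 bit assignments for one attribute.

```python
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# One attribute = a linear SVM on an explicit kernel mapping. Nystroem is an
# assumption; the paper only says "linear SVM coupled with kernel mapping".
attribute_clf = make_pipeline(Nystroem(kernel='rbf', n_components=300),
                              LinearSVC())
attribute_clf.fit(X_tgt, b_k)   # b_k: current +/-1 bit assignments for this attribute
```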

63 Cross-Domain Sentiment Classification: accuracies for 4 pairs of source and target domains are reported. [sent-212, score-0.8]

64 Again we compare the performance of our approach with the same set of domain adaptation methods as reported in [12] and listed in section 4. [sent-218, score-0.571]

65 Comparing to Same-Domain Classification: How accurate are the domain-adapted classifiers compared to classifiers trained on labeled data from the target domain? [sent-230, score-0.869]

66 This balances the number of samples used for within-domain training and testing and for cross-domain adaptive training and testing. [sent-232, score-0.798]

67 Table 3 shows the results for all 16 pairs of domains in the sentiment dataset and 4 pairs of domains from the object recognition datasets. [sent-233, score-0.735]

68 In the latter we could use only the two domains (Caltech, Amazon) that had a sufficient number of samples to be divided into two groups (train/test). The rows correspond to the source domains and the columns to the target domains. [sent-234, score-1.123]

69 Cross-domain Object recognition: accuracies for all 12 pairs of source and target domains are reported (C: Caltech, A: Amazon, W: Webcam, and D: DSLR). [sent-248, score-0.828]

70 Due to its small number of samples, DSLR was not used as a source domain by the other methods and so their results have been reported only for 9 pairings. [sent-249, score-0.561]

71 Our method significantly outperforms all the previous methods except for 2 out of 3 cases in which DSLR, whose number of samples is insufficient for training our attribute model, is the target domain. [sent-250, score-0.524]

72 Comparing to Same-Domain Classification: (Left) Accuracies for all 16 pairs of source and target domains in the sentiment dataset are reported in the left table. [sent-258, score-0.945]

73 (Right) Accuracies for 4 pairs of source and target domains are reported. [sent-260, score-0.746]

74 Rows and columns correspond to source and target domains respectively. [sent-262, score-0.746]

75 So, we had access to all the samples in the target domain at training time and our goal was to predict their labels. [sent-268, score-0.671]

76 So, to create an inductive setting for the unsupervised domain adaptation problem, we make only a fraction of the data from the target domain accessible at training time for learning our adaptive feature space. [sent-271, score-1.533]

77 The rest, which we refer to as out-of-sample data from the target domain, is set aside for inductive classification tests. [sent-272, score-0.526]

78 Transductive vs. Inductive Cross-domain Classification: The first two rows show the results in the transductive setting, where all the data from the target domains are accessible during training. [sent-296, score-0.761]

79 The last two rows show the results in the inductive setting, where we test our classifier only on a subset of data in the target domain that was not accessible during training time. [sent-297, score-0.996]

80 Cross-Dataset Object Recognition: The 4 rightmost columns show the classification results when we hold out one dataset as the target domain and use the other 3 as source domains, in both the inductive (first two rows) and transductive (last two rows) settings. [sent-330, score-1.11]

81 Each column of the table corresponds to the situation where one dataset is considered as the target domain and all the remaining datasets are considered as the source domain (multi-source domain). [sent-333, score-1.152]

82 Quantitative evaluation: To see how learning binary attributes by itself is contributing to our performance increase, we ignore the adaptation and use the attribute features learned only from the source domain. [sent-342, score-0.761]

83 In this setting we learn the binary attribute space from the labeled data in the source domain and project the data from both the source and target domains onto this space, where we train a classifier on the source data and test it on the target data. [sent-343, score-2.053]
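
A sketch of this "source binary" ablation, under the same assumptions as the earlier sketches (`learn_attributes` remains a hypothetical helper):

```python
import numpy as np
from sklearn.svm import LinearSVC

# "Source binary" ablation: attribute space learned from labeled source data only.
A_src = learn_attributes(X_src, y_src, K=64)          # hypothetical helper, as above
B_src = np.where(X_src @ A_src >= 0, 1.0, -1.0)       # project source onto the space
B_tgt = np.where(X_tgt @ A_src >= 0, 1.0, -1.0)       # project target onto the space
clf = LinearSVC().fit(B_src, y_src)                   # train on projected source data
src_binary_acc = float(np.mean(clf.predict(B_tgt) == y_tgt))
```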

84 We pick an attribute classifier learned by our method, then find the images (from both source and target) that are most confidently classified as positive and as negative by this attribute classifier. [sent-348, score-0.713]

85 In Figure 4, the left two rows use DSLR as the source domain and Amazon as the target. [sent-349, score-0.576]

86 The green arrow represents an attribute classifier trained on the target domain. [sent-351, score-0.685]

87 The dashed part of the arrow illustrates that the same hyperplane which is trained in the target domain is applied in the source domain. [sent-352, score-0.963]

88 Quantitative Evaluation of Predictability: The blue bars show the classification accuracies when the classifier is simply trained on the data from the source domain in the original feature space (baseline). [sent-371, score-0.921]

89 The red bars show the results when the classifier is trained in a binary attribute space learned from the data in the source domain (source binary). [sent-372, score-1.069]

90 The green bars show the results of our adapted model when the classifier is trained on labeled source data in a binary attribute space learned in the target domain (adapted binary). [sent-373, score-1.457]

91 On average, the source binary model increases performance by 10% over the baseline, while the adapted binary model increases it by 28%. [sent-374, score-0.669]

92 In the first case, the attribute consistently separates round shapes from dark-volumed shapes in both domains, and in the second case, the attribute consistently discriminates between objects with a keypad and objects with a dark-volumed shape. [sent-377, score-0.738]

93 Our method is based on learning a predictable binary code that captures the structural information of the data distribution in the target domain itself. [sent-381, score-0.918]

94 Our empirical evaluations demonstrate an impressive and consistent performance gain by our method on standard benchmarks previously studied for the domain adaptation problem. [sent-384, score-0.578]

95 In many cases our domain adaptive method could reach the gold-standard accuracies, i.e. [sent-385, score-0.429]

96 when the classifier is trained on labeled data from the target domain itself. [sent-387, score-0.813]

97 Qualitative Evaluation of Predictability: This figure illustrates two examples where an attribute hyperplane (green arrow), learned by our joint optimization, discriminates visual properties consistently across two different domains. [sent-389, score-0.506]

98 The same hyperplane that is trained in the target domain is applied in the source domain. [sent-392, score-0.858]

99 Exploiting weakly-labeled web images to improve object classification: A domain adaptation approach. [sent-412, score-0.543]

100 Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. [sent-476, score-0.387]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('domain', 0.344), ('domains', 0.282), ('predictability', 0.278), ('target', 0.275), ('hyperplanes', 0.232), ('hyperplane', 0.2), ('adaptation', 0.199), ('attribute', 0.197), ('source', 0.189), ('sentiment', 0.171), ('predictable', 0.17), ('inductive', 0.159), ('dslr', 0.148), ('wstsgn', 0.121), ('sgn', 0.1), ('binary', 0.098), ('atxjt', 0.096), ('classifier', 0.096), ('ljt', 0.086), ('transductive', 0.082), ('bias', 0.081), ('amazon', 0.08), ('attributes', 0.078), ('bit', 0.073), ('bkj', 0.072), ('dbc', 0.072), ('ws', 0.072), ('webcam', 0.069), ('arrow', 0.067), ('adapted', 0.065), ('dvd', 0.064), ('classification', 0.061), ('adaptive', 0.058), ('wt', 0.058), ('caltech', 0.058), ('geodesic', 0.057), ('books', 0.056), ('daume', 0.056), ('pivot', 0.056), ('accuracies', 0.054), ('samples', 0.052), ('gong', 0.052), ('kth', 0.05), ('trained', 0.05), ('labeled', 0.048), ('aktxjt', 0.048), ('atdedap', 0.048), ('atxis', 0.048), ('fatemeh', 0.048), ('toatuirons', 0.048), ('wttsgn', 0.048), ('accessible', 0.048), ('lt', 0.047), ('kitchen', 0.046), ('across', 0.046), ('al', 0.045), ('blitzer', 0.043), ('unsupervised', 0.043), ('mismatched', 0.043), ('rows', 0.043), ('discriminative', 0.042), ('distributions', 0.04), ('language', 0.04), ('maxmargin', 0.04), ('ls', 0.038), ('margins', 0.037), ('labelme', 0.037), ('saenko', 0.036), ('discriminates', 0.036), ('studied', 0.035), ('kernel', 0.035), ('reviews', 0.035), ('bars', 0.034), ('rastegari', 0.034), ('classified', 0.034), ('ak', 0.033), ('rating', 0.033), ('semisupervised', 0.032), ('xs', 0.032), ('feature', 0.032), ('data', 0.031), ('codes', 0.031), ('space', 0.03), ('baseline', 0.03), ('lis', 0.029), ('reaches', 0.029), ('categories', 0.028), ('dd', 0.028), ('bb', 0.028), ('classifiers', 0.028), ('category', 0.028), ('reported', 0.028), ('discriminating', 0.027), ('reach', 0.027), ('illustrates', 0.027), ('orthogonal', 0.027), ('svm', 0.027), ('protocol', 0.027), ('jt', 0.026), ('separates', 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999997 123 iccv-2013-Domain Adaptive Classification

Author: Fatemeh Mirrashed, Mohammad Rastegari

Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.

2 0.3903988 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

3 0.38368991 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.

4 0.27535284 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information

Author: Andy J. Ma, Pong C. Yuen, Jiawei Li

Abstract: This paper addresses a new person re-identification problem without the label information of persons under non-overlapping target cameras. Given the matched (positive) and unmatched (negative) image pairs from source domain cameras, as well as unmatched (negative) image pairs which can be easily generated from target domain cameras, we propose a Domain Transfer Ranked Support Vector Machines (DTRSVM) method for re-identification under target domain cameras. To overcome the problems introduced due to the absence of matched (positive) image pairs in target domain, we relax the discriminative constraint to a necessary condition only relying on the positive mean in target domain. By estimating the target positive mean using source and target domain data, a new discriminative model with high confidence in target positive mean and low confidence in target negative image pairs is developed. Since the necessary condition may not truly preserve the discriminability, multi-task support vector ranking is proposed to incorporate the training data from source domain with label information. Experimental results show that the proposed DTRSVM outperforms existing methods without using label information in target cameras. And the top 30 rank accuracy can be improved by the proposed method up to 9.40% on publicly available person re-identification datasets.

5 0.25301906 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

Author: Tatiana Tommasi, Barbara Caputo

Abstract: Over the last years, several authors have signaled that state of the art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it go from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases always achieves very strong performance. We build on this result, and present an NBNN-based domain adaptation algorithm that iteratively learns a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.

6 0.22436801 52 iccv-2013-Attribute Adaptation for Personalized Image Search

7 0.19617456 31 iccv-2013-A Unified Probabilistic Approach Modeling Relationships between Attributes and Objects

8 0.18066452 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

9 0.178892 44 iccv-2013-Adapting Classification Cascades to New Domains

10 0.16444862 54 iccv-2013-Attribute Pivots for Guiding Relevance Feedback in Image Search

11 0.15895548 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces

12 0.15802421 451 iccv-2013-Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions

13 0.15645373 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

14 0.14056166 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

15 0.13978897 204 iccv-2013-Human Attribute Recognition by Rich Appearance Dictionary

16 0.13718458 399 iccv-2013-Spoken Attributes: Mixing Binary and Relative Attributes to Say the Right Thing

17 0.11598996 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

18 0.11580338 53 iccv-2013-Attribute Dominance: What Pops Out?

19 0.11075121 380 iccv-2013-Semantic Transform: Weakly Supervised Semantic Inference for Relating Visual Attributes

20 0.10679851 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.221), (1, 0.151), (2, -0.098), (3, -0.154), (4, 0.034), (5, -0.001), (6, -0.065), (7, -0.021), (8, 0.156), (9, 0.101), (10, -0.063), (11, -0.175), (12, -0.028), (13, -0.113), (14, 0.2), (15, -0.295), (16, -0.11), (17, -0.016), (18, -0.021), (19, -0.062), (20, 0.252), (21, -0.097), (22, 0.144), (23, 0.119), (24, 0.003), (25, -0.093), (26, -0.147), (27, -0.078), (28, -0.049), (29, -0.021), (30, 0.039), (31, 0.033), (32, 0.036), (33, 0.017), (34, -0.034), (35, -0.033), (36, -0.001), (37, -0.025), (38, 0.072), (39, 0.056), (40, -0.04), (41, 0.024), (42, 0.016), (43, 0.06), (44, -0.011), (45, 0.052), (46, 0.005), (47, 0.036), (48, 0.01), (49, 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98068434 123 iccv-2013-Domain Adaptive Classification

Author: Fatemeh Mirrashed, Mohammad Rastegari

Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.

2 0.90961069 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.

3 0.89950353 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

4 0.89378405 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information

Author: Andy J. Ma, Pong C. Yuen, Jiawei Li

Abstract: This paper addresses a new person re-identification problem without the label information of persons under non-overlapping target cameras. Given the matched (positive) and unmatched (negative) image pairs from source domain cameras, as well as unmatched (negative) image pairs which can be easily generated from target domain cameras, we propose a Domain Transfer Ranked Support Vector Machines (DTRSVM) method for re-identification under target domain cameras. To overcome the problems introduced due to the absence of matched (positive) image pairs in target domain, we relax the discriminative constraint to a necessary condition only relying on the positive mean in target domain. By estimating the target positive mean using source and target domain data, a new discriminative model with high confidence in target positive mean and low confidence in target negative image pairs is developed. Since the necessary condition may not truly preserve the discriminability, multi-task support vector ranking is proposed to incorporate the training data from source domain with label information. Experimental results show that the proposed DTRSVM outperforms existing methods without using label information in target cameras. And the top 30 rank accuracy can be improved by the proposed method upto 9.40% on publicly available person re-identification datasets.

5 0.87953871 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

Author: Tatiana Tommasi, Barbara Caputo

Abstract: Over the last years, several authors have signaled that state of the art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it go from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases always achieves very strong performance. We build on this result, and present an NBNN-based domain adaptation algorithm that iteratively learns a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.

6 0.87705421 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

7 0.71413565 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces

8 0.64389402 44 iccv-2013-Adapting Classification Cascades to New Domains

9 0.62664807 451 iccv-2013-Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions

10 0.54630089 413 iccv-2013-Target-Driven Moire Pattern Synthesis by Phase Modulation

11 0.53528529 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

12 0.52894962 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

13 0.51691008 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

14 0.50372934 52 iccv-2013-Attribute Adaptation for Personalized Image Search

15 0.46189848 178 iccv-2013-From Semi-supervised to Transfer Counting of Crowds

16 0.45400357 248 iccv-2013-Learning to Rank Using Privileged Information

17 0.41333959 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition

18 0.41035357 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling

19 0.40401855 54 iccv-2013-Attribute Pivots for Guiding Relevance Feedback in Image Search

20 0.39578557 53 iccv-2013-Attribute Dominance: What Pops Out?


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.158), (2, 0.08), (7, 0.03), (12, 0.017), (26, 0.086), (31, 0.045), (34, 0.017), (40, 0.012), (42, 0.131), (48, 0.011), (64, 0.039), (73, 0.015), (77, 0.023), (89, 0.124), (98, 0.127)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.81838882 123 iccv-2013-Domain Adaptive Classification

Author: Fatemeh Mirrashed, Mohammad Rastegari

Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.

2 0.80954808 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

Author: Chen Fang, Ye Xu, Daniel N. Rockmore

Abstract: Many standard computer vision datasets exhibit biases due to a variety of sources including illumination condition, imaging system, and preference of dataset collectors. Biases like these can have downstream effects in the use of vision datasets in the construction of generalizable techniques, especially for the goal of the creation of a classification system capable of generalizing to unseen and novel datasets. In this work we propose Unbiased Metric Learning (UML), a metric learning approach, to achieve this goal. UML operates in the following two steps: (1) By varying hyperparameters, it learns a set of less biased candidate distance metrics on training examples from multiple biased datasets. The key idea is to learn a neighborhood for each example, which consists of not only examples of the same category from the same dataset, but those from other datasets. The learning framework is based on structural SVM. (2) We do model validation on a set of weakly-labeled web images retrieved by issuing class labels as keywords to a search engine. The metric with the best validation performance is selected. Although the web images sometimes have noisy labels, they often tend to be less biased, which makes them suitable for the validation set in our task. Cross-dataset image classification experiments are carried out. Results show significant performance improvement on four well-known computer vision datasets.

3 0.80377769 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

4 0.78717995 19 iccv-2013-A Learning-Based Approach to Reduce JPEG Artifacts in Image Matting

Author: Inchang Choi, Sunyeong Kim, Michael S. Brown, Yu-Wing Tai

Abstract: Single image matting techniques assume high-quality input images. The vast majority of images on the web and in personal photo collections are encoded using JPEG compression. JPEG images exhibit quantization artifacts that adversely affect the performance of matting algorithms. To address this situation, we propose a learning-based post-processing method to improve the alpha mattes extracted from JPEG images. Our approach learns a set of sparse dictionaries from training examples that are used to transfer details from high-quality alpha mattes to alpha mattes corrupted by JPEG compression. Three different dictionaries are defined to accommodate different object structure (long hair, short hair, and sharp boundaries). A back-projection criterion combined within an MRF framework is used to automatically select the best dictionary to apply on the object's local boundary. We demonstrate that our method can produce superior results over existing state-of-the-art matting algorithms on a variety of inputs and compression levels.

5 0.78085238 434 iccv-2013-Unifying Nuclear Norm and Bilinear Factorization Approaches for Low-Rank Matrix Decomposition

Author: Ricardo Cabral, Fernando De_La_Torre, João P. Costeira, Alexandre Bernardino

Abstract: Low rank models have been widely used for the representation of shape, appearance or motion in computer vision problems. Traditional approaches to fit low rank models make use of an explicit bilinear factorization. These approaches benefit from fast numerical methods for optimization and easy kernelization. However, they suffer from serious local minima problems depending on the loss function and the amount/type of missing data. Recently, these low-rank models have alternatively been formulated as convex problems using the nuclear norm regularizer; unlike factorization methods, their numerical solvers are slow and it is unclear how to kernelize them or to impose a rank a priori. This paper proposes a unified approach to bilinear factorization and nuclear norm regularization, that inherits the benefits of both. We analyze the conditions under which these approaches are equivalent. Moreover, based on this analysis, we propose a new optimization algorithm and a "rank continuation" strategy that outperform state-of-the-art approaches for Robust PCA, Structure from Motion and Photometric Stereo with outliers and missing data.

6 0.77056462 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

7 0.76730418 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

8 0.76612669 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction

9 0.74865049 33 iccv-2013-A Unified Video Segmentation Benchmark: Annotation, Metrics and Analysis

10 0.74706149 44 iccv-2013-Adapting Classification Cascades to New Domains

11 0.74706149 420 iccv-2013-Topology-Constrained Layered Tracking with Latent Flow

12 0.74194741 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

13 0.73854363 52 iccv-2013-Attribute Adaptation for Personalized Image Search

14 0.73810017 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition

15 0.73680955 402 iccv-2013-Street View Motion-from-Structure-from-Motion

16 0.73366684 180 iccv-2013-From Where and How to What We See

17 0.72994083 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

18 0.72991949 392 iccv-2013-Similarity Metric Learning for Face Recognition

19 0.72720629 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

20 0.72716665 43 iccv-2013-Active Visual Recognition with Expertise Estimation in Crowdsourcing