iccv iccv2013 iccv2013-427 knowledge-graph by maker-knowledge-mining

427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation


Source: pdf

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct a new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. [sent-6, score-0.335]

2 However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. [sent-7, score-0.461]

3 In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). [sent-8, score-0.153]

4 Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct a new feature representation that is effective and robust for substantial distribution difference. [sent-9, score-0.67]

5 For highly-evolving visual domains where labeled data are very sparse, one may expect to leverage abundant labeled data readily available in some related source domains for training accurate classifiers to be reused in the target domain. [sent-13, score-0.436]

6 Recently, the literature has witnessed an increasing interest in developing transfer learning [16] algorithms for cross-domain knowledge adaptation problems. [sent-14, score-0.286]

7 In cross-domain problems, the source and target data are usually sampled from different probability distributions. [sent-16, score-0.15]

8 Therefore, a major computational issue of transfer learning is to reduce the distribution difference between domains. [sent-17, score-0.264]

9 Recent works aim to discover a shared feature representation which can reduce the distribution difference and preserve the important properties of input data simultaneously [Figure 1: Source Domain vs. Target Domain]. [sent-18, score-0.111]

10 In our problem, labeled source and unlabeled target domains are different in both marginal and conditional distributions. [sent-19, score-0.301]

11 [21, 15, 18], or re-weight source data in order to minimize the distribution difference and then learn a classifier on the re-weighted source data [3, 4]. [sent-20, score-0.254]

12 Most existing methods measure the distribution difference based on either the marginal distribution or the conditional distribution. [sent-21, score-0.461]

13 However, Figure 1 demonstrates the importance of matching both marginal and conditional distributions for robust transfer learning. [sent-22, score-0.581]

14 In this paper, we address a challenging scenario in which the source and target domains are different in both marginal and conditional distributions, and the target domain has no labeled data. [sent-24, score-0.756]

15 We put forward a novel transfer learning solution, referred to as Joint Distribution Adaptation (JDA), to jointly adapt both the marginal and conditional distributions in a principled dimensionality reduction procedure. [sent-25, score-0.729]

16 57% in terms of classification accuracy, obtained by the proposed JDA approach over several state-of-the-art transfer learning methods. [sent-31, score-0.181]

17 Our results reveal substantial effects of matching both marginal and conditional distributions across domains. [sent-32, score-0.447]

18 Related Work In this section, we discuss prior works on transfer learning that are related to ours, and highlight their differences. [sent-34, score-0.153]

19 According to the literature survey [16], existing transfer learning methods can be roughly organized into two categories: instance reweighting [3, 4] and feature extraction. [sent-35, score-0.153]

20 2) Distribution adaptation, which explicitly minimizes predefined distance measures to reduce the difference in the marginal distribution [22, 15], conditional distribution [21], or both [26, 23, 18]. [sent-40, score-0.502]

21 However, to match conditional distributions, these methods require either some labeled target data, or multiple source domains for consensus learning. [sent-41, score-0.452]

22 To our knowledge, our work is among the first attempts to jointly adapt both marginal and conditional distributions between domains, and no labeled data are required in the target domain. [sent-42, score-0.606]

23 Our work is a principled dimensionality reduction procedure with MMD-based distribution matching, which is different from feature re-weighting methods [1, 5]. [sent-43, score-0.161]

24 Joint Distribution Adaptation In this section, we present in detail the Joint Distribution Adaptation (JDA) approach for effective transfer learning. [sent-45, score-0.153]

25 Definition 1 (Domain) A domain D is composed of an m-dimensional feature space X and a marginal probability distribution P(x), i.e., D = {X, P(x)}. [sent-50, score-0.245]

26 Definition 2 (Task) Given domain D, a task T is composed of a C-cardinality label set Y and a classifier f(x), i.e., T = {Y, f(x)}. [sent-53, score-0.105]

27 Here f(x) = Q(y|x) can be interpreted as the conditional probability distribution. [sent-56, score-0.136]

28 Problem 1 (Joint Distribution Adaptation) Given labeled source domain Ds = {(x1, y1), ..., (xns, yns)} and unlabeled target domain Dt = {xns+1, ..., xns+nt}, learn a feature representation under which the differences in both the marginal and conditional distributions across domains are explicitly reduced. [sent-57, score-0.19]

29 Proposed Approach In this paper, we propose to adapt the joint distributions by a feature transformation T so that the joint expectations of the features x and labels y are matched between domains: E[T(xs), ys] ≈ E[T(xt), yt]. [sent-69, score-0.241]

30 This can be executed by applying a classifier f trained on the labeled source data to the unlabeled target data. [sent-93, score-0.217]

31 In order to achieve a more accurate approximation for Qt (yt |xt), we propose an iterative pseudo label refinement strategy to iteratively refine the transformation T and classifier f. [sent-94, score-0.12]

32 2 Marginal Distribution Adaptation However, even with the PCA-induced k-dimensional representation, the distribution difference between domains will still be significantly large. [sent-119, score-0.194]

33 Thus a major computational issue is to reduce the distribution difference by explicitly minimizing proper distance measures. [sent-120, score-0.134]

34 Since parametrically estimating the probability density for a distribution is often a nontrivial problem, we resort to exploring the sufficient statistics instead. [sent-121, score-0.116]

35 The MMD matrix M0 is computed as follows: (M0)ij = 1/(ns ns) if xi, xj ∈ Ds; 1/(nt nt) if xi, xj ∈ Dt; −1/(ns nt) otherwise (4). By minimizing Equation (3) such that Equation (2) is maximized, the marginal distributions between domains are drawn close under the new representation Z = A^T X. [sent-136, score-0.412]
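
To make this concrete, here is a minimal numpy sketch of the MMD matrix in Equation (4) and of the marginal MMD objective tr(A^T X M0 X^T A); the helper names and the samples-as-columns layout of X are illustrative assumptions, not the authors' code.

```python
import numpy as np

def build_M0(ns, nt):
    """MMD matrix M0 of Equation (4): (ns+nt) x (ns+nt), with 1/(ns*ns)
    on source-source entries, 1/(nt*nt) on target-target entries, and
    -1/(ns*nt) on all cross-domain entries."""
    e = np.concatenate([np.ones(ns) / ns, -np.ones(nt) / nt])
    return np.outer(e, e)

def marginal_mmd(A, X, M0):
    """Squared MMD between the projected domains, tr(A^T X M0 X^T A),
    where X is m x (ns+nt) with samples as columns and Z = A^T X."""
    Z = A.T @ X
    return np.trace(Z @ M0 @ Z.T)
```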

36 3 Conditional Distribution Adaptation However, reducing the difference in the marginal distributions does not guarantee that the conditional distributions between domains can also be drawn close. [sent-140, score-0.72]

37 Indeed, minimizing the difference between the conditional distributions Qs (ys |xs) and Qt (yt |xt) is crucial for robust distribution adaptation [23]. [sent-141, score-0.375]

38 Unfortunately, it is nontrivial to match the conditional distributions, even by exploring sufficient statistics of the distributions, since there are no labeled data in the target domain, i.e., the target labels yt are unknown. [sent-142, score-0.31]

39 Some very recent works started to match the conditional distributions via sample selection in a kernel mapping space [26], circular validation [3], co-training [4], and kernel density estimation [18]. [sent-145, score-0.365]

40 But they all require some labeled data in the target domain, and thus cannot address our problem. [sent-146, score-0.124]

41 In this paper, we propose to explore the pseudo labels of the target data, which can be easily predicted by applying some base classifiers trained on the labeled source data to the unlabeled target data. [sent-147, score-0.416]

42 Since the posterior probabilities Qs (ys |xs) and Qt (yt |xt) are quite involved, we resort to exploring the sufficient statistics of the class-conditional distributions Qs (xs |ys) and Qt (xt |yt) instead. [sent-153, score-0.146]

43 Now with the true source labels and pseudo target labels, we can essentially match the class-conditional distributions Qs (xs |ys = c) and Qt (xt |yt = c) w.r.t. each class c. [sent-154, score-0.415]

44 Here the class-conditional distributions Qs (xs |ys = c) and Qt (xt |yt = c) are matched for each class c via the revised MMD measure with MMD matrix Mc. [sent-162, score-0.146]

45 The MMD matrix Mc involving class c is computed as: (Mc)ij = 1/(ns(c) ns(c)) if xi, xj ∈ Ds(c); 1/(nt(c) nt(c)) if xi, xj ∈ Dt(c); −1/(ns(c) nt(c)) if xi ∈ Ds(c), xj ∈ Dt(c) or xj ∈ Ds(c), xi ∈ Dt(c); 0 otherwise (6). By minimizing Equation (5) such that Equation (2) is maximized, the conditional distributions between domains are drawn close under the new representation Z = A^T X. [sent-185, score-0.383]
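
Continuing the sketch above, the class-conditional matrix of Equation (6) can be built the same way from the true source labels and the current pseudo target labels; the function name and label encoding are again assumptions made for illustration.

```python
def build_Mc(ys, yt_pseudo, c):
    """MMD matrix Mc of Equation (6) for class c, built from the true
    source labels ys and the pseudo target labels yt_pseudo; entries
    involving any sample outside class c are zero."""
    ns, nt = len(ys), len(yt_pseudo)
    e = np.zeros(ns + nt)
    src = np.flatnonzero(np.asarray(ys) == c)              # indices of Ds^(c)
    tgt = ns + np.flatnonzero(np.asarray(yt_pseudo) == c)  # indices of Dt^(c)
    if src.size > 0:
        e[src] = 1.0 / src.size
    if tgt.size > 0:
        e[tgt] = -1.0 / tgt.size
    return np.outer(e, e)
```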

46 With this important improvement, JDA can be robust for cross-domain problems with changes in conditional distributions. [sent-186, score-0.159]

47 It is important to note that, although many of the pseudo target labels are incorrect due to the differences in both the marginal and conditional distributions, we can still leverage them to match the conditional distributions with the revised MMD measure defined in Equation (5). [sent-187, score-0.784]

48 The justification is that we match the distributions by exploring the sufficient statistics instead of the density estimates. [sent-188, score-0.191]

49 In this way, we can leverage the source classifier to improve the target classifier. [sent-189, score-0.175]

50 4 Optimization Problem In JDA, to achieve effective and robust transfer learning, we aim to simultaneously minimize the differences in both the marginal distributions and conditional distributions across domains. [sent-193, score-0.746]

51 With JDA, we can simultaneously adapt both the marginal distributions and conditional distributions between domains to facilitate joint distribution adaptation. [sent-202, score-0.817]

52 A fascinating property of JDA is its capability to effectively explore the conditional distributions using only a principled unsupervised dimensionality reduction procedure and a base classifier. [sent-203, score-0.397]

53 Thus, if we use this labeling as the pseudo target labels and run JDA iteratively, then we can alternately improve the labeling quality until convergence. [sent-220, score-0.178]

54 This EM-like pseudo label refinement procedure is empirically effective as shown in experiments. [sent-221, score-0.114]

55 (X(M0 + Σc Mc)X^T + λI)A = XHX^T AΦ (10) Finally, finding the optimal adaptation matrix A is reduced to solving Equation (10) for the k smallest eigenvectors. [sent-244, score-0.133]
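
Putting the pieces together, the following sketch solves the generalized eigenproblem of Equation (10) with scipy and runs the EM-like pseudo-label refinement loop with a 1-NN base classifier (the experiments in this summary use an NN classifier). It reuses build_M0 and build_Mc from the sketches above; the small ridge on XHX^T, the default parameters, and the overall scaffolding are our assumptions for a runnable illustration, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import KNeighborsClassifier

def solve_adaptation_matrix(X, M, lam, k):
    """Equation (10): solve (X M X^T + lam*I) a = phi (X H X^T) a and keep
    the eigenvectors of the k smallest eigenvalues as the columns of A."""
    m, n = X.shape
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix H
    lhs = X @ M @ X.T + lam * np.eye(m)
    rhs = X @ H @ X.T + 1e-6 * np.eye(m)     # small ridge keeps rhs positive definite
    vals, vecs = eigh(lhs, rhs)              # generalized eigenproblem, ascending order
    return vecs[:, :k]

def jda(Xs, ys, Xt, lam=1.0, k=100, T=10):
    """EM-like refinement: alternate between learning the transformation A
    and refining the target pseudo labels, as described above."""
    X = np.hstack([Xs, Xt])                  # samples as columns
    ns, nt = Xs.shape[1], Xt.shape[1]
    yt_pseudo = None
    for _ in range(T):
        M = build_M0(ns, nt)                 # marginal term, Equation (4)
        if yt_pseudo is not None:            # add conditional terms, Equation (6)
            for c in np.unique(ys):
                M = M + build_Mc(ys, yt_pseudo, c)
        A = solve_adaptation_matrix(X, M, lam, k)
        Z = A.T @ X                          # new representation Z = A^T X
        clf = KNeighborsClassifier(n_neighbors=1).fit(Z[:, :ns].T, ys)
        yt_pseudo = clf.predict(Z[:, ns:].T) # refined pseudo labels
    return A, yt_pseudo
```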

56 Data Preparation USPS+MNIST, COIL20, PIE, and Office+Caltech (refer to Figure 2 and Table 2) are six benchmark datasets widely adopted to evaluate visual domain adaptation algorithms. [sent-287, score-0.236]

57 To speed up experiments, we construct one dataset USPS vs MNIST by randomly sampling 1,800 images in USPS to form the source data, and randomly sampling 2,000 images in MNIST to form the target data. [sent-293, score-0.254]

58 Thus the source and target data can share the same feature space. [sent-296, score-0.15]

59 We construct one dataset COIL1 vs COIL2 by selecting all 720 images in COIL1 to form the source data, and all 720 images in COIL2 to form the target data. [sent-303, score-0.254]

60 By randomly selecting two different subsets (poses) as the source domain and target domain respectively, we can construct 5 × 4 = 20 cross-domain face datasets, e.g. [sent-313, score-0.35]

61 PIE1 vs PIE2, PIE1 vs PIE3, PIE1 vs PIE4, PIE1 vs PIE5, etc. [sent-315, score-0.336]

62 In this way, the source and target data are constructed using face images from different poses, and thus will follow significantly different distributions. [sent-321, score-0.17]

63 Moreover, the distribution differences in these datasets may vary a lot, since for example, the difference between the left and right poses is larger than the difference between the left and frontal poses. [sent-322, score-0.142]

64 By randomly selecting two different domains as the source domain and target domain respectively, we construct 4 × 3 = 12 cross-domain object datasets, e.g., A → D. [sent-333, score-0.431]

65 Note that the adaptation difficulty in the 36 datasets varies a lot, since the standard NN classifier can only achieve an average classification accuracy of 37. [sent-406, score-0.226]

66 Thirdly, JDA significantly outperforms TCA, which is a state-of-the-art transfer learning method based on feature extraction. [sent-412, score-0.153]

67 A major limitation of TCA is that the difference in the conditional distributions is not explicitly reduced. [sent-413, score-0.308]

68 Lastly, JDA achieves much better performance than TSL, which depends on the kernel density estimation (KDE) to match the marginal distributions. [sent-415, score-0.229]

69 Theoretically, TSL can adapt the marginal distributions better than TCA, which is validated by the empirical results. [sent-416, score-0.346]

70 However, TSL also does not explicitly reduce the difference in the conditional distributions. [sent-417, score-0.18]

71 One possible difficulty for TSL to adapt the conditional distributions is that TSL relies on the distribution density, which is hard to manipulate with partially incorrect pseudo labels. [sent-418, score-0.479]

72 JDA succeeds in matching the conditional distributions through exploring only the sufficient statistics. [sent-419, score-0.282]

73 Effectiveness Verification We further verify the effectiveness of JDA by inspecting the distribution distance and the similarity of embeddings. [sent-422, score-0.119]

74 Note that, in order to compute the true distance in both the marginal and conditional distributions between domains, we have to use the groundtruth labels instead of the pseudo labels. [sent-425, score-0.566]
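
As a rough illustration of this verification step, one could reuse the helpers from the sketches above with the groundtruth target labels; how exactly the authors aggregate the marginal and conditional terms into the reported distance is not stated in this excerpt, so this is a plausible reading only.

```python
def true_distribution_distance(A, X, ys, yt_true):
    """Verification only: MMD distance summed over the marginal term and
    all class-conditional terms, computed with groundtruth target labels
    (used to measure adaptation, never for learning)."""
    M = build_M0(len(ys), len(yt_true))
    for c in np.unique(ys):
        M = M + build_Mc(ys, yt_true, c)
    Z = A.T @ X
    return np.trace(Z @ M @ Z.T)
```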

75 However, the groundtruth target labels are only used for verification, not for the learning procedure. [sent-426, score-0.119]

76 Figure 4(a) shows the distribution distance computed for each method, and Figure 4(b) shows the classification accuracy. [sent-427, score-0.118]

77 1) Without learning a feature representation, the distribution distance of NN in the original feature space is the largest. [sent-429, score-0.109]

78 3) TCA can substantially reduce the distribution distance by explicitly reducing the difference [Figure 4: (a) MMD distance and (b) accuracy (%) vs. #iterations; (c), (d) similarity of embeddings] [sent-431, score-0.131]

79 Effectiveness verification: MMD distance, classification accuracy, and similarity of embeddings on the PIE1 vs PIE2 dataset. [sent-438, score-0.178]

80 in the marginal distributions, so it can achieve better classification accuracy. [sent-439, score-0.193]

81 4) JDA can reduce the difference in both the marginal and conditional distributions; thus it can extract a most effective and robust representation for cross-domain problems. [sent-440, score-0.444]

82 By iteratively refining the pseudo labels, JDA can reduce the difference in conditional distributions in each iteration to improve the classification performance. [sent-441, score-0.432]

83 For better illustration, we only use the 365 face images corresponding to the first 5 classes, in which the first 245 images are from the source data, and the last 120 images are from the target data. [sent-444, score-0.17]

84 Also, the diagonal blocks of the similarity matrix indicate within-class similarity in the same domain, the diagonal blocks of the top-right and bottom-left submatrices indicate within-class similarity across domains, while all other blocks of the similarity matrix indicate between-class similarity. [sent-446, score-0.203]

85 Figures 4(c) and 4(d) illustrate the similarity matrix of TCA embedding and JDA embedding respectively. [sent-447, score-0.115]

86 To be an effective and robust embedding for cross-domain classification problems, 1) the between-domain similarity should be high enough to establish knowledge transfer, and 2) the between-class similarity should be low to facilitate category discrimination. [sent-448, score-0.148]

87 This proves that only adapting the marginal distributions is not enough for transfer learning. [sent-450, score-0.445]

88 We only report the results on USPS vs MNIST, PIE1 vs PIE2, and A → D datasets, while similar trends on all other datasets are not shown due to space limitations. [sent-462, score-0.168]

89 When λ → ∞, distribution adaptation is not performed; when λ → 0, the optimization problem is ill-defined. [sent-473, score-0.133]

90 Conclusion and Future Work In this paper, we propose a Joint Distribution Adaptation (JDA) approach for robust transfer learning. [sent-489, score-0.134]

91 JDA aims to simultaneously adapt both marginal and conditional distributions in a principled dimensionality reduction procedure. [sent-490, score-0.576]

92 Extensive experiments show that JDA is effective and robust for a variety of cross-domain problems, and can significantly outperform several state-of-the-art adaptation methods even if the distribution difference is substantially large. [sent-491, score-0.245]

93 A comparative study of methods for transductive transfer learning. [sent-516, score-0.151]

94 Domain adaptation problems: A dasvm classification technique and a circular validation strategy. [sent-526, score-0.161]

95 Transductive transfer learning for action recognition in tennis games. [sent-541, score-0.153]

96 Weakly-paired maximum covariance analysis for multimodal dimensionality reduction and transfer learning. [sent-599, score-0.198]

97 Knowledge transfer with low-quality data: A feature extraction issue. [sent-636, score-0.134]

98 Socialtransfer: Crossdomain transfer learning from social streams for media applications. [sent-644, score-0.153]

99 Domain adaptation of conditional probability models via feature subsetting. [sent-656, score-0.269]

100 Multi-feature metric learning with knowledge transfer among semantics and social tagging. [sent-683, score-0.153]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('jda', 0.789), ('tca', 0.195), ('marginal', 0.165), ('distributions', 0.146), ('conditional', 0.136), ('transfer', 0.134), ('adaptation', 0.133), ('tsl', 0.119), ('domains', 0.101), ('qt', 0.098), ('mmd', 0.095), ('xs', 0.094), ('yt', 0.09), ('xt', 0.087), ('vs', 0.084), ('target', 0.082), ('ys', 0.081), ('usps', 0.081), ('domain', 0.08), ('pseudo', 0.078), ('proceedings', 0.077), ('qs', 0.074), ('source', 0.068), ('mnist', 0.067), ('distribution', 0.067), ('pie', 0.051), ('methodruntime', 0.045), ('office', 0.044), ('embedding', 0.043), ('labeled', 0.042), ('nn', 0.04), ('embeddings', 0.037), ('caltech', 0.036), ('adapt', 0.035), ('submatrices', 0.035), ('xns', 0.035), ('reduction', 0.034), ('maximized', 0.033), ('tkde', 0.032), ('principled', 0.03), ('dimensionality', 0.03), ('atxmcxta', 0.03), ('ccooiil', 0.03), ('philip', 0.03), ('xhxta', 0.03), ('iterations', 0.03), ('similarity', 0.029), ('dt', 0.029), ('classification', 0.028), ('nontrivial', 0.027), ('cc', 0.027), ('equation', 0.027), ('hxe', 0.027), ('subspace', 0.026), ('tmn', 0.026), ('withinclass', 0.026), ('difference', 0.026), ('unlabeled', 0.025), ('classifier', 0.025), ('pca', 0.025), ('isx', 0.024), ('atx', 0.024), ('datasets', 0.023), ('bases', 0.023), ('sensitivity', 0.023), ('match', 0.023), ('crossdomain', 0.023), ('distance', 0.023), ('density', 0.022), ('gfk', 0.022), ('ds', 0.022), ('joint', 0.021), ('eigendecomposition', 0.021), ('xj', 0.021), ('base', 0.021), ('face', 0.02), ('dslr', 0.02), ('tsinghua', 0.02), ('construct', 0.02), ('bregman', 0.02), ('turaga', 0.02), ('quadrants', 0.02), ('kernel', 0.019), ('learning', 0.019), ('ps', 0.019), ('learners', 0.019), ('effective', 0.019), ('tr', 0.018), ('labels', 0.018), ('parameter', 0.018), ('reduce', 0.018), ('difficulty', 0.017), ('digit', 0.017), ('correspondingly', 0.017), ('nt', 0.017), ('refinement', 0.017), ('transductive', 0.017), ('webcam', 0.017), ('regularization', 0.017), ('thoroughly', 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct a new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.

2 0.21814498 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.

3 0.21763727 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

4 0.18066452 123 iccv-2013-Domain Adaptive Classification

Author: Fatemeh Mirrashed, Mohammad Rastegari

Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.

5 0.12857485 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

Author: Xudong Cao, David Wipf, Fang Wen, Genquan Duan, Jian Sun

Abstract: Face verification involves determining whether a pair of facial images belongs to the same or different subjects. This problem can prove to be quite challenging in many important applications where labeled training data is scarce, e.g., family album photo organization software. Herein we propose a principled transfer learning approach for merging plentiful source-domain data with limited samples from some target domain of interest to create a classifier that ideally performs nearly as well as if rich target-domain data were present. Based upon a surprisingly simple generative Bayesian model, our approach combines a KL-divergence-based regularizer/prior with a robust likelihood function leading to a scalable implementation via the EM algorithm. As justification for our design choices, we later use principles from convex analysis to recast our algorithm as an equivalent structured rank minimization problem leading to a number of interesting insights related to solution structure and feature-transform invariance. These insights help to both explain the effectiveness of our algorithm as well as elucidate a wide variety of related Bayesian approaches. Experimental testing with challenging datasets validates the utility of the proposed algorithm.

6 0.10963121 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information

7 0.10360426 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

8 0.078348137 44 iccv-2013-Adapting Classification Cascades to New Domains

9 0.075649783 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces

10 0.066378705 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

11 0.066153228 451 iccv-2013-Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions

12 0.058149669 225 iccv-2013-Joint Segmentation and Pose Tracking of Human in Natural Videos

13 0.057829604 359 iccv-2013-Robust Object Tracking with Online Multi-lifespan Dictionary Learning

14 0.057705715 395 iccv-2013-Slice Sampling Particle Belief Propagation

15 0.056256045 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling

16 0.055956293 233 iccv-2013-Latent Task Adaptation with Large-Scale Hierarchies

17 0.054775063 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

18 0.054005582 340 iccv-2013-Real-Time Articulated Hand Pose Estimation Using Semi-supervised Transductive Regression Forests

19 0.052385021 232 iccv-2013-Latent Space Sparse Subspace Clustering

20 0.052273385 52 iccv-2013-Attribute Adaptation for Personalized Image Search


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.128), (1, 0.045), (2, -0.051), (3, -0.052), (4, -0.03), (5, 0.002), (6, -0.005), (7, 0.058), (8, 0.048), (9, 0.011), (10, -0.024), (11, -0.118), (12, -0.053), (13, -0.059), (14, 0.12), (15, -0.162), (16, -0.075), (17, -0.052), (18, -0.012), (19, -0.043), (20, 0.157), (21, -0.089), (22, 0.096), (23, 0.061), (24, 0.016), (25, -0.051), (26, -0.043), (27, -0.031), (28, -0.002), (29, 0.026), (30, 0.033), (31, 0.038), (32, 0.073), (33, 0.009), (34, -0.016), (35, -0.001), (36, -0.031), (37, -0.025), (38, 0.047), (39, 0.01), (40, -0.017), (41, 0.024), (42, -0.004), (43, 0.029), (44, 0.001), (45, 0.021), (46, 0.003), (47, 0.051), (48, 0.069), (49, -0.005)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95598066 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct a new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.

2 0.92238462 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

Author: Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

Abstract: In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyperparameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.

3 0.90424657 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann

Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.

4 0.89917839 123 iccv-2013-Domain Adaptive Classification

Author: Fatemeh Mirrashed, Mohammad Rastegari

Abstract: We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.

5 0.88891798 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

Author: Tatiana Tommasi, Barbara Caputo

Abstract: Over the last years, several authors have signaled that state of the art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it go from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases achieves always very strong performances. We build on this result, and present an NBNN-based domain adaptation algorithm that learns iteratively a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.

6 0.8798427 124 iccv-2013-Domain Transfer Support Vector Ranking for Person Re-identification without Target Camera Label Information

7 0.70984238 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces

8 0.60816306 26 iccv-2013-A Practical Transfer Learning Algorithm for Face Verification

9 0.59686154 44 iccv-2013-Adapting Classification Cascades to New Domains

10 0.55490482 413 iccv-2013-Target-Driven Moire Pattern Synthesis by Phase Modulation

11 0.55183798 451 iccv-2013-Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions

12 0.52671146 178 iccv-2013-From Semi-supervised to Transfer Counting of Crowds

13 0.5250895 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

14 0.52306098 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

15 0.49326992 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling

16 0.43406925 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

17 0.43398198 96 iccv-2013-Coupled Dictionary and Feature Space Learning with Applications to Cross-Domain Image Synthesis and Recognition

18 0.4285183 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

19 0.3854363 212 iccv-2013-Image Set Classification Using Holistic Multiple Order Statistics Features and Localized Multi-kernel Metric Learning

20 0.38358566 142 iccv-2013-Ensemble Projection for Semi-supervised Image Classification


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.046), (7, 0.027), (26, 0.093), (27, 0.163), (31, 0.046), (34, 0.015), (40, 0.015), (42, 0.105), (64, 0.057), (68, 0.014), (73, 0.036), (77, 0.011), (78, 0.015), (89, 0.115), (96, 0.027), (98, 0.089)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.8306011 292 iccv-2013-Non-convex P-Norm Projection for Robust Sparsity

Author: Mithun Das Gupta, Sanjeev Kumar

Abstract: In this paper, we investigate the properties of the Lp norm (p ≤ 1) within a projection framework. We start with the KKT equations of the non-linear optimization problem and then use its key properties to arrive at an algorithm for Lp norm projection on the non-negative simplex. We compare with L1 projection, which needs prior knowledge of the true norm, as well as hard thresholding based sparsification proposed in recent compressed sensing literature. We show performance improvements compared to these techniques across different vision applications.

same-paper 2 0.82781154 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

Author: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu

Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct a new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.

3 0.81782103 342 iccv-2013-Real-Time Solution to the Absolute Pose Problem with Unknown Radial Distortion and Focal Length

Author: Zuzana Kukelova, Martin Bujnak, Tomas Pajdla

Abstract: The problem of determining the absolute position and orientation of a camera from a set of 2D-to-3D point correspondences is one of the most important problems in computer vision with a broad range of applications. In this paper we present a new solution to the absolute pose problem for a camera with unknown radial distortion and unknown focal length from five 2D-to-3D point correspondences. Our new solver is numerically more stable, more accurate, and significantly faster than the existing state-of-the-art minimal four-point absolute pose solvers for this problem. Moreover, our solver results in fewer solutions and can handle larger radial distortions. The new solver is straightforward and uses only simple concepts from linear algebra. Therefore it is simpler than the state-of-the-art Gröbner basis solvers. We compare our new solver with the existing state-of-the-art solvers and show its usefulness on synthetic and real datasets.

4 0.78909373 310 iccv-2013-Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision

Author: Tae-Hyun Oh, Hyeongwoo Kim, Yu-Wing Tai, Jean-Charles Bazin, In So Kweon

Abstract: Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only it is known that the underlying structure of clean data is low-rank, but the exact rank of clean data is also known. Yet, when applying conventional rank minimization for those problems, the objective function is formulated in a way that does not fully utilize a priori target rank information about the problems. This observation motivates us to investigate whether there is a better alternative solution when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values. The proposed objective function implicitly encourages the target rank constraint in rank minimization. Our experimental analyses show that our approach performs better than conventional rank minimization when the number of samples is deficient, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g. high dynamic range imaging, photometric stereo and image alignment, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.

5 0.76578128 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

Author: Zhenyu Guo, Z. Jane Wang

Abstract: Digital images nowadays show large appearance variabilities on picture styles, in terms of color tone, contrast, vignetting, etc. These 'picture styles' are directly related to the scene radiance, image pipeline of the camera, and post processing functions (e.g., photography effect filters). Due to the complexity and nonlinearity of these factors, popular gradient-based image descriptors generally are not invariant to different picture styles, which could degrade the performance for object recognition. Given that images shared online or created by individual users are taken with a wide range of devices and may be processed by various post processing functions, to find a robust object recognition system is useful and challenging. In this paper, we investigate the influence of picture styles on object recognition by making a connection between image descriptors and a pixel mapping function g, and accordingly propose an adaptive approach based on a g-incorporated kernel descriptor and multiple kernel learning, without estimating or specifying the image styles used in training and testing. We conduct experiments on the Domain Adaptation data set, the Oxford Flower data set, and several variants of the Flower data set by introducing popular photography effects through post-processing. The results demonstrate that the proposed method consistently yields recognition improvements over standard descriptors in all studied cases.

6 0.75868338 378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval

7 0.75531322 434 iccv-2013-Unifying Nuclear Norm and Bilinear Factorization Approaches for Low-Rank Matrix Decomposition

8 0.75333226 19 iccv-2013-A Learning-Based Approach to Reduce JPEG Artifacts in Image Matting

9 0.74086171 435 iccv-2013-Unsupervised Domain Adaptation by Domain Invariant Projection

10 0.73859996 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction

11 0.73232412 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

12 0.72703451 181 iccv-2013-Frustratingly Easy NBNN Domain Adaptation

13 0.72507608 123 iccv-2013-Domain Adaptive Classification

14 0.72402823 33 iccv-2013-A Unified Video Segmentation Benchmark: Annotation, Metrics and Analysis

15 0.71769428 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

16 0.71768713 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

17 0.71701413 150 iccv-2013-Exemplar Cut

18 0.71607876 330 iccv-2013-Proportion Priors for Image Sequence Segmentation

19 0.71475518 277 iccv-2013-Multi-channel Correlation Filters

20 0.71453923 173 iccv-2013-Fluttering Pattern Generation Using Modified Legendre Sequence for Coded Exposure Imaging