iccv iccv2013 iccv2013-435 iccv2013-435-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Mahsa Baktashmotlagh, Mehrtash T. Harandi, Brian C. Lovell, Mathieu Salzmann
Abstract: Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.
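For intuition, the sketch below shows how such a projection-based criterion can be evaluated: source and target features are mapped through a linear projection W with orthonormal columns, and the distance between the two empirical distributions is measured in the resulting latent space. A maximum mean discrepancy (MMD) estimate with an RBF kernel is assumed here, in line with the kernel two-sample machinery the paper cites ([6], [17]); the function names, toy data, and fixed kernel bandwidth are illustrative only, and the actual approach would optimize W (e.g., over a matrix manifold as in [1], [13]) rather than merely evaluate the objective for a fixed W.

```python
# Minimal sketch (not the authors' implementation): biased empirical estimate of
# squared MMD between source and target samples after a linear projection W,
# i.e. the kind of quantity a domain-invariant projection would minimize.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian RBF kernel matrix between rows of A and rows of B.
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2_projected(Xs, Xt, W, sigma=1.0):
    # Project both domains into the latent space, then compute the biased
    # estimate of squared Maximum Mean Discrepancy between the projected samples.
    Zs, Zt = Xs @ W, Xt @ W
    return (rbf_kernel(Zs, Zs, sigma).mean()
            - 2 * rbf_kernel(Zs, Zt, sigma).mean()
            + rbf_kernel(Zt, Zt, sigma).mean())

# Toy usage: random source/target features and a random orthonormal projection.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 50))                  # source features (n_s x D)
Xt = rng.normal(loc=0.5, size=(80, 50))          # shifted target features (n_t x D)
W, _ = np.linalg.qr(rng.normal(size=(50, 10)))   # D x d projection, orthonormal columns
print("squared MMD in latent space:", mmd2_projected(Xs, Xt, W))
```

A smaller value of this objective indicates that the projected source and target samples are harder to distinguish, which is the sense in which the learned subspace is domain invariant.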
[1] P. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2008.
[2] H. Bay, T. Tuytelaars, and L. Van Gool. Surf: Speeded up robust features. In ECCV, 2006.
[3] A. Bergamo and L. Torresani. Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. In NIPS, 2010.
[4] J. Blitzer, D. Foster, and S. Kakade. Domain adaptation with coupled subspaces. JMLR, 2011.
[5] J. Blitzer, R. McDonald, and F. Pereira. Domain adaptation with structural correspondence learning. In EMNLP, 2006.
[6] K. Borgwardt, A. Gretton, M. Rasch, H. Kriegel, B. Schölkopf, and A. Smola. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics, 2006.
[7] L. Bruzzone and M. Marconcini. Domain adaptation problems: A dasvm classification technique and a circular validation strategy. TPAMI, 2010.
[8] M. Chen, K. Weinberger, and J. Blitzer. Co-training for domain adaptation. In NIPS, 2011.
[9] H. Daumé III, A. Kumar, and A. Saha. Co-regularization based semi-supervised domain adaptation. In NIPS, 2010.
[10] H. Daumé III and D. Marcu. Domain adaptation for statistical classifiers. JAIR, 2006.
[11] L. Duan, I. Tsang, D. Xu, and T. Chua. Domain adaptation from multiple sources via auxiliary classifiers. In ICML, 2009.
[12] L. Duan, I. Tsang, D. Xu, and S. Maybank. Domain transfer svm for video concept detection. In CVPR, 2009.
[13] A. Edelman, T. Arias, and S. Smith. The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Analysis and Applications, 1998.
[14] B. Gong, K. Grauman, and F. Sha. Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. In ICML, 2013.
[15] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.
[16] R. Gopalan, R. Li, and R. Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011.
[17] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. JMLR, 2012.
[18] A. Gretton, A. Smola, J. Huang, M. Schmittfull, K. Borgwardt, and B. Schölkopf. Covariate shift by kernel mean matching. J. Royal Statistical Society, 2009.
[19] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical report, Calif. Inst. of Tech., 2007.
[20] J. Hoffman, B. Kulis, T. Darrell, and K. Saenko. Discovering latent domains for multisource domain adaptation. In ECCV, 2012.
[21] J. Huang, A. J. Smola, A. Gretton, K. Borgwardt, and B. Schölkopf. Correcting sample selection bias by unlabeled data. In NIPS, 2007.
[22] V. Jain and E. Learned-Miller. Online domain adaptation of a pretrained cascade of classifiers. In CVPR, 2011.
[23] B. Kulis, K. Saenko, and T. Darrell. What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. In CVPR, 2011.
[24] S. Pan, I. Tsang, J. Kwok, and Q. Yang. Domain adaptation via transfer component analysis. TNN, 2011.
[25] A. Ruszczynski. Nonlinear Optimization. Princeton University Press, 2006.
[26] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV, 2010.
[27] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. JMLR, 2002.
[28] D. Xing, W. Dai, G. Xue, and Y. Yu. Bridged refinement for transfer learning. In ECML, 2007.
[29] Q. Yang, J. Pan, and V. Zheng. Estimating location using wi-fi. IEEE Intelligent Systems, 2008.