nips nips2012 nips2012-308 nips2012-308-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: David Lopez-Paz, José M. Hernández-Lobato, Bernhard Schölkopf
Abstract: A new framework based on the theory of copulas is proposed to address semisupervised domain adaptation problems. The presented method factorizes any multivariate density into a product of marginal distributions and bivariate copula functions. Therefore, changes in each of these factors can be detected and corrected to adapt a density model across different learning domains. Importantly, we introduce a novel vine copula model, which allows for this factorization in a non-parametric manner. Experimental results on regression problems with real-world data illustrate the efficacy of the proposed approach when compared to state-of-the-art techniques. 1
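As an illustrative sketch only (standard copula notation, not reproduced from the paper): by Sklar's theorem, a joint density factorizes into its marginal densities and a copula density, and a pair-copula (vine) construction further reduces that copula to bivariate terms. For three variables, one common decomposition is

\[
p(x_1, x_2, x_3) = \Bigl[\textstyle\prod_{i=1}^{3} p_i(x_i)\Bigr]\,
  c_{12}\bigl(F_1(x_1), F_2(x_2)\bigr)\,
  c_{23}\bigl(F_2(x_2), F_3(x_3)\bigr)\,
  c_{13|2}\bigl(F_{1|2}(x_1 \mid x_2),\, F_{3|2}(x_3 \mid x_2)\bigr),
\]

where the marginals p_i and the bivariate copulas c_{jk} appear as separate factors that can, in principle, be re-estimated individually when they change across domains.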
[1] K. Aas, C. Czado, A. Frigessi, and H. Bakken. Pair-copula constructions of multiple dependence. Insurance: Mathematics and Economics, 44(2):182–198, 2006.
[2] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. A theory of learning from different domains. Machine Learning, 79(1):151–175, 2010.
[3] E. Bonilla, K. Chai, and C. Williams. Multi-task Gaussian process prediction. NIPS, 2008.
[4] B. Cao, S. J. Pan, Y. Zhang, D.-Y. Yeung, and Q. Yang. Adaptive transfer learning. AAAI, 2010.
[5] C. Cortes and M. Mohri. Domain adaptation in regression. In Proceedings of the 22nd international conference on Algorithmic learning theory, ALT’11, pages 308–323, Berlin, Heidelberg, 2011. Springer-Verlag.
[6] H. Daumé III, A. Kumar, and A. Saha. Frustratingly easy semi-supervised domain adaptation. Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 53–59, 2010.
[7] H. Daumé III. Frustratingly easy domain adaptation. Association for Computational Linguistics, pages 256–263, 2007.
[8] J. Fermanian and O. Scaillet. The estimation of copulas: Theory and practice. Copulas: From Theory to Application in Finance, pages 35–60, 2007.
[9] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[10] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample-problem. NIPS, pages 513–520, 2007.
[11] J. Huang, A. Smola, A. Gretton, K. Borgwardt, and B. Schölkopf. Correcting sample selection bias by unlabeled data. NIPS, pages 601–608, 2007.
[12] P. Jaworski, F. Durante, W. K. Härdle, and T. Rychlik. Copula Theory and Its Applications. Lecture Notes in Statistics. Springer, 2010.
[13] S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
[14] H. Joe. Families of m-variate distributions with given margins and m(m − 1)/2 bivariate dependence parameters. Distributions with Fixed Marginals and Related Topics, 1996.
[15] T. Kanamori, T. Suzuki, and M. Sugiyama. Statistical analysis of kernel-based least-squares density-ratio estimation. Machine Learning, 86(3):335–367, 2012.
[16] D. Kurowicka and R. Cooke. Uncertainty Analysis with High Dimensional Dependence Modelling. Wiley Series in Probability and Statistics, 1st edition, 2006.
[17] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In COLT, 2009.
[18] R. Nelsen. An Introduction to Copulas. Springer Series in Statistics, 2nd edition, 2006.
[19] S. Nitschke, E. Kidd, and L. Serratrice. First language transfer and long-term structural priming in comprehension. Language and Cognitive Processes, 5(1):94–114, 2010.
[20] R. C. Prim. Shortest connection networks and some generalizations. Bell System Technical Journal, 36:1389–1401, 1957.
[21] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. Chapman and Hall, 1986.
[22] A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8(1):229–231, 1959.