
211 nips-2013-Non-Linear Domain Adaptation with Boosting


Source: pdf

Author: Carlos J. Becker, Christos M. Christoudias, Pascal Fua

Abstract: A common assumption in machine vision is that the training and test samples are drawn from the same distribution. However, there are many problems when this assumption is grossly violated, as in bio-medical applications where different acquisitions can generate drastic variations in the appearance of the data due to changing experimental conditions. This problem is accentuated with 3D data, for which annotation is very time-consuming, limiting the amount of data that can be labeled in new acquisitions for training. In this paper we present a multitask learning algorithm for domain adaptation based on boosting. Unlike previous approaches that learn task-specific decision boundaries, our method learns a single decision boundary in a shared feature space, common to all tasks. We use the boosting-trick to learn a non-linear mapping of the observations in each task, with no need for specific a-priori knowledge of its global analytical form. This yields a more parameter-free domain adaptation approach that successfully leverages learning on new tasks where labeled data is scarce. We evaluate our approach on two challenging bio-medical datasets and achieve a significant improvement over the state of the art.
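To make the boosting-trick idea in the abstract concrete, the following is a minimal sketch, not the authors' algorithm: it assumes AdaBoost-style re-weighting with decision stumps as the shared weak learners and a simple pooling of labeled source and target samples (the paper's actual loss and task weighting may differ). Function and parameter names such as boosted_shared_space and n_rounds are hypothetical.

    # Minimal sketch (assumption-laden): boosting-trick for a shared feature space.
    # Each round adds one weak learner phi_t fitted on the pooled source/target
    # samples, so the final classifier is a single linear boundary in the space
    # (phi_1(x), ..., phi_T(x)) common to both domains.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def boosted_shared_space(X_src, y_src, X_tgt, y_tgt, n_rounds=50):
        # Pool labeled samples from both domains; labels assumed in {-1, +1}.
        X = np.vstack([X_src, X_tgt])
        y = np.concatenate([y_src, y_tgt])
        w = np.full(len(y), 1.0 / len(y))          # uniform sample weights
        learners, alphas = [], []
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = np.clip(np.sum(w * (pred != y)) / np.sum(w), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)  # AdaBoost learner weight
            w *= np.exp(-alpha * y * pred)         # re-weight misclassified samples
            w /= w.sum()
            learners.append(stump)
            alphas.append(alpha)
        return learners, np.array(alphas)

    def predict(learners, alphas, X):
        # The stacked stump outputs act as the shared non-linear mapping;
        # the alpha vector is a single linear boundary common to all tasks.
        phi = np.array([h.predict(X) for h in learners])
        return np.sign(alphas @ phi)

In this toy version, domain adaptation amounts to letting scarce target labels influence which weak learners (features) are selected alongside the abundant source labels; the paper's method builds on the same shared-feature-space principle but with its own weighting scheme.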


reference text

[1] Jiang, J.: A Literature Survey on Domain Adaptation of Statistical Classifiers. (2008)

[2] Caruana, R.: Multitask Learning. Machine Learning 28 (1997)

[3] Evgeniou, T., Micchelli, C., Pontil, M.: Learning Multiple Tasks with Kernel Methods. JMLR 6 (2005)

[4] Bach, F.R., Jordan, M.I.: Kernel Independent Component Analysis. JMLR 3 (2002) 1–48

[5] Ek, C.H., Torr, P.H., Lawrence, N.D.: Ambiguity Modelling in Latent Spaces. In: MLMI. (2008)

[6] Salzmann, M., Ek, C.H., Urtasun, R., Darrell, T.: Factorized Orthogonal Latent Spaces. In: AISTATS. (2010)

[7] Memisevic, R., Sigal, L., Fleet, D.J.: Shared Kernel Information Embedding for Discriminative Inference. PAMI (April 2012) 778–790

[8] Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer (2001)

[9] Zheng, Z., Zha, H., Zhang, T., Chapelle, O., Sun, G.: A General Boosting Method and Its Application to Learning Ranking Functions for Web Search. In: NIPS. (2007)

[10] Chapelle, O., Shivaswamy, P., Vadrevu, S., Weinberger, K., Zhang, Y., Tseng, B.: Boosted Multi-Task Learning. Machine Learning (2010)

[11] Turetken, E., Benmansour, F., Fua, P.: Automated Reconstruction of Tree Structures Using Path Classifiers and Mixed Integer Programming. In: CVPR. (June 2012)

[12] Baxter, J.: A Model of Inductive Bias Learning. Journal of Artificial Intelligence Research (2000)

[13] Ando, R.K., Zhang, T.: A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data. JMLR 6 (2005) 1817–1853

[14] Daumé, H.: Bayesian Multitask Learning with Latent Hierarchies. In: UAI. (2009)

[15] Kumar, A., Daumé, H.: Learning Task Grouping and Overlap in Multi-task Learning. In: ICML. (2012)

[16] Xue, Y., Liao, X., Carin, L., Krishnapuram, B.: Multi-task Learning for Classification with Dirichlet Process Priors. JMLR 8 (2007)

[17] Jacob, L., Bach, F., Vert, J.P.: Clustered Multi-task Learning: a Convex Formulation. In: NIPS. (2008)

[18] Saenko, K., Kulis, B., Fritz, M., Darrell, T.: Adapting Visual Category Models to New Domains. In: ECCV. (2010)

[19] Shon, A.P., Grochow, K., Hertzmann, A., Rao, R.P.N.: Learning Shared Latent Structure for Image Synthesis and Robotic Imitation. In: NIPS. (2006) 1233–1240

[20] Kulis, B., Saenko, K., Darrell, T.: What You Saw is Not What You Get: Domain Adaptation Using Asymmetric Kernel Transforms. In: CVPR. (2011)

[21] Gopalan, R., Li, R., Chellappa, R.: Domain Adaptation for Object Recognition: An Unsupervised Approach. In: ICCV. (2011)

[22] Rosset, S., Zhu, J., Hastie, T.: Boosting as a Regularized Path to a Maximum Margin Classifier. JMLR (2004)

[23] Caruana, R., Niculescu-Mizil, A.: An Empirical Comparison of Supervised Learning Algorithms. In: ICML. (2006)

[24] Viola, P., Jones, M.: Rapid Object Detection Using a Boosted Cascade of Simple Features. In: CVPR. (2001)

[25] Ali, K., Fleuret, F., Hasler, D., Fua, P.: A Real-Time Deformable Detector. PAMI 34(2) (February 2012) 225–239

[26] Freund, Y., Schapire, R.: A Short Introduction to Boosting. Journal of Japanese Society for Artificial Intelligence 14(5) (1999) 771–780

[27] Becker, C., Ali, K., Knott, G., Fua, P.: Learning Context Cues for Synapse Segmentation. TMI (2013) In Press.