
242 nips-2008-Translated Learning: Transfer Learning across Different Feature Spaces


Source: pdf

Author: Wenyuan Dai, Yuqiang Chen, Gui-rong Xue, Qiang Yang, Yong Yu

Abstract: This paper investigates a new machine learning strategy called translated learning. Unlike many previous learning tasks, we focus on how to use labeled data from one feature space to enhance classification in an entirely different feature space. For example, we might wish to use labeled text data to help learn a model for classifying image data, when labeled images are difficult to obtain. An important aspect of translated learning is to build a “bridge” that links one feature space (known as the “source space”) to another (known as the “target space”) through a translator, in order to migrate knowledge from source to target. The translated learning solution uses a language model to link the class labels to features in the source space, which are in turn translated to features in the target space. Finally, this chain of linkages is completed by tracing back to the instances in the target space. We show that this path of linkage can be modeled using a Markov chain and risk minimization. Through experiments on text-aided image classification and cross-language classification tasks, we demonstrate that our translated learning framework can greatly outperform many state-of-the-art baseline methods.
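The chain of linkages the abstract describes (class label → source feature → target feature → target instance) can be sketched numerically. The following is a minimal toy illustration, not the paper's actual TLRisk algorithm: all distributions are randomly generated placeholders, and the risk minimization is reduced to picking the class with highest log-likelihood under the chained distribution p(target feature | class) = Σ p(source feature | class) · p(target feature | source feature).

```python
import numpy as np

# Hypothetical toy dimensions: 3 classes, 4 source (e.g. text) features,
# 5 target (e.g. image) features. All distributions below are random
# stand-ins for quantities the real method would estimate from data.
rng = np.random.default_rng(0)

# p(source_feature | class): would come from labeled source-space data.
p_src_given_class = rng.dirichlet(np.ones(4), size=3)    # shape (3, 4), rows sum to 1

# Translator p(target_feature | source_feature): would come from
# co-occurrence data linking the two feature spaces.
p_tgt_given_src = rng.dirichlet(np.ones(5), size=4)      # shape (4, 5), rows sum to 1

# Markov-chain linkage: p(target_feature | class).
p_tgt_given_class = p_src_given_class @ p_tgt_given_src  # shape (3, 5)

# Classify a target instance (a feature-count histogram) by choosing the
# class with the highest log-likelihood under the chained distribution.
instance = np.array([2, 0, 1, 0, 3])                     # target feature counts
log_lik = instance @ np.log(p_tgt_given_class).T         # one score per class
predicted_class = int(np.argmax(log_lik))
```

Because both factors are row-stochastic, their product is again a valid conditional distribution over target features, which is what makes the Markov-chain composition well defined.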


reference text

[1] N. Bel, C. Koster, and M. Villegas. Cross-lingual text categorization. In ECDL, 2003.

[2] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, 1998.

[3] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.

[4] W. Dai, Q. Yang, G.-R. Xue, and Y. Yu. Self-taught clustering. In ICML, 2008.

[5] J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In SIGIR, 2001.

[6] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.

[7] I. Muslea, S. Minton, and C. Knoblock. Active + semi-supervised learning = robust multi-view learning. In ICML, 2002.

[8] K. Nigam and R. Ghani. Analyzing the effectiveness and applicability of co-training. In CIKM, 2000.

[9] R. Raina, A. Battle, H. Lee, B. Packer, and A. Ng. Self-taught learning: transfer learning from unlabeled data. In ICML, 2007.

[10] R. Raina, A. Ng, and D. Koller. Constructing informative priors using transfer learning. In ICML, 2006.

[11] P. Wu and T. Dietterich. Improving SVM accuracy by training on auxiliary data sources. In ICML, 2004.

[12] Y. Yang and J. Pedersen. A comparative study on feature selection in text categorization. In ICML, 1997.

[13] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, University of Wisconsin–Madison, 2007.