
NIPS 2003, Paper 121: Log-Linear Models for Label Ranking



Author: Ofer Dekel, Yoram Singer, Christopher D. Manning

Abstract: Label ranking is the task of inferring a total order over a predefined set of labels for each given instance. We present a general framework for batch learning of label ranking functions from supervised data. We assume that each instance in the training data is associated with a list of preferences over the label set; however, we do not assume that this list is either complete or consistent. This enables us to accommodate a variety of ranking problems. In contrast to the general form of the supervision, our goal is to learn a ranking function that induces a total order over the entire set of labels. Special cases of our setting are multilabel categorization and hierarchical classification. We present a general boosting-based learning algorithm for the label ranking problem and prove a lower bound on the progress of each boosting iteration. The applicability of our approach is demonstrated with a set of experiments on a large-scale text corpus.
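The setting the abstract describes can be made concrete with a toy sketch: each label gets its own weight vector, an instance is ranked by sorting labels by score, and supervision is a partial list of pairwise preferences. This is an illustration only, not the paper's boosting algorithm; the logistic pairwise loss, the gradient step, and all names here are assumptions introduced for the example.

```python
import math
import random

random.seed(0)
n_features, n_labels = 4, 3

# One weight vector per label (a hypothetical toy log-linear model,
# not the paper's boosting-based learner).
W = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n_labels)]
x = [random.gauss(0, 1) for _ in range(n_features)]  # a single instance

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def rank_labels(W, x):
    """Induce a total order over all labels: sort by decreasing score."""
    return sorted(range(len(W)), key=lambda y: -score(W[y], x))

# Supervision: a possibly partial, possibly inconsistent preference list.
# (a, b) means "label a should be ranked above label b". Note label 1's
# position is constrained twice, but 0 vs. 2 is left unspecified.
prefs = [(0, 1), (2, 1)]

# Gradient descent on the logistic surrogate  log(1 + exp(-(s_a - s_b)))
# summed over the given preference pairs.
for _ in range(200):
    for a, b in prefs:
        m = score(W[a], x) - score(W[b], x)   # margin s_a - s_b
        g = -1.0 / (1.0 + math.exp(m))        # d loss / d margin
        for i in range(n_features):
            W[a][i] -= 0.1 * g * x[i]
            W[b][i] += 0.1 * g * x[i]

order = rank_labels(W, x)
```

After training, `order` is a total order over all three labels that is consistent with both supplied preferences, even though the supervision never compared labels 0 and 2 directly.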


References

[1] M. Collins and N. Duffy. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proceedings of the 40th Annual Meeting of the ACL, 2002.

[2] M. Collins, R.E. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 47(2/3):253–285, 2002.

[3] K. Crammer and Y. Singer. Pranking with ranking. NIPS 14, 2001.

[4] K. Crammer and Y. Singer. A new family of online algorithms for category ranking. Journal of Machine Learning Research, 3:1025–1058, 2003.

[5] O. Dekel, S. Shalev-Shwartz, and Y. Singer. Smooth epsilon-insensitive regression by loss symmetrization. COLT 16, 2003.

[6] A. Elisseeff and J. Weston. A kernel method for multi-labeled classification. NIPS 14, 2001.

[7] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. In Proceedings of the Fifteenth International Conference on Machine Learning, 1998.

[8] G. Lebanon and J. Lafferty. Boosting and maximum likelihood for exponential models. NIPS 14, 2001.

[9] G. Lebanon and J. Lafferty. Conditional models on the ranking poset. NIPS 15, 2002.

[10] R. E. Schapire and Y. Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135–168, 2000.

[11] A. Shashua and A. Levin. Ranking with large margin principle. NIPS 15, 2002.

[12] K. Toutanova and C. D. Manning. Feature selection for a rich HPSG grammar using decision trees. In Proceedings of the Sixth Conference on Natural Language Learning (CoNLL), 2002.

[13] The Penn Treebank Project. http://www.cis.upenn.edu/~treebank/.

[14] Reuters Corpus Vol. 1. http://about.reuters.com/researchandstandards/corpus/.