
JMLR 2012-26: Coherence Functions with Applications in Large-Margin Classification Methods


Source: pdf

Author: Zhihua Zhang, Dehua Liu, Guang Dai, Michael I. Jordan

Abstract: Support vector machines (SVMs) naturally embody sparseness due to their use of hinge loss functions. However, SVMs cannot directly estimate conditional class probabilities. In this paper we propose and study a family of coherence functions, which are convex and differentiable, as surrogates of the hinge function. The coherence function is derived using the maximum-entropy principle and is characterized by a temperature parameter. It bridges the hinge function and the logit function in logistic regression. The limit of the coherence function at zero temperature corresponds to the hinge function, and the limit of the minimizer of its expected error is the minimizer of the expected error of the hinge loss. We refer to the use of the coherence function in large-margin classification as “C-learning,” and we present efficient coordinate descent algorithms for the training of regularized C-learning models.

Keywords: large-margin classifiers, hinge functions, logistic functions, coherence functions, C-learning
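
The abstract does not reproduce the coherence function itself, but its description (a convex, differentiable, temperature-indexed surrogate that recovers the hinge loss as the temperature goes to zero and resembles the logistic loss at moderate temperature) matches a softplus-style smoothing of the hinge. The minimal sketch below uses one such candidate form, C_T(z) = T log(1 + exp((1 - z)/T)) on the margin z = y f(x); the name coherence_loss and this exact parameterization are illustrative assumptions, not necessarily the paper's definition.

    import numpy as np

    def coherence_loss(z, T=1.0):
        """Assumed temperature-smoothed hinge: C_T(z) = T*log(1 + exp((1 - z)/T)).

        At T = 1 this is log(1 + e^{1-z}), a shifted logistic loss, which is
        one way to read the abstract's "bridge" between hinge and logit.
        """
        a = 1.0 - z
        # Numerically stable softplus: T*log(1+e^{a/T}) = max(a,0) + T*log(1+e^{-|a|/T}).
        return np.maximum(a, 0.0) + T * np.log1p(np.exp(-np.abs(a) / T))

    def hinge_loss(z):
        return np.maximum(0.0, 1.0 - z)

    z = np.linspace(-3.0, 3.0, 7)  # margins y*f(x); grid includes z = 1
    for T in (1.0, 0.1, 0.01):
        gap = np.max(np.abs(coherence_loss(z, T) - hinge_loss(z)))
        print(f"T = {T:4.2f}   max |C_T - hinge| = {gap:.4f}")

Under this form the gap to the hinge is largest at z = 1, where it equals T log 2, so the printed maximum deviation shrinks linearly in T; this is the zero-temperature limit the abstract describes. Because C_T is smooth in the model parameters, it is also the kind of surrogate that coordinate descent handles directly.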


Reference text

U. Alon, N. Barkai, D. A. Notterman, K. Gish, S. Ybarra, D. Mack, and A. J. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences (PNAS), 96(12):6745–6750, 1999.
P. Bartlett and A. Tewari. Sparseness vs. estimating conditional probabilities: some asymptotic results. Journal of Machine Learning Research, 8:775–790, 2007.
P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
J. M. Bernardo and A. F. M. Smith. Bayesian Theory. John Wiley and Sons, New York, 1994.
O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, Cambridge, U.K., 2000.
Y. Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995.
J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1–22, 2010.
J. H. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28(2):337–374, 2000.
T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, 286:531–537, 1999.
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, New York, 2001.
Y. Lin. Tensor product space ANOVA models. Annals of Statistics, 28(3):734–755, 2000.
Y. Lin. Support vector machines and the Bayes rule in classification. Data Mining and Knowledge Discovery, 6:259–275, 2002.
Y. Lin, G. Wahba, H. Zhang, and Y. Lee. Statistical properties and adaptive tuning of support vector machines. Machine Learning, 48:115–136, 2002.
B. K. Mallick, D. Ghosh, and M. Ghosh. Bayesian classification of tumours by using gene expression data. Journal of the Royal Statistical Society, Series B, 67:219–234, 2005.
J. C. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pages 61–74, Cambridge, MA, 1999. MIT Press.
K. Rose, E. Gurewitz, and G. C. Fox. Statistical mechanics and phase transitions in clustering. Physical Review Letters, 65:945–948, 1990.
X. Shen, G. C. Tseng, X. Zhang, and W. H. Wong. On ψ-learning. Journal of the American Statistical Association, 98:724–734, 2003.
P. Sollich. Bayesian methods for support vector machines: evidence and predictive class probabilities. Machine Learning, 46:21–52, 2002.
I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67–93, 2001.
I. Steinwart. Sparseness of support vector machines. Journal of Machine Learning Research, 4:1071–1105, 2003.
I. Steinwart. Consistency of support vector machines and other regularized kernel classifiers. IEEE Transactions on Information Theory, 51(1):128–142, 2005.
R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267–288, 1996.
V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
G. Wahba. Spline Models for Observational Data. SIAM, Philadelphia, 1990.
J. Wang, X. Shen, and Y. Liu. Probability estimation for large-margin classifiers. Biometrika, 95(1):149–167, 2008.
L. Wang, J. Zhu, and H. Zou. Hybrid huberized support vector machines for microarray classification. In Proceedings of the 24th International Conference on Machine Learning (ICML), pages 983–990, 2007.
T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32:56–85, 2004.
T. Zhang and F. Oles. Text categorization based on regularized linear classification methods. Information Retrieval, 4:5–31, 2001.
H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301–320, 2005.