nips nips2008 nips2008-199 nips2008-199-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Mohak Shah
Abstract: We derive risk bounds for randomized classifiers in the sample compression setting, where the classifier specification utilizes two sources of information, namely the compression set and the message string. By extending the recently proposed Occam's Hammer principle to data-dependent settings, we derive point-wise versions of the bounds on stochastic sample-compressed classifiers and also recover the corresponding classical PAC-Bayes bound. We further show how these bounds compare favorably to existing results.
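For context, the classical PAC-Bayes bound that the abstract says is recovered has the following standard form. This is a sketch of the textbook statement from the cited literature (McAllester, 1999; Langford, 2005, in the references below), in that literature's usual notation rather than this paper's; P, Q, G_Q, R_S, R, m, and delta are the conventional quantities there, not symbols taken from this abstract:

\[
\mathrm{kl}\!\left( R_S(G_Q) \,\middle\|\, R(G_Q) \right)
\;\le\;
\frac{ \mathrm{KL}(Q \,\|\, P) \;+\; \ln\frac{m+1}{\delta} }{ m }
\]

The bound holds with probability at least $1-\delta$ over the draw of a sample $S$ of size $m$, simultaneously for all posteriors $Q$. Here $P$ is a prior distribution over classifiers fixed before seeing the data, $G_Q$ is the Gibbs (stochastic) classifier that predicts with a classifier drawn according to $Q$, $R_S(G_Q)$ and $R(G_Q)$ are its empirical and true risks, and $\mathrm{kl}(q\,\|\,p) = q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p}$ is the binary relative entropy. In the sample compression setting the prior cannot be fully data-independent, since classifiers are reconstructed from training examples; handling that data dependence is what the extension of Occam's Hammer described in the abstract addresses.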
References:
Gilles Blanchard and François Fleuret. Occam's hammer. In Proceedings of the 20th Annual Conference on Learning Theory (COLT 2007), volume 4539 of Lecture Notes in Computer Science, pages 112–126. Springer, 2007.
Sally Floyd and Manfred Warmuth. Sample compression, learnability, and the Vapnik-Chervonenkis dimension. Machine Learning, 21(3):269–304, 1995.
John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6:273–306, 2005.
François Laviolette and Mario Marchand. PAC-Bayes risk bounds for stochastic averages and majority votes of sample-compressed classifiers. Journal of Machine Learning Research, 8:1461–1487, 2007.
François Laviolette, Mario Marchand, and Mohak Shah. Margin-sparsity trade-off for the set covering machine. In Proceedings of the 16th European Conference on Machine Learning (ECML 2005), volume 3720 of Lecture Notes in Artificial Intelligence, pages 206–217. Springer, 2005.
Nick Littlestone and Manfred Warmuth. Relating data compression and learnability. Technical report, University of California Santa Cruz, Santa Cruz, CA, 1986.
Mario Marchand and John Shawe-Taylor. The set covering machine. Journal of Machine Learning Research, 3:723–746, 2002.
David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37(3):355–363, 1999.