
(Not) Bounding the True Error (NIPS 2001)


Source: pdf

Author: John Langford, Rich Caruana

Abstract: We present a new approach to bounding the true error rate of a continuous-valued classifier based upon PAC-Bayes bounds. The method first constructs a distribution over classifiers by determining how sensitive each parameter in the model is to noise. The true error rate of the stochastic classifier found with the sensitivity analysis can then be tightly bounded using a PAC-Bayes bound. In this paper we demonstrate the method on artificial neural networks with results of an order of magnitude improvement vs. the best deterministic neural net bounds.
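
For orientation, one standard form of the PAC-Bayes bound used in this line of work is sketched below, following the style of statement in [3] and [5]; the exact constants differ between versions, so this is an illustrative form rather than the precise theorem applied in the paper. With probability at least $1 - \delta$ over an i.i.d. training sample of size $m$, for every posterior distribution $Q$ over classifiers,

\[
  \mathrm{KL}\!\left(\hat{e}_Q \,\middle\|\, e_Q\right)
  \;\le\;
  \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m+1}{\delta}}{m},
\]

where $P$ is a prior fixed before seeing the data, $\hat{e}_Q$ and $e_Q$ are the empirical and true error rates of the stochastic (Gibbs) classifier drawn from $Q$, and $\mathrm{KL}$ denotes Kullback-Leibler divergence (between Bernoulli distributions on the left-hand side, between distributions over classifiers on the right). Tightness of the bound hinges on choosing a $Q$ with small $\mathrm{KL}(Q \,\|\, P)$ that still has low empirical error, which is what the sensitivity analysis over network parameters is used to find.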


Reference text

[1] Peter Bartlett, “The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network”, IEEE Transactions on Information Theory, Vol. 44, No. 2, March 1998.

[2] V. Koltchinskii and D. Panchenko, “Empirical Margin Distributions and Bounding the Generalization Error of Combined Classifiers”, preprint, http://citeseer.nj.nec.com/386416.html

[3] John Langford and Matthias Seeger, “Bounds for Averaging Classifiers.” CMU tech report, 2001.

[4] David MacKay, “Probable Networks and Plausible Predictions - A Review of Practical Bayesian Methods for Supervised Neural Networks”, ??

[5] David McAllester, “Some PAC-Bayes bounds”, COLT 1999.