
105 nips-2001-Kernel Machines and Boolean Functions


Source: pdf

Author: Adam Kowalczyk, Alex J. Smola, Robert C. Williamson

Abstract: We give results about the learnability and required complexity of logical formulae for solving classification problems. These results are obtained by linking propositional logic with kernel machines. In particular, we show that decision trees and disjunctive normal forms (DNF) can be represented with the help of a special kernel, linking regularized risk to separation margin. Subsequently, we derive a number of lower bounds on the required complexity of logic formulae, using properties of algorithms for generating linear estimators, such as perceptron and maximal margin perceptron learning.
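
The abstract's "special kernel" is not specified on this page; as a hedged sketch, the snippet below uses a DNF-style kernel in Sadohara's form, k(x, z) = 2^s - 1 with s the number of agreeing boolean coordinates, paired with a classical kernel perceptron. The kernel variant, all function names, and the toy target (x1 AND x2) OR x3 are illustrative assumptions, not necessarily the exact construction of the paper.

import numpy as np

def dnf_kernel(x, z):
    # DNF-style kernel on boolean vectors (Sadohara's form, an assumption
    # here): k(x, z) = 2**s - 1, where s counts the coordinates on which
    # x and z agree. Each nonempty conjunction satisfied by both inputs
    # contributes one unit, so k is an inner product in the feature space
    # of all conjunctions (DNF terms) over literals and their negations.
    s = int(np.sum(np.asarray(x) == np.asarray(z)))
    return 2.0 ** s - 1.0

def kernel_perceptron(X, y, kernel, epochs=100):
    # Classical kernel perceptron in dual form: f(x) = sum_i alpha_i y_i
    # k(x_i, x). On data separable in the kernel's feature space it
    # converges, with Novikoff's mistake bound applying in that space.
    n = len(X)
    alpha = np.zeros(n)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * np.dot(alpha * y, K[:, i]) <= 0:
                alpha[i] += 1.0
                mistakes += 1
        if mistakes == 0:
            break
    return alpha

# Toy DNF target (x1 AND x2) OR x3 on {0,1}^3; it is linearly separable
# in the conjunction feature space, so the perceptron should converge.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
y = np.array([1 if (a and b) or c else -1 for a, b, c in X])
alpha = kernel_perceptron(X, y, dnf_kernel)
preds = [int(np.sign(np.dot(alpha * y, [dnf_kernel(xi, x) for xi in X])))
         for x in X]
print(preds == list(y))  # expected: True on this toy problem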


reference text

[1] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337–404, 1950.

[2] S. Ben-David and H. U. Simon. Efficient learning of linear perceptrons. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 189–195, Cambridge, MA, 2001. MIT Press.

[3] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.

[4] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth International, Belmont, CA, 1984.

[5] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273–297, 1995.

[6] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, Cambridge, 2000.

[7] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. In J. Shavlik, editor, Machine Learning: Proceedings of the Fifteenth International Conference, San Francisco, CA, 1998. Morgan Kaufmann.

[8] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219–269, 1995.

[9] A. Kowalczyk. Maximal margin perceptron. In A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 61–100, Cambridge, MA, 2000. MIT Press.

[10] A. Kowalczyk and H. Ferrà. Developing higher-order networks with empirically selected units. IEEE Transactions on Neural Networks, 5:698–711, 1994.

[11] A. B. Novikoff. On convergence proofs on perceptrons. Symposium on the Mathematical Theory of Automata, 12:615–622, 1962.

[12] J. R. Quinlan. Simplifying decision trees. International Journal of Man-Machine Studies, 27:221–234, 1987.

[13] J. Shawe-Taylor and N. Cristianini. Margin distribution and soft margin. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 349–358, Cambridge, MA, 2000. MIT Press.