
60 nips-2001-Discriminative Direction for Kernel Classifiers


Source: pdf

Author: Polina Golland

Abstract: In many scientific and engineering applications, detecting and understanding differences between two groups of examples can be reduced to the classical problem of training a classifier that labels new examples while making as few mistakes as possible. In the traditional classification setting, the resulting classifier is rarely analyzed in terms of the properties of the input data captured by the discriminative model. However, such analysis is crucial if we want to understand and visualize the detected differences. We propose an approach to interpreting the statistical model in the original feature space that allows us to reason about the model in terms of the relevant changes to the input vectors. For each point in the input space, we define the discriminative direction to be the direction that moves the point towards the other class while introducing as little irrelevant change as possible with respect to the classifier function. We derive the discriminative direction for kernel-based classifiers, demonstrate the technique on several examples, and briefly discuss its use in statistical shape analysis, the application that originally motivated this work.
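The abstract's definition can be made concrete for a common special case. A minimal sketch, assuming an already-trained RBF-kernel SVM with decision function f(x) = Σ_i α_i y_i K(x_i, x) + b: the direction in input space that changes the classifier output fastest at x is the gradient ∇f(x), and following it against the sign of f(x) moves the point toward the other class. All names here (`discriminative_direction`, `alpha_y`, etc.) are illustrative, not from the paper, and this gradient recipe is only the simplest instance of the general derivation the paper gives.

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """Gaussian (RBF) kernel values K(b_i, a) between point a and rows of b."""
    return np.exp(-gamma * np.sum((b - a) ** 2, axis=1))

def decision_function(x, sv, alpha_y, bias, gamma):
    """Kernel expansion f(x) = sum_i alpha_i y_i K(x_i, x) + b."""
    return alpha_y @ rbf_kernel(x, sv, gamma) + bias

def discriminative_direction(x, sv, alpha_y, bias, gamma):
    """Unit vector along -sign(f(x)) * grad f(x): the input-space direction
    that moves x toward the opposite class fastest, as measured by the
    classifier output."""
    k = rbf_kernel(x, sv, gamma)
    # grad f(x) = sum_i alpha_i y_i * (-2 gamma) * (x - x_i) * K(x_i, x)
    grad = -2.0 * gamma * ((x - sv).T * (alpha_y * k)).sum(axis=1)
    d = -np.sign(decision_function(x, sv, alpha_y, bias, gamma)) * grad
    return d / np.linalg.norm(d)

# Toy usage: one positive support vector at the origin, one negative at (2, 0).
sv = np.array([[0.0, 0.0], [2.0, 0.0]])
alpha_y = np.array([1.0, -1.0])        # alpha_i * y_i
x = np.array([0.5, 0.0])               # classified positive
d = discriminative_direction(x, sv, alpha_y, 0.0, 1.0)
# d points along +x, toward the negative support vector.
```

For a linear kernel this reduces to the (sign-flipped) normal of the separating hyperplane, which matches the intuition that the discriminative direction crosses the decision boundary with no irrelevant change.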


reference text

[1] S. Amari and S. Wu. Improving Support Vector Machines by Modifying Kernel Functions. Neural Networks, 783-789, 1999.

[2] S. Amari. Natural Gradient Works Efficiently in Learning. Neural Comp., 10:251-276, 1998.

[3] C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.

[4] C. J. C. Burges. Geometry and Invariance in Kernel Based Methods. In Adv. in Kernel Methods: Support Vector Learning, Eds. Schölkopf, Burges and Smola, MIT Press, 89-116, 1999.

[5] P. Golland et al. Small Sample Size Learning for Shape Analysis of Anatomical Structures. In Proc. of MICCAI’2000, LNCS 1935:72-82, 2000.

[6] B. Schölkopf et al. Input Space vs. Feature Space in Kernel-Based Methods. IEEE Trans. on Neural Networks, 10(5):1000-1017, 1999.

[7] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Comp., 10:1299-1319, 1998.

[8] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.

[9] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.