nips nips2000 nips2000-18 knowledge-graph by maker-knowledge-mining

18 nips-2000-Active Support Vector Machine Classification


Source: pdf

Author: Olvi L. Mangasarian, David R. Musicant

Abstract: An active set strategy is applied to the dual of a simple reformulation of the standard quadratic program of a linear support vector machine. This application generates a fast new dual algorithm that consists of solving a finite number of linear equations, with a typically large dimensionality equal to the number of points to be classified. However, by making novel use of the Sherman-Morrison-Woodbury formula, a much smaller matrix of the order of the original input space is inverted at each step. Thus, a problem with a 32-dimensional input space and 7 million points required inverting positive definite symmetric matrices of size 33 x 33 with a total running time of 96 minutes on a 400 MHz Pentium II. The algorithm requires no specialized quadratic or linear programming code, but merely a linear equation solver which is publicly available.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 An active set strategy is applied to the dual of a simple reformulation of the standard quadratic program of a linear support vector machine. [sent-9, score-0.952]

2 This application generates a fast new dual algorithm that consists of solving a finite number of linear equations, with a typically large dimensionality equal to the number of points to be classified. [sent-10, score-0.555]

3 However, by making novel use of the Sherman-Morrison-Woodbury formula, a much smaller matrix of the order of the original input space is inverted at each step. [sent-11, score-0.224]

4 Thus, a problem with a 32-dimensional input space and 7 million points required inverting positive definite symmetric matrices of size 33 x 33 with a total running time of 96 minutes on a 400 MHz Pentium II. [sent-12, score-0.415]

5 The algorithm requires no specialized quadratic or linear programming code, but merely a linear equation solver which is publicly available. [sent-13, score-0.428]

6 1 Introduction Support vector machines (SVMs) [23, 5, 14, 12] are powerful tools for data classification. [sent-14, score-0.049]

7 Classification is achieved by a linear or nonlinear separating surface in the input space of the dataset. [sent-15, score-0.341]

8 In this work we propose a very fast simple algorithm, based on an active set strategy for solving quadratic programs with bounds [18]. [sent-16, score-0.46]

9 The algorithm is capable of accurately solving problems with millions of points and requires nothing more complicated than a commonly available linear equation solver [17, 1, 6] for a typically small (100) dimensional input space of the problem. [sent-17, score-0.554]

10 Key to our approach are the following two changes to the standard linear SVM: 1. [sent-18, score-0.088]

11 Maximize the margin (distance) between the parallel separating planes with respect to both orientation (w) as well as location relative to the origin (γ). [sent-19, score-0.842]

12 Such an approach was also successfully utilized in the successive overrelaxation (SOR) approach of [15] as well as the smooth support vector machine (SSVM) approach of [12]. [sent-21, score-0.217]

13 The error in the soft margin (y) is minimized using the 2-norm squared instead of the conventional 1-norm. [sent-23, score-0.206]

14 Such an approach has also been used successfully in generating virtual support vectors [4]. [sent-25, score-0.109]

15 These simple, but fundamental changes, lead to a considerably simpler positive definite dual problem with nonnegativity constraints only. [sent-26, score-0.569]

16 In Section 2 of the paper we begin with the standard SVM formulation and its dual and then give our formulation and its simpler dual. [sent-28, score-0.619]

17 We corroborate with solid computational evidence that our simpler formulation does not compromise on generalization ability as evidenced by numerical tests in Section 4 on 6 public datasets. [sent-29, score-0.205]

18 Section 3 gives our active support vector machine (ASVM) Algorithm 3.1. [sent-31, score-0.303]

19 Algorithm 3.1 consists of solving a system of linear equations in m dual variables with a positive definite matrix. [sent-32, score-0.672]

20 By invoking the Sherman-Morrison-Woodbury (SMW) formula (1) we need only invert an (n + 1) x (n + 1) matrix where n is the dimensionality of the input space. [sent-33, score-0.351]

21 This is a key feature of our approach that allows us to solve problems with millions of points by merely inverting much smaller matrices of the order of n. [sent-34, score-0.507]

22 In concurrent work [8] Ferris and Munson also use the SMW formula but in conjunction with an interior point approach to solve massive problems based on our formulation (8) as well as the conventional formulation (6). [sent-35, score-0.449]

23 Burges [3] has also used an active set method, but applied to the standard SVM formulation (2) instead of (7) as we do here. [sent-36, score-0.335]

24 Both this work and Burges' appeal, in different ways, to the active set computational strategy of Moré and Toraldo [18]. [sent-37, score-0.245]

25 We note that an active set computational strategy bears no relation to active learning. [sent-38, score-0.466]

26 We now describe our notation and give some background material. [sent-40, score-0.067]

27 All vectors will be column vectors unless transposed to a row vector by a prime '. [sent-41, score-0.049]

28 For a vector x ∈ R^n, x_+ denotes the vector in R^n with all of its negative components set to zero. [sent-42, score-0.151]

29 The notation A ∈ R^(m x n) will signify a real m x n matrix. [sent-43, score-0.067]

30 For such a matrix A' will denote the transpose of A and Ai will denote the i-th row of A. [sent-44, score-0.1]

31 A vector of ones or zeroes in a real space of arbitrary dimension will be denoted by e or 0, respectively. [sent-45, score-0.095]

32 The identity matrix of arbitrary dimension will be denoted by I. [sent-46, score-0.146]

33 For B ⊆ {1, . . . , m}, u_B denotes u_{i ∈ B}, Q_B denotes Q_{i ∈ B} and Q_BB denotes a principal submatrix of Q with rows i ∈ B and columns j ∈ B. [sent-53, score-0.159]

34 The notation argmin_{x ∈ S} f(x) denotes the set of minimizers in the set S of the real-valued function f defined on S. [sent-54, score-0.12]

35 The 2-norm of a matrix Q will be denoted by ||Q||_2. [sent-56, score-0.146]

36 A separating plane, with respect to two given point sets A and B in R^n, is a plane that attempts to separate R^n into two halfspaces such that each open halfspace contains points mostly of A or B. [sent-57, score-0.455]

37 A special case of the Sherman-Morrison-Woodbury (SMW) formula [9] will be utilized: (I/v + HH')^{-1} = v(I - H(I/v + H'H)^{-1}H'), (1) where v is a positive number and H is an arbitrary m x k matrix. [sent-58, score-0.172]

38 This formula enables us to invert a large m x m matrix by merely inverting a smaller k x k matrix. [sent-59, score-0.597]
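As a sanity check, the special case (1) can be verified numerically; the NumPy sketch below (not part of the paper; the sizes and data are made up) inverts only the small k x k matrix and compares the result against the direct m x m inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, v = 200, 5, 10.0               # many points (m), few features (k)
H = rng.standard_normal((m, k))

# Direct inverse of the large m x m matrix (for verification only).
direct = np.linalg.inv(np.eye(m) / v + H @ H.T)

# SMW special case (1): only a k x k matrix is inverted.
small = np.linalg.inv(np.eye(k) / v + H.T @ H)
smw = v * (np.eye(m) - H @ small @ H.T)

print(np.allclose(direct, smw))      # the two inverses agree
```

For ASVM, H = D[A -e] has k = n + 1 columns, which is what makes the (n + 1) x (n + 1) inversion at each step possible.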

39 For this problem the standard SVM with a linear kernel [23, 5] is given by the following quadratic program with parameter v > 0: [sent-61, score-0.211]

40 min_{(w,γ,y) ∈ R^(n+1+m)} v e'y + (1/2) w'w s.t. D(Aw - eγ) + y >= e, y >= 0. (2) [sent-64, score-0.056]

41 Figure 1: The bounding planes (3) with a soft (i.e. with some errors) margin, and the plane (4) approximately separating A+ from A-; margin = 2/||w||_2. [sent-65, score-0.512]

43 Here w is the normal to the bounding planes x'w = γ ± 1 (3), and γ determines their location relative to the origin (Figure 1). [sent-68, score-0.193]

44 The plane x'w = γ + 1 bounds the A+ points, possibly with error, and the plane x'w = γ - 1 bounds the A- points, also possibly with some error. [sent-69, score-0.366]

45 The separating surface is the plane x'w = γ (4), midway between the bounding planes (3). [sent-70, score-0.746]

46 The quadratic term in (2) is twice the reciprocal of the square of the 2-norm distance 2/||w||_2 between the two bounding planes of (3) (see Figure 1). [sent-71, score-0.535]

47 If the classes are linearly inseparable, as depicted in Figure 1, then the two planes bound the two classes with a "soft margin". [sent-73, score-0.443]

48 That is, they bound each set approximately with some error determined by the nonnegative error variable y: A_i w + y_i >= γ + 1, for D_ii = 1, and A_i w - y_i <= γ - 1, for D_ii = -1. (5) [sent-74, score-0.086]

49 Traditionally the 1-norm of the error variable y is minimized parametrically with weight v in (2), resulting in an approximate separation as depicted in Figure 1. [sent-75, score-0.142]
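To make (5) concrete, here is a tiny hypothetical example (the data, labels, and candidate plane below are made up, not from the paper) that computes the soft-margin error y componentwise; y_i is positive exactly for the points that violate their bounding plane:

```python
import numpy as np

# Made-up toy data: rows of A are points, d holds the +1/-1 labels
# that form the diagonal of D.
A = np.array([[2.0, 2.0], [1.5, 0.0], [0.0, 0.0], [-1.0, 1.0]])
d = np.array([1.0, 1.0, -1.0, -1.0])
w, gamma = np.array([1.0, 1.0]), 2.0   # a candidate plane x'w = gamma

# Soft-margin error of (5): y_i > 0 iff point i violates its bounding
# plane x'w = gamma + 1 (for d_i = +1) or x'w = gamma - 1 (for d_i = -1).
y = np.maximum(0.0, 1.0 - d * (A @ w - gamma))
print(y)   # only the second point fails to reach its bounding plane
```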

50 The dual to the standard quadratic linear SVM (2) [13, 22, 14, 7] is the following: [sent-76, score-0.471]

51 min_{u ∈ R^m, 0 <= u <= ve, e'Du = 0} (1/2) u'DAA'Du - e'u. (6) The variables (w, γ) of the primal problem which determine the separating surface (4) can be obtained from the solution of the dual problem above [15, Eqns. [sent-80, score-0.744]

52 We note immediately that the matrix DAA'D appearing in the dual objective function (6) is not positive definite in general because typically m >> n. [sent-82, score-0.709]

53 Also, there is an equality constraint present, in addition to bound constraints, which for large problems necessitates special computational procedures such as SMO [21]. [sent-83, score-0.151]

54 Furthermore, a one-dimensional optimization problem [15] must be solved in order to determine the locator γ of the separating surface (4). [sent-84, score-0.339]

55 In order to overcome all these difficulties as well as that of dealing with the necessity of having to essentially invert a very large matrix of the order of m x m , we propose the following simple but critical modification of the standard SVM formulation (2). [sent-85, score-0.379]

56 We change ||y||_1 to ||y||_2^2, which makes the constraint y >= 0 redundant. [sent-86, score-0.056]

57 This in effect maximizes the margin between the parallel separating planes (3) with respect to both w and γ [15], that is, with respect to both the orientation and the location of the planes, rather than just with respect to w, which merely determines the orientation of the plane. [sent-88, score-1.058]

58 This leads to the following reformulation of the SVM: min_{(w,γ,y) ∈ R^(n+1+m)} v y'y/2 + (1/2)(w'w + γ^2) s.t. D(Aw - eγ) + y >= e. (7) [sent-89, score-0.075]

59 The dual of this problem is [13]: min_{0 <= u ∈ R^m} (1/2) u'(I/v + D(AA' + ee')D)u - e'u. (8) [sent-92, score-0.301]

60 The variables (w, γ) of the primal problem which determine the separating surface (4) are recovered directly from the solution of the dual (8) above by the relations: w = A'Du, y = u/v, γ = -e'Du. (9) [sent-93, score-0.744]
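A minimal end-to-end sketch of (8)-(9) on made-up toy data (not from the paper): a plain projected-gradient loop serves as a generic stand-in for the paper's active set solver, and the primal plane is then recovered via the relations (9):

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up, well-separated toy data.
A = np.vstack([rng.normal(2.0, 0.5, (20, 2)), rng.normal(-2.0, 0.5, (20, 2))])
d = np.concatenate([np.ones(20), -np.ones(20)])
D = np.diag(d)
m, v = len(d), 1.0
e = np.ones(m)

# Dual (8): min_{u >= 0} 0.5 u'(I/v + D(AA' + ee')D)u - e'u.
Q = np.eye(m) / v + D @ (A @ A.T + np.outer(e, e)) @ D

# Plain projected gradient descent -- a generic stand-in for the paper's
# active set method, used here only to obtain a dual solution u.
u = np.zeros(m)
step = 1.0 / np.linalg.norm(Q, 2)
for _ in range(5000):
    u = np.maximum(0.0, u - step * (Q @ u - e))

# Recover the primal variables via (9).
w = A.T @ D @ u
y = u / v
gamma = -e @ D @ u
print((np.sign(A @ w - gamma) == d).all())   # every point on its correct side
```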

61 We immediately note that the matrix appearing in the dual objective function is positive definite and that there is no equality constraint and no upper bound on the dual variable u. [sent-94, score-1.201]

62 The only constraint present is a simple nonnegativity one. [sent-95, score-0.119]

63 These facts lead us to our simple finite active set algorithm which requires nothing more sophisticated than inverting an (n + 1) x (n + 1) matrix at each iteration in order to solve the dual problem (8). [sent-96, score-0.842]

64 3 ASVM (Active Support Vector Machine) Algorithm The algorithm consists of determining a partition of the dual variable u into nonbasic and basic variables. [sent-97, score-0.528]

65 The nonbasic variables are those which are set to zero. [sent-98, score-0.129]

66 The values of the basic variables are determined by finding the gradient of the objective function of (8) with respect to these variables, setting this gradient equal to zero, and solving the resulting linear equations for the basic variables. [sent-99, score-0.413]

67 If any basic variable takes on a negative value after solving the linear equations, it is set to zero and becomes nonbasic. [sent-100, score-0.216]
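The basic/nonbasic partitioning step just described can be sketched on a tiny problem; the matrix Q below is a small hand-made positive definite stand-in (not one generated by an SVM problem), and the loop omits the safeguards the paper adds for finite termination:

```python
import numpy as np

# Minimal active set sweep for min_{u >= 0} 0.5 u'Qu - e'u with a small,
# hand-made positive definite Q (illustrative only).
Q = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])
e = np.ones(3)

basic = np.ones(3, dtype=bool)        # start with every variable basic
u = np.zeros(3)
for _ in range(3):
    B = np.flatnonzero(basic)
    u[:] = 0.0
    # Zero the gradient Q_BB u_B - e_B over the current basic set.
    u[B] = np.linalg.solve(Q[np.ix_(B, B)], e[B])
    neg = u < 0
    if not neg.any():
        break                         # all basic variables nonnegative: done
    basic &= ~neg                     # negative variables become nonbasic
```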

68 In order to make the algorithm converge and terminate, a few additional safeguards need to be put in place that allow us to invoke the Moré-Toraldo finite termination result [18]. [sent-102, score-0.088]

69 The other key feature of the algorithm is a computational one and makes use of the SMW formula. [sent-103, score-0.098]

70 This feature allows us to invert an (n + 1) x (n + 1) matrix at each step instead of a much bigger matrix of order m x m. [sent-104, score-0.365]

71 Before stating our algorithm we define two matrices to simplify notation as follows: H = D[A -e], Q = I/v + HH'. (10) [sent-105, score-0.197]

72 With these definitions the dual problem (8) becomes min_{0 <= u ∈ R^m} (1/2) u'Qu - e'u. (11) [sent-106, score-0.301]

73 It will be understood that within the ASVM Algorithm, Q^{-1} will always be evaluated using the SMW formula and hence only an (n+1) x (n+1) matrix is inverted. [sent-108, score-0.224]

74 Note that commented (%) parts of the algorithm are not needed in general and were rarely used in our numerical results presented in Section 4. [sent-110, score-0.092]

75 The essence of the algorithm is displayed in the two boxes below. [sent-111, score-0.121]

76 Stop if u^{i+1} is the global solution, that is if 0 <= u^{i+1}. [sent-125, score-0.056]

77 If Q_{B_{i+1} B_{i+1}} u^{i+1}_{B_{i+1}} - e_{B_{i+1}} >= 0, then u^{i+1} is a global solution on the face of active constraints: u_{N_i} = 0. [sent-131, score-0.298]

78 (4a) % Move in the direction of the global minimum on the face of active constraints: set u_{B_i} := Q^{-1}_{B_i B_i} e_{B_i}. [sent-134, score-0.115]

79 If u^{i+1}_j = 0 for some j ∈ B_i, set i := i + 1 and go to (1). [sent-142, score-0.082]

80 Otherwise u^{i+1} is a global minimum on the face u_{N_i} = 0, and go to (4b). [sent-143, score-0.197]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('planes', 0.351), ('dual', 0.301), ('asvm', 0.219), ('smw', 0.219), ('separating', 0.198), ('active', 0.183), ('inverting', 0.151), ('svm', 0.15), ('plane', 0.132), ('uer', 0.131), ('uni', 0.131), ('invert', 0.127), ('formula', 0.124), ('formulation', 0.112), ('definite', 0.103), ('millions', 0.103), ('bounding', 0.102), ('matrix', 0.1), ('surface', 0.095), ('merely', 0.095), ('margin', 0.091), ('ern', 0.088), ('ilv', 0.088), ('lgorithm', 0.088), ('nki', 0.088), ('nonbasic', 0.088), ('toraldo', 0.088), ('uki', 0.088), ('quadratic', 0.082), ('solving', 0.082), ('go', 0.082), ('ui', 0.078), ('reformulation', 0.075), ('points', 0.071), ('support', 0.071), ('dii', 0.068), ('essence', 0.068), ('notation', 0.067), ('rn', 0.066), ('massive', 0.063), ('nonnegativity', 0.063), ('primal', 0.063), ('solver', 0.063), ('strategy', 0.062), ('utilized', 0.059), ('face', 0.059), ('soft', 0.059), ('bi', 0.057), ('orientation', 0.057), ('rm', 0.056), ('appearing', 0.056), ('minimized', 0.056), ('mm', 0.056), ('ordinary', 0.056), ('constraint', 0.056), ('global', 0.056), ('respect', 0.054), ('simpler', 0.054), ('iterate', 0.054), ('nothing', 0.054), ('aw', 0.054), ('immediately', 0.054), ('ih', 0.054), ('denotes', 0.053), ('algorithm', 0.053), ('bounds', 0.051), ('burges', 0.051), ('equality', 0.049), ('vector', 0.049), ('equations', 0.049), ('positive', 0.048), ('linear', 0.048), ('street', 0.047), ('objective', 0.047), ('location', 0.047), ('bound', 0.046), ('determine', 0.046), ('basic', 0.046), ('denoted', 0.046), ('depicted', 0.046), ('key', 0.045), ('college', 0.044), ('origin', 0.044), ('matrices', 0.042), ('accurately', 0.041), ('program', 0.041), ('variables', 0.041), ('variable', 0.04), ('standard', 0.04), ('equation', 0.039), ('numerical', 0.039), ('successfully', 0.038), ('mill', 0.038), ('appeal', 0.038), ('bears', 0.038), ('bigger', 0.038), ('concurrent', 0.038), ('const', 0.038), ('df', 0.038), ('ilyll', 0.038)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 18 nips-2000-Active Support Vector Machine Classification

Author: Olvi L. Mangasarian, David R. Musicant

Abstract: An active set strategy is applied to the dual of a simple reformulation of the standard quadratic program of a linear support vector machine. This application generates a fast new dual algorithm that consists of solving a finite number of linear equations, with a typically large dimensionality equal to the number of points to be classified. However, by making novel use of the Sherman-Morrison-Woodbury formula, a much smaller matrix of the order of the original input space is inverted at each step. Thus, a problem with a 32-dimensional input space and 7 million points required inverting positive definite symmetric matrices of size 33 x 33 with a total running time of 96 minutes on a 400 MHz Pentium II. The algorithm requires no specialized quadratic or linear programming code, but merely a linear equation solver which is publicly available.

2 0.1261839 58 nips-2000-From Margin to Sparsity

Author: Thore Graepel, Ralf Herbrich, Robert C. Williamson

Abstract: We present an improvement of Novikoff's perceptron convergence theorem. Reinterpreting this mistake bound as a margin dependent sparsity guarantee allows us to give a PAC-style generalisation error bound for the classifier learned by the perceptron learning algorithm. The bound value crucially depends on the margin a support vector machine would achieve on the same data set using the same kernel. Ironically, the bound yields better guarantees than are currently available for the support vector solution itself. 1

3 0.11255595 70 nips-2000-Incremental and Decremental Support Vector Machine Learning

Author: Gert Cauwenberghs, Tomaso Poggio

Abstract: An on-line recursive algorithm for training support vector machines, one vector at a time, is presented. Adiabatic increments retain the KuhnTucker conditions on all previously seen training data, in a number of steps each computed analytically. The incremental procedure is reversible, and decremental

4 0.10484184 9 nips-2000-A PAC-Bayesian Margin Bound for Linear Classifiers: Why SVMs work

Author: Ralf Herbrich, Thore Graepel

Abstract: We present a bound on the generalisation error of linear classifiers in terms of a refined margin quantity on the training set. The result is obtained in a PAC-Bayesian framework and is based on geometrical arguments in the space of linear classifiers. The new bound constitutes an exponential improvement of the so far tightest margin bound by Shawe-Taylor et al. [8] and scales logarithmically in the inverse margin. Even in the case of fewer training examples than input dimensions, sufficiently large margins lead to non-trivial bound values and - for maximum margins - to a vanishing complexity term. Furthermore, the classical margin is too coarse a measure for the essential quantity that controls the generalisation error: the volume ratio between the whole hypothesis space and the subset of consistent hypotheses. The practical relevance of the result lies in the fact that the well-known support vector machine is optimal w.r.t. the new bound only if the feature vectors are all of the same length. As a consequence we recommend to use SVMs on normalised feature vectors only - a recommendation that is well supported by our numerical experiments on two benchmark data sets.

5 0.10310751 44 nips-2000-Efficient Learning of Linear Perceptrons

Author: Shai Ben-David, Hans-Ulrich Simon

Abstract: We consider the existence of efficient algorithms for learning the class of half-spaces in R^n in the agnostic learning model (i.e., making no prior assumptions on the example-generating distribution). The resulting combinatorial problem - finding the best agreement half-space over an input sample - is NP hard to approximate to within some constant factor. We suggest a way to circumvent this theoretical bound by introducing a new measure of success for such algorithms. An algorithm is μ-margin successful if the agreement ratio of the half-space it outputs is as good as that of any half-space once training points that are inside the μ-margins of its separating hyper-plane are disregarded. We prove crisp computational complexity results with respect to this success measure: On one hand, for every positive μ, there exist efficient (poly-time) μ-margin successful learning algorithms. On the other hand, we prove that unless P=NP, there is no algorithm that runs in time polynomial in the sample size and in 1/μ that is μ-margin successful for all μ > 0.

6 0.10138841 111 nips-2000-Regularized Winnow Methods

7 0.098643243 4 nips-2000-A Linear Programming Approach to Novelty Detection

8 0.090229347 54 nips-2000-Feature Selection for SVMs

9 0.090060845 128 nips-2000-Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra

10 0.087387189 17 nips-2000-Active Learning for Parameter Estimation in Bayesian Networks

11 0.083701074 37 nips-2000-Convergence of Large Margin Separable Linear Classification

12 0.081584394 7 nips-2000-A New Approximate Maximal Margin Classification Algorithm

13 0.07654395 134 nips-2000-The Kernel Trick for Distances

14 0.0759959 75 nips-2000-Large Scale Bayes Point Machines

15 0.074387178 120 nips-2000-Sparse Greedy Gaussian Process Regression

16 0.07410869 133 nips-2000-The Kernel Gibbs Sampler

17 0.071289122 68 nips-2000-Improved Output Coding for Classification Using Continuous Relaxation

18 0.066026144 12 nips-2000-A Support Vector Method for Clustering

19 0.065671049 145 nips-2000-Weak Learners and Improved Rates of Convergence in Boosting

20 0.065609761 119 nips-2000-Some New Bounds on the Generalization Error of Combined Classifiers


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.226), (1, 0.15), (2, -0.057), (3, 0.002), (4, -0.023), (5, -0.01), (6, 0.033), (7, -0.058), (8, -0.031), (9, 0.118), (10, -0.092), (11, 0.004), (12, -0.042), (13, 0.002), (14, 0.045), (15, -0.115), (16, -0.122), (17, -0.038), (18, -0.084), (19, -0.016), (20, 0.031), (21, -0.054), (22, 0.024), (23, 0.045), (24, -0.181), (25, -0.046), (26, 0.076), (27, 0.071), (28, -0.146), (29, 0.01), (30, 0.143), (31, 0.001), (32, -0.008), (33, 0.082), (34, 0.009), (35, -0.034), (36, -0.042), (37, -0.156), (38, 0.081), (39, -0.0), (40, -0.011), (41, 0.125), (42, -0.009), (43, 0.117), (44, -0.128), (45, 0.002), (46, -0.009), (47, 0.196), (48, 0.004), (49, -0.035)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95773512 18 nips-2000-Active Support Vector Machine Classification

Author: Olvi L. Mangasarian, David R. Musicant

Abstract: An active set strategy is applied to the dual of a simple reformulation of the standard quadratic program of a linear support vector machine. This application generates a fast new dual algorithm that consists of solving a finite number of linear equations, with a typically large dimensionality equal to the number of points to be classified. However, by making novel use of the Sherman-Morrison-Woodbury formula, a much smaller matrix of the order of the original input space is inverted at each step. Thus, a problem with a 32-dimensional input space and 7 million points required inverting positive definite symmetric matrices of size 33 x 33 with a total running time of 96 minutes on a 400 MHz Pentium II. The algorithm requires no specialized quadratic or linear programming code, but merely a linear equation solver which is publicly available.

2 0.7275672 70 nips-2000-Incremental and Decremental Support Vector Machine Learning

Author: Gert Cauwenberghs, Tomaso Poggio

Abstract: An on-line recursive algorithm for training support vector machines, one vector at a time, is presented. Adiabatic increments retain the KuhnTucker conditions on all previously seen training data, in a number of steps each computed analytically. The incremental procedure is reversible, and decremental

3 0.60466576 44 nips-2000-Efficient Learning of Linear Perceptrons

Author: Shai Ben-David, Hans-Ulrich Simon

Abstract: We consider the existence of efficient algorithms for learning the class of half-spaces in R^n in the agnostic learning model (i.e., making no prior assumptions on the example-generating distribution). The resulting combinatorial problem - finding the best agreement half-space over an input sample - is NP hard to approximate to within some constant factor. We suggest a way to circumvent this theoretical bound by introducing a new measure of success for such algorithms. An algorithm is μ-margin successful if the agreement ratio of the half-space it outputs is as good as that of any half-space once training points that are inside the μ-margins of its separating hyper-plane are disregarded. We prove crisp computational complexity results with respect to this success measure: On one hand, for every positive μ, there exist efficient (poly-time) μ-margin successful learning algorithms. On the other hand, we prove that unless P=NP, there is no algorithm that runs in time polynomial in the sample size and in 1/μ that is μ-margin successful for all μ > 0.

4 0.51989299 111 nips-2000-Regularized Winnow Methods

Author: Tong Zhang

Abstract: In theory, the Winnow multiplicative update has certain advantages over the Perceptron additive update when there are many irrelevant attributes. Recently, there has been much effort on enhancing the Perceptron algorithm by using regularization, leading to a class of linear classification methods called support vector machines. Similarly, it is also possible to apply the regularization idea to the Winnow algorithm, which gives methods we call regularized Winnows. We show that the resulting methods compare with the basic Winnows in a similar way that a support vector machine compares with the Perceptron. We investigate algorithmic issues and learning properties of the derived methods. Some experimental results will also be provided to illustrate different methods.

5 0.48923099 52 nips-2000-Fast Training of Support Vector Classifiers

Author: Fernando Pérez-Cruz, Pedro Luis Alarcón-Diana, Angel Navia-Vázquez, Antonio Artés-Rodríguez

Abstract: In this communication we present a new algorithm for solving Support Vector Classifiers (SVC) with large training data sets. The new algorithm is based on an Iterative Re-Weighted Least Squares procedure which is used to optimize the SVc. Moreover, a novel sample selection strategy for the working set is presented, which randomly chooses the working set among the training samples that do not fulfill the stopping criteria. The validity of both proposals, the optimization procedure and sample selection strategy, is shown by means of computer experiments using well-known data sets. 1 INTRODUCTION The Support Vector Classifier (SVC) is a powerful tool to solve pattern recognition problems [13, 14] in such a way that the solution is completely described as a linear combination of several training samples, named the Support Vectors. The training procedure for solving the SVC is usually based on Quadratic Programming (QP) which presents some inherent limitations, mainly the computational complexity and memory requirements for large training data sets. This problem is typically avoided by dividing the QP problem into sets of smaller ones [6, 1, 7, 11], that are iteratively solved in order to reach the SVC solution for the whole set of training samples. These schemes rely on an optimizing engine, QP, and in the sample selection strategy for each sub-problem, in order to obtain a fast solution for the SVC. An Iterative Re-Weighted Least Squares (IRWLS) procedure has already been proposed as an alternative solver for the SVC [10] and the Support Vector Regressor [9], being computationally efficient in absolute terms. In this communication, we will show that the IRWLS algorithm can replace the QP one in any chunking scheme in order to find the SVC solution for large training data sets. 
Moreover, we consider that the strategy to decide which training samples must j oin the working set is critical to reduce the total number of iterations needed to attain the SVC solution, and the runtime complexity as a consequence. To aim for this issue, the computer program SV cradit have been developed so as to solve the SVC for large training data sets using IRWLS procedure and fixed-size working sets. The paper is organized as follows. In Section 2, we start by giving a summary of the IRWLS procedure for SVC and explain how it can be incorporated to a chunking scheme to obtain an overall implementation which efficiently deals with large training data sets. We present in Section 3 a novel strategy to make up the working set. Section 4 shows the capabilities of the new implementation and they are compared with the fastest available SVC implementation, SV Mlight [6]. We end with some concluding remarks. 2 IRWLS-SVC In order to solve classification problems, the SVC has to minimize Lp = ~llwI12+CLei- LJliei- LQi(Yi(¢(xifw+b)-l+ei) (1) i i i with respectto w, band ei and maximize it with respectto Qi and Jli, subject to Qi, Jli ~ 0, where ¢(.) is a nonlinear transformation (usually unknown) to a higher dimensional space and C is a penalization factor. The solution to (1) is defined by the Karush-Kuhn-Tucker (KKT) conditions [2]. For further details on the SVC, one can refer to the tutorial survey by Burges [2] and to the work ofVapnik [13, 14]. In order to obtain an IRWLS procedure we will first need to rearrange (1) in such a way that the terms depending on ei can be removed because, at the solution C - Qi - Jli = 0 Vi (one of the KKT conditions [2]) must hold. Lp = 1 Qi(l- Yi(¢T(Xi)W + b)) 211wl12 + L i = (2) where The weighted least square nature of (2) can be understood if ei is defined as the error on each sample and ai as its associated weight, where! IIwl1 2 is a regularizing functional. 
The minimization of (2) cannot be accomplished in a single step because ai = ai(ei), and we need to apply an IRWLS procedure [4], summarized below in tree steps: 1. Considering the ai fixed, minimize (2). 2. Recalculate ai from the solution on step 1. 3. Repeat until convergence. In order to work with Reproducing Kernels in Hilbert Space (RKHS), as the QP procedure does, we require that w = Ei (JiYi¢(Xi) and in order to obtain a non-zero b, that Ei {JiYi = O. Substituting them into (2), its minimum with respect to {Ji and b for a fixed set of ai is found by solving the following linear equation system l (3) IThe detailed description of the steps needed to obtain (3) from (2) can be found in [10]. where y = [Yl, Y2, ... Yn]T (4) 'r/i,j = 1, ... ,n 'r/i,j = 1, ... ,n (H)ij = YiYj¢T(Xi)¢(Xj) = YiyjK(Xi,Xj) (Da)ij = aio[i - j] 13 = [,81, ,82, ... (5) (6) (7) , ,8n]T and 0[·] is the discrete impulse function. Finally, the dependency of ai upon the Lagrange multipliers is eliminated using the KKT conditions, obtaining a, ai 2.1 ={~ ei Yi' eiYi < Yt.et. > - ° ° (8) IRWLS ALGORITHMIC IMPLEMENTATION The SVC solution with the IRWLS procedure can be simplified by dividing the training samples into three sets. The first set, SI, contains the training samples verifying < ,8i < C, which have to be determined by solving (3). The second one, S2, includes every training sample whose,8i = 0. And the last one, S3, is made up of the training samples whose ,8i = C. This division in sets is fully justified in [10]. The IRWLS-SVC algorithm is shown in Table 1. ° 0. Initialization: SI will contain every training sample, S2 = 0 and S3 = 0. Compute H. e_a = y, f3_a = 0, b_a = 0, G 13 = Gin, a = 1 and G b3 = G bi n . 1 Solve [ (H)Sb S1 + D(al S1 . =° = e-lt a, 3. ai = { ~ (13) S2 2. e ° 1[ (Y)Sl (f3)Sl ] (y ) ~1 b and (13) Ss = C DyH(f3 - f3_a) - (b - b_a)1 =[1- G 13 ] G b3 ' °. eiYi < e- _ > O'r/Z E SI U S2 U S3 tYt 4. Sets reordering: a. Move every sample in S3 with eiYi < to S2. b. 
Move every sample in SI with ,8i = C to S3. c. Move every sample in SI with ai = to S2 . d. Move every sample in S2 with ai :I to SI. 5. e_a = e, f3_a = 13, G 13 = (H)Sl,SS (f3)ss + (G in )Sl' b-lt = band Gb3 = -y~s (f3)ss + Gbin · 6. Go to step 1 and repeat until convergence. ei Yi ' ° ° ° Table 1: IRWLS-SVC algorithm. The IRWLS-SVC procedure has to be slightly modified in order to be used inside a chunk:ing scheme as the one proposed in [8, 6], such that it can be directly applied in the one proposed in [1]. A chunking scheme is needed to solve the SVC whenever H is too large to fit into memory. In those cases, several SVC with a reduced set of training samples are iteratively solved until the solution for the whole set is found. The samples are divide into a working set, Sw, which is solved as a full SVC problem, and an inactive set, Sin. If there are support vectors in the inactive set, as it might be, the inactive set modifies the IRWLSSVC procedure, adding a contribution to the independent term in the linear equation system (3) . Those support vectors in S in can be seen as anchored samples in S3, because their ,8i is not zero and can not be modified by the IRWLS procedure. Then, such contribution (Gin and G bin ) will be calculated as G 13 and G b3 are (Table 1, 5th step), before calling the IRWLS-SVC algorithm. We have already modified the IRWLS-SVC in Table 1 to consider Gin and G bin , which must be set to zero if the Hessian matrix, H, fits into memory for the whole set of training samples. The resolution of the SVC for large training data sets, employing as minimization engine the IRWLS procedure, is summarized in the following steps: 1. Select the samples that will form the working set. 2. Construct Gin = (H)Sw,Sin (f3)s.n and G bin = -yIin (f3)Sin 3. Solve the IRWLS-SVC procedure, following the steps in Table 1. 4. Compute the error of every training sample. 5. 
If the stopping conditions

  e_i y_i < ε      ∀i : β_i = 0        (9)
  e_i y_i > -ε     ∀i : β_i = C        (10)
  |e_i y_i| < ε    ∀i : 0 < β_i < C    (11)

are fulfilled, the SVC solution has been reached. The stopping conditions are the ones proposed in [6], and ε must be a small value around 10^-3; a full discussion concerning this topic can be found in [6].

3 SAMPLE SELECTION STRATEGY

The selection of the training samples that will constitute the working set in each iteration is the most critical decision in any chunking scheme, because this decision directly determines the number of IRWLS-SVC (or QP-SVC) procedures to be called and the number of reproducing kernel evaluations to be made, which are, by far, the two most time-consuming operations in any chunking scheme. In order to solve the SVC efficiently, we first need to define a candidate set of training samples to form the working set in each iteration. The candidate set will be made up, as it could not be otherwise, of all the training samples that violate the stopping conditions (9)-(11); we will also add all those training samples that satisfy condition (11) but for which a small variation of their error would make them violate that condition. The strategies to select the working set are as numerous as the problems to be solved, but one can think of three simple strategies: • Select those samples which do not fulfill the stopping criteria and present the largest |e_i| values. • Select those samples which do not fulfill the stopping criteria and present the smallest |e_i| values. • Select them randomly from the ones that do not fulfill the stopping conditions. The first strategy seems the most natural one and it was proposed in [6]. If the largest-|e_i| samples are selected, we guarantee that the attained solution gives the greatest step towards the solution of (1). But if the step is too large, which usually happens, it will cause the solution in each iteration and the β_i values to oscillate around their optimal values.
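A direct check of the three stopping conditions, as used in step 5 of the chunking loop above, might look like the following (the helper name and signature are ours, not from the paper):

```python
def svc_converged(errors, beta, y, C, eps=1e-3):
    """Check conditions (9)-(11): one KKT-style test per set,
    depending on whether beta_i = 0, beta_i = C, or 0 < beta_i < C."""
    for e, bi, yi in zip(errors, beta, y):
        ey = e * yi
        if bi <= 0.0:
            ok = ey < eps            # (9)
        elif bi >= C:
            ok = ey > -eps           # (10)
        else:
            ok = abs(ey) < eps       # (11)
        if not ok:
            return False
    return True
```

Only a single pass over the training errors is needed, so this check is cheap compared to the kernel evaluations that produce the errors.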
The magnitude of this effect is directly proportional to the value of C and to q (the size of the working set), so in the case of small C (C < 10) and low q (q < 20) it would be less noticeable. The second one is the most conservative strategy, because we will be moving towards the solution of (1) with small steps. Its drawback is readily discerned if the starting point is inappropriate, needing too many iterations to reach the SVC solution. The last strategy, which has been implemented together with the IRWLS-SVC procedure, is a mid-point between the other two; but if the number of samples whose 0 < β_i < C increases above q, there might be some iterations in which we make no progress (the working set is only made up of training samples that fulfill the stopping condition in (11)). This situation is easily avoided by introducing, per class, one sample that violates one of the stopping conditions. Finally, if the cardinality of the candidate set is less than q, the working set is completed with those samples that fulfil the stopping conditions and present the least |e_i|. In summary, the sample selection strategy proposed is²: 1. Construct the candidate set Se with those samples that do not fulfill stopping conditions (9) and (10), and those samples whose β obeys 0 < β_i < C. 2. If |Se| < n, go to 5. 3. Choose a sample per class that violates one of the stopping conditions and move them from Se to the working set, Sw. 4. Choose randomly n - |Sw| samples from Se and move them to Sw. Go to step 6. 5. Move every sample from Se to Sw; the n - |Sw| samples that fulfill the stopping conditions (9) and (10) and present the lowest |e_i| values are used to complete Sw. 6. Go on, obtaining G_in and G_bin.

4 BENCHMARK FOR THE IRWLS-SVC

We have prepared two different experiments to test both the IRWLS and the sample selection strategy for solving the SVC.
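The selection steps above can be sketched as follows. All names are ours; `q` is the working-set size, `tol` plays the role of the small tolerance ε in (9)-(11), and the candidate set is approximated as the violators of the stopping conditions plus every sample with 0 < β_i < C:

```python
import random

def select_working_set(errors, beta, y, C, q, tol=1e-3, rng=None):
    """Sketch of the Section 3 strategy: force one violator per class
    into the working set, then fill the rest at random."""
    rng = rng or random.Random(0)
    n = len(y)

    def violates(i):
        ey = errors[i] * y[i]
        if beta[i] <= 0.0:
            return ey > tol       # (9) asks for e_i y_i < eps
        if beta[i] >= C:
            return ey < -tol      # (10) asks for e_i y_i > -eps
        return abs(ey) > tol      # (11) asks for |e_i y_i| < eps

    candidates = [i for i in range(n) if violates(i) or 0.0 < beta[i] < C]
    if len(candidates) < q:
        # step 5: complete with fulfilled samples of smallest |e_i|
        cand = set(candidates)
        rest = sorted((i for i in range(n) if i not in cand),
                      key=lambda i: abs(errors[i]))
        return candidates + rest[:q - len(candidates)]
    # steps 3-4: one violator per class, then random completion
    work = []
    for cls in (1, -1):
        v = [i for i in candidates if y[i] == cls and violates(i)]
        if v:
            work.append(v[0])
    pool = [i for i in candidates if i not in work]
    work += rng.sample(pool, min(q - len(work), len(pool)))
    return work
```

Seeding the random generator, as in the sketch, makes the chunking sequence reproducible without changing the expected behaviour of the random strategy.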
The first one compares the IRWLS against QP, and the second one compares the sample selection strategy, together with the IRWLS, against a complete SVC solving procedure, SVMlight. In the first trial, we have replaced the LOQO interior point optimizer used by SVMlight version 3.02 [5] with the IRWLS-SVC procedure in Table 1, to compare both optimizing engines under an equal sample selection strategy. The comparison has been made on a Pentium III-450MHz with 128Mb running Windows 98, and the programs have been compiled using Microsoft Developer 6.0. In Table 2, we show the results for two data sets: the first one, containing 4781 training samples, needs most CPU resources to compute the RKHS, and the second one, containing 2175 training samples, uses most CPU resources to solve the SVC for each Sw, where q indicates the size of the working set. The value of C has been set to 1 and 1000, respectively, and a Radial Basis Function (RBF) RKHS [2] has been employed, where its parameter σ has been set, respectively, to 10 and 70. The kernel cache size has been set to 64Mb for both data sets and for both procedures.

       Adult4 (4781)                       Splice (2175)
  q    CPU time        Optimize Time       CPU time        Optimize Time
       LOQO   IRWLS    LOQO   IRWLS        LOQO   IRWLS    LOQO   IRWLS
  20   21.25  20.70    0.61   0.39         46.19  30.76    21.94  4.77
  40   20.60  19.22    1.01   0.17         71.34  24.93    46.26  8.07
  70   21.15  18.72    2.30   0.46         53.77  20.32    34.24  7.72

Table 2: CPU Time indicates the time consumed in seconds by the whole procedure. The Optimize Time indicates the time consumed in seconds by the LOQO or IRWLS procedure.

As can be seen, SVMlight with IRWLS is significantly faster than the LOQO procedure in all cases. The results in Table 2 validate the IRWLS procedure as the fastest SVC solver.

²In what follows, |·| represents absolute value for numbers and cardinality for sets.
For the second trial, we have compiled a computer program that uses the IRWLS-SVC procedure and the working set selection in Section 3; we will refer to it as svcradit from now on. We have borrowed the chunking and shrinking ideas from SVMlight [6] for our computer program. To test these two programs, several data sets have been used. The Adult and Web data sets have been obtained from J. Platt's web page http://research.microsoft.com/~jplatt/smo.html; the Gauss-M data set is a two-dimensional classification problem proposed in [3] to test neural networks, which comprises a Gaussian random variable for each class, which highly overlap. The Banana, Diabetes and Splice data sets have been obtained from Gunnar Rätsch's web page http://svm.first.gmd.de/~raetsch/. The selection of C and the RKHS has been done as indicated in [11] for the Adult and Web data sets, and in http://svm.first.gmd.de/~raetsch/ for the Banana, Diabetes and Splice data sets. In Table 3, we show the runtime complexity for each data set, where the value of q has been selected as the one that reduces the runtime complexity.

  Database  Dim  N Sampl.  C      σ   SV     q radit  q light  CPU radit  CPU light
  Adult6    123  11221     1      10  4477   150      130      118.2      124.46
  Adult9    123  32562     1      10  12181  100      100      1093.29    1097.09
  Adult1    123  1605      1000   10  630    150      70       25.98      113.54
  Web1      300  2477      5      10  224    100      40       2.42       2.36
  Web7      300  24693     5      10  1444   70       40       158.13     124.57
  Gauss-M   2    4000      1      1   1736   150      40       12.69      48.28
  Gauss-M   2    4000      100    1   1516   70       10       61.68      3053.20
  Banana    2    400       316.2  1   80     10       10       0.33       0.77
  Banana    2    4900      316.2  1   1084   10       10       22.46      1786.56
  Diabetes  8    768       10     2   409    70       40       2.41       6.04
  Splice    69   2175      1000   70  525    10       20       14.06      49.19

Table 3: Runtime complexity for several data sets, when solved with svcradit (radit for short) and SVMlight (light for short).

One can appreciate that svcradit is faster than SVMlight for most data sets.
For the Web data set, which is the only data set on which SVMlight is slightly faster, the value of C is low and most training samples end up as support vectors with β_i < C. In such cases the best strategy is to take the largest step towards the solution in every iteration, as SVMlight does [6], because most training samples' β_i will not be affected by the other training samples' β_j values. But as the value of C increases, the svcradit sample selection strategy becomes a much more appropriate strategy than the one used in SVMlight.

5 CONCLUSIONS

In this communication a new algorithm for solving the SVC for large training data sets has been presented. Its two major contributions deal with the optimizing engine and the sample selection strategy. An IRWLS procedure is used to solve the SVC in each step, which is much faster than the usual QP procedure, and simpler to implement, because the most difficult step is the linear equation system solution, which can be easily obtained by means of LU decomposition [12]. The random working set selection from the samples not fulfilling the KKT conditions is the best option if the working set is large, because it reduces the number of chunks to be solved. This strategy benefits from the IRWLS procedure, which allows working with large training data sets. All these modifications have been concretized in the svcradit solving procedure, publicly available at http://svm.tsc.uc3m.es/.

6 ACKNOWLEDGEMENTS

We are sincerely grateful to Thorsten Joachims, who has allowed and encouraged us to use his SVMlight to test our IRWLS procedure; the comparisons could not have been properly done otherwise.

References
[1] B. E. Boser, I. M. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In 5th Annual Workshop on Computational Learning Theory, Pittsburgh, U.S.A., 1992.
[2] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
[3] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice-Hall, 1994.
[4] P. W. Holland and R. E. Welsch. Robust regression using iteratively reweighted least-squares. Communications in Statistics - Theory and Methods, A6(9):813-827, 1977.
[5] T. Joachims. http://www-ai.informatik.uni-dortmund.de/forschung/verfahren/svmlight/svmlight.eng.html. Technical report, University of Dortmund, Informatik, AI-Unit Collaborative Research Center on 'Complexity Reduction in Multivariate Data', 1998.
[6] T. Joachims. Making large scale SVM learning practical. In Advances in Kernel Methods - Support Vector Learning, editors Schölkopf, B., Burges, C. J. C. and Smola, A. J., pages 169-184. M.I.T. Press, 1999.
[7] E. Osuna, R. Freund, and F. Girosi. An improved training algorithm for support vector machines. In Proc. of the 1997 IEEE Workshop on Neural Networks for Signal Processing, pages 276-285, Amelia Island, U.S.A., 1997.
[8] E. Osuna and F. Girosi. Reducing the run-time complexity of support vector machines. In ICPR'98, Brisbane, Australia, August 1998.
[9] F. Perez-Cruz, A. Navia-Vazquez

6 0.48042795 54 nips-2000-Feature Selection for SVMs

7 0.43861505 148 nips-2000-`N-Body' Problems in Statistical Learning

8 0.43150169 93 nips-2000-On Iterative Krylov-Dogleg Trust-Region Steps for Solving Neural Networks Nonlinear Least Squares Problems

9 0.42968807 7 nips-2000-A New Approximate Maximal Margin Classification Algorithm

10 0.4148919 37 nips-2000-Convergence of Large Margin Separable Linear Classification

11 0.41451406 128 nips-2000-Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra

12 0.39855233 4 nips-2000-A Linear Programming Approach to Novelty Detection

13 0.39519683 17 nips-2000-Active Learning for Parameter Estimation in Bayesian Networks

14 0.37111756 58 nips-2000-From Margin to Sparsity

15 0.35868254 120 nips-2000-Sparse Greedy Gaussian Process Regression

16 0.35045594 22 nips-2000-Algorithms for Non-negative Matrix Factorization

17 0.33994603 34 nips-2000-Competition and Arbors in Ocular Dominance

18 0.32656661 12 nips-2000-A Support Vector Method for Clustering

19 0.32458097 9 nips-2000-A PAC-Bayesian Margin Bound for Linear Classifiers: Why SVMs work

20 0.32205793 27 nips-2000-Automatic Choice of Dimensionality for PCA


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.013), (17, 0.131), (33, 0.034), (55, 0.014), (62, 0.02), (65, 0.01), (67, 0.04), (76, 0.033), (90, 0.614), (97, 0.01)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95932031 18 nips-2000-Active Support Vector Machine Classification

Author: Olvi L. Mangasarian, David R. Musicant

Abstract: An active set strategy is applied to the dual of a simple reformulation of the standard quadratic program of a linear support vector machine. This application generates a fast new dual algorithm that consists of solving a finite number of linear equations, with a typically large dimensionality equal to the number of points to be classified. However, by making novel use of the Sherman-Morrison-Woodbury formula, a much smaller matrix of the order of the original input space is inverted at each step. Thus, a problem with a 32-dimensional input space and 7 million points required inverting positive definite symmetric matrices of size 33 x 33 with a total running time of 96 minutes on a 400 MHz Pentium II. The algorithm requires no specialized quadratic or linear programming code, but merely a linear equation solver which is publicly available. 1

2 0.95081687 3 nips-2000-A Gradient-Based Boosting Algorithm for Regression Problems

Author: Richard S. Zemel, Toniann Pitassi

Abstract: In adaptive boosting, several weak learners trained sequentially are combined to boost the overall algorithm performance. Recently adaptive boosting methods for classification problems have been derived as gradient descent algorithms. This formulation justifies key elements and parameters in the methods, all chosen to optimize a single common objective function. We propose an analogous formulation for adaptive boosting of regression problems, utilizing a novel objective function that leads to a simple boosting algorithm. We prove that this method reduces training error, and compare its performance to other regression methods. The aim of boosting algorithms is to

3 0.90231419 128 nips-2000-Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra

Author: Paul M. Hayton, Bernhard Schölkopf, Lionel Tarassenko, Paul Anuzis

Abstract: A system has been developed to extract diagnostic information from jet engine carcass vibration data. Support Vector Machines applied to novelty detection provide a measure of how unusual the shape of a vibration signature is, by learning a representation of normality. We describe a novel method for Support Vector Machines of including information from a second class for novelty detection and give results from the application to Jet Engine vibration analysis.

4 0.82198989 81 nips-2000-Learning Winner-take-all Competition Between Groups of Neurons in Lateral Inhibitory Networks

Author: Xiaohui Xie, Richard H. R. Hahnloser, H. Sebastian Seung

Abstract: It has long been known that lateral inhibition in neural networks can lead to a winner-take-all competition, so that only a single neuron is active at a steady state. Here we show how to organize lateral inhibition so that groups of neurons compete to be active. Given a collection of potentially overlapping groups, the inhibitory connectivity is set by a formula that can be interpreted as arising from a simple learning rule. Our analysis demonstrates that such inhibition generally results in winner-take-all competition between the given groups, with the exception of some degenerate cases. In a broader context, the network serves as a particular illustration of the general distinction between permitted and forbidden sets, which was introduced recently. From this viewpoint, the computational function of our network is to store and retrieve memories as permitted sets of coactive neurons. In traditional winner-take-all networks, lateral inhibition is used to enforce a localized, or

5 0.66974127 52 nips-2000-Fast Training of Support Vector Classifiers

Author: Fernando Pérez-Cruz, Pedro Luis Alarcón-Diana, Angel Navia-Vázquez, Antonio Artés-Rodríguez

Abstract: In this communication we present a new algorithm for solving Support Vector Classifiers (SVC) with large training data sets. The new algorithm is based on an Iterative Re-Weighted Least Squares procedure which is used to optimize the SVC. Moreover, a novel sample selection strategy for the working set is presented, which randomly chooses the working set among the training samples that do not fulfill the stopping criteria. The validity of both proposals, the optimization procedure and sample selection strategy, is shown by means of computer experiments using well-known data sets. 1 INTRODUCTION The Support Vector Classifier (SVC) is a powerful tool to solve pattern recognition problems [13, 14] in such a way that the solution is completely described as a linear combination of several training samples, named the Support Vectors. The training procedure for solving the SVC is usually based on Quadratic Programming (QP), which presents some inherent limitations, mainly the computational complexity and memory requirements for large training data sets. This problem is typically avoided by dividing the QP problem into sets of smaller ones [6, 1, 7, 11], which are iteratively solved in order to reach the SVC solution for the whole set of training samples. These schemes rely on an optimizing engine, QP, and on the sample selection strategy for each sub-problem, in order to obtain a fast solution for the SVC. An Iterative Re-Weighted Least Squares (IRWLS) procedure has already been proposed as an alternative solver for the SVC [10] and the Support Vector Regressor [9], being computationally efficient in absolute terms. In this communication, we will show that the IRWLS algorithm can replace the QP one in any chunking scheme in order to find the SVC solution for large training data sets.
Moreover, we consider that the strategy to decide which training samples must join the working set is critical to reduce the total number of iterations needed to attain the SVC solution, and the runtime complexity as a consequence. To address this issue, the computer program svcradit has been developed, so as to solve the SVC for large training data sets using the IRWLS procedure and fixed-size working sets. The paper is organized as follows. In Section 2, we start by giving a summary of the IRWLS procedure for SVC and explain how it can be incorporated into a chunking scheme to obtain an overall implementation which efficiently deals with large training data sets. We present in Section 3 a novel strategy to make up the working set. Section 4 shows the capabilities of the new implementation, compared with the fastest available SVC implementation, SVMlight [6]. We end with some concluding remarks.

2 IRWLS-SVC

In order to solve classification problems, the SVC has to minimize

  L_p = (1/2)||w||² + C Σ_i e_i - Σ_i μ_i e_i - Σ_i α_i (y_i(φ^T(x_i)w + b) - 1 + e_i)   (1)

with respect to w, b and e_i, and maximize it with respect to α_i and μ_i, subject to α_i, μ_i ≥ 0, where φ(·) is a nonlinear transformation (usually unknown) to a higher dimensional space and C is a penalization factor. The solution to (1) is defined by the Karush-Kuhn-Tucker (KKT) conditions [2]. For further details on the SVC, one can refer to the tutorial survey by Burges [2] and to the work of Vapnik [13, 14]. In order to obtain an IRWLS procedure we will first need to rearrange (1) in such a way that the terms depending on e_i can be removed because, at the solution, C - α_i - μ_i = 0 ∀i (one of the KKT conditions [2]) must hold:

  L_p = (1/2)||w||² + Σ_i α_i (1 - y_i(φ^T(x_i)w + b))   (2)

where e_i = 1 - y_i(φ^T(x_i)w + b) and a_i = α_i/e_i. The weighted least square nature of (2) can be understood if e_i is defined as the error on each sample and a_i as its associated weight, where (1/2)||w||² is a regularizing functional.
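The step from (1) to (2) can be written out explicitly; using the KKT condition C - α_i - μ_i = 0 at the solution:

```latex
\begin{aligned}
L_p &= \tfrac{1}{2}\|w\|^2 + C\sum_i e_i - \sum_i \mu_i e_i
      - \sum_i \alpha_i\bigl(y_i(\phi^T(x_i)w + b) - 1 + e_i\bigr) \\
    &= \tfrac{1}{2}\|w\|^2 + \sum_i (C - \mu_i - \alpha_i)\, e_i
      + \sum_i \alpha_i\bigl(1 - y_i(\phi^T(x_i)w + b)\bigr) \\
    &= \tfrac{1}{2}\|w\|^2
      + \sum_i \alpha_i\bigl(1 - y_i(\phi^T(x_i)w + b)\bigr),
\end{aligned}
```

since the middle sum vanishes term by term, which is exactly (2).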
The minimization of (2) cannot be accomplished in a single step because ai = ai(ei), and we need to apply an IRWLS procedure [4], summarized below in tree steps: 1. Considering the ai fixed, minimize (2). 2. Recalculate ai from the solution on step 1. 3. Repeat until convergence. In order to work with Reproducing Kernels in Hilbert Space (RKHS), as the QP procedure does, we require that w = Ei (JiYi¢(Xi) and in order to obtain a non-zero b, that Ei {JiYi = O. Substituting them into (2), its minimum with respect to {Ji and b for a fixed set of ai is found by solving the following linear equation system l (3) IThe detailed description of the steps needed to obtain (3) from (2) can be found in [10]. where y = [Yl, Y2, ... Yn]T (4) 'r/i,j = 1, ... ,n 'r/i,j = 1, ... ,n (H)ij = YiYj¢T(Xi)¢(Xj) = YiyjK(Xi,Xj) (Da)ij = aio[i - j] 13 = [,81, ,82, ... (5) (6) (7) , ,8n]T and 0[·] is the discrete impulse function. Finally, the dependency of ai upon the Lagrange multipliers is eliminated using the KKT conditions, obtaining a, ai 2.1 ={~ ei Yi' eiYi < Yt.et. > - ° ° (8) IRWLS ALGORITHMIC IMPLEMENTATION The SVC solution with the IRWLS procedure can be simplified by dividing the training samples into three sets. The first set, SI, contains the training samples verifying < ,8i < C, which have to be determined by solving (3). The second one, S2, includes every training sample whose,8i = 0. And the last one, S3, is made up of the training samples whose ,8i = C. This division in sets is fully justified in [10]. The IRWLS-SVC algorithm is shown in Table 1. ° 0. Initialization: SI will contain every training sample, S2 = 0 and S3 = 0. Compute H. e_a = y, f3_a = 0, b_a = 0, G 13 = Gin, a = 1 and G b3 = G bi n . 1 Solve [ (H)Sb S1 + D(al S1 . =° = e-lt a, 3. ai = { ~ (13) S2 2. e ° 1[ (Y)Sl (f3)Sl ] (y ) ~1 b and (13) Ss = C DyH(f3 - f3_a) - (b - b_a)1 =[1- G 13 ] G b3 ' °. eiYi < e- _ > O'r/Z E SI U S2 U S3 tYt 4. Sets reordering: a. Move every sample in S3 with eiYi < to S2. b. 
Move every sample in SI with ,8i = C to S3. c. Move every sample in SI with ai = to S2 . d. Move every sample in S2 with ai :I to SI. 5. e_a = e, f3_a = 13, G 13 = (H)Sl,SS (f3)ss + (G in )Sl' b-lt = band Gb3 = -y~s (f3)ss + Gbin · 6. Go to step 1 and repeat until convergence. ei Yi ' ° ° ° Table 1: IRWLS-SVC algorithm. The IRWLS-SVC procedure has to be slightly modified in order to be used inside a chunk:ing scheme as the one proposed in [8, 6], such that it can be directly applied in the one proposed in [1]. A chunking scheme is needed to solve the SVC whenever H is too large to fit into memory. In those cases, several SVC with a reduced set of training samples are iteratively solved until the solution for the whole set is found. The samples are divide into a working set, Sw, which is solved as a full SVC problem, and an inactive set, Sin. If there are support vectors in the inactive set, as it might be, the inactive set modifies the IRWLSSVC procedure, adding a contribution to the independent term in the linear equation system (3) . Those support vectors in S in can be seen as anchored samples in S3, because their ,8i is not zero and can not be modified by the IRWLS procedure. Then, such contribution (Gin and G bin ) will be calculated as G 13 and G b3 are (Table 1, 5th step), before calling the IRWLS-SVC algorithm. We have already modified the IRWLS-SVC in Table 1 to consider Gin and G bin , which must be set to zero if the Hessian matrix, H, fits into memory for the whole set of training samples. The resolution of the SVC for large training data sets, employing as minimization engine the IRWLS procedure, is summarized in the following steps: 1. Select the samples that will form the working set. 2. Construct Gin = (H)Sw,Sin (f3)s.n and G bin = -yIin (f3)Sin 3. Solve the IRWLS-SVC procedure, following the steps in Table 1. 4. Compute the error of every training sample. 5. 
If the stopping conditions Yiei < C eiYi> -c leiYil < C 'Vii 'Vii 'Vii (Ji = 0 (Ji = C 0 < (Ji < C (9) (10) (11) are fulfilled, the SVC solution has been reached. The stopping conditions are the ones proposed in [6] and C must be a small value around 10 - 3 , a full discussion concerning this topic can be found in [6]. 3 SAMPLE SELECTION STRATEGY The selection of the training samples that will constitute the working set in each iteration is the most critical decision in any chunking scheme, because such decision is directly involved in the number of IRWLS-SVC (or QP-SVC) procedures to be called and in the number of reproducing kernel evaluations to be made, which are, by far, the two most time consuming operations in any chunking schemes. In order to solve the SVC efficiently, we first need to define a candidate set of training samples to form the working set in each iteration. The candidate set will be made up, as it could not be otherwise, with all the training samples that violate the stopping conditions (9)-(11); and we will also add all those training samples that satisfy condition (11) but a small variation on their error will make them violate such condition. The strategies to select the working set are as numerous as the number of problems to be solved, but one can think three different simple strategies: • Select those samples which do not fulfill the stopping criteria and present the largest Iei I values. • Select those samples which do not fulfill the stopping criteria and present the smallest Iei I values. • Select them randomly from the ones that do not fulfill the stopping conditions. The first strategy seems the more natural one and it was proposed in [6]. If the largest leil samples are selected we guanrantee that attained solution gives the greatest step towards the solution of (1). But if the step is too large, which usually happens, it will cause the solution in each iteration and the (Ji values to oscillate around its optimal value. 
The magnitude of this effect is directly proportional to the value of C and q (size of the working set), so in the case ofsmall C (C < 10) and low q (q < 20) it would be less noticeable. The second one is the most conservative strategy because we will be moving towards the solution of (1) with small steps. Its drawback is readily discerned if the starting point is inappropriate, needing too many iterations to reach the SVC solution. The last strategy, which has been implemented together with the IRWLS-SVC procedure, is a mid-point between the other two, but if the number of samples whose 0 < (3i < C increases above q there might be some iterations where we will make no progress (working set is only made up of the training samples that fulfill the stopping condition in (11)). This situation is easily avoided by introducing one sample that violates each one of the stopping conditions per class. Finally, if the cardinality of the candidate set is less than q the working set is completed with those samples that fulfil the stopping criteria conditions and present the least leil. In summary, the sample selection strategy proposed is 2 : 1. Construct the candidate set, Se with those samples that do not fulfill stopping conditions (9) and (10), and those samples whose (3 obeys 0 < (3i < C. 2. IfISel < ngot05. 3. Choose a sample per class that violates each one of the stopping conditions and move them from Se to the working set, SW. 4. Choose randomly n - ISw I samples from Se and move then to SW. Go to Step 6. 5. Move every sample form Se to Sw and then-ISwl samples that fulfill the stopping conditions (9) and (10) and present the lowest leil values are used to complete SW . 6. Go on, obtaining Gin and Gbin. 4 BENCHMARK FOR THE IRWLS-SVC We have prepared two different experiments to test both the IRWLS and the sample selection strategy for solving the SVc. 
The first one compares the IRWLS against QP and the second one compares the samples selection strategy, together with the IRWLS, against a complete solving procedure for SVC, the SV Mlight. In the first trial, we have replaced the LOQO interior point optimizer used by SV M1ig ht version 3.02 [5] by the IRWLS-SVC procedure in Table 1, to compare both optimizing engines with equal samples selection strategy. The comparison has been made over a Pentium ill-450MHz with 128Mb running on Window98 and the programs have been compiled using Microsoft Developer 6.0. In Table 2, we show the results for two data sets: the first q 20 40 70 Adult44781 CPU time Optimize Time LOQO IRWLS LOQO IRWLS 21.25 20.70 0.61 0.39 20.60 19.22 1.01 0.17 21.15 18.72 2.30 0.46 Splice 2175 CPU time Optimize Time LOQO IRWLS LOQO IRWLS 46.19 30.76 21.94 4.77 71.34 24.93 46.26 8.07 53.77 20.32 34.24 7.72 Table 2: CPU Time indicates the consume time in seconds for the whole procedure. The Optimize Time indicates the consume time in second for the LOQO or IRWLS procedure. one, containing 4781 training samples, needs most CPU resources to compute the RKHS and the second one, containing 2175 training samples, uses most CPU resources to solve the SVC for each Sw, where q indicates the size of the working set. The value of C has 2In what follows, I . I represents absolute value for numbers and cardinality for sets been set to 1 and 1000, respectively, and a Radial Basis Function (RBF) RKHS [2] has been employed, where its parameter a has been set, respectively, to 10 and 70. As it can be seen, the SV M1ig ht with IRWLS is significantly faster than the LOQO procedure in all cases. The kernel cache size has been set to 64Mb for both data sets and for both procedures. The results in Table 2 validates the IRWLS procedure as the fastest SVC solver. 
For the second trial, we have compiled a computer program that uses the IRWLS-SVC procedure and the working set selection in Section 3, we will refer to it as svcradit from now on. We have borrowed the chunking and shrinking ideas from the SV Mlight [6] for our computer program. To test these two programs several data sets have been used. The Adult and Web data sets have been obtained from 1. Platt's web page http://research.microsoft.comr jplatt/smo.html/; the Gauss-M data set is a two dimensional classification problem proposed in [3] to test neural networks, which comprises a gaussian random variable for each class, which highly overlap. The Banana, Diabetes and Splice data sets have been obtained from Gunnar Ratsch web page http://svm.first.gmd.der raetschl. The selection of C and the RKHS has been done as indicated in [11] for Adult and Web data sets and in http://svm.first.gmd.derraetschl for Banana, Diabetes and Splice data sets. In Table 3, we show the runtime complexity for each data set, where the value of q has been elected as the one that reduces the runtime complexity. Database Dim Adult6 Adult9 Adult! Web 1 Web7 Gauss-M Gauss-M Banana Banana Diabetes Splice 123 123 123 300 300 2 2 2 2 8 69 N Sampl. 11221 32562 1605 2477 24693 4000 4000 400 4900 768 2175 C a SV 1 1 1000 5 5 1 100 316.2 316.2 10 1000 10 10 10 10 10 1 1 1 1 2 70 4477 12181 630 224 1444 1736 1516 80 1084 409 525 q CPU time radit light radit light 150 130 100 100 150 70 100 40 70 40 150 40 70 10 10 10 10 10 70 40 10 20 118.2 1093.29 25.98 2.42 158.13 12.69 61.68 0.33 22.46 2.41 14.06 124.46 1097.09 113.54 2.36 124.57 48.28 3053.20 0.77 1786.56 6.04 49.19 Table 3: Several data sets runtime complexity, when solved with the short, and SV Mlight, light for short. s v c radit , radit for One can appreciate that the svcradit is faster than the SV M1ig ht for most data sets. 
For the Web data set, the only data set on which SVMlight is slightly faster, the value of C is low and most training samples end up as support vectors with βi < C. In such cases the best strategy is to take the largest step towards the solution in every iteration, as SVMlight does [6], because the value βi of most training samples will not be affected by the values βj of the other samples. But when the value of C increases, the svcradit sample selection strategy is much more appropriate than the one used in SVMlight.

5 CONCLUSIONS

In this communication a new algorithm for solving the SVC for large training data sets has been presented. Its two major contributions deal with the optimizing engine and the sample selection strategy. An IRWLS procedure is used to solve the SVC in each step, which is much faster than the usual QP procedure, and simpler to implement, because the most difficult step is the solution of a linear equation system, which can be easily obtained by means of LU decomposition [12]. The random working set selection from the samples not fulfilling the KKT conditions is the best option if the working set is large, because it reduces the number of chunks to be solved. This strategy benefits from the IRWLS procedure, which allows working with large training data sets. All these modifications have been combined in the svcradit solving procedure, publicly available at http://svm.tsc.uc3m.es/.

6 ACKNOWLEDGEMENTS

We are sincerely grateful to Thorsten Joachims, who allowed and encouraged us to use his SVMlight to test our IRWLS procedure; the comparisons could not have been properly done otherwise.

References

[1] B. E. Boser, I. M. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In 5th Annual Workshop on Computational Learning Theory, Pittsburgh, U.S.A., 1992.
[2] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
[3] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice-Hall, 1994.
[4] P. W. Holland and R. E. Welsch. Robust regression using iteratively reweighted least-squares. Communications in Statistics - Theory and Methods, A6(9):813-827, 1977.
[5] T. Joachims. http://www-ai.informatik.uni-dortmund.de/forschung/verfahren/svmlight/svmlight.eng.html. Technical report, University of Dortmund, Informatik, AI-Unit Collaborative Research Center on 'Complexity Reduction in Multivariate Data', 1998.
[6] T. Joachims. Making large scale SVM learning practical. In Advances in Kernel Methods - Support Vector Learning, editors B. Schölkopf, C. J. C. Burges and A. J. Smola, pages 169-184. M.I.T. Press, 1999.
[7] E. Osuna, R. Freund, and F. Girosi. An improved training algorithm for support vector machines. In Proc. of the 1997 IEEE Workshop on Neural Networks for Signal Processing, pages 276-285, Amelia Island, U.S.A., 1997.
[8] E. Osuna and F. Girosi. Reducing the run-time complexity of support vector machines. In ICPR'98, Brisbane, Australia, August 1998.
[9] F. Perez-Cruz, A. Navia-Vazquez
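The IRWLS optimizing engine named in the conclusions reduces each step to solving one linear system. As a hedged illustration of the general IRWLS idea, in the robust-regression setting of Holland and Welsch [4] rather than the paper's SVC-specific formulation, the sketch below repeatedly solves the weighted normal equations (Xᵀ W X) w = Xᵀ W y, with a small Gaussian-elimination routine standing in for the LU-decomposition solve mentioned in the text; all function names are ours.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting: a stand-in for the
    # LU-decomposition solve the paper relies on at each IRWLS step.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):  # back substitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def irwls(X, y, iters=20, eps=1e-8):
    """Iteratively re-weighted least squares for L1-like robust regression:
    re-solve the weighted normal equations with weights 1/|residual|."""
    n, d = len(X), len(X[0])
    weights = [1.0] * n
    w = [0.0] * d
    for _ in range(iters):
        # Assemble X^T W X and X^T W y, then solve the linear system.
        A = [[sum(weights[i] * X[i][j] * X[i][k] for i in range(n))
              for k in range(d)] for j in range(d)]
        b = [sum(weights[i] * X[i][j] * y[i] for i in range(n)) for j in range(d)]
        w = solve(A, b)
        residuals = [y[i] - sum(X[i][j] * w[j] for j in range(d)) for i in range(n)]
        weights = [1.0 / max(abs(r), eps) for r in residuals]  # down-weight outliers
    return w
```

On a toy line y = 2x with one gross outlier, the re-weighting drives the slope back to 2, whereas a single unweighted least-squares solve would be pulled far off; the point of the sketch is only that each iteration is one cheap linear solve, which is what makes IRWLS-style engines fast.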

6 0.57180136 7 nips-2000-A New Approximate Maximal Margin Classification Algorithm
7 0.53676808 111 nips-2000-Regularized Winnow Methods
8 0.53494686 4 nips-2000-A Linear Programming Approach to Novelty Detection
9 0.52842599 74 nips-2000-Kernel Expansions with Unlabeled Examples
10 0.52611899 21 nips-2000-Algorithmic Stability and Generalization Performance
11 0.51113403 119 nips-2000-Some New Bounds on the Generalization Error of Combined Classifiers
12 0.51016396 100 nips-2000-Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks
13 0.50907433 75 nips-2000-Large Scale Bayes Point Machines
14 0.50180966 70 nips-2000-Incremental and Decremental Support Vector Machine Learning
15 0.49707237 61 nips-2000-Generalizable Singular Value Decomposition for Ill-posed Datasets
16 0.49147648 68 nips-2000-Improved Output Coding for Classification Using Continuous Relaxation
17 0.47128114 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition
18 0.46735498 93 nips-2000-On Iterative Krylov-Dogleg Trust-Region Steps for Solving Neural Networks Nonlinear Least Squares Problems
19 0.46566984 133 nips-2000-The Kernel Gibbs Sampler
20 0.46054977 12 nips-2000-A Support Vector Method for Clustering