Author: Christian Igel, Verena Heidrich-Meisner, Tobias Glasmachers
Abstract: SHARK is an object-oriented library for the design of adaptive systems. It comprises methods for single- and multi-objective optimization (e.g., evolutionary and gradient-based algorithms) as well as kernel-based methods, neural networks, and other machine learning techniques.

Keywords: machine learning software, neural networks, kernel methods, evolutionary algorithms, optimization, multi-objective optimization

1. Overview

SHARK is a modular C++ library for the design and optimization of adaptive systems. It serves as a toolbox for real-world applications and basic research in computational intelligence and machine learning. The library provides methods for single- and multi-objective optimization, in particular evolutionary and gradient-based algorithms, kernel-based learning methods, neural networks, and many other machine learning techniques. Its main design criteria are flexibility and speed. Here we restrict the description of SHARK to its core components, although the library contains plenty of additional functionality. Further information can be obtained from the HTML documentation and tutorials. More than 60 illustrative example programs serve as starting points for using SHARK.

2. Basic Tools—Rng, Array, and LinAlg

The library provides general auxiliary functions and data structures for the development of machine learning algorithms. The Rng module generates reproducible and platform-independent sequences of pseudo-random numbers, which can be drawn from 14 predefined discrete and continuous parametric distributions. The Array class provides dynamic array templates of arbitrary type and dimension as well as basic operations acting on these templates. LinAlg implements linear algebra algorithms such as matrix inversion and singular value decomposition. (A short usage sketch follows Section 3.)

3. ReClaM—Regression and Classification Methods

The goal of the ReClaM module is to provide machine learning algorithms for supervised classification and regression in a unified, modular framework. It is built like a construction kit, where the main building blocks are adaptive data processing models, error functions, and optimization algorithms.

[Figure: schematic of the ReClaM construction kit; the optimizer interface exposes the methods init(...) and optimize(...).]
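To make the basic tools of Section 2 concrete, the following minimal sketch seeds the global random number generator and fills a two-dimensional Array with random draws. It is a sketch under assumptions: the header paths and call signatures (Rng::seed, Rng::uni, Rng::gauss, Array<double>, dim) follow the conventions of SHARK's example programs but are not guaranteed here; the HTML documentation gives the exact interface.

    // Sketch of Rng and Array usage; header paths and signatures
    // are assumptions based on SHARK's documented examples.
    #include <Rng/GlobalRng.h>
    #include <Array/Array.h>

    int main()
    {
        Rng::seed(42);                       // reproducible, platform-independent sequence

        Array<double> data(100, 2);          // dynamic 100 x 2 array of doubles
        for (unsigned i = 0; i < data.dim(0); i++) {
            data(i, 0) = Rng::uni(-1.0, 1.0); // uniform draw from [-1, 1]
            data(i, 1) = Rng::gauss();        // standard normal draw
        }
        return 0;
    }

LinAlg routines such as matrix inversion and singular value decomposition operate on arrays of this kind.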
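The init(...) and optimize(...) methods shown in the figure suggest the usage pattern sketched below: pick a model, an error function, and an optimizer, then iterate. This is an assumption-laden illustration rather than the definitive API; FFNet, createConnectionMatrix, MeanSquaredError, and IRpropPlus are names taken from SHARK's example programs, and the exact signatures may differ from what is shown.

    // Sketch of the ReClaM construction kit: a model (FFNet), an error
    // function (MeanSquaredError), and an optimizer (IRpropPlus).
    // All names and signatures are assumptions; see the SHARK examples.
    #include <Rng/GlobalRng.h>
    #include <Array/Array.h>
    #include <ReClaM/FFNet.h>
    #include <ReClaM/MeanSquaredError.h>
    #include <ReClaM/Rprop.h>
    #include <ReClaM/createConnectionMatrix.h>

    int main()
    {
        Rng::seed(42);

        // toy regression data: two inputs, one noisy target
        Array<double> input(100, 2), target(100, 1);
        for (unsigned i = 0; i < 100; i++) {
            input(i, 0) = Rng::uni(-1.0, 1.0);
            input(i, 1) = Rng::uni(-1.0, 1.0);
            target(i, 0) = input(i, 0) * input(i, 1) + 0.1 * Rng::gauss();
        }

        // model: feed-forward network with one hidden layer of 5 units
        Array<int> con;
        createConnectionMatrix(con, 2, 5, 1);
        FFNet model(2, 1, con);
        model.initWeights(-0.1, 0.1);

        MeanSquaredError error;   // error function
        IRpropPlus optimizer;     // gradient-based optimizer

        optimizer.init(model);    // cf. init(...) in the figure
        for (unsigned epoch = 0; epoch < 100; epoch++)
            optimizer.optimize(model, error, input, target);  // cf. optimize(...)

        return 0;
    }

The point of the construction-kit design is that the three blocks vary independently: replacing the network with a kernel model and the optimizer with a suitable solver should leave the surrounding training loop unchanged.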