jmlr jmlr2009 jmlr2009-26 knowledge-graph by maker-knowledge-mining

26 jmlr-2009-Dlib-ml: A Machine Learning Toolkit    (Machine Learning Open Source Software Paper)


Source: pdf

Author: Davis E. King

Abstract: There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-ml is an open source library, targeted at both engineers and research scientists, which aims to provide a similarly rich environment for developing machine learning software in the C++ language. Towards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS support. It also houses implementations of algorithms for performing inference in Bayesian networks and kernel-based methods for classification, regression, clustering, anomaly detection, and feature ranking. To enable easy use of these tools, the entire library has been developed with contract programming, which provides complete and precise documentation as well as powerful debugging tools. Keywords: kernel-methods, svm, rvm, kernel clustering, C++, Bayesian networks

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Northrop Grumman ES, ATR and Image Exploitation Group, Baltimore, Maryland, USA. Editor: Soeren Sonnenburg. Abstract: There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. [sent-4, score-0.422]

2 Dlib-ml is an open source library, targeted at both engineers and research scientists, which aims to provide a similarly rich environment for developing machine learning software in the C++ language. [sent-5, score-0.452]

3 Towards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS support. [sent-6, score-0.28]

4 It also houses implementations of algorithms for performing inference in Bayesian networks and kernel-based methods for classification, regression, clustering, anomaly detection, and feature ranking. [sent-7, score-0.185]

5 To enable easy use of these tools, the entire library has been developed with contract programming, which provides complete and precise documentation as well as powerful debugging tools. [sent-8, score-0.636]

6 Keywords: kernel-methods, svm, rvm, kernel clustering, C++, Bayesian networks [sent-9, score-0.092]

7 Introduction: Dlib-ml is a cross-platform open source software library written in the C++ programming language. [sent-10, score-0.589]

8 Its design is heavily influenced by ideas from design by contract and component-based software engineering. [sent-11, score-0.382]

9 This means it is first and foremost a collection of independent software components, each accompanied by extensive documentation and thorough debugging modes. [sent-12, score-0.363]

10 Moreover, the library is intended to be useful in both research and real world commercial projects and has been carefully designed to make it easy to integrate into a user’s C++ application. [sent-13, score-0.515]

11 However, many of these libraries focus on providing a good environment for doing research using languages other than C++. [sent-15, score-0.29]

12 Two examples of this kind of project are the Shogun (Sonnenburg et al. [sent-16, score-0.045]

13 , 2006) and Torch (Collobert and Bengio, 2001) toolkits which, while they are implemented in C++, are not focused on providing support for developing machine learning software in that language. [sent-17, score-0.381]

14 Instead they are primarily intended to be used with languages like R, Python, Matlab, or Lua. [sent-18, score-0.118]

15 Then there are toolkits such as Shark (Igel et al. [sent-19, score-0.174]

16 , 2008) and dlib-ml which are explicitly targeted at users who wish to develop software in C++. [sent-20, score-0.234]

17 Given these considerations, dlib-ml attempts to help fill some of the gaps in tool support not already filled by libraries such as Shark. [sent-21, score-0.234]

18 It is hoped that these efforts will prove useful for researchers and engineers who wish to develop machine learning software in this language. [sent-22, score-0.295]

19 Elements of the Library: The library is composed of the four distinct components shown in Figure 1. [sent-28, score-0.386]

20 The linear algebra component provides the core functionality, while the other three implement various useful tools. [sent-29, score-0.166]

21 This paper addresses the two main components, linear algebra and machine learning tools. [sent-30, score-0.123]

22 Linear Algebra: The design of the linear algebra component of the library is based on the template expression techniques popularized by Veldhuizen and Ponnambalam (1996) in the Blitz++ numerical software. [sent-32, score-0.649]

23 This technique allows an author to write simple Matlab-like expressions that, when compiled, execute with speed comparable to hand-optimized C code. [sent-33, score-0.042]
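The template-expression technique mentioned above can be sketched in a few lines. This is an illustrative toy, not dlib's actual implementation: a sum of vectors builds a lightweight expression object rather than temporaries, and the whole expression is evaluated in one fused loop at assignment time.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal sketch of the expression-template idea: "a + b + c" builds a
// lightweight Sum node instead of allocating intermediate vectors, and is
// evaluated element by element only when assigned to a concrete vector.
template <class L, class R>
struct Sum {
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Vec {
    std::vector<double> data;
    explicit Vec(std::size_t n) : data(n, 0.0) {}
    double& operator[](std::size_t i) { return data[i]; }
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assigning from any expression evaluates it in a single fused loop.
    template <class E>
    Vec& operator=(const E& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
        return *this;
    }
};

// Adding two vector-like operands produces an unevaluated expression node.
template <class L, class R>
Sum<L, R> operator+(const L& l, const R& r) { return Sum<L, R>{l, r}; }
```

With this machinery, `r = a + b + c;` compiles down to one loop over the elements, which is the source of the hand-optimized-C performance the text describes.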

24 The dlib-ml implementation extends this original design in a number of ways. [sent-34, score-0.066]

25 Most notably, the library can use the BLAS when available, meaning that the performance of code developed using dlib-ml can gain the speed of highly optimized libraries such as ATLAS or the Intel MKL while still using a very simple syntax. [sent-35, score-0.572]

26 Performing these transformations by hand is tedious and error prone. [sent-37, score-0.098]

27 Dlib-ml automatically performs these transformations on all expressions and invokes the appropriate BLAS calls. [sent-38, score-0.098]

28 This enables the user to write equations in the form most intuitive to them and leave these details of software optimization to the library. [sent-39, score-0.227]
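To make concrete what the library automates, here is the kind of hand transformation a user would otherwise write. The function name is hypothetical; the point is that an expression like `trans(A)*B` expands to the loop nest below, whereas a BLAS-aware library recognizes the pattern and issues a single `dgemm` call with the transpose flag set, with no explicit transpose ever materialized.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// C = A^T * B written out by hand: tedious and easy to get wrong, which is
// exactly the bookkeeping the library's BLAS bindings take over. A is n x m
// (so A^T is m x n) and B is n x p.
Matrix transpose_times(const Matrix& A, const Matrix& B) {
    std::size_t n = A.size();
    std::size_t m = A[0].size();
    std::size_t p = B[0].size();
    Matrix C(m, std::vector<double>(p, 0.0));
    for (std::size_t i = 0; i < m; ++i)
        for (std::size_t j = 0; j < p; ++j)
            for (std::size_t k = 0; k < n; ++k)
                C[i][j] += A[k][i] * B[k][j];  // read A transposed in place
    return C;
}
```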

29 This is a feature not found in the supporting tools of other C++ machine learning libraries. [sent-40, score-0.08]

30 Machine Learning Tools: A major design goal of this portion of the library is to provide a highly modular and simple architecture for dealing with kernel algorithms. [sent-42, score-0.544]

31 In particular, each algorithm is parameterized to allow a user to supply either one of the predefined dlib-ml kernels or a new user-defined kernel. [sent-43, score-0.177]

32 Moreover, the implementations of the algorithms are totally separated from the data on which they operate. [sent-44, score-0.05]

33 This makes the dlib-ml implementation generic enough to operate on any kind of data, be it column vectors, images, or some other form of structured data. [sent-45, score-0.19]

34 Many libraries allow arbitrary precomputed kernels and some even allow user defined kernels but have interfaces which restrict them to operating on column vectors. [sent-48, score-0.435]

35 However, none allow the flexibility to operate directly on arbitrary objects; dlib-ml does, which makes it much easier to apply custom kernels in the case where the kernels operate on objects other than fixed-length vectors. [sent-49, score-0.399]
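The design described above can be sketched as follows. The names are illustrative, not dlib's declarations: an algorithm takes the kernel as a template parameter, so a user-defined kernel over any object type, here `std::string`, plugs in without the data ever being forced into column vectors.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// A toy user-defined kernel on strings: counts position-wise matching
// characters. Any callable comparing two samples would do.
struct match_kernel {
    double operator()(const std::string& a, const std::string& b) const {
        std::size_t n = std::min(a.size(), b.size());
        double s = 0;
        for (std::size_t i = 0; i < n; ++i)
            if (a[i] == b[i]) s += 1;
        return s;
    }
};

// A generic algorithm parameterized on the kernel: it needs nothing from
// the samples except that the kernel can compare them.
template <class Kernel, class Sample>
std::vector<std::vector<double>>
kernel_matrix(const Kernel& k, const std::vector<Sample>& xs) {
    std::vector<std::vector<double>> K(xs.size(),
                                       std::vector<double>(xs.size()));
    for (std::size_t i = 0; i < xs.size(); ++i)
        for (std::size_t j = 0; j < xs.size(); ++j)
            K[i][j] = k(xs[i], xs[j]);
    return K;
}
```

Swapping in a precomputed kernel or one over images requires changing only the kernel type, not the algorithm.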

36 The library provides implementations of popular algorithms such as RBF networks and support vector machines for classification. [sent-50, score-0.487]

37 It also includes algorithms not present in other major ML toolkits such as relevance vector machines for classification and regression (Tipping and Faul, 2003). [sent-51, score-0.225]

38 All of these algorithms are implemented as generic trainer objects with a standard interface. [sent-52, score-0.322]

39 This generic trainer interface, along with the contract programming approach, makes the library easily extensible by other developers. [sent-54, score-0.808]
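The standard trainer interface can be sketched like this (hypothetical types, not dlib's actual declarations): every trainer exposes a `train()` that takes samples and labels and returns a decision function, so trainers can be swapped without touching surrounding code.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// The object a trainer returns: callable on a sample, positive output
// means class +1.
struct decision_function_t {
    double threshold;
    double operator()(double x) const { return x - threshold; }
};

// A deliberately trivial trainer obeying the interface: it thresholds at
// the midpoint of the two class means. Real trainers (SVM, RVM, ...) have
// the same shape, so caller code is indifferent to which one it holds.
struct midpoint_trainer {
    decision_function_t train(const std::vector<double>& samples,
                              const std::vector<int>& labels) const {
        double pos = 0, neg = 0;
        std::size_t np = 0, nn = 0;
        for (std::size_t i = 0; i < samples.size(); ++i) {
            if (labels[i] > 0) { pos += samples[i]; ++np; }
            else               { neg += samples[i]; ++nn; }
        }
        return decision_function_t{(pos / np + neg / nn) / 2};
    }
};
```

A new algorithm becomes available to all existing code simply by implementing `train()` with this signature, which is the extensibility the text attributes to the interface.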

40 Another good example of a generic kernel algorithm provided by the library is the kernel RLS technique introduced by Engel et al. [sent-55, score-0.712]

41 It is a kernelized version of the famous recursive least squares filter, and functions as an excellent online regression method. [sent-57, score-0.17]

42 With it, Engel introduced a simple but very effective technique for producing sparse outputs from kernel learning algorithms. [sent-58, score-0.181]

43 Engel’s sparsification technique is also used by one of dlib-ml’s most versatile tools, the kcentroid object. [sent-59, score-0.31]

44 It is a general utility for representing a weighted sum of sample points in a kernel induced feature space. [sent-60, score-0.092]

45 It can be used to easily kernelize any algorithm that requires only the ability to perform vector addition, subtraction, scalar multiplication, and inner products. [sent-61, score-0.048]

46 The kcentroid object enables the library to provide a number of useful kernel-based machine learning algorithms. [sent-62, score-0.614]

47 The most straightforward of these is online anomaly detection, which simply marks data samples as novel if their distance from the centroid of a previously observed body of data is large. [sent-63, score-0.127]
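The centroid-in-feature-space idea behind this can be sketched directly (simplified: batch rather than online, and without Engel's sparsification, so this is not the kcentroid itself). The squared distance from φ(x) to the mean of φ(x₁)…φ(xₘ) needs only kernel evaluations: ||φ(x) − c||² = k(x,x) − (2/m) Σᵢ k(x,xᵢ) + (1/m²) Σᵢⱼ k(xᵢ,xⱼ).

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Squared distance from phi(x) to the centroid of the mapped samples,
// computed entirely through the kernel (the "kernel trick").
template <class Kernel, class Sample>
double dist2_to_centroid(const Kernel& k, const std::vector<Sample>& xs,
                         const Sample& x) {
    const double m = static_cast<double>(xs.size());
    double cross = 0, gram = 0;
    for (const auto& xi : xs) cross += k(x, xi);
    for (const auto& xi : xs)
        for (const auto& xj : xs) gram += k(xi, xj);
    return k(x, x) - 2.0 * cross / m + gram / (m * m);
}

// Anomaly detection as described in the text: flag x as novel when it is
// far from the centroid of previously observed data.
template <class Kernel, class Sample>
bool is_novel(const Kernel& k, const std::vector<Sample>& xs,
              const Sample& x, double radius2) {
    return dist2_to_centroid(k, xs, x) > radius2;
}
```

With a linear kernel this reduces to distance from the ordinary mean; with an RBF kernel the same code yields a nonlinear novelty detector.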

48 Another straightforward application of this technique is in kernelized cluster analysis. [sent-67, score-0.128]

49 Using the kcentroid it is easy to create sparse kernel clustering algorithms. [sent-68, score-0.427]

50 To demonstrate this, the library comes with a sparse kernel k-means algorithm. [sent-69, score-0.525]

51 One of the library's two SVM solvers is essentially a reimplementation of LIBSVM (Chang and Lin, 2001), but with the generic parameterized kernel approach used in the rest of the library. [sent-71, score-0.235]

52 This solver has roughly the same CPU and memory utilization characteristics as LIBSVM. [sent-72, score-0.112]

53 The other SVM solver is a kernelized version of the Pegasos algorithm introduced by Shalev-Shwartz et al. [sent-73, score-0.15]

54 It is built using the kcentroid and thus produces sparse outputs. [sent-75, score-0.275]
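The Pegasos idea referenced above is stochastic subgradient descent on the SVM objective with step size 1/(λt). The sketch below shows it in its linear one-dimensional form for brevity; dlib-ml's solver is the kernelized variant, which keeps the weight vector as a sparse weighted sum of samples via the kcentroid rather than an explicit `w`.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Linear 1-D Pegasos sketch: minimize (lambda/2) w^2 + avg hinge loss.
// Samples are visited cyclically here instead of being drawn at random,
// purely to keep the example deterministic.
double pegasos_train(const std::vector<double>& xs, const std::vector<int>& ys,
                     double lambda, std::size_t iters) {
    double w = 0;
    for (std::size_t t = 1; t <= iters; ++t) {
        std::size_t i = t % xs.size();
        double eta = 1.0 / (lambda * t);       // Pegasos step size 1/(lambda*t)
        if (ys[i] * w * xs[i] < 1.0)           // margin violated: hinge step
            w = (1.0 - eta * lambda) * w + eta * ys[i] * xs[i];
        else                                   // only the regularizer pulls
            w = (1.0 - eta * lambda) * w;
    }
    return w;
}
```

In the kernelized version, the two update branches become scaling and point-insertion operations on the kcentroid's weighted sample set, which is where the sparse outputs come from.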

55 Availability and Requirements: The library is released under the Boost Software License, allowing it to be incorporated into both open-source and commercial software. [sent-77, score-0.46]

56 It requires no additional libraries, does not need to be configured or installed, and is frequently tested on MS Windows, Linux and MacOS X but should work with any ISO C++ compliant compiler. [sent-78, score-0.048]

57 Note that dlib-ml is a subset of a larger project named dlib hosted at http://dclib. [sent-79, score-0.259]

58 Dlib is a general purpose software development library containing a graphical application for creating Bayesian networks as well as tools for handling threads, network I/O, and numerous other tasks. [sent-82, score-0.626]

59 Dlib-ml is available from the dlib project’s download page on SourceForge. [sent-83, score-0.171]

60 Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. [sent-107, score-0.051]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('library', 0.386), ('trans', 0.285), ('kcentroid', 0.228), ('blas', 0.217), ('libraries', 0.186), ('engel', 0.174), ('toolkits', 0.174), ('dlib', 0.171), ('software', 0.16), ('igel', 0.145), ('trainer', 0.145), ('algebra', 0.123), ('suttorp', 0.114), ('veldhuizen', 0.114), ('generic', 0.1), ('debugging', 0.097), ('tipping', 0.097), ('kernel', 0.092), ('sonnenburg', 0.09), ('contract', 0.09), ('operate', 0.09), ('extensible', 0.087), ('anomaly', 0.087), ('collobert', 0.087), ('davis', 0.087), ('engineers', 0.087), ('kernelized', 0.086), ('tools', 0.08), ('python', 0.08), ('objects', 0.077), ('template', 0.074), ('commercial', 0.074), ('targeted', 0.074), ('libsvm', 0.074), ('kernels', 0.071), ('toolkit', 0.07), ('user', 0.067), ('design', 0.066), ('solver', 0.064), ('documentation', 0.063), ('christian', 0.063), ('languages', 0.063), ('pegasos', 0.063), ('clustering', 0.06), ('chang', 0.057), ('ml', 0.057), ('issn', 0.057), ('intended', 0.055), ('transformations', 0.055), ('machines', 0.051), ('implementations', 0.05), ('rls', 0.048), ('atlas', 0.048), ('houses', 0.048), ('kernelize', 0.048), ('compliant', 0.048), ('gaps', 0.048), ('gunnar', 0.048), ('gured', 0.048), ('hoped', 0.048), ('lib', 0.048), ('rvm', 0.048), ('shogun', 0.048), ('todd', 0.048), ('transposes', 0.048), ('utilization', 0.048), ('yaakov', 0.048), ('developing', 0.047), ('sparse', 0.047), ('project', 0.045), ('ing', 0.044), ('mkl', 0.043), ('maryland', 0.043), ('accompanied', 0.043), ('invokes', 0.043), ('functionality', 0.043), ('hosted', 0.043), ('installed', 0.043), ('maximisation', 0.043), ('ronan', 0.043), ('samy', 0.043), ('subtraction', 0.043), ('tedious', 0.043), ('parameterized', 0.043), ('recursive', 0.043), ('source', 0.043), ('technique', 0.042), ('environment', 0.041), ('matlab', 0.041), ('excellent', 0.041), ('versatile', 0.04), ('reorder', 0.04), ('fer', 0.04), ('precomputed', 0.04), ('boost', 0.04), ('achine', 0.04), ('marks', 0.04), ('multiplies', 0.04), ('resilient', 0.04), 
('shie', 0.04)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 26 jmlr-2009-Dlib-ml: A Machine Learning Toolkit    (Machine Learning Open Source Software Paper)

Author: Davis E. King

Abstract: There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-ml is an open source library, targeted at both engineers and research scientists, which aims to provide a similarly rich environment for developing machine learning software in the C++ language. Towards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS support. It also houses implementations of algorithms for performing inference in Bayesian networks and kernel-based methods for classification, regression, clustering, anomaly detection, and feature ranking. To enable easy use of these tools, the entire library has been developed with contract programming, which provides complete and precise documentation as well as powerful debugging tools. Keywords: kernel-methods, svm, rvm, kernel clustering, C++, Bayesian networks

2 0.18849146 43 jmlr-2009-Java-ML: A Machine Learning Library    (Machine Learning Open Source Software Paper)

Author: Thomas Abeel, Yves Van de Peer, Yvan Saeys

Abstract: Java-ML is a collection of machine learning and data mining algorithms, which aims to be a readily usable and easily extensible API for both software developers and research scientists. The interfaces for each type of algorithm are kept simple and algorithms strictly follow their respective interface. Comparing different classifiers or clustering algorithms is therefore straightforward, and implementing new algorithms is also easy. The implementations of the algorithms are clearly written, properly documented and can thus be used as a reference. The library is written in Java and is available from http://java-ml.sourceforge.net/ under the GNU GPL license. Keywords: open source, machine learning, data mining, java library, clustering, feature selection, classification

3 0.1062993 39 jmlr-2009-Hybrid MPI OpenMP Parallel Linear Support Vector Machine Training

Author: Kristian Woodsend, Jacek Gondzio

Abstract: Support vector machines are a powerful machine learning technology, but the training process involves a dense quadratic optimization problem and is computationally challenging. A parallel implementation of linear Support Vector Machine training has been developed, using a combination of MPI and OpenMP. Using an interior point method for the optimization and a reformulation that avoids the dense Hessian matrix, the structure of the augmented system matrix is exploited to partition data and computations amongst parallel processors efficiently. The new implementation has been applied to solve problems from the PASCAL Challenge on Large-scale Learning. We show that our approach is competitive, and is able to solve problems in the Challenge many times faster than other parallel approaches. We also demonstrate that the hybrid version performs more efficiently than the version using pure MPI. Keywords: linear SVM training, hybrid parallelism, largescale learning, interior point method

4 0.086963855 76 jmlr-2009-Python Environment for Bayesian Learning: Inferring the Structure of Bayesian Networks from Knowledge and Data    (Machine Learning Open Source Software Paper)

Author: Abhik Shah, Peter Woolf

Abstract: In this paper, we introduce PEBL, a Python library and application for learning Bayesian network structure from data and prior knowledge that provides features unmatched by alternative software packages: the ability to use interventional data, flexible specification of structural priors, modeling with hidden variables and exploitation of parallel processing. PEBL is released under the MIT open-source license, can be installed from the Python Package Index and is available at http://pebl-project.googlecode.com. Keywords: Bayesian networks, python, open source software

5 0.067718856 77 jmlr-2009-RL-Glue: Language-Independent Software for Reinforcement-Learning Experiments    (Machine Learning Open Source Software Paper)

Author: Brian Tanner, Adam White

Abstract: RL-Glue is a standard, language-independent software package for reinforcement-learning experiments. The standardization provided by RL-Glue facilitates code sharing and collaboration. Code sharing reduces the need to re-engineer tasks and experimental apparatus, both common barriers to comparatively evaluating new ideas in the context of the literature. Our software features a minimalist interface and works with several languages and computing platforms. RL-Glue compatibility can be extended to any programming language that supports network socket communication. RL-Glue has been used to teach classes, to run international competitions, and is currently used by several other open-source software and hardware projects. Keywords: reinforcement learning, empirical evaluation, standardization, open source

6 0.063734733 60 jmlr-2009-Nieme: Large-Scale Energy-Based Models    (Machine Learning Open Source Software Paper)

7 0.055762596 56 jmlr-2009-Model Monitor (M2): Evaluating, Comparing, and Monitoring Models    (Machine Learning Open Source Software Paper)

8 0.052939355 2 jmlr-2009-A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization

9 0.040539991 20 jmlr-2009-DL-Learner: Learning Concepts in Description Logics

10 0.040296882 4 jmlr-2009-A Survey of Accuracy Evaluation Metrics of Recommendation Tasks

11 0.039396778 78 jmlr-2009-Refinement of Reproducing Kernels

12 0.03725468 69 jmlr-2009-Optimized Cutting Plane Algorithm for Large-Scale Risk Minimization

13 0.037214711 38 jmlr-2009-Hash Kernels for Structured Data

14 0.037057992 98 jmlr-2009-Universal Kernel-Based Learning with Applications to Regular Languages    (Special Topic on Mining and Learning with Graphs and Relations)

15 0.032851048 96 jmlr-2009-Transfer Learning for Reinforcement Learning Domains: A Survey

16 0.031813353 86 jmlr-2009-Similarity-based Classification: Concepts and Algorithms

17 0.031683873 23 jmlr-2009-Discriminative Learning Under Covariate Shift

18 0.031264793 31 jmlr-2009-Evolutionary Model Type Selection for Global Surrogate Modeling

19 0.030807154 87 jmlr-2009-Sparse Online Learning via Truncated Gradient

20 0.029597348 22 jmlr-2009-Deterministic Error Analysis of Support Vector Regression and Related Regularized Kernel Methods


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.158), (1, -0.114), (2, 0.075), (3, -0.124), (4, 0.057), (5, -0.129), (6, 0.257), (7, 0.029), (8, 0.142), (9, 0.133), (10, -0.094), (11, 0.387), (12, -0.051), (13, 0.214), (14, -0.029), (15, 0.046), (16, 0.048), (17, -0.099), (18, 0.089), (19, 0.046), (20, 0.003), (21, 0.021), (22, 0.009), (23, 0.021), (24, 0.06), (25, 0.004), (26, 0.069), (27, 0.102), (28, -0.011), (29, -0.038), (30, -0.012), (31, -0.057), (32, 0.088), (33, -0.052), (34, 0.026), (35, -0.044), (36, -0.028), (37, 0.076), (38, -0.051), (39, -0.014), (40, -0.013), (41, 0.109), (42, -0.079), (43, -0.011), (44, -0.154), (45, -0.005), (46, 0.054), (47, -0.063), (48, -0.017), (49, 0.037)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97381538 26 jmlr-2009-Dlib-ml: A Machine Learning Toolkit    (Machine Learning Open Source Software Paper)

Author: Davis E. King

Abstract: There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-ml is an open source library, targeted at both engineers and research scientists, which aims to provide a similarly rich environment for developing machine learning software in the C++ language. Towards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS support. It also houses implementations of algorithms for performing inference in Bayesian networks and kernel-based methods for classification, regression, clustering, anomaly detection, and feature ranking. To enable easy use of these tools, the entire library has been developed with contract programming, which provides complete and precise documentation as well as powerful debugging tools. Keywords: kernel-methods, svm, rvm, kernel clustering, C++, Bayesian networks

2 0.75463516 43 jmlr-2009-Java-ML: A Machine Learning Library    (Machine Learning Open Source Software Paper)

Author: Thomas Abeel, Yves Van de Peer, Yvan Saeys

Abstract: Java-ML is a collection of machine learning and data mining algorithms, which aims to be a readily usable and easily extensible API for both software developers and research scientists. The interfaces for each type of algorithm are kept simple and algorithms strictly follow their respective interface. Comparing different classifiers or clustering algorithms is therefore straightforward, and implementing new algorithms is also easy. The implementations of the algorithms are clearly written, properly documented and can thus be used as a reference. The library is written in Java and is available from http://java-ml.sourceforge.net/ under the GNU GPL license. Keywords: open source, machine learning, data mining, java library, clustering, feature selection, classification

3 0.64240968 76 jmlr-2009-Python Environment for Bayesian Learning: Inferring the Structure of Bayesian Networks from Knowledge and Data    (Machine Learning Open Source Software Paper)

Author: Abhik Shah, Peter Woolf

Abstract: In this paper, we introduce PEBL, a Python library and application for learning Bayesian network structure from data and prior knowledge that provides features unmatched by alternative software packages: the ability to use interventional data, flexible specification of structural priors, modeling with hidden variables and exploitation of parallel processing. PEBL is released under the MIT open-source license, can be installed from the Python Package Index and is available at http://pebl-project.googlecode.com. Keywords: Bayesian networks, python, open source software

4 0.4145847 39 jmlr-2009-Hybrid MPI OpenMP Parallel Linear Support Vector Machine Training

Author: Kristian Woodsend, Jacek Gondzio

Abstract: Support vector machines are a powerful machine learning technology, but the training process involves a dense quadratic optimization problem and is computationally challenging. A parallel implementation of linear Support Vector Machine training has been developed, using a combination of MPI and OpenMP. Using an interior point method for the optimization and a reformulation that avoids the dense Hessian matrix, the structure of the augmented system matrix is exploited to partition data and computations amongst parallel processors efficiently. The new implementation has been applied to solve problems from the PASCAL Challenge on Large-scale Learning. We show that our approach is competitive, and is able to solve problems in the Challenge many times faster than other parallel approaches. We also demonstrate that the hybrid version performs more efficiently than the version using pure MPI. Keywords: linear SVM training, hybrid parallelism, large-scale learning, interior point method

5 0.32233989 77 jmlr-2009-RL-Glue: Language-Independent Software for Reinforcement-Learning Experiments    (Machine Learning Open Source Software Paper)

Author: Brian Tanner, Adam White

Abstract: RL-Glue is a standard, language-independent software package for reinforcement-learning experiments. The standardization provided by RL-Glue facilitates code sharing and collaboration. Code sharing reduces the need to re-engineer tasks and experimental apparatus, both common barriers to comparatively evaluating new ideas in the context of the literature. Our software features a minimalist interface and works with several languages and computing platforms. RL-Glue compatibility can be extended to any programming language that supports network socket communication. RL-Glue has been used to teach classes, to run international competitions, and is currently used by several other open-source software and hardware projects. Keywords: reinforcement learning, empirical evaluation, standardization, open source

6 0.29847741 56 jmlr-2009-Model Monitor (M2): Evaluating, Comparing, and Monitoring Models    (Machine Learning Open Source Software Paper)

7 0.28353003 60 jmlr-2009-Nieme: Large-Scale Energy-Based Models    (Machine Learning Open Source Software Paper)

8 0.22175407 69 jmlr-2009-Optimized Cutting Plane Algorithm for Large-Scale Risk Minimization

9 0.20570962 38 jmlr-2009-Hash Kernels for Structured Data

10 0.19767822 22 jmlr-2009-Deterministic Error Analysis of Support Vector Regression and Related Regularized Kernel Methods

11 0.18489496 61 jmlr-2009-Nonextensive Information Theoretic Kernels on Measures

12 0.17978518 86 jmlr-2009-Similarity-based Classification: Concepts and Algorithms

13 0.17814942 63 jmlr-2009-On Efficient Large Margin Semisupervised Learning: Method and Theory

14 0.17760453 98 jmlr-2009-Universal Kernel-Based Learning with Applications to Regular Languages    (Special Topic on Mining and Learning with Graphs and Relations)

15 0.16368888 4 jmlr-2009-A Survey of Accuracy Evaluation Metrics of Recommendation Tasks

16 0.1624179 78 jmlr-2009-Refinement of Reproducing Kernels

17 0.16227883 8 jmlr-2009-An Anticorrelation Kernel for Subsystem Training in Multiple Classifier Systems

18 0.16184995 2 jmlr-2009-A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization

19 0.15534417 20 jmlr-2009-DL-Learner: Learning Concepts in Description Logics

20 0.1468313 30 jmlr-2009-Estimation of Sparse Binary Pairwise Markov Networks using Pseudo-likelihoods


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.392), (8, 0.034), (11, 0.021), (21, 0.016), (26, 0.023), (38, 0.094), (47, 0.022), (52, 0.038), (55, 0.02), (58, 0.038), (66, 0.057), (68, 0.02), (90, 0.047), (91, 0.074), (96, 0.039)]
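The (topicId, topicWeight) pairs above are a sparse LDA topic mixture for this paper; paper-to-paper simValues of the kind listed below can be obtained by comparing two such mixtures, for example with cosine similarity. A minimal sketch under that assumption (the second paper's mixture is invented for illustration, and the page's actual measure is unspecified):

```python
import math

def cosine_sparse(u, v):
    """Cosine similarity between two sparse vectors given as {topicId: weight} dicts."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Topic mixture of this paper, transcribed from the listing above.
dlib_topics = dict([(2, 0.392), (8, 0.034), (11, 0.021), (21, 0.016), (26, 0.023),
                    (38, 0.094), (47, 0.022), (52, 0.038), (55, 0.02), (58, 0.038),
                    (66, 0.057), (68, 0.02), (90, 0.047), (91, 0.074), (96, 0.039)])
# Hypothetical mixture of another paper, for illustration only.
other_topics = {2: 0.30, 38: 0.10, 66: 0.05, 77: 0.20}
print(round(cosine_sparse(dlib_topics, other_topics), 3))
```

Only topics present in both mixtures contribute to the dot product, which is why papers sharing the dominant topic (here topicId 2) float to the top of the list below.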

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77364904 26 jmlr-2009-Dlib-ml: A Machine Learning Toolkit    (Machine Learning Open Source Software Paper)

Author: Davis E. King

Abstract: There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-ml is an open source library, targeted at both engineers and research scientists, which aims to provide a similarly rich environment for developing machine learning software in the C++ language. Towards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS support. It also houses implementations of algorithms for performing inference in Bayesian networks and kernel-based methods for classification, regression, clustering, anomaly detection, and feature ranking. To enable easy use of these tools, the entire library has been developed with contract programming, which provides complete and precise documentation as well as powerful debugging tools. Keywords: kernel-methods, svm, rvm, kernel clustering, C++, Bayesian networks

2 0.32318053 29 jmlr-2009-Estimating Labels from Label Proportions

Author: Novi Quadrianto, Alex J. Smola, Tibério S. Caetano, Quoc V. Le

Abstract: Consider the following problem: given sets of unlabeled observations, each set with known label proportions, predict the labels of another set of observations, possibly with known label proportions. This problem occurs in areas like e-commerce, politics, spam filtering and improper content detection. We present consistent estimators which can reconstruct the correct labels with high probability in a uniform convergence sense. Experiments show that our method works well in practice. Keywords: unsupervised learning, Gaussian processes, classification and prediction, probabilistic models, missing variables

3 0.32269898 43 jmlr-2009-Java-ML: A Machine Learning Library    (Machine Learning Open Source Software Paper)

Author: Thomas Abeel, Yves Van de Peer, Yvan Saeys

Abstract: Java-ML is a collection of machine learning and data mining algorithms, which aims to be a readily usable and easily extensible API for both software developers and research scientists. The interfaces for each type of algorithm are kept simple and algorithms strictly follow their respective interface. Comparing different classifiers or clustering algorithms is therefore straightforward, and implementing new algorithms is also easy. The implementations of the algorithms are clearly written, properly documented and can thus be used as a reference. The library is written in Java and is available from http://java-ml.sourceforge.net/ under the GNU GPL license. Keywords: open source, machine learning, data mining, java library, clustering, feature selection, classification

4 0.31786406 60 jmlr-2009-Nieme: Large-Scale Energy-Based Models    (Machine Learning Open Source Software Paper)

Author: Francis Maes

Abstract: In this paper we introduce Nieme, a machine learning library for large-scale classification, regression and ranking. Nieme relies on the framework of energy-based models (LeCun et al., 2006) which unifies several learning algorithms ranging from simple perceptrons to recent models such as the pegasos support vector machine or l1-regularized maximum entropy models. This framework also unifies batch and stochastic learning which are both seen as energy minimization problems. Nieme can hence be used in a wide range of situations, but is particularly interesting for large-scale learning tasks where both the examples and the features are processed incrementally. Being able to deal with new incoming features at any time within the learning process is another original feature of the Nieme toolbox. Nieme is released under the GPL license. It is efficiently implemented in C++, it works on Linux, Mac OS X and Windows and provides interfaces for C++, Java and Python. Keywords: large-scale machine learning, classification, ranking, regression, energy-based models, machine learning software

5 0.28935674 76 jmlr-2009-Python Environment for Bayesian Learning: Inferring the Structure of Bayesian Networks from Knowledge and Data    (Machine Learning Open Source Software Paper)

Author: Abhik Shah, Peter Woolf

Abstract: In this paper, we introduce PEBL, a Python library and application for learning Bayesian network structure from data and prior knowledge that provides features unmatched by alternative software packages: the ability to use interventional data, flexible specification of structural priors, modeling with hidden variables and exploitation of parallel processing. PEBL is released under the MIT open-source license, can be installed from the Python Package Index and is available at http://pebl-project.googlecode.com. Keywords: Bayesian networks, python, open source software

6 0.26506647 77 jmlr-2009-RL-Glue: Language-Independent Software for Reinforcement-Learning Experiments    (Machine Learning Open Source Software Paper)

7 0.26396361 75 jmlr-2009-Provably Efficient Learning with Typed Parametric Models

8 0.26175287 85 jmlr-2009-Settable Systems: An Extension of Pearl's Causal Model with Optimization, Equilibrium, and Learning

9 0.25992996 38 jmlr-2009-Hash Kernels for Structured Data

10 0.25868541 69 jmlr-2009-Optimized Cutting Plane Algorithm for Large-Scale Risk Minimization

11 0.25700864 70 jmlr-2009-Particle Swarm Model Selection    (Special Topic on Model Selection)

12 0.2564925 97 jmlr-2009-Ultrahigh Dimensional Feature Selection: Beyond The Linear Model

13 0.25391161 27 jmlr-2009-Efficient Online and Batch Learning Using Forward Backward Splitting

14 0.25297606 58 jmlr-2009-NEUROSVM: An Architecture to Reduce the Effect of the Choice of Kernel on the Performance of SVM

15 0.25294641 48 jmlr-2009-Learning Nondeterministic Classifiers

16 0.25208676 84 jmlr-2009-Scalable Collaborative Filtering Approaches for Large Recommender Systems    (Special Topic on Mining and Learning with Graphs and Relations)

17 0.25201392 99 jmlr-2009-Using Local Dependencies within Batches to Improve Large Margin Classifiers

18 0.25184292 71 jmlr-2009-Perturbation Corrections in Approximate Inference: Mixture Modelling Applications

19 0.25154656 62 jmlr-2009-Nonlinear Models Using Dirichlet Process Mixtures

20 0.25091764 57 jmlr-2009-Multi-task Reinforcement Learning in Partially Observable Stochastic Environments