jmlr jmlr2011 jmlr2011-92 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Jan Saputra Müller, Paul von Bünau, Frank C. Meinecke, Franz J. Király, Klaus-Robert Müller
Abstract: The Stationary Subspace Analysis (SSA) algorithm linearly factorizes a high-dimensional time series into stationary and non-stationary components. The SSA Toolbox is a platform-independent, efficient stand-alone implementation of the SSA algorithm with a graphical user interface written in Java that can also be invoked from the command line and from Matlab. The graphical interface guides the user through the whole process; data can be imported and exported as comma-separated values (CSV) and Matlab .mat files. Keywords: non-stationarities, blind source separation, dimensionality reduction, unsupervised learning
Reference: text
sentIndex sentText sentNum sentScore
1 Journal of Machine Learning Research 12 (2011) 3065-3069 Submitted 10/10; Revised 8/11; Published 10/11 The Stationary Subspace Analysis Toolbox Jan Saputra Müller, Paul von Bünau, Frank C. [sent-1, score-0.196]
2 28/29, 10587 Berlin, Germany Editor: Cheng Soon Ong Abstract The Stationary Subspace Analysis (SSA) algorithm linearly factorizes a high-dimensional time series into stationary and non-stationary components. [sent-15, score-0.185]
3 The SSA Toolbox is a platform-independent, efficient stand-alone implementation of the SSA algorithm with a graphical user interface written in Java that can also be invoked from the command line and from Matlab. [sent-16, score-0.279]
4 The graphical interface guides the user through the whole process; data can be imported and exported as comma-separated values (CSV) and Matlab .mat files. [sent-17, score-0.339]
5 Keywords: non-stationarities, blind source separation, dimensionality reduction, unsupervised learning 1. [sent-19, score-0.031]
6 In particular, when the observed data is a mixture of latent factors that cannot be measured directly, visual inspection of the multivariate time series is not informative for discerning stationary from non-stationary contributions. [sent-21, score-0.185]
7 For example, a single non-stationary factor can be spread out among all channels and make the whole data appear non-stationary, even when all other sources are perfectly stationary. [sent-22, score-0.095]
8 Conversely, a non-stationary component with low power can remain hidden among stronger stationary sources. [sent-23, score-0.185]
9 In electroencephalography (EEG) analysis (Niedermeyer and Lopes da Silva, 2005), for instance, the electrodes on the scalp record a mixture of the activity from a multitude of sources located inside the brain, which we cannot measure individually with non-invasive methods. [sent-24, score-0.296]
10 Thus, in order to distinguish the activity of stationary and non-stationary brain sources, we need to separate their contributions in the measured EEG signals (von Bünau et al. [sent-25, score-0.517]
11 To that end, in the Stationary Subspace Analysis (SSA) model (von Bünau et al. [sent-27, score-0.246]
12 , 2009), the observed data $x(t) \in \mathbb{R}^D$ is assumed to be generated as a linear mixture of $d$ stationary sources $s^{s}(t)$ and $D-d$ non-stationary sources $s^{n}(t)$: $x(t) = A s(t) = \begin{bmatrix} A^{s} & A^{n} \end{bmatrix} \begin{bmatrix} s^{s}(t) \\ s^{n}(t) \end{bmatrix}$. [sent-28, score-0.653]
13 Note that the sources s(t) are not assumed to be independent or uncorrelated. [sent-32, score-0.095]
14 A time series is considered stationary if its mean and covariance are constant over time; that is, a time series $u(t)$ is called stationary if $\mathbb{E}[u(t_1)] = \mathbb{E}[u(t_2)]$ and $\mathbb{E}[u(t_1)u(t_1)^{\top}] = \mathbb{E}[u(t_2)u(t_2)^{\top}]$ for all pairs of time points $t_1, t_2 \ge 0$. [sent-33, score-0.37]
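To make the criterion concrete, here is a minimal sketch, independent of the SSA Toolbox, of how one might test weak stationarity empirically: split the samples into two epochs, estimate mean and covariance on each, and compare. The class name, the two-epoch split and the tolerance are our own illustrative choices, not part of the toolbox.

```java
/**
 * Minimal empirical weak-stationarity check: compare mean and covariance
 * estimated on two halves of the sample. Illustrative only.
 */
public class StationarityCheck {

    // Sample mean of the rows data[from..to) (one sample per row).
    static double[] mean(double[][] data, int from, int to) {
        int d = data[0].length;
        double[] m = new double[d];
        for (int t = from; t < to; t++)
            for (int i = 0; i < d; i++)
                m[i] += data[t][i] / (to - from);
        return m;
    }

    // Sample covariance of the rows data[from..to).
    static double[][] cov(double[][] data, int from, int to) {
        int d = data[0].length;
        double[] m = mean(data, from, to);
        double[][] c = new double[d][d];
        for (int t = from; t < to; t++)
            for (int i = 0; i < d; i++)
                for (int j = 0; j < d; j++)
                    c[i][j] += (data[t][i] - m[i]) * (data[t][j] - m[j]) / (to - from);
        return c;
    }

    // Crude check: largest absolute difference between the two epochs'
    // means and covariances stays below a user-chosen tolerance.
    static boolean looksStationary(double[][] data, double tol) {
        int half = data.length / 2;
        double[] m1 = mean(data, 0, half), m2 = mean(data, half, data.length);
        double[][] c1 = cov(data, 0, half), c2 = cov(data, half, data.length);
        double diff = 0.0;
        for (int i = 0; i < m1.length; i++) {
            diff = Math.max(diff, Math.abs(m1[i] - m2[i]));
            for (int j = 0; j < m1.length; j++)
                diff = Math.max(diff, Math.abs(c1[i][j] - c2[i][j]));
        }
        return diff < tol;
    }
}
```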
15 , 2011) finds the demixing matrix that separates the stationary and non-stationary sources given samples from x(t) by solving a non-convex optimization problem. [sent-38, score-0.31]
16 This yields an estimate of the mixing matrix and of the stationary and non-stationary sources. [sent-39, score-0.185]
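For reference, the following is our paraphrase of the optimization objective from von Bünau et al. (2009), stated up to additive constants and assuming the data have been whitened; the samples are divided into epochs $i = 1, \dots, N$ with empirical means $\mu_i$ and covariances $\Sigma_i$, and $B^{s}$ denotes the projection onto the candidate stationary subspace:

```latex
% Sketch of the SSA objective (our paraphrase of von Buenau et al., 2009),
% up to additive constants, for whitened data.
\mathcal{L}(B^{s})
  = \sum_{i=1}^{N} D_{\mathrm{KL}}\!\left[
      \mathcal{N}\!\bigl(B^{s}\mu_i,\; B^{s}\Sigma_i (B^{s})^{\top}\bigr)
      \,\middle\|\, \mathcal{N}(0, I_d) \right]
  = \frac{1}{2}\sum_{i=1}^{N}\Bigl(
      -\log\det\bigl(B^{s}\Sigma_i (B^{s})^{\top}\bigr)
      + \bigl\lVert B^{s}\mu_i \bigr\rVert^{2} \Bigr) + \mathrm{const.}
```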
17 Capabilities of the SSA Toolbox The SSA Toolbox is a platform-independent implementation of the SSA algorithm with a convenient graphical user interface. [sent-41, score-0.117]
18 The latest release is available from the SSA website. [sent-42, score-0.076]
19 • As a stand-alone application with a graphical user interface. [sent-44, score-0.117]
20 • From Matlab via an efficient in-memory interface through the wrapper script ssa.m. [sent-46, score-0.236]
21 The only requirement is Java 1.5 (released in 2004); native libraries are included for all major platforms with a pure-Java fallback. [sent-52, score-0.103]
22 2 Data Import/Export The stand-alone application can read data from, and write results to, comma-separated values (CSV) files and Matlab's .mat files. [sent-54, score-0.103]
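To illustrate a plausible CSV layout (one sample per row, one channel per column), here is a minimal, hypothetical Java sketch for loading such a file. It is not the toolbox's own importer, whose exact conventions (headers, orientation) are specified in the manual.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical CSV loader: one sample per row, one channel per column. */
public class CsvTimeSeries {
    public static double[][] read(String path) throws IOException {
        List<double[]> rows = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.trim().isEmpty()) continue;      // skip blank lines
                String[] fields = line.split(",");
                double[] row = new double[fields.length];
                for (int i = 0; i < fields.length; i++)
                    row[i] = Double.parseDouble(fields[i].trim());
                rows.add(row);
            }
        }
        return rows.toArray(new double[0][]);             // T x D matrix
    }
}
```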
23 3 Efficiency The efficiency of the toolbox is mainly due to the underlying matrix libraries. [sent-57, score-0.186]
24 The user can choose between COLT, written in pure Java, and the high-performance library jblas (Braun et al. [sent-58, score-0.114]
25 , 2010), which wraps state-of-the-art BLAS and LAPACK implementations, included as native binaries for Windows, Linux and MacOS in 32- and 64-bit versions. [sent-59, score-0.109]
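As a minimal usage sketch of jblas itself, independent of the toolbox's internal code, a matrix product on DoubleMatrix objects dispatches to the native BLAS routines when the binaries are available:

```java
import org.jblas.DoubleMatrix;

// Minimal jblas sketch (not the SSA Toolbox's own code): mmul uses
// the native BLAS matrix-multiply when native binaries are available.
public class JblasExample {
    public static void main(String[] args) {
        DoubleMatrix a = DoubleMatrix.randn(4, 3);  // 4x3 Gaussian random matrix
        DoubleMatrix b = DoubleMatrix.randn(3, 2);
        DoubleMatrix c = a.mmul(b);                 // 4x2 matrix product
        System.out.println(c);
    }
}
```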
26 Figure 1: Graphical user interface of the SSA Toolbox. [sent-75, score-0.198]
27 From top to bottom, the panels correspond to the steps of data import, parameter specification, and export of results. [sent-76, score-0.054]
28 The window also includes a log panel at the bottom, which is not shown here. [sent-77, score-0.032]
29 4 User Interface The graphical user interface of the stand-alone application provides step-by-step guidance through the whole process, from data import and parameter specification to the export of results. [sent-79, score-0.29]
30 The toolbox also suggests sensible parameter values based on heuristics. [sent-80, score-0.186]
31 The log panel, not pictured in Figure 1, shows instructive error and diagnostic messages. [sent-81, score-0.027]
32 5 Matlab Interface The implementation of the SSA algorithm can also be accessed directly from Matlab, using the wrapper script ssa.m. [sent-84, score-0.117]
33 6 Documentation The user manual explains the SSA algorithm, the use of the toolbox and the interpretation of results, and answers frequently asked questions. [sent-88, score-0.118]
34 It also includes a section for developers that provides an overview of the source code and a description of the unit tests. [sent-89, score-0.09]
35 7 Examples The toolbox comes with example data in CSV and .mat format. [sent-91, score-0.186]
36 It also includes a Matlab script for generating synthetic data sets (documented in the manual) and a self-contained Matlab demo, ssa_demo.m. [sent-92, score-0.738]
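As an illustration of what such synthetic data looks like under the mixing model above, here is a minimal Java sketch, our own analogue rather than the shipped Matlab script: d stationary unit-variance Gaussian sources, D - d non-stationary sources with a slowly drifting variance, mixed by a random matrix A.

```java
import java.util.Random;

/** Illustrative generator for the SSA mixing model x(t) = A s(t). */
public class SyntheticSsaData {
    public static double[][] generate(int T, int D, int d, long seed) {
        Random rng = new Random(seed);
        double[][] A = new double[D][D];            // random mixing matrix
        for (int i = 0; i < D; i++)
            for (int j = 0; j < D; j++)
                A[i][j] = rng.nextGaussian();
        double[][] x = new double[T][D];
        for (int t = 0; t < T; t++) {
            double[] s = new double[D];
            for (int j = 0; j < d; j++)             // stationary: unit variance
                s[j] = rng.nextGaussian();
            double scale = 1.0 + Math.sin(2 * Math.PI * t / T); // drifting variance
            for (int j = d; j < D; j++)             // non-stationary sources
                s[j] = scale * rng.nextGaussian();
            for (int i = 0; i < D; i++)             // mix: x(t) = A s(t)
                for (int j = 0; j < D; j++)
                    x[t][i] += A[i][j] * s[j];
        }
        return x;
    }
}
```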
37 8 Developer Access, License and Unit Tests The source code is provided under the BSD license and is available in a separate archive for each released version. [sent-95, score-0.187]
38 The latest version of the source code is available from GitHub, a free hosting service for the git version control system. [sent-96, score-0.196]
39 The source code is fully documented according to the Javadoc conventions and accompanied by a set of unit tests, which are described in the developer section of the user manual. [sent-97, score-0.253]
40 Satoshi Hara, Yoshinobu Kawahara, Takashi Washio, and Paul von Bünau. [sent-103, score-0.196]
41 Stationary subspace analysis as a generalized eigenvalue problem. [sent-104, score-0.053]
42 Motoaki Kawanabe, Wojciech Samek, Paul von Bünau, and Frank Meinecke. [sent-107, score-0.196]
43 An information geometrical view of stationary subspace analysis. [sent-108, score-0.07]
44 Paul von Bünau, Frank C. [sent-126, score-0.196]
45 Finding stationary subspaces in multivariate time series. [sent-129, score-0.185]
46 Finding stationary brain sources in EEG data. [sent-140, score-0.329]
wordName wordTfidf (topN-words)
[('ssa', 0.634), ('nau', 0.246), ('von', 0.196), ('toolbox', 0.186), ('stationary', 0.185), ('meinecke', 0.176), ('tu', 0.148), ('franz', 0.141), ('saputra', 0.141), ('paul', 0.136), ('matlab', 0.125), ('berlin', 0.122), ('uller', 0.12), ('interface', 0.119), ('java', 0.108), ('csv', 0.106), ('kir', 0.106), ('ly', 0.106), ('sources', 0.095), ('eeg', 0.09), ('frank', 0.083), ('user', 0.079), ('script', 0.074), ('aly', 0.07), ('comma', 0.07), ('einecke', 0.07), ('electroencephalography', 0.07), ('niedermeyer', 0.07), ('oolbox', 0.07), ('tationary', 0.07), ('unau', 0.07), ('hara', 0.06), ('documented', 0.06), ('ubspace', 0.06), ('lopes', 0.06), ('jan', 0.058), ('mueller', 0.054), ('kawanabe', 0.054), ('developer', 0.054), ('platforms', 0.054), ('license', 0.054), ('export', 0.054), ('subspace', 0.053), ('native', 0.049), ('import', 0.049), ('brain', 0.049), ('ps', 0.046), ('heidelberg', 0.046), ('released', 0.046), ('latest', 0.046), ('muller', 0.043), ('command', 0.043), ('wrapper', 0.043), ('ss', 0.041), ('bernstein', 0.041), ('nalysis', 0.039), ('manual', 0.039), ('graphical', 0.038), ('braun', 0.037), ('activity', 0.037), ('library', 0.035), ('da', 0.034), ('separated', 0.033), ('panel', 0.032), ('source', 0.031), ('ir', 0.03), ('ernst', 0.03), ('demixing', 0.03), ('kawahara', 0.03), ('washio', 0.03), ('yoshinobu', 0.03), ('hosting', 0.03), ('maurice', 0.03), ('icann', 0.03), ('wojciech', 0.03), ('johannes', 0.03), ('epoch', 0.03), ('binaries', 0.03), ('bsd', 0.03), ('git', 0.03), ('release', 0.03), ('services', 0.03), ('lapack', 0.03), ('demo', 0.03), ('developers', 0.03), ('electrodes', 0.03), ('multitude', 0.03), ('wraps', 0.03), ('code', 0.029), ('err', 0.027), ('archive', 0.027), ('klaus', 0.027), ('satoshi', 0.027), ('takashi', 0.027), ('motoaki', 0.027), ('fkz', 0.027), ('cooperation', 0.027), ('blas', 0.027), ('instructive', 0.027), ('finding', 0.027)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999982 92 jmlr-2011-The Stationary Subspace Analysis Toolbox
Author: Jan Saputra Müller, Paul von Bünau, Frank C. Meinecke, Franz J. Király, Klaus-Robert Müller
Abstract: The Stationary Subspace Analysis (SSA) algorithm linearly factorizes a high-dimensional time series into stationary and non-stationary components. The SSA Toolbox is a platform-independent efficient stand-alone implementation of the SSA algorithm with a graphical user interface written in Java, that can also be invoked from the command line and from Matlab. The graphical interface guides the user through the whole process; data can be imported and exported from comma separated values (CSV) and Matlab’s .mat files. Keywords: non-stationarities, blind source separation, dimensionality reduction, unsupervised learning
2 0.037248481 63 jmlr-2011-MULAN: A Java Library for Multi-Label Learning
Author: Grigorios Tsoumakas, Eleftherios Spyromitros-Xioufis, Jozef Vilcek, Ioannis Vlahavas
Abstract: M ULAN is a Java library for learning from multi-label data. It offers a variety of classification, ranking, thresholding and dimensionality reduction algorithms, as well as algorithms for learning from hierarchically structured labels. In addition, it contains an evaluation framework that calculates a rich variety of performance measures. Keywords: multi-label data, classification, ranking, thresholding, dimensionality reduction, hierarchical classification, evaluation 1. Multi-Label Learning A multi-label data set consists of training examples that are associated with a subset of a finite set of labels. Nowadays, multi-label data are becoming ubiquitous. They arise in an increasing number and diversity of applications, such as semantic annotation of images and video, web page categorization, direct marketing, functional genomics and music categorization into genres and emotions. There exist two major multi-label learning tasks (Tsoumakas et al., 2010): multi-label classification and label ranking. The former is concerned with learning a model that outputs a bipartition of the set of labels into relevant and irrelevant with respect to a query instance. The latter is concerned with learning a model that outputs a ranking of the labels according to their relevance to a query instance. Some algorithms learn models that serve both tasks. Several algorithms learn models that primarily output a vector of numerical scores, one for each label. This vector is then converted to a ranking after solving ties, or to a bipartition, after thresholding (Ioannou et al., 2010). Multi-label learning methods addressing these tasks can be grouped into two categories (Tsoumakas et al., 2010): problem transformation and algorithm adaptation. The first group of methods are algorithm independent. They transform the learning task into one or more singlelabel classification tasks, for which a large body of learning algorithms exists. The second group of methods extend specific learning algorithms in order to handle multi-label data directly. There exist extensions of decision tree learners, nearest neighbor classifiers, neural networks, ensemble methods, support vector machines, kernel methods, genetic algorithms and others. Multi-label learning stretches across several other tasks. When labels are structured as a treeshaped hierarchy or a directed acyclic graph, then we have the interesting task of hierarchical multilabel learning. Dimensionality reduction is another important task for multi-label data, as it is for c 2011 Grigorios Tsoumakas, Eleftherios Spyromitros-Xioufis, Jozef Vilcek and Ioannis Vlahavas. T SOUMAKAS , S PYROMITROS -X IOUFIS , V ILCEK AND V LAHAVAS any kind of data. When bags of instances are used to represent a training object, then multi-instance multi-label learning algorithms are required. There also exist semi-supervised learning and active learning algorithms for multi-label data. 2. The M ULAN Library The main goal of M ULAN is to bring the benefits of machine learning open source software (MLOSS) (Sonnenburg et al., 2007) to people working with multi-label data. The availability of MLOSS is especially important in emerging areas like multi-label learning, because it removes the burden of implementing related work and speeds up the scientific progress. In multi-label learning, an extra burden is implementing appropriate evaluation measures, since these are different compared to traditional supervised learning tasks. 
Evaluating multi-label algorithms with a variety of measures, is considered important by the community, due to the different types of output (bipartition, ranking) and diverse applications. Towards this goal, M ULAN offers a plethora of state-of-the-art algorithms for multi-label classification and label ranking and an evaluation framework that computes a large variety of multi-label evaluation measures through hold-out evaluation and cross-validation. In addition, the library offers a number of thresholding strategies that produce bipartitions from score vectors, simple baseline methods for multi-label dimensionality reduction and support for hierarchical multi-label classification, including an implemented algorithm. M ULAN is a library. As such, it offers only programmatic API to the library users. There is no graphical user interface (GUI) available. The possibility to use the library via command line, is also currently not supported. Another drawback of M ULAN is that it runs everything in main memory so there exist limitations with very large data sets. M ULAN is written in Java and is built on top of Weka (Witten and Frank, 2005). This choice was made in order to take advantage of the vast resources of Weka on supervised learning algorithms, since many state-of-the-art multi-label learning algorithms are based on problem transformation. The fact that several machine learning researchers and practitioners are familiar with Weka was another reason for this choice. However, many aspects of the library are independent of Weka and there are interfaces for most of the core classes. M ULAN is an advocate of open science in general. One of the unique features of the library is a recently introduced experiments package, whose goal is to host code that reproduces experimental results reported on published papers on multi-label learning. To the best of our knowledge, most of the general learning platforms, like Weka, don’t support multi-label data. There are currently only a number of implementations of specific multi-label learning algorithms, but not a general library like M ULAN. 3. Using M ULAN This section presents an example of how to setup an experiment for empirically evaluating two multi-label algorithms on a multi-label data set using cross-validation. We create a new Java class for this experiment, which we call MulanExp1.java. The first thing to do is load the multi-label data set that will be used for the empirical evaluation. M ULAN requires two text files for the specification of a data set. The first one is in the ARFF format of Weka. The labels should be specified as nominal attributes with values “0” and “1” indicating 2412 M ULAN : A JAVA L IBRARY FOR M ULTI -L ABEL L EARNING absence and presence of the label respectively. The second file is in XML format. It specifies the labels and any hierarchical relationships among them. Hierarchies of labels can be expressed in the XML file by nesting the label tag. In our example, the two filenames are given to the experiment class through command-line parameters. String arffFile = Utils.getOption(
3 0.036440812 83 jmlr-2011-Scikit-learn: Machine Learning in Python
Author: Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, Édouard Duchesnay
Abstract: Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net. Keywords: Python, supervised learning, unsupervised learning, model selection
4 0.03571023 102 jmlr-2011-Waffles: A Machine Learning Toolkit
Author: Michael Gashler
Abstract: We present a breadth-oriented collection of cross-platform command-line tools for researchers in machine learning called Waffles. The Waffles tools are designed to offer a broad spectrum of functionality in a manner that is friendly for scripted automation. All functionality is also available in a C++ class library. Waffles is available under the GNU Lesser General Public License. Keywords: machine learning, toolkits, data mining, C++, open source
5 0.034314707 50 jmlr-2011-LPmade: Link Prediction Made Easy
Author: Ryan N. Lichtenwalter, Nitesh V. Chawla
Abstract: LPmade is a complete cross-platform software solution for multi-core link prediction and related tasks and analysis. Its first principal contribution is a scalable network library supporting highperformance implementations of the most commonly employed unsupervised link prediction methods. Link prediction in longitudinal data requires a sophisticated and disciplined procedure for correct results and fair evaluation, so the second principle contribution of LPmade is a sophisticated GNU make architecture that completely automates link prediction, prediction evaluation, and network analysis. Finally, LPmade streamlines and automates the procedure for creating multivariate supervised link prediction models with a version of WEKA modified to operate effectively on extremely large data sets. With mere minutes of manual work, one may start with a raw stream of records representing a network and progress through hundreds of steps to generate plots, gigabytes or terabytes of output, and actionable or publishable results. Keywords: link prediction, network analysis, multicore, GNU make, PropFlow, HPLP
6 0.032667968 10 jmlr-2011-Anechoic Blind Source Separation Using Wigner Marginals
7 0.031023625 48 jmlr-2011-Kernel Analysis of Deep Networks
8 0.02316362 62 jmlr-2011-MSVMpack: A Multi-Class Support Vector Machine Package
9 0.020247603 5 jmlr-2011-A Refined Margin Analysis for Boosting Algorithms via Equilibrium Margin
10 0.019689113 72 jmlr-2011-On the Relation between Realizable and Nonrealizable Cases of the Sequence Prediction Problem
11 0.017862521 105 jmlr-2011-lp-Norm Multiple Kernel Learning
12 0.017015746 74 jmlr-2011-Operator Norm Convergence of Spectral Clustering on Level Sets
13 0.016555585 1 jmlr-2011-A Bayesian Approach for Learning and Planning in Partially Observable Markov Decision Processes
14 0.01571954 93 jmlr-2011-The arules R-Package Ecosystem: Analyzing Interesting Patterns from Large Transaction Data Sets
15 0.015292956 80 jmlr-2011-Regression on Fixed-Rank Positive Semidefinite Matrices: A Riemannian Approach
16 0.015214038 23 jmlr-2011-DirectLiNGAM: A Direct Method for Learning a Linear Non-Gaussian Structural Equation Model
17 0.015044006 86 jmlr-2011-Sparse Linear Identifiable Multivariate Modeling
18 0.01397388 15 jmlr-2011-CARP: Software for Fishing Out Good Clustering Algorithms
19 0.013897899 55 jmlr-2011-Learning Multi-modal Similarity
20 0.013755319 44 jmlr-2011-Information Rates of Nonparametric Gaussian Process Methods
topicId topicWeight
[(0, 0.063), (1, -0.029), (2, -0.013), (3, -0.029), (4, -0.031), (5, 0.005), (6, -0.013), (7, -0.036), (8, -0.049), (9, -0.006), (10, -0.029), (11, 0.007), (12, 0.14), (13, 0.018), (14, -0.197), (15, -0.099), (16, 0.018), (17, 0.045), (18, 0.133), (19, 0.122), (20, 0.088), (21, -0.088), (22, 0.014), (23, -0.064), (24, 0.066), (25, 0.054), (26, 0.05), (27, 0.025), (28, 0.209), (29, -0.061), (30, 0.093), (31, -0.192), (32, 0.022), (33, 0.082), (34, -0.142), (35, 0.008), (36, 0.155), (37, 0.099), (38, -0.075), (39, -0.068), (40, -0.098), (41, -0.053), (42, -0.007), (43, 0.387), (44, -0.008), (45, 0.105), (46, -0.007), (47, -0.097), (48, -0.291), (49, 0.424)]
simIndex simValue paperId paperTitle
same-paper 1 0.99062598 92 jmlr-2011-The Stationary Subspace Analysis Toolbox
2 0.35404557 10 jmlr-2011-Anechoic Blind Source Separation Using Wigner Marginals
Author: Lars Omlor, Martin A. Giese
Abstract: Blind source separation problems emerge in many applications, where signals can be modeled as superpositions of multiple sources. Many popular applications of blind source separation are based on linear instantaneous mixture models. If specific invariance properties are known about the sources, for example, translation or rotation invariance, the simple linear model can be extended by inclusion of the corresponding transformations. When the sources are invariant against translations (spatial displacements or time shifts) the resulting model is called an anechoic mixing model. We present a new algorithmic framework for the solution of anechoic problems in arbitrary dimensions. This framework is derived from stochastic time-frequency analysis in general, and the marginal properties of the Wigner-Ville spectrum in particular. The method reduces the general anechoic problem to a set of anechoic problems with non-negativity constraints and a phase retrieval problem. The first type of subproblem can be solved by existing algorithms, for example by an appropriate modification of non-negative matrix factorization (NMF). The second subproblem is solved by established phase retrieval methods. We discuss and compare implementations of this new algorithmic framework for several example problems with synthetic and real-world data, including music streams, natural 2D images, human motion trajectories and two-dimensional shapes. Keywords: blind source separation, anechoic mixtures, time-frequency transformations, linear canonical transform, Wigner-Ville spectrum
3 0.27282846 83 jmlr-2011-Scikit-learn: Machine Learning in Python
4 0.18842584 94 jmlr-2011-Theoretical Analysis of Bayesian Matrix Factorization
Author: Shinichi Nakajima, Masashi Sugiyama
Abstract: Recently, variational Bayesian (VB) techniques have been applied to probabilistic matrix factorization and shown to perform very well in experiments. In this paper, we theoretically elucidate properties of the VB matrix factorization (VBMF) method. Through finite-sample analysis of the VBMF estimator, we show that two types of shrinkage factors exist in the VBMF estimator: the positive-part James-Stein (PJS) shrinkage and the trace-norm shrinkage, both acting on each singular component separately for producing low-rank solutions. The trace-norm shrinkage is simply induced by non-flat prior information, similarly to the maximum a posteriori (MAP) approach. Thus, no trace-norm shrinkage remains when priors are non-informative. On the other hand, we show a counter-intuitive fact that the PJS shrinkage factor is kept activated even with flat priors. This is shown to be induced by the non-identifiability of the matrix factorization model, that is, the mapping between the target matrix and factorized matrices is not one-to-one. We call this model-induced regularization. We further extend our analysis to empirical Bayes scenarios where hyperparameters are also learned based on the VB free energy. Throughout the paper, we assume no missing entry in the observed matrix, and therefore collaborative filtering is out of scope. Keywords: matrix factorization, variational Bayes, empirical Bayes, positive-part James-Stein shrinkage, non-identifiable model, model-induced regularization
5 0.17033729 63 jmlr-2011-MULAN: A Java Library for Multi-Label Learning
6 0.14257437 74 jmlr-2011-Operator Norm Convergence of Spectral Clustering on Level Sets
7 0.11833681 5 jmlr-2011-A Refined Margin Analysis for Boosting Algorithms via Equilibrium Margin
8 0.11627767 48 jmlr-2011-Kernel Analysis of Deep Networks
9 0.11390348 62 jmlr-2011-MSVMpack: A Multi-Class Support Vector Machine Package
10 0.11105582 102 jmlr-2011-Waffles: A Machine Learning Toolkit
11 0.11052141 50 jmlr-2011-LPmade: Link Prediction Made Easy
12 0.10845959 13 jmlr-2011-Bayesian Generalized Kernel Mixed Models
13 0.10462826 68 jmlr-2011-Natural Language Processing (Almost) from Scratch
14 0.09742564 23 jmlr-2011-DirectLiNGAM: A Direct Method for Learning a Linear Non-Gaussian Structural Equation Model
15 0.085298024 72 jmlr-2011-On the Relation between Realizable and Nonrealizable Cases of the Sequence Prediction Problem
16 0.075201742 15 jmlr-2011-CARP: Software for Fishing Out Good Clustering Algorithms
17 0.071951278 84 jmlr-2011-Semi-Supervised Learning with Measure Propagation
18 0.071757369 99 jmlr-2011-Unsupervised Similarity-Based Risk Stratification for Cardiovascular Events Using Long-Term Time-Series Data
19 0.064885274 29 jmlr-2011-Efficient Learning with Partially Observed Attributes
20 0.064288214 76 jmlr-2011-Parameter Screening and Optimisation for ILP using Designed Experiments
topicId topicWeight
[(4, 0.024), (9, 0.014), (10, 0.016), (11, 0.026), (24, 0.014), (31, 0.046), (32, 0.046), (41, 0.014), (43, 0.016), (60, 0.047), (66, 0.575), (73, 0.023), (78, 0.017), (90, 0.013)]
simIndex simValue paperId paperTitle
same-paper 1 0.89884424 92 jmlr-2011-The Stationary Subspace Analysis Toolbox
2 0.4608019 67 jmlr-2011-Multitask Sparsity via Maximum Entropy Discrimination
Author: Tony Jebara
Abstract: A multitask learning framework is developed for discriminative classification and regression where multiple large-margin linear classifiers are estimated for different prediction problems. These classifiers operate in a common input space but are coupled as they recover an unknown shared representation. A maximum entropy discrimination (MED) framework is used to derive the multitask algorithm which involves only convex optimization problems that are straightforward to implement. Three multitask scenarios are described. The first multitask method produces multiple support vector machines that learn a shared sparse feature selection over the input space. The second multitask method produces multiple support vector machines that learn a shared conic kernel combination. The third multitask method produces a pooled classifier as well as adaptively specialized individual classifiers. Furthermore, extensions to regression, graphical model structure estimation and other sparse methods are discussed. The maximum entropy optimization problems are implemented via a sequential quadratic programming method which leverages recent progress in fast SVM solvers. Fast monotonic convergence bounds are provided by bounding the MED sparsifying cost function with a quadratic function and ensuring only a constant factor runtime increase above standard independent SVM solvers. Results are shown on multitask data sets and favor multitask learning over single-task or tabula rasa methods. Keywords: meta-learning, support vector machines, feature selection, kernel selection, maximum entropy, large margin, Bayesian methods, variational bounds, classification, regression, Lasso, graphical model structure estimation, quadratic programming, convex programming
3 0.16265492 102 jmlr-2011-Waffles: A Machine Learning Toolkit
4 0.15709224 48 jmlr-2011-Kernel Analysis of Deep Networks
Author: Grégoire Montavon, Mikio L. Braun, Klaus-Robert Müller
Abstract: When training deep networks it is common knowledge that an efficient and well generalizing representation of the problem is formed. In this paper we aim to elucidate what makes the emerging representation successful. We analyze the layer-wise evolution of the representation in a deep network by building a sequence of deeper and deeper kernels that subsume the mapping performed by more and more layers of the deep network and measuring how these increasingly complex kernels fit the learning problem. We observe that deep networks create increasingly better representations of the learning problem and that the structure of the deep network controls how fast the representation of the task is formed layer after layer. Keywords: deep networks, kernel principal component analysis, representations
5 0.15354459 74 jmlr-2011-Operator Norm Convergence of Spectral Clustering on Level Sets
Author: Bruno Pelletier, Pierre Pudlo
Abstract: Following Hartigan (1975), a cluster is defined as a connected component of the t-level set of the underlying density, that is, the set of points for which the density is greater than t. A clustering algorithm which combines a density estimate with spectral clustering techniques is proposed. Our algorithm is composed of two steps. First, a nonparametric density estimate is used to extract the data points for which the estimated density takes a value greater than t. Next, the extracted points are clustered based on the eigenvectors of a graph Laplacian matrix. Under mild assumptions, we prove the almost sure convergence in operator norm of the empirical graph Laplacian operator associated with the algorithm. Furthermore, we give the typical behavior of the representation of the data set into the feature space, which establishes the strong consistency of our proposed algorithm. Keywords: spectral clustering, graph, unsupervised classification, level sets, connected components
6 0.14803225 62 jmlr-2011-MSVMpack: A Multi-Class Support Vector Machine Package
7 0.13955784 42 jmlr-2011-In All Likelihood, Deep Belief Is Not Enough
8 0.13547184 25 jmlr-2011-Discriminative Learning of Bayesian Networks via Factorized Conditional Log-Likelihood
9 0.13409266 43 jmlr-2011-Information, Divergence and Risk for Binary Experiments
10 0.12414119 5 jmlr-2011-A Refined Margin Analysis for Boosting Algorithms via Equilibrium Margin
11 0.12254471 80 jmlr-2011-Regression on Fixed-Rank Positive Semidefinite Matrices: A Riemannian Approach
12 0.12243429 94 jmlr-2011-Theoretical Analysis of Bayesian Matrix Factorization
13 0.12132148 63 jmlr-2011-MULAN: A Java Library for Multi-Label Learning
14 0.11943721 83 jmlr-2011-Scikit-learn: Machine Learning in Python
15 0.11884781 96 jmlr-2011-Two Distributed-State Models For Generating High-Dimensional Time Series
16 0.11746167 10 jmlr-2011-Anechoic Blind Source Separation Using Wigner Marginals
17 0.11300027 41 jmlr-2011-Improved Moves for Truncated Convex Models
18 0.11142665 77 jmlr-2011-Posterior Sparsity in Unsupervised Dependency Parsing
19 0.11043776 86 jmlr-2011-Sparse Linear Identifiable Multivariate Modeling
20 0.10968155 4 jmlr-2011-A Family of Simple Non-Parametric Kernel Learning Algorithms