hunch_net-2011-442 knowledge-graph by maker-knowledge-mining

442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial


meta info for this blog

Source: html

Introduction: Ron Bekkerman initiated an effort to create an edited book on parallel machine learning that Misha and I have been helping with. The breadth of efforts to parallelize machine learning surprised me: I was only aware of a small fraction initially. This put us in a unique position, with knowledge of a wide array of different efforts, so it is natural to put together a survey tutorial on the subject of parallel learning for KDD, tomorrow. This tutorial is not limited to the book itself however, as several interesting new algorithms have come out since we started inviting chapters. This tutorial should interest anyone trying to use machine learning on significant quantities of data, anyone interested in developing algorithms for such, and of course anyone who has bragging rights to the fastest learning algorithm on planet earth. (Also note the Modeling with Hadoop tutorial just before ours, which deals with one way of trying to speed up learning algorithms. We have almost no…


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Ron Bekkerman initiated an effort to create an edited book on parallel machine learning that Misha and I have been helping with. [sent-1, score-0.92]

2 The breadth of efforts to parallelize machine learning surprised me: I was only aware of a small fraction initially. [sent-2, score-0.948]

3 This put us in a unique position, with knowledge of a wide array of different efforts, so it is natural to put together a survey tutorial on the subject of parallel learning for KDD, tomorrow. [sent-3, score-1.862]

4 This tutorial is not limited to the book itself however, as several interesting new algorithms have come out since we started inviting chapters. [sent-4, score-1.166]
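The scores above suggest a sum-of-tf-idf sentence ranking. Below is a minimal sketch of one way scores in this shape could be computed, assuming scikit-learn's TfidfVectorizer and a plain sum-of-weights rule; the pipeline's actual tokenizer, stoplist, and scoring rule are not specified on this page.

```python
# Hedged sketch: score each sentence by the sum of its words' tf-idf weights.
# The vectorizer settings and scoring rule are assumptions, not the pipeline's code.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Ron Bekkerman initiated an effort to create an edited book on parallel machine learning.",
    "The breadth of efforts to parallelize machine learning surprised me.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf_matrix = vectorizer.fit_transform(sentences)  # rows: sentences, cols: terms
scores = tfidf_matrix.sum(axis=1).A1                # one score per sentence

for i, (sentence, score) in enumerate(zip(sentences, scores), start=1):
    print(f"{i} {sentence} [sent-{i}, score-{score:.3f}]")
```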


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('tutorial', 0.397), ('book', 0.303), ('efforts', 0.239), ('parallel', 0.208), ('put', 0.196), ('ron', 0.173), ('planet', 0.173), ('fastest', 0.16), ('rights', 0.16), ('earth', 0.16), ('misha', 0.16), ('inviting', 0.151), ('deals', 0.151), ('trying', 0.148), ('anyone', 0.143), ('parallelize', 0.139), ('position', 0.13), ('hadoop', 0.13), ('helping', 0.126), ('breadth', 0.126), ('array', 0.126), ('survey', 0.123), ('quantities', 0.12), ('modeling', 0.12), ('developing', 0.12), ('unique', 0.117), ('surprised', 0.114), ('wide', 0.112), ('speed', 0.099), ('fraction', 0.099), ('aware', 0.094), ('together', 0.094), ('limited', 0.09), ('kdd', 0.088), ('knowledge', 0.085), ('started', 0.084), ('subject', 0.081), ('course', 0.08), ('effort', 0.079), ('algorithms', 0.079), ('note', 0.077), ('machine', 0.075), ('create', 0.067), ('natural', 0.065), ('interest', 0.065), ('almost', 0.063), ('however', 0.062), ('come', 0.062), ('learning', 0.062), ('interested', 0.061)]
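The (word, weight) pairs above are the post's top-weighted tf-idf terms, in gensim's sparse-vector style. A hedged sketch of producing pairs in this shape with gensim follows; the toy corpus, tokenization, and top-N cutoff are placeholder assumptions.

```python
# Hedged sketch: top-N tf-idf terms for one document via gensim.
from gensim import corpora, models

docs = [  # placeholder tokenized corpus
    ["parallel", "machine", "learning", "tutorial", "book"],
    ["tutorial", "nips", "workshop"],
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
tfidf = models.TfidfModel(corpus)

doc_tfidf = tfidf[corpus[0]]  # [(word_id, weight), ...]
top_words = sorted(doc_tfidf, key=lambda x: -x[1])[:50]
print([(dictionary[i], round(w, 3)) for i, w in top_words])
```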

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

2 0.15891263 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

Introduction: I would like to encourage people to consider giving a tutorial at next year's ICML. The ideal tutorial attracts a wide audience, provides a gentle and easily taught introduction to the chosen research area, and also covers the most important contributions in depth. Submissions are due January 14 (about two weeks before the paper deadline). http://www.icml-2011.org/tutorials.php Regards, Ulf

3 0.14994986 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

Introduction: At NIPS I’m giving a tutorial on Learning to Interact. In essence this is about dealing with causality in a contextual bandit framework. Relative to previous tutorials, I’ll be covering several new results that changed my understanding of the nature of the problem. Note that Judea Pearl and Elias Bareinboim have a tutorial on causality. This might appear similar, but is quite different in practice. Pearl and Bareinboim’s tutorial will be about the general concepts while mine will be about total mastery of the simplest nontrivial case, including code. Luckily, they have the right order. I recommend going to both. I also just released version 7.4 of Vowpal Wabbit. When I was a frustrated learning theorist, I did not understand why people were not using learning reductions to solve problems. I’ve been slowly discovering why with VW, and addressing the issues. One of the issues is that machine learning itself was not automatic enough, while another is that creatin…

4 0.12458862 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

Introduction: Alekh, John, Ofer, and I are organizing a workshop at NIPS this year on learning in parallel and distributed environments. The general interest level in parallel learning seems to be growing rapidly, so I expect quite a bit of attendance. Please join us if you are parallel-interested. And, if you are working in the area of parallel learning, please consider submitting an abstract due Oct. 17 for presentation at the workshop.

5 0.12386163 54 hunch net-2005-04-08-Fast SVMs

Introduction: There was a presentation at Snowbird about parallelized support vector machines. In many cases, people parallelize by ignoring serial operations, but that is not what happened here—they parallelize with optimizations. Consequently, this seems to be the fastest SVM in existence. There is a related paper here.

6 0.12043139 346 hunch net-2009-03-18-Parallel ML primitives

7 0.10899146 229 hunch net-2007-01-26-Parallel Machine Learning Problems

8 0.10754648 355 hunch net-2009-05-19-CI Fellows

9 0.10071308 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

10 0.099781863 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

11 0.093365505 304 hunch net-2008-06-27-Reviewing Horror Stories

12 0.079111129 452 hunch net-2012-01-04-Why ICML? and the summer conferences

13 0.078225806 122 hunch net-2005-10-13-Site tweak

14 0.07523796 300 hunch net-2008-04-30-Concerns about the Large Scale Learning Challenge

15 0.073608786 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

16 0.072772682 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

17 0.070921808 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs

18 0.068065733 120 hunch net-2005-10-10-Predictive Search is Coming

19 0.065863237 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

20 0.064717367 281 hunch net-2007-12-21-Vowpal Wabbit Code Release


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.143), (1, -0.016), (2, -0.114), (3, -0.004), (4, 0.038), (5, 0.038), (6, -0.059), (7, -0.031), (8, -0.045), (9, 0.052), (10, -0.093), (11, -0.058), (12, 0.036), (13, 0.027), (14, -0.026), (15, -0.086), (16, 0.024), (17, -0.018), (18, -0.044), (19, -0.095), (20, 0.028), (21, 0.035), (22, -0.045), (23, 0.024), (24, -0.002), (25, -0.063), (26, 0.009), (27, -0.069), (28, -0.016), (29, 0.07), (30, -0.001), (31, -0.057), (32, -0.017), (33, 0.037), (34, 0.072), (35, -0.002), (36, 0.1), (37, 0.109), (38, 0.082), (39, 0.036), (40, -0.049), (41, -0.036), (42, -0.088), (43, 0.022), (44, -0.149), (45, 0.006), (46, 0.14), (47, 0.011), (48, 0.025), (49, 0.006)]
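The 50 pairs above read as a dense LSI vector in gensim's sparse (topicId, topicWeight) format, and the simValue column in the list that follows is consistent with cosine similarity in that space. Below is a sketch under assumed settings (gensim, tf-idf input, a toy corpus; a 50-topic model would match the listing above).

```python
# Hedged sketch: project documents into LSI space and rank neighbors by
# cosine similarity. Corpus and topic count are placeholders.
from gensim import corpora, models, similarities

docs = [
    ["parallel", "learning", "tutorial", "book"],
    ["vowpal", "wabbit", "nips", "tutorial"],
    ["reinforcement", "learning", "mdp"],
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
tfidf = models.TfidfModel(corpus)

# 2 topics for this toy corpus; the 50 weights listed above suggest num_topics=50
lsi = models.LsiModel(tfidf[corpus], id2word=dictionary, num_topics=2)
index = similarities.MatrixSimilarity(lsi[tfidf[corpus]])

query_lsi = lsi[tfidf[dictionary.doc2bow(["parallel", "tutorial"])]]
print(list(query_lsi))                                            # [(topicId, topicWeight), ...]
print(sorted(enumerate(index[query_lsi]), key=lambda x: -x[1]))   # [(simIndex, simValue), ...]
```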

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.90756536 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

2 0.63110811 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

Introduction: I just made version 6.1 of Vowpal Wabbit. Relative to 6.0, there are few new features, but many refinements. The cluster parallel learning code better supports multiple simultaneous runs, and other forms of parallelism have been mostly removed. This incidentally significantly simplifies the learning core. The online learning algorithms are more general, with support for l1 (via a truncated gradient variant) and l2 regularization, and a generalized form of variable metric learning. There is a solid persistent server mode which can train online, as well as serve answers to many simultaneous queries, either in text or binary. This should be a very good release if you are just getting started, as we’ve made it compile more automatically out of the box, and have several new examples and updated documentation. As per tradition, we’re planning to do a tutorial at NIPS during the break at the parallel learning workshop at 2pm Spanish time Friday. I’ll cover the…

3 0.58773667 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

4 0.5860393 346 hunch net-2009-03-18-Parallel ML primitives

Introduction: Previously, we discussed parallel machine learning a bit. As parallel ML is rather difficult, I’d like to describe my thinking at the moment, and ask for advice from the rest of the world. This is particularly relevant right now, as I’m attending a workshop tomorrow on parallel ML. Parallelizing slow algorithms seems uncompelling. Parallelizing many algorithms also seems uncompelling, because the effort required to parallelize is substantial. This leaves the question: Which one fast algorithm is the best to parallelize? What is a substantially different second? One compellingly fast simple algorithm is online gradient descent on a linear representation. This is the core of Leon’s sgd code and Vowpal Wabbit. Antoine Bordes showed a variant was competitive in the large scale learning challenge. It’s also a decades-old primitive which has been reused in many algorithms, and continues to be reused. It also applies to online learning rather than just online optimiz…

5 0.58541441 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

6 0.57986379 229 hunch net-2007-01-26-Parallel Machine Learning Problems

7 0.57147425 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

8 0.54620874 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

9 0.50200593 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs

10 0.4828777 136 hunch net-2005-12-07-Is the Google way the way for machine learning?

11 0.46721038 54 hunch net-2005-04-08-Fast SVMs

12 0.45934439 471 hunch net-2012-08-24-Patterns for research in machine learning

13 0.45100465 262 hunch net-2007-09-16-Optimizing Machine Learning Programs

14 0.45015395 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

15 0.44416833 300 hunch net-2008-04-30-Concerns about the Large Scale Learning Challenge

16 0.43736216 250 hunch net-2007-06-23-Machine Learning Jobs are Growing on Trees

17 0.43663117 53 hunch net-2005-04-06-Structured Regret Minimization

18 0.41867116 187 hunch net-2006-06-25-Presentation of Proofs is Hard.

19 0.41854089 366 hunch net-2009-08-03-Carbon in Computer Science Research

20 0.40862933 222 hunch net-2006-12-05-Recruitment Conferences


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.086), (53, 0.052), (55, 0.108), (64, 0.481), (94, 0.151)]
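Unlike the LSI vector, the LDA weights above are sparse: only five topics carry noticeable mass. A minimal sketch of obtaining a distribution in this shape, assuming gensim's LdaModel with a placeholder corpus and topic count:

```python
# Hedged sketch: sparse per-document topic distribution from LDA.
# Corpus and num_topics are placeholders, not the pipeline's configuration.
from gensim import corpora, models

docs = [
    ["parallel", "learning", "tutorial", "book"],
    ["spock", "challenge", "entity", "recognition"],
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=100, random_state=0)
print(lda[corpus[0]])  # sparse [(topicId, topicWeight), ...]; tiny weights are dropped
```

By default gensim drops topics below a minimum probability, which would explain why only a handful of the 100 topic IDs appear in the listing above.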

similar blogs list:

simIndex simValue blogId blogTitle

1 0.91600531 291 hunch net-2008-03-07-Spock Challenge Winners

Introduction: The Spock challenge for named entity recognition was won by Berno Stein, Sven Eissen, Tino Rub, Hagen Tonnies, Christof Braeutigam, and Martin Potthast.

same-blog 2 0.86367583 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

3 0.79306227 155 hunch net-2006-02-07-Pittsburgh Mind Reading Competition

Introduction: Francisco Pereira points out a fun Prediction Competition. Francisco says: DARPA is sponsoring a competition to analyze data from an unusual functional Magnetic Resonance Imaging experiment. Subjects watch videos inside the scanner while fMRI data are acquired. Unbeknownst to these subjects, the videos have been seen by a panel of other subjects that labeled each instant with labels in categories such as representation (are there tools, body parts, motion, sound), location, presence of actors, emotional content, etc. The challenge is to predict all of these different labels on an instant-by-instant basis from the fMRI data. A few reasons why this is particularly interesting: This is beyond the current state of the art, but not inconceivably hard. This is a new type of experiment design current analysis methods cannot deal with. This is an opportunity to work with a heavily examined and preprocessed neuroimaging dataset. DARPA is offering prizes!

4 0.71481395 210 hunch net-2006-09-28-Programming Languages for Machine Learning Implementations

Introduction: Machine learning algorithms have a much better chance of being widely adopted if they are implemented in some easy-to-use code. There are several important concerns associated with machine learning which stress programming languages on the ease-of-use vs. speed frontier. Speed The rate at which data sources are growing seems to be outstripping the rate at which computational power is growing, so it is important that we be able to eke out every bit of computational power. Garbage-collected languages (java, ocaml, perl and python) often have several issues here. Garbage collection often implies that floating point numbers are “boxed”: every float is represented by a pointer to a float. Boxing can cause an order of magnitude slowdown because an extra nonlocalized memory reference is made, and accesses to main memory are many CPU cycles long. Garbage collection often implies that considerably more memory is used than is necessary. This has a variable effect. I…

5 0.67063475 420 hunch net-2010-12-26-NIPS 2010

Introduction: I enjoyed attending NIPS this year, with several things interesting me. For the conference itself: Peter Welinder, Steve Branson, Serge Belongie, and Pietro Perona, The Multidimensional Wisdom of Crowds. This paper is about using Mechanical Turk to get label information, with results superior to a majority vote approach. David McAllester, Tamir Hazan, and Joseph Keshet, Direct Loss Minimization for Structured Prediction. This is about another technique for directly optimizing the loss in structured prediction, with an application to speech recognition. Mohammad Saberian and Nuno Vasconcelos, Boosting Classifier Cascades. This is about an algorithm for simultaneously optimizing loss and computation in a classifier cascade construction. There were several other papers on cascades which are worth looking at if interested. Alan Fern and Prasad Tadepalli, A Computational Decision Theory for Interactive Assistants. This paper carves out some…

6 0.64847708 277 hunch net-2007-12-12-Workshop Summary—Principles of Learning Problem Design

7 0.62284642 18 hunch net-2005-02-12-ROC vs. Accuracy vs. AROC

8 0.39340886 136 hunch net-2005-12-07-Is the Google way the way for machine learning?

9 0.39024556 343 hunch net-2009-02-18-Decision by Vetocracy

10 0.38719934 423 hunch net-2011-02-02-User preferences for search engines

11 0.37509722 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

12 0.37472516 141 hunch net-2005-12-17-Workshops as Franchise Conferences

13 0.37188581 191 hunch net-2006-07-08-MaxEnt contradicts Bayes Rule?

14 0.37104899 424 hunch net-2011-02-17-What does Watson mean?

15 0.36858734 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

16 0.36594462 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

17 0.36504444 115 hunch net-2005-09-26-Prediction Bounds as the Mathematics of Science

18 0.36444953 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?

19 0.36206624 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

20 0.36148578 128 hunch net-2005-11-05-The design of a computing cluster