hunch_net hunch_net-2013 hunch_net-2013-492 knowledge-graph by maker-knowledge-mining

492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4


meta info for this blog

Source: html

Introduction: At NIPS I’m giving a tutorial on Learning to Interact. In essence this is about dealing with causality in a contextual bandit framework. Relative to previous tutorials, I’ll be covering several new results that changed my understanding of the nature of the problem. Note that Judea Pearl and Elias Bareinboim have a tutorial on causality. This might appear similar, but is quite different in practice. Pearl and Bareinboim’s tutorial will be about the general concepts while mine will be about total mastery of the simplest nontrivial case, including code. Luckily, they have the right order. I recommend going to both. I also just released version 7.4 of Vowpal Wabbit. When I was a frustrated learning theorist, I did not understand why people were not using learning reductions to solve problems. I’ve been slowly discovering why with VW, and addressing the issues. One of the issues is that machine learning itself was not automatic enough, while another is that creating a very low overhead process for doing learning reductions is vitally important.
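Since the tutorial’s subject is decision making under partial feedback, a minimal sketch may help fix ideas: an epsilon-greedy policy that logs the probability of each chosen action, and an inverse-propensity-score (IPS) estimate of how a different policy would have performed on the logged data. This is only an illustrative sketch with hypothetical names, not the tutorial’s or VW’s code.

```python
import random

def epsilon_greedy(scores, epsilon=0.1):
    """Pick an action from per-action scores; return (action, probability it was chosen)."""
    k = len(scores)
    best = max(range(k), key=lambda a: scores[a])
    action = random.randrange(k) if random.random() < epsilon else best
    prob = epsilon / k + (1.0 - epsilon if action == best else 0.0)
    return action, prob

def ips_value(logged, policy):
    """Estimate a candidate policy's value from logged (context, action, prob, reward) tuples.

    Rounds where the candidate would have chosen a different action contribute zero;
    matching rounds are importance-weighted by 1/prob of the logging policy.
    """
    total = 0.0
    for context, action, prob, reward in logged:
        if policy(context) == action:
            total += reward / prob
    return total / max(len(logged), 1)
```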


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 At NIPS I’m giving a tutorial on Learning to Interact. [sent-1, score-0.271]

2 In essence this is about dealing with causality in a contextual bandit framework. [sent-2, score-0.189]

3 Note that Judea Pearl and Elias Bareinboim have a tutorial on causality. [sent-4, score-0.46]

4 Pearl and Bareinboim’s tutorial will be about the general concepts while mine will be about total mastery of the simplest nontrivial case, including code. [sent-6, score-0.674]

5 When I was a frustrated learning theorist, I did not understand why people were not using learning reductions to solve problems. [sent-10, score-0.227]

6 One of the issues is that machine learning itself was not automatic enough, while another is that creating a very low overhead process for doing learning reductions is vitally important. [sent-12, score-0.132]

7 These have been addressed well enough that we are starting to see compelling results. [sent-13, score-0.141]

8 Various changes: The internal learning reduction interface has been substantially improved. [sent-14, score-0.182]

9 It’s now pretty easy to write a new learning reduction. [sent-15, score-0.086]

10 This is a very simple reduction which just binarizes the prediction (see the code sketch after this list). [sent-18, score-0.182]

11 More improvements are coming, but this is good enough that other people have started contributing reductions. [sent-19, score-0.229]

12 Zhen Qin had a very productive internship with Vaclav Petricek at eharmony resulting in several systemic modifications and some new reductions, including: A direct hash inversion implementation for use in debugging (sketched after this list). [sent-20, score-0.701]

13 A holdout system which takes over for progressive validation when multiple passes over data are used (sketched after this list). [sent-21, score-0.253]

14 An online bootstrap mechanism which efficiently provides some understanding of prediction variations and which can sometimes effectively trade computational time for increased accuracy via ensembling (sketched after this list). [sent-23, score-0.326]

15 This will be discussed at the biglearn workshop at NIPS. [sent-24, score-0.11]

16 A top-k reduction which chooses the top-k of any set of base instances (sketched after this list). [sent-25, score-0.182]

17 Hal Daume has a new implementation of Searn (and Dagger, the codes are unified) which makes structured prediction solutions far more natural. [sent-26, score-0.122]

18 He has optimized this quite thoroughly (exercising the reduction stack in the process), resulting in this pretty graph. [sent-27, score-0.688]

19 Fully optimized code is typically rough, but this one is less than 100 lines. [sent-30, score-0.214]
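Sentence 10 above refers to a reduction that simply binarizes a real-valued prediction. A minimal sketch of that pattern, using a hypothetical predict/learn base-learner interface rather than VW’s actual internal reduction API:

```python
class BinarizeReduction:
    """Wrap a base learner so its real-valued prediction is reported as -1 or +1.

    Hypothetical interface for illustration only; VW's internal reduction
    interface is different.
    """

    def __init__(self, base):
        self.base = base                       # any learner with predict(x), learn(x, y)

    def predict(self, x):
        return 1 if self.base.predict(x) > 0 else -1   # binarize the real-valued score

    def learn(self, x, y):
        self.base.learn(x, y)                  # training is passed straight through
```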
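Sentence 12 mentions a direct hash inversion implementation for debugging. The idea is that VW addresses weights by hashed feature index, so recovering human-readable names amounts to re-hashing every feature name seen and recording which index each lands on. A rough sketch, using Python’s built-in hash as a stand-in for VW’s real hash function and an assumed default table size:

```python
def build_inverse_map(feature_names, num_bits=18):
    """Map weight-vector indices back to the feature names that hash to them."""
    mask = (1 << num_bits) - 1                 # assumed default table size, for illustration
    inverse = {}
    for name in feature_names:
        index = hash(name) & mask              # stand-in for VW's actual hash
        inverse.setdefault(index, []).append(name)   # colliding names share an index
    return inverse

# Usage sketch: names that could explain the weight stored at index i
# inverse = build_inverse_map(all_feature_names_seen)
# print(inverse.get(i, ["<no known name>"]))
```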
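Sentence 13 describes the holdout system. On a single pass, progressive validation (score each example before training on it) gives an honest estimate of generalization; once examples are reused across passes it no longer does, so a fixed fraction of examples is held out and only ever evaluated. A sketch of the control flow with a hypothetical learner interface and an assumed holdout period of 10, not VW’s exact behavior:

```python
def run_pass(learner, examples, multiple_passes, holdout_period=10):
    """One pass over the data; returns the reported average loss."""
    loss, count = 0.0, 0
    for i, (x, y) in enumerate(examples):
        held_out = multiple_passes and (i % holdout_period == holdout_period - 1)
        if held_out:
            # holdout examples are scored on every pass but never trained on
            loss += learner.loss(learner.predict(x), y)
            count += 1
        else:
            if not multiple_passes:
                # progressive validation: score first, then train
                loss += learner.loss(learner.predict(x), y)
                count += 1
            learner.learn(x, y)
    return loss / max(count, 1)
```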
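Sentence 14 describes the online bootstrap. A standard way to bootstrap on a stream is to give each ensemble member a Poisson(1)-distributed number of copies of each example, which mimics sampling with replacement; the spread of the members’ predictions then gives a sense of variation. A generic sketch in that spirit, not VW’s implementation:

```python
import math
import random

def poisson1():
    """Sample from Poisson(1) using Knuth's multiplication method."""
    limit, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def online_bootstrap_update(members, x, y):
    """Each ensemble member sees a Poisson(1) number of copies of the example."""
    for m in members:
        for _ in range(poisson1()):
            m.learn(x, y)

def ensemble_predict(members, x):
    preds = [m.predict(x) for m in members]
    mean = sum(preds) / len(preds)
    spread = max(preds) - min(preds)           # crude measure of prediction variation
    return mean, spread
```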
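Sentence 16 mentions a top-k reduction. Functionally it keeps the k base instances the underlying learner scores highest; a short generic sketch (not the reduction’s actual code) conveys the idea:

```python
import heapq

def top_k(instances, score, k):
    """Return the k instances with the highest scores under the given scoring function."""
    return heapq.nlargest(k, instances, key=score)

# Usage sketch with a hypothetical model:
# best3 = top_k(candidates, score=lambda inst: model.predict(inst), k=3)
```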


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('crf', 0.284), ('tutorial', 0.271), ('bareinboim', 0.213), ('pearl', 0.213), ('causality', 0.189), ('reduction', 0.182), ('enough', 0.141), ('break', 0.134), ('resulting', 0.134), ('reductions', 0.132), ('vw', 0.128), ('optimized', 0.122), ('implementation', 0.122), ('workshop', 0.11), ('productive', 0.095), ('mastery', 0.095), ('honest', 0.095), ('frustrated', 0.095), ('holdout', 0.095), ('vaclav', 0.095), ('eharmony', 0.095), ('theorist', 0.095), ('dagger', 0.095), ('code', 0.092), ('systemic', 0.088), ('stack', 0.088), ('modifications', 0.088), ('contributing', 0.088), ('pretty', 0.086), ('bootstrap', 0.083), ('unified', 0.083), ('skiing', 0.083), ('luckily', 0.083), ('trade', 0.083), ('skip', 0.083), ('bay', 0.083), ('provides', 0.081), ('progressive', 0.079), ('passes', 0.079), ('mine', 0.079), ('keeps', 0.079), ('hash', 0.079), ('variations', 0.079), ('nips', 0.077), ('including', 0.077), ('thoroughly', 0.076), ('daume', 0.076), ('concepts', 0.076), ('nontrivial', 0.076), ('edit', 0.073)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

Introduction: At NIPS I’m giving a tutorial on Learning to Interact. In essence this is about dealing with causality in a contextual bandit framework. Relative to previous tutorials, I’ll be covering several new results that changed my understanding of the nature of the problem. Note that Judea Pearl and Elias Bareinboim have a tutorial on causality. This might appear similar, but is quite different in practice. Pearl and Bareinboim’s tutorial will be about the general concepts while mine will be about total mastery of the simplest nontrivial case, including code. Luckily, they have the right order. I recommend going to both. I also just released version 7.4 of Vowpal Wabbit. When I was a frustrated learning theorist, I did not understand why people were not using learning reductions to solve problems. I’ve been slowly discovering why with VW, and addressing the issues. One of the issues is that machine learning itself was not automatic enough, while another is that creating a very low overhead process for doing learning reductions is vitally important.

2 0.17245895 473 hunch net-2012-09-29-Vowpal Wabbit, version 7.0

Introduction: A new version of VW is out. The primary changes are: Learning Reductions: I’ve wanted to get learning reductions working and we’ve finally done it. Not everything is implemented yet, but VW now supports direct: Multiclass Classification --oaa or --ect. Cost Sensitive Multiclass Classification --csoaa or --wap. Contextual Bandit Classification --cb. Sequential Structured Prediction --searn or --dagger. In addition, it is now easy to build your own custom learning reductions for various plausible uses: feature diddling, custom structured prediction problems, or alternate learning reductions. This effort is far from done, but it is now in a generally useful state. Note that all learning reductions inherit the ability to do cluster parallel learning. Library interface: VW now has a basic library interface. The library provides most of the functionality of VW, with the limitation that it is monolithic and nonreentrant. These will be improved over

3 0.14994986 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

Introduction: Ron Bekkerman initiated an effort to create an edited book on parallel machine learning that Misha and I have been helping with. The breadth of efforts to parallelize machine learning surprised me: I was only aware of a small fraction initially. This put us in a unique position, with knowledge of a wide array of different efforts, so it is natural to put together a survey tutorial on the subject of parallel learning for KDD , tomorrow. This tutorial is not limited to the book itself however, as several interesting new algorithms have come out since we started inviting chapters. This tutorial should interest anyone trying to use machine learning on significant quantities of data, anyone interested in developing algorithms for such, and of course who has bragging rights to the fastest learning algorithm on planet earth (Also note the Modeling with Hadoop tutorial just before ours which deals with one way of trying to speed up learning algorithms. We have almost no

4 0.13581567 351 hunch net-2009-05-02-Wielding a New Abstraction

Introduction: This post is partly meant as an advertisement for the reductions tutorial Alina , Bianca , and I are planning to do at ICML . Please come, if you are interested. Many research programs can be thought of as finding and building new useful abstractions. The running example I’ll use is learning reductions where I have experience. The basic abstraction here is that we can build a learning algorithm capable of solving classification problems up to a small expected regret. This is used repeatedly to solve more complex problems. In working on a new abstraction, I think you typically run into many substantial problems of understanding, which make publishing particularly difficult. It is difficult to seriously discuss the reason behind or mechanism for abstraction in a conference paper with small page limits. People rarely see such discussions and hence have little basis on which to think about new abstractions. Another difficulty is that when building an abstraction, yo

5 0.12913397 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

Introduction: I just made version 6.1 of Vowpal Wabbit. Relative to 6.0, there are few new features, but many refinements. The cluster parallel learning code better supports multiple simultaneous runs, and other forms of parallelism have been mostly removed. This incidentally significantly simplifies the learning core. The online learning algorithms are more general, with support for l1 (via a truncated gradient variant) and l2 regularization, and a generalized form of variable metric learning. There is a solid persistent server mode which can train online, as well as serve answers to many simultaneous queries, either in text or binary. This should be a very good release if you are just getting started, as we’ve made it compile more automatically out of the box, have several new examples and updated documentation. As per tradition, we’re planning to do a tutorial at NIPS during the break at the parallel learning workshop at 2pm Spanish time Friday. I’ll cover the

6 0.1281524 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

7 0.12782763 103 hunch net-2005-08-18-SVM Adaptability

8 0.12683631 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

9 0.12285751 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

10 0.11912999 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

11 0.11388191 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

12 0.11361447 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

13 0.11098382 365 hunch net-2009-07-31-Vowpal Wabbit Open Source Project

14 0.10767895 236 hunch net-2007-03-15-Alternative Machine Learning Reductions Definitions

15 0.10519683 343 hunch net-2009-02-18-Decision by Vetocracy

16 0.097587943 360 hunch net-2009-06-15-In Active Learning, the question changes

17 0.09325248 304 hunch net-2008-06-27-Reviewing Horror Stories

18 0.092703253 281 hunch net-2007-12-21-Vowpal Wabbit Code Release

19 0.092422836 177 hunch net-2006-05-05-An ICML reject

20 0.091214001 441 hunch net-2011-08-15-Vowpal Wabbit 6.0


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.217), (1, 0.034), (2, -0.077), (3, -0.052), (4, 0.033), (5, 0.085), (6, 0.021), (7, -0.07), (8, -0.131), (9, 0.09), (10, -0.109), (11, -0.086), (12, 0.03), (13, 0.104), (14, -0.038), (15, -0.104), (16, 0.027), (17, 0.025), (18, 0.007), (19, -0.115), (20, 0.001), (21, 0.006), (22, -0.026), (23, -0.039), (24, -0.032), (25, -0.01), (26, -0.06), (27, -0.034), (28, 0.005), (29, 0.102), (30, 0.059), (31, -0.07), (32, 0.056), (33, -0.003), (34, -0.043), (35, 0.11), (36, 0.004), (37, 0.039), (38, 0.028), (39, 0.14), (40, -0.032), (41, 0.109), (42, -0.114), (43, -0.091), (44, 0.002), (45, 0.02), (46, 0.042), (47, 0.039), (48, -0.05), (49, -0.032)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94800133 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

Introduction: At NIPS I’m giving a tutorial on Learning to Interact. In essence this is about dealing with causality in a contextual bandit framework. Relative to previous tutorials, I’ll be covering several new results that changed my understanding of the nature of the problem. Note that Judea Pearl and Elias Bareinboim have a tutorial on causality. This might appear similar, but is quite different in practice. Pearl and Bareinboim’s tutorial will be about the general concepts while mine will be about total mastery of the simplest nontrivial case, including code. Luckily, they have the right order. I recommend going to both. I also just released version 7.4 of Vowpal Wabbit. When I was a frustrated learning theorist, I did not understand why people were not using learning reductions to solve problems. I’ve been slowly discovering why with VW, and addressing the issues. One of the issues is that machine learning itself was not automatic enough, while another is that creating a very low overhead process for doing learning reductions is vitally important.

2 0.80097002 473 hunch net-2012-09-29-Vowpal Wabbit, version 7.0

Introduction: A new version of VW is out. The primary changes are: Learning Reductions: I’ve wanted to get learning reductions working and we’ve finally done it. Not everything is implemented yet, but VW now supports direct: Multiclass Classification --oaa or --ect. Cost Sensitive Multiclass Classification --csoaa or --wap. Contextual Bandit Classification --cb. Sequential Structured Prediction --searn or --dagger. In addition, it is now easy to build your own custom learning reductions for various plausible uses: feature diddling, custom structured prediction problems, or alternate learning reductions. This effort is far from done, but it is now in a generally useful state. Note that all learning reductions inherit the ability to do cluster parallel learning. Library interface: VW now has a basic library interface. The library provides most of the functionality of VW, with the limitation that it is monolithic and nonreentrant. These will be improved over

3 0.70340383 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

Introduction: I’m releasing version 4.0 ( tarball ) of Vowpal Wabbit . The biggest change (by far) in this release is experimental support for cluster parallelism, with notable help from Daniel Hsu . I also took advantage of the major version number to introduce some incompatible changes, including switching to murmurhash 2 , and other alterations to cachefiles. You’ll need to delete and regenerate them. In addition, the precise specification for a “tag” (i.e. string that can be used to identify an example) changed—you can’t have a space between the tag and the ‘|’ at the beginning of the feature namespace. And, of course, we made it faster. For the future, I put up my todo list outlining the major future improvements I want to see in the code. I’m planning to discuss the current mechanism and results of the cluster parallel implementation at the large scale machine learning workshop at NIPS later this week. Several people have asked me to do a tutorial/walkthrough of VW, wh

4 0.61792701 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

Introduction: I just made version 6.1 of Vowpal Wabbit. Relative to 6.0, there are few new features, but many refinements. The cluster parallel learning code better supports multiple simultaneous runs, and other forms of parallelism have been mostly removed. This incidentally significantly simplifies the learning core. The online learning algorithms are more general, with support for l1 (via a truncated gradient variant) and l2 regularization, and a generalized form of variable metric learning. There is a solid persistent server mode which can train online, as well as serve answers to many simultaneous queries, either in text or binary. This should be a very good release if you are just getting started, as we’ve made it compile more automatically out of the box, have several new examples and updated documentation. As per tradition, we’re planning to do a tutorial at NIPS during the break at the parallel learning workshop at 2pm Spanish time Friday. I’ll cover the

5 0.60095352 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

Introduction: I’ve released version 5.0 of the Vowpal Wabbit online learning software. The major number has changed since the last release because I regard all earlier versions as obsolete—there are several new algorithms & features including substantial changes and upgrades to the default learning algorithm. The biggest changes are new algorithms: Nikos and I improved the default algorithm. The basic update rule still uses gradient descent, but the size of the update is carefully controlled so that it’s impossible to overrun the label. In addition, the normalization has changed. Computationally, these changes are virtually free and yield better results, sometimes much better. Less careful updates can be reenabled with –loss_function classic, although results are still not identical to previous due to normalization changes. Nikos also implemented the per-feature learning rates as per these two papers . Often, this works better than the default algorithm. It isn’t the defa

6 0.56154567 441 hunch net-2011-08-15-Vowpal Wabbit 6.0

7 0.55724168 365 hunch net-2009-07-31-Vowpal Wabbit Open Source Project

8 0.54510278 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

9 0.53738755 103 hunch net-2005-08-18-SVM Adaptability

10 0.5248577 351 hunch net-2009-05-02-Wielding a New Abstraction

11 0.51406687 436 hunch net-2011-06-22-Ultra LDA

12 0.51334196 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

13 0.48469132 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

14 0.48188052 327 hunch net-2008-11-16-Observations on Linearity for Reductions to Regression

15 0.47463602 29 hunch net-2005-02-25-Solution: Reinforcement Learning with Classification

16 0.46420282 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

17 0.46327773 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

18 0.46090284 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

19 0.44969231 337 hunch net-2009-01-21-Nearly all natural problems require nonlinearity

20 0.44486794 412 hunch net-2010-09-28-Machined Learnings


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(0, 0.055), (3, 0.026), (13, 0.261), (27, 0.236), (30, 0.036), (38, 0.063), (48, 0.012), (53, 0.038), (55, 0.086), (94, 0.094)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.96279013 212 hunch net-2006-10-04-Health of Conferences Wiki

Introduction: Aaron Hertzmann points out the health of conferences wiki , which has a great deal of information about how many different conferences function.

2 0.93723124 117 hunch net-2005-10-03-Not ICML

Introduction: Alex Smola showed me this ICML 2006 webpage. This is NOT the ICML we know, but rather some people at “Enformatika”. Investigation shows that they registered with an anonymous yahoo email account from dotregistrar.com the “Home of the $6.79 wholesale domain!” and their nameservers are by Turkticaret , a Turkish internet company. It appears the website has since been altered to “ ICNL ” (the above link uses the google cache). They say that imitation is the sincerest form of flattery, so the organizers of the real ICML 2006 must feel quite flattered.

3 0.93433112 386 hunch net-2010-01-13-Sam Roweis died

Introduction: and I can’t help but remember him. I first met Sam as an undergraduate at Caltech where he was TA for Hopfield ‘s class, and again when I visited Gatsby , when he invited me to visit Toronto , and at too many conferences to recount. His personality was a combination of enthusiastic and thoughtful, with a great ability to phrase a problem so it’s solution must be understood. With respect to my own work, Sam was the one who advised me to make my first tutorial , leading to others, and to other things, all of which I’m grateful to him for. In fact, my every interaction with Sam was positive, and that was his way. His death is being called a suicide which is so incompatible with my understanding of Sam that it strains my credibility. But we know that his many responsibilities were great, and it is well understood that basically all sane researchers have legions of inner doubts. Having been depressed now and then myself, it’s helpful to understand at least intellectually

4 0.93021917 137 hunch net-2005-12-09-Machine Learning Thoughts

Introduction: I added a link to Olivier Bousquet’s machine learning thoughts blog. Several of the posts may be of interest.

same-blog 5 0.87586135 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

Introduction: At NIPS I’m giving a tutorial on Learning to Interact. In essence this is about dealing with causality in a contextual bandit framework. Relative to previous tutorials, I’ll be covering several new results that changed my understanding of the nature of the problem. Note that Judea Pearl and Elias Bareinboim have a tutorial on causality. This might appear similar, but is quite different in practice. Pearl and Bareinboim’s tutorial will be about the general concepts while mine will be about total mastery of the simplest nontrivial case, including code. Luckily, they have the right order. I recommend going to both. I also just released version 7.4 of Vowpal Wabbit. When I was a frustrated learning theorist, I did not understand why people were not using learning reductions to solve problems. I’ve been slowly discovering why with VW, and addressing the issues. One of the issues is that machine learning itself was not automatic enough, while another is that creating a very low overhead process for doing learning reductions is vitally important.

6 0.81525159 214 hunch net-2006-10-13-David Pennock starts Oddhead

7 0.80607259 406 hunch net-2010-08-22-KDD 2010

8 0.80097657 51 hunch net-2005-04-01-The Producer-Consumer Model of Research

9 0.72808313 312 hunch net-2008-08-04-Electoralmarkets.com

10 0.70357251 133 hunch net-2005-11-28-A question of quantification

11 0.70296985 345 hunch net-2009-03-08-Prediction Science

12 0.70155096 95 hunch net-2005-07-14-What Learning Theory might do

13 0.70143199 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

14 0.69943976 351 hunch net-2009-05-02-Wielding a New Abstraction

15 0.69853681 259 hunch net-2007-08-19-Choice of Metrics

16 0.69847077 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”

17 0.69814676 79 hunch net-2005-06-08-Question: “When is the right time to insert the loss function?”

18 0.69690847 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

19 0.69580662 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

20 0.69536072 41 hunch net-2005-03-15-The State of Tight Bounds