
381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy


meta info for this blog

Source: html

Introduction: I’m releasing version 4.0 (tarball) of Vowpal Wabbit. The biggest change (by far) in this release is experimental support for cluster parallelism, with notable help from Daniel Hsu. I also took advantage of the major version number to introduce some incompatible changes, including switching to murmurhash 2, and other alterations to cachefiles. You’ll need to delete and regenerate them. In addition, the precise specification for a “tag” (i.e. string that can be used to identify an example) changed—you can’t have a space between the tag and the ‘|’ at the beginning of the feature namespace. And, of course, we made it faster. For the future, I put up my todo list outlining the major future improvements I want to see in the code. I’m planning to discuss the current mechanism and results of the cluster parallel implementation at the large scale machine learning workshop at NIPS later this week. Several people have asked me to do a tutorial/walkthrough of VW, which is arranged for Friday 2pm in the workshop room—no skiing for me Friday.
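To make the new tag rule concrete, here is a minimal sketch of emitting VW-format example lines; the plain-text layout (label, optional tag, then a ‘|’-prefixed namespace and features) follows the post, while the helper name and feature values are illustrative assumptions:

```python
def vw_line(label, tag, namespace, features):
    """Format one plain-text VW example line.

    Per the 4.0 change described above, the tag must sit directly
    against the '|' that opens the first namespace -- no space between.
    """
    feats = " ".join(f"{name}:{value}" for name, value in features)
    return f"{label} {tag}|{namespace} {feats}"

# Illustrative usage (label, tag, and features are made up):
print(vw_line(1, "example_42", "doc", [("word", 1), ("length", 0.5)]))
# -> 1 example_42|doc word:1 length:0.5
# With a space ("example_42 |doc ...") the string would no longer be
# read as the example's tag under the new specification.
```

This also suggests why cache files must be regenerated: cached examples store already-hashed feature indices, which the switch to murmurhash 2 invalidates.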


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The biggest change (by far) in this release is experimental support for cluster parallelism, with notable help from Daniel Hsu . [sent-3, score-0.753]

2 I also took advantage of the major version number to introduce some incompatible changes, including switching to murmurhash 2 , and other alterations to cachefiles. [sent-4, score-1.167]

3 In addition, the precise specification for a “tag” (i. [sent-6, score-0.229]

4 string that can be used to identify an example) changed—you can’t have a space between the tag and the ‘|’ at the beginning of the feature namespace. [sent-8, score-0.77]

5 For the future, I put up my todo list outlining the major future improvements I want to see in the code. [sent-10, score-0.717]

6 I’m planning to discuss the current mechanism and results of the cluster parallel implementation at the large scale machine learning workshop at NIPS later this week. [sent-11, score-0.926]

7 Several people have asked me to do a tutorial/walkthrough of VW, which is arranged for Friday 2pm in the workshop room—no skiing for me Friday. [sent-12, score-0.664]


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('tag', 0.376), ('major', 0.237), ('cluster', 0.216), ('outlining', 0.167), ('delete', 0.167), ('string', 0.155), ('alterations', 0.155), ('incompatible', 0.155), ('arranged', 0.155), ('skiing', 0.146), ('version', 0.144), ('switching', 0.139), ('introduce', 0.134), ('releasing', 0.134), ('friday', 0.134), ('parallelism', 0.134), ('workshop', 0.13), ('specification', 0.129), ('biggest', 0.129), ('identify', 0.129), ('join', 0.129), ('future', 0.126), ('notable', 0.121), ('took', 0.118), ('interests', 0.118), ('release', 0.115), ('hsu', 0.115), ('vw', 0.113), ('beginning', 0.11), ('changed', 0.11), ('implementation', 0.108), ('daniel', 0.104), ('vowpal', 0.102), ('wabbit', 0.102), ('precise', 0.1), ('parallel', 0.1), ('later', 0.099), ('asked', 0.099), ('planning', 0.099), ('discuss', 0.096), ('room', 0.096), ('put', 0.094), ('improvements', 0.093), ('experimental', 0.092), ('changes', 0.089), ('addition', 0.087), ('advantage', 0.085), ('change', 0.08), ('current', 0.078), ('course', 0.077)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy


2 0.20057952 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

Introduction: I just made version 6.1 of Vowpal Wabbit. Relative to 6.0, there are few new features, but many refinements. The cluster parallel learning code better supports multiple simultaneous runs, and other forms of parallelism have been mostly removed. This incidentally significantly simplifies the learning core. The online learning algorithms are more general, with support for l1 (via a truncated gradient variant; see the sketch after this list) and l2 regularization, and a generalized form of variable metric learning. There is a solid persistent server mode which can train online, as well as serve answers to many simultaneous queries, either in text or binary. This should be a very good release if you are just getting started, as we’ve made it compile more automatically out of the box, have several new examples and updated documentation. As per tradition, we’re planning to do a tutorial at NIPS during the break at the parallel learning workshop at 2pm Spanish time Friday. I’ll cover the

3 0.15341425 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

Introduction: I’ve released version 5.0 of the Vowpal Wabbit online learning software. The major number has changed since the last release because I regard all earlier versions as obsolete—there are several new algorithms & features including substantial changes and upgrades to the default learning algorithm. The biggest changes are new algorithms: Nikos and I improved the default algorithm. The basic update rule still uses gradient descent, but the size of the update is carefully controlled so that it’s impossible to overrun the label. In addition, the normalization has changed. Computationally, these changes are virtually free and yield better results, sometimes much better. Less careful updates can be reenabled with --loss_function classic, although results are still not identical to previous due to normalization changes. Nikos also implemented the per-feature learning rates as per these two papers. Often, this works better than the default algorithm. It isn’t the defa

4 0.12453533 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

Introduction: Alekh, John, Ofer, and I are organizing a workshop at NIPS this year on learning in parallel and distributed environments. The general interest level in parallel learning seems to be growing rapidly, so I expect quite a bit of attendance. Please join us if you are parallel-interested. And, if you are working in the area of parallel learning, please consider submitting an abstract due Oct. 17 for presentation at the workshop.

5 0.11912999 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

Introduction: At NIPS I’m giving a tutorial on Learning to Interact. In essence this is about dealing with causality in a contextual bandit framework. Relative to previous tutorials, I’ll be covering several new results that changed my understanding of the nature of the problem. Note that Judea Pearl and Elias Bareinboim have a tutorial on causality. This might appear similar, but is quite different in practice. Pearl and Bareinboim’s tutorial will be about the general concepts while mine will be about total mastery of the simplest nontrivial case, including code. Luckily, they have the right order. I recommend going to both. I also just released version 7.4 of Vowpal Wabbit. When I was a frustrated learning theorist, I did not understand why people were not using learning reductions to solve problems. I’ve been slowly discovering why with VW, and addressing the issues. One of the issues is that machine learning itself was not automatic enough, while another is that creatin

6 0.10803989 473 hunch net-2012-09-29-Vowpal Wabbit, version 7.0

7 0.10768677 346 hunch net-2009-03-18-Parallel ML primitives

8 0.10421567 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

9 0.10007671 281 hunch net-2007-12-21-Vowpal Wabbit Code Release

10 0.079844125 300 hunch net-2008-04-30-Concerns about the Large Scale Learning Challenge

11 0.078177914 384 hunch net-2009-12-24-Top graduates this season

12 0.072966278 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

13 0.072828233 365 hunch net-2009-07-31-Vowpal Wabbit Open Source Project

14 0.072441131 234 hunch net-2007-02-22-Create Your Own ICML Workshop

15 0.070485368 279 hunch net-2007-12-19-Cool and interesting things seen at NIPS

16 0.069105268 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

17 0.068124302 375 hunch net-2009-10-26-NIPS workshops

18 0.06706354 285 hunch net-2008-01-23-Why Workshop?

19 0.066337809 465 hunch net-2012-05-12-ICML accepted papers and early registration

20 0.06478247 354 hunch net-2009-05-17-Server Update
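Entry 2 above mentions l1 regularization via a truncated gradient variant. A rough sketch of that idea follows, in the spirit of Langford, Li & Zhang’s truncated gradient, simplified to shrink on every step with squared loss; the function name and constants are assumptions, not VW’s exact update:

```python
import numpy as np

def truncated_gradient_step(w, x, y, eta, lam):
    """One online update with truncated-gradient-style l1 regularization.

    A plain gradient-descent step for squared loss, followed by shrinking
    each weight toward zero by eta*lam -- truncated at zero, so the
    penalty never flips a weight's sign (the 'truncated' part).
    """
    w = w + eta * (y - w @ x) * x                               # gradient step
    return np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)  # shrink/truncate

# Toy usage with made-up numbers:
w = truncated_gradient_step(np.zeros(3), np.array([1.0, 0.0, 2.0]),
                            y=1.0, eta=0.1, lam=0.01)
```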


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.133), (1, -0.022), (2, -0.111), (3, -0.032), (4, 0.065), (5, 0.112), (6, -0.079), (7, -0.047), (8, -0.109), (9, 0.075), (10, -0.084), (11, -0.068), (12, 0.058), (13, 0.029), (14, -0.012), (15, -0.118), (16, -0.044), (17, 0.102), (18, -0.028), (19, -0.105), (20, -0.005), (21, 0.078), (22, -0.071), (23, 0.012), (24, -0.04), (25, 0.038), (26, -0.002), (27, -0.007), (28, -0.003), (29, 0.019), (30, 0.065), (31, -0.008), (32, -0.001), (33, -0.084), (34, 0.008), (35, 0.016), (36, 0.024), (37, 0.053), (38, -0.051), (39, 0.033), (40, -0.048), (41, 0.014), (42, -0.096), (43, -0.011), (44, 0.019), (45, 0.024), (46, -0.011), (47, -0.006), (48, 0.053), (49, 0.011)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97881114 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy


2 0.77553052 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

Introduction: I just made version 6.1 of Vowpal Wabbit. Relative to 6.0, there are few new features, but many refinements. The cluster parallel learning code better supports multiple simultaneous runs, and other forms of parallelism have been mostly removed. This incidentally significantly simplifies the learning core. The online learning algorithms are more general, with support for l1 (via a truncated gradient variant) and l2 regularization, and a generalized form of variable metric learning. There is a solid persistent server mode which can train online, as well as serve answers to many simultaneous queries, either in text or binary. This should be a very good release if you are just getting started, as we’ve made it compile more automatically out of the box, have several new examples and updated documentation. As per tradition, we’re planning to do a tutorial at NIPS during the break at the parallel learning workshop at 2pm Spanish time Friday. I’ll cover the

3 0.66776097 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

Introduction: At NIPS I’m giving a tutorial on Learning to Interact. In essence this is about dealing with causality in a contextual bandit framework. Relative to previous tutorials, I’ll be covering several new results that changed my understanding of the nature of the problem. Note that Judea Pearl and Elias Bareinboim have a tutorial on causality. This might appear similar, but is quite different in practice. Pearl and Bareinboim’s tutorial will be about the general concepts while mine will be about total mastery of the simplest nontrivial case, including code. Luckily, they have the right order. I recommend going to both. I also just released version 7.4 of Vowpal Wabbit. When I was a frustrated learning theorist, I did not understand why people were not using learning reductions to solve problems. I’ve been slowly discovering why with VW, and addressing the issues. One of the issues is that machine learning itself was not automatic enough, while another is that creatin

4 0.66680747 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

Introduction: I’ve released version 5.0 of the Vowpal Wabbit online learning software. The major number has changed since the last release because I regard all earlier versions as obsolete—there are several new algorithms & features including substantial changes and upgrades to the default learning algorithm. The biggest changes are new algorithms: Nikos and I improved the default algorithm. The basic update rule still uses gradient descent, but the size of the update is carefully controlled so that it’s impossible to overrun the label. In addition, the normalization has changed. Computationally, these changes are virtually free and yield better results, sometimes much better. Less careful updates can be reenabled with --loss_function classic, although results are still not identical to previous due to normalization changes. Nikos also implemented the per-feature learning rates as per these two papers. Often, this works better than the default algorithm. It isn’t the defa

5 0.65084469 441 hunch net-2011-08-15-Vowpal Wabbit 6.0

Introduction: I just released Vowpal Wabbit 6.0. Since the last version: VW is now 2-3 orders of magnitude faster at linear learning, primarily thanks to Alekh. Given the baseline, this is loads of fun, allowing us to easily deal with terafeature datasets, and dwarfing the scale of any other open source projects. The core improvement here comes from effective parallelization over kilonode clusters (either Hadoop or not). This code is highly scalable, so it even helps with clusters of size 2 (and doesn’t hurt for clusters of size 1). The core allreduce technique (see the toy sketch after this list) appears widely and easily reused—we’ve already used it to parallelize Conjugate Gradient, LBFGS, and two variants of online learning. We’ll be documenting how to do this more thoroughly, but for now “README_cluster” and associated scripts should provide a good starting point. The new LBFGS code from Miro seems to commonly dominate the existing conjugate gradient code in time/quality tradeoffs. The new matrix factoriz

6 0.63927126 473 hunch net-2012-09-29-Vowpal Wabbit, version 7.0

7 0.56947404 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

8 0.56788564 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

9 0.52078283 365 hunch net-2009-07-31-Vowpal Wabbit Open Source Project

10 0.51974028 281 hunch net-2007-12-21-Vowpal Wabbit Code Release

11 0.51844358 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

12 0.51145583 436 hunch net-2011-06-22-Ultra LDA

13 0.50445187 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

14 0.48280227 346 hunch net-2009-03-18-Parallel ML primitives

15 0.44458842 229 hunch net-2007-01-26-Parallel Machine Learning Problems

16 0.44296595 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

17 0.44143733 300 hunch net-2008-04-30-Concerns about the Large Scale Learning Challenge

18 0.42462629 490 hunch net-2013-11-09-Graduates and Postdocs

19 0.41091421 114 hunch net-2005-09-20-Workshop Proposal: Atomic Learning

20 0.40078726 128 hunch net-2005-11-05-The design of a computing cluster
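Entry 5 above sketches the allreduce primitive behind VW’s cluster code. As a toy, in-process stand-in for the contract (the real implementation reduces up and broadcasts down a spanning tree of machines, with no central coordinator; everything named here is illustrative):

```python
import numpy as np

def allreduce_average(node_vectors):
    """Toy allreduce: every node contributes a vector and every node
    receives the elementwise average back. A real allreduce computes
    the same result via reduce-then-broadcast over a tree of machines.
    """
    avg = np.mean(node_vectors, axis=0)
    return [avg.copy() for _ in node_vectors]

# Data-parallel sketch: each node computes a gradient on its own data
# shard, then all nodes continue with the synchronized average.
local_grads = [np.array([0.2, -0.1]), np.array([0.4, 0.3]), np.array([0.0, 0.1])]
synced = allreduce_average(local_grads)
```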


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(27, 0.224), (53, 0.031), (55, 0.05), (93, 0.399), (94, 0.164), (95, 0.026)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.89318836 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy


2 0.85229337 112 hunch net-2005-09-14-The Predictionist Viewpoint

Introduction: Virtually every discipline of significant human endeavor has a way explaining itself as fundamental and important. In all the cases I know of, they are both right (they are vital) and wrong (they are not solely vital). Politics. This is the one that everyone is familiar with at the moment. “What could be more important than the process of making decisions?” Science and Technology. This is the one that we-the-academics are familiar with. “The loss of modern science and technology would be catastrophic.” Military. “Without the military, a nation will be invaded and destroyed.” (insert your favorite here) Within science and technology, the same thing happens again. Mathematics. “What could be more important than a precise language for establishing truths?” Physics. “Nothing is more fundamental than the laws which govern the universe. Understanding them is the key to understanding everything else.” Biology. “Without life, we wouldn’t be here, so clearly the s

3 0.7871868 363 hunch net-2009-07-09-The Machine Learning Forum

Introduction: Dear Fellow Machine Learners, For the past year or so I have become increasingly frustrated with the peer review system in our field. I constantly get asked to review papers in which I have no interest. At the same time, as an action editor in JMLR, I constantly have to harass people to review papers. When I send papers to conferences and to journals I often get rejected with reviews that, at least in my mind, make no sense. Finally, I have a very hard time keeping up with the best new work, because I don’t know where to look for it… I decided to try and do something to improve the situation. I started a new web site, which I decided to call “The machine learning forum” the URL is http://themachinelearningforum.org The main idea behind this web site is to remove anonymity from the review process. In this site, all opinions are attributed to the actual person that expressed them. I expect that this will improve the quality of the reviews. An obvious other effect is that there wil

4 0.56780422 276 hunch net-2007-12-10-Learning Track of International Planning Competition

Introduction: The International Planning Competition (IPC) is a biennial event organized in the context of the International Conference on Automated Planning and Scheduling (ICAPS). This year, for the first time, there will be a learning track of the competition. For more information you can go to the competition web-site. The competitions are typically organized around a number of planning domains that can vary from year to year, where a planning domain is simply a class of problems that share a common action schema—e.g. Blocksworld is a well-known planning domain that contains a problem instance for each possible initial tower configuration and goal configuration. Some other domains have included Logistics, Airport, Freecell, PipesWorld, and many others. For each domain the competition includes a number of problems (say 40-50) and the planners are run on each problem with a time limit for each problem (around 30 minutes). The problems are hard enough that many problems are not solved within th

5 0.56366926 229 hunch net-2007-01-26-Parallel Machine Learning Problems

Introduction: Parallel machine learning is a subject rarely addressed at machine learning conferences. Nevertheless, it seems likely to increase in importance because: Data set sizes appear to be growing substantially faster than computation. Essentially, this happens because more and more sensors of various sorts are being hooked up to the internet. Serial speedups of processors seem relatively stalled. The new trend is to make processors more powerful by making them multicore. Both AMD and Intel are making dual core designs standard, with plans for more parallelism in the future. IBM’s Cell processor has (essentially) 9 cores. Modern graphics chips can have an order of magnitude more separate execution units. The meaning of ‘core’ varies a bit from processor to processor, but the overall trend seems quite clear. So, how do we parallelize machine learning algorithms? The simplest and most common technique is to simply run the same learning algorithm with di

6 0.5621748 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

7 0.55795783 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

8 0.55755198 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

9 0.55730432 120 hunch net-2005-10-10-Predictive Search is Coming

10 0.55589551 136 hunch net-2005-12-07-Is the Google way the way for machine learning?

11 0.55196851 43 hunch net-2005-03-18-Binomial Weighting

12 0.54907882 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

13 0.54677659 95 hunch net-2005-07-14-What Learning Theory might do

14 0.54668576 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

15 0.54666156 237 hunch net-2007-04-02-Contextual Scaling

16 0.54622442 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

17 0.54505861 252 hunch net-2007-07-01-Watchword: Online Learning

18 0.54476321 371 hunch net-2009-09-21-Netflix finishes (and starts)

19 0.54421556 253 hunch net-2007-07-06-Idempotent-capable Predictors

20 0.54397076 351 hunch net-2009-05-02-Wielding a New Abstraction