hunch_net-2013-490 knowledge-graph by maker-knowledge-mining

490 hunch net-2013-11-09-Graduates and Postdocs


meta info for this blog

Source: html

Introduction: Several strong graduates are on the job market this year. Alekh Agarwal made the most scalable public learning algorithm as an intern two years ago. He has a deep and broad understanding of optimization and learning as well as the ability and will to make things happen programming-wise. I’ve been privileged to have Alekh visiting me in NY where he will be sorely missed. John Duchi created Adagrad, which is a commonly helpful improvement over online gradient descent that is seeing wide adoption, including in Vowpal Wabbit. He has a similarly deep and broad understanding of optimization and learning with significant industry experience at Google. Alekh and John have often coauthored together. Stephane Ross visited me a year ago over the summer, implementing many new algorithms and working out the first scale-free online update rule, which is now the default in Vowpal Wabbit. Stephane is not on the market—Google robbed the cradle successfully. I’m sure that he will do great things.
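The post credits Adagrad as a commonly helpful improvement over plain online gradient descent. As a quick illustration of the idea, here is a minimal sketch of the per-coordinate Adagrad update for a linear predictor with squared loss; the function name, step size, and toy data are assumptions for illustration, not Duchi's paper or Vowpal Wabbit's actual implementation.

```python
import numpy as np

def adagrad_step(w, g_sq, x, y, eta=0.1, eps=1e-8):
    """One online Adagrad update for a linear predictor with squared loss:
    each coordinate's step is shrunk by the square root of its accumulated
    squared gradients, so frequently-updated coordinates move more slowly."""
    pred = float(w @ x)
    grad = (pred - y) * x                      # gradient of 0.5 * (pred - y)**2
    g_sq = g_sq + grad ** 2                    # per-coordinate accumulator
    w = w - eta * grad / (np.sqrt(g_sq) + eps)
    return w, g_sq

# toy usage with made-up data
w, g_sq = np.zeros(3), np.zeros(3)
for x, y in [(np.array([1.0, 0.0, 2.0]), 1.0),
             (np.array([0.0, 1.0, 1.0]), 0.0)]:
    w, g_sq = adagrad_step(w, g_sq, x, y)
```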
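The post also highlights the scale-free online update rule worked out by Stephane Ross, now the default in Vowpal Wabbit. The sketch below is only a rough, hedged illustration of what scale invariance means here, assuming a simple normalized-update scheme: track the largest magnitude seen for each feature and divide that coordinate's step by it, so multiplying a feature by a constant leaves the learned predictor essentially unchanged. The bookkeeping and constants are assumptions, not VW's actual code.

```python
import numpy as np

def scale_free_step(w, s, N, t, x, y, eta=0.5, eps=1e-12):
    """One illustrative scale-invariant online update: s_i tracks the largest
    |x_i| seen so far, existing weights are re-expressed when that scale grows,
    and the gradient step is divided by s_i**2 and a global normalizer."""
    w = w.copy()                               # avoid mutating the caller's array
    ax = np.abs(x)
    grew = ax > s
    w[grew] *= s[grew] ** 2 / ax[grew] ** 2    # re-express weights at the new scale
    s = np.maximum(s, ax)
    safe_s = np.where(s > 0, s, 1.0)
    pred = float(w @ x)
    grad = (pred - y) * x                      # squared-loss gradient
    N += float(np.sum((x / safe_s) ** 2))
    t += 1
    w -= eta * (t / max(N, eps)) * grad / safe_s ** 2
    return w, s, N, t

# toy usage: feature 0 lives on a much larger scale than feature 1
w, s, N, t = np.zeros(2), np.zeros(2), 0.0, 0
for x, y in [(np.array([100.0, 1.0]), 1.0),
             (np.array([50.0, 2.0]), 0.0)]:
    w, s, N, t = scale_free_step(w, s, N, t, x, y)
```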


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Several strong graduates are on the job market this year. [sent-1, score-0.249]

2 Alekh Agarwal made the most scalable public learning algorithm as an intern two years ago. [sent-2, score-0.087]

3 He has a deep and broad understanding of optimization and learning as well as the ability and will to make things happen programming-wise. [sent-3, score-0.419]

4 I’ve been privileged to have Alekh visiting me in NY where he will be sorely missed. [sent-4, score-0.2]

5 John Duchi created Adagrad, which is a commonly helpful improvement over online gradient descent that is seeing wide adoption, including in Vowpal Wabbit. [sent-5, score-0.079]

6 He has a similarly deep and broad understanding of optimization and learning with significant industry experience at Google. [sent-6, score-0.51]

7 Stephane Ross visited me a year ago over the summer, implementing many new algorithms and working out the first scale-free online update rule, which is now the default in Vowpal Wabbit. [sent-8, score-0.308]

8 Stephane is not on the market—Google robbed the cradle successfully. I’m sure that he will do great things. [sent-9, score-0.091]

9 Anna Choromanska visited me this summer, where we worked on extreme multiclass classification. [sent-10, score-0.154]

10 She is very good at focusing on a problem and grinding it into submission both in theory and in practice—I can see why she wins awards for her work. [sent-11, score-0.281]

11 I also wanted to mention some postdoc openings in machine learning. [sent-13, score-0.408]

12 In New York, Leon Bottou, Miro Dudik, and I are looking for someone. [sent-14, score-0.268]

13 In New England, Sham Kakade and Adam Kalai are looking for someone. [sent-16, score-0.268]

14 Also in the New York area, Daniel Hsu and Tong Zhang may both be considering a postdoc with no particular deadline. [sent-18, score-0.218]

15 In England, Peter Flach is looking for two postdocs on a health & machine learning project with a deadline of December 2. [sent-19, score-0.506]

16 I consider machine learning for healthcare of critical importance in the future. [sent-20, score-0.109]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('december', 0.303), ('alekh', 0.272), ('anna', 0.245), ('postdoc', 0.218), ('stephane', 0.218), ('england', 0.19), ('looking', 0.168), ('deadline', 0.166), ('market', 0.154), ('visited', 0.154), ('broad', 0.138), ('vowpal', 0.133), ('google', 0.123), ('york', 0.12), ('summer', 0.116), ('john', 0.115), ('openings', 0.109), ('privileged', 0.109), ('healthcare', 0.109), ('optimization', 0.105), ('duchi', 0.101), ('miro', 0.101), ('someone', 0.1), ('deep', 0.099), ('awards', 0.095), ('agarwal', 0.095), ('wins', 0.095), ('graduates', 0.095), ('postdocs', 0.095), ('adoption', 0.095), ('focusing', 0.091), ('visiting', 0.091), ('successfully', 0.091), ('ny', 0.091), ('industry', 0.091), ('scalable', 0.087), ('kalai', 0.084), ('future', 0.082), ('bottou', 0.081), ('tong', 0.081), ('mention', 0.081), ('kakade', 0.079), ('seeing', 0.079), ('zhang', 0.079), ('new', 0.077), ('peter', 0.077), ('implementing', 0.077), ('health', 0.077), ('understanding', 0.077), ('adam', 0.075)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 490 hunch net-2013-11-09-Graduates and Postdocs


2 0.14049685 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

Introduction: The New York Machine Learning Symposium is October 19 with a 2 page abstract deadline due September 13 via email with subject “Machine Learning Poster Submission” sent to physicalscience@nyas.org. Everyone is welcome to submit. Last year’s attendance was 246 and I expect more this year. The primary experiment for ICML 2013 is multiple paper submission deadlines with rolling review cycles. The key dates are October 1, December 15, and February 15. This is an attempt to shift ICML further towards a journal style review process and reduce peak load. The “not for proceedings” experiment from this year’s ICML is not continuing. Edit: Fixed second ICML deadline.

3 0.1226131 481 hunch net-2013-04-15-NEML II

Introduction: Adam Kalai points out the New England Machine Learning Day May 1 at MSR New England. There is a poster session with abstracts due April 19. I understand last year’s NEML went well and it’s great to meet your neighbors at regional workshops like this.

4 0.10999987 473 hunch net-2012-09-29-Vowpal Wabbit, version 7.0

Introduction: A new version of VW is out. The primary changes are: Learning Reductions: I’ve wanted to get learning reductions working and we’ve finally done it. Not everything is implemented yet, but VW now supports direct: Multiclass Classification –oaa or –ect. Cost Sensitive Multiclass Classification –csoaa or –wap. Contextual Bandit Classification –cb. Sequential Structured Prediction –searn or –dagger. In addition, it is now easy to build your own custom learning reductions for various plausible uses: feature diddling, custom structured prediction problems, or alternate learning reductions. This effort is far from done, but it is now in a generally useful state. Note that all learning reductions inherit the ability to do cluster parallel learning. Library interface: VW now has a basic library interface. The library provides most of the functionality of VW, with the limitation that it is monolithic and nonreentrant. These will be improved over

5 0.10939591 369 hunch net-2009-08-27-New York Area Machine Learning Events

Introduction: Several events are happening in the NY area. Barriers in Computational Learning Theory Workshop, Aug 28. That’s tomorrow near Princeton. I’m looking forward to speaking at this one on “Getting around Barriers in Learning Theory”, but several other talks are of interest, particularly to the CS theory inclined. Claudia Perlich is running the INFORMS Data Mining Contest with a deadline of Sept. 25. This is a contest using real health record data (they partnered with HealthCare Intelligence) to predict transfers and mortality. In the current US health care reform debate, the case studies of high costs we hear strongly suggest machine learning & statistics can save many billions. The Singularity Summit October 3&4. This is for the AIists out there. Several of the talks look interesting, although unfortunately I’ll miss it for ALT. Predictive Analytics World, Oct 20-21. This is stretching the definition of “New York Area” a bit, but the train to DC is reasonable.

6 0.10361937 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

7 0.10173482 462 hunch net-2012-04-20-Both new: STOC workshops and NEML

8 0.10156055 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

9 0.097763345 355 hunch net-2009-05-19-CI Fellows

10 0.097044721 477 hunch net-2013-01-01-Deep Learning 2012

11 0.096452214 329 hunch net-2008-11-28-A Bumper Crop of Machine Learning Graduates

12 0.093481719 281 hunch net-2007-12-21-Vowpal Wabbit Code Release

13 0.088539481 441 hunch net-2011-08-15-Vowpal Wabbit 6.0

14 0.085617401 186 hunch net-2006-06-24-Online convex optimization at COLT

15 0.084760465 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials

16 0.082631335 384 hunch net-2009-12-24-Top graduates this season

17 0.082127988 267 hunch net-2007-10-17-Online as the new adjective

18 0.080897123 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

19 0.080734015 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

20 0.079038933 448 hunch net-2011-10-24-2011 ML symposium and the bears


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.172), (1, -0.047), (2, -0.124), (3, -0.065), (4, 0.046), (5, -0.005), (6, -0.084), (7, -0.034), (8, -0.159), (9, -0.009), (10, 0.043), (11, -0.083), (12, 0.026), (13, -0.118), (14, 0.023), (15, 0.016), (16, 0.054), (17, -0.026), (18, 0.013), (19, 0.002), (20, 0.056), (21, -0.003), (22, -0.002), (23, -0.019), (24, -0.068), (25, 0.001), (26, 0.023), (27, 0.032), (28, -0.072), (29, -0.022), (30, 0.09), (31, -0.014), (32, -0.081), (33, 0.013), (34, 0.038), (35, 0.019), (36, -0.0), (37, -0.088), (38, -0.117), (39, 0.106), (40, 0.012), (41, 0.019), (42, -0.089), (43, 0.001), (44, 0.035), (45, 0.11), (46, 0.1), (47, -0.007), (48, 0.031), (49, -0.078)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95271039 490 hunch net-2013-11-09-Graduates and Postdocs


2 0.60381061 462 hunch net-2012-04-20-Both new: STOC workshops and NEML

Introduction: May 16 in Cambridge is the New England Machine Learning Day, a first regional workshop/symposium on machine learning. To present a poster, submit an abstract by May 5. May 19 in New York, STOC is coming to town and rather surprisingly having workshops which should be quite a bit of fun. I’ll be speaking at Algorithms for Distributed and Streaming Data.

3 0.57013953 481 hunch net-2013-04-15-NEML II

Introduction: Adam Kalai points out the New England Machine Learning Day May 1 at MSR New England. There is a poster session with abstracts due April 19. I understand last year’s NEML went well and it’s great to meet your neighbors at regional workshops like this.

4 0.56367636 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

Introduction: I’ve released version 5.0 of the Vowpal Wabbit online learning software. The major number has changed since the last release because I regard all earlier versions as obsolete—there are several new algorithms & features including substantial changes and upgrades to the default learning algorithm. The biggest changes are new algorithms: Nikos and I improved the default algorithm. The basic update rule still uses gradient descent, but the size of the update is carefully controlled so that it’s impossible to overrun the label. In addition, the normalization has changed. Computationally, these changes are virtually free and yield better results, sometimes much better. Less careful updates can be reenabled with –loss_function classic, although results are still not identical to previous due to normalization changes. Nikos also implemented the per-feature learning rates as per these two papers. Often, this works better than the default algorithm. It isn’t the defa

5 0.53798449 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

Introduction: I just made version 6.1 of Vowpal Wabbit. Relative to 6.0, there are few new features, but many refinements. The cluster parallel learning code better supports multiple simultaneous runs, and other forms of parallelism have been mostly removed. This incidentally significantly simplifies the learning core. The online learning algorithms are more general, with support for l1 (via a truncated gradient variant) and l2 regularization, and a generalized form of variable metric learning. There is a solid persistent server mode which can train online, as well as serve answers to many simultaneous queries, either in text or binary. This should be a very good release if you are just getting started, as we’ve made it compile more automatically out of the box, have several new examples and updated documentation. As per tradition, we’re planning to do a tutorial at NIPS during the break at the parallel learning workshop at 2pm Spanish time Friday. I’ll cover the

6 0.53504384 329 hunch net-2008-11-28-A Bumper Crop of Machine Learning Graduates

7 0.51938546 281 hunch net-2007-12-21-Vowpal Wabbit Code Release

8 0.51596701 460 hunch net-2012-03-24-David Waltz

9 0.51227689 473 hunch net-2012-09-29-Vowpal Wabbit, version 7.0

10 0.50017709 436 hunch net-2011-06-22-Ultra LDA

11 0.49888009 415 hunch net-2010-10-28-NY ML Symposium 2010

12 0.48827761 477 hunch net-2013-01-01-Deep Learning 2012

13 0.48359659 441 hunch net-2011-08-15-Vowpal Wabbit 6.0

14 0.47461125 178 hunch net-2006-05-08-Big machine learning

15 0.47053859 246 hunch net-2007-06-13-Not Posting

16 0.44937876 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

17 0.44931534 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

18 0.4468289 443 hunch net-2011-09-03-Fall Machine Learning Events

19 0.44473791 346 hunch net-2009-03-18-Parallel ML primitives

20 0.44054928 357 hunch net-2009-05-30-Many ways to Learn this summer


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(7, 0.02), (9, 0.359), (10, 0.014), (27, 0.103), (38, 0.022), (51, 0.013), (53, 0.106), (55, 0.139), (86, 0.025), (94, 0.095), (95, 0.016)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.89036137 490 hunch net-2013-11-09-Graduates and Postdocs


2 0.87479961 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials

Introduction: This is a reminder that many deadlines for summer conference registration are coming up, and attendance is a very good idea. It’s entirely reasonable for anyone to visit a conference once, even when they don’t have a paper. For students, visiting a conference is almost a ‘must’—there is nowhere else that a broad cross-section of research is on display. Workshops are also a very good idea. ICML has 11, KDD has 9, and AAAI has 19. Workshops provide an opportunity to get a good understanding of some current area of research. They are probably the forum most conducive to starting new lines of research because they are so interactive. Tutorials are a good way to gain some understanding of a long-standing direction of research. They are generally more coherent than workshops. ICML has 7 and AAAI has 15.

3 0.86862361 399 hunch net-2010-05-20-Google Predict

Introduction: Slashdot points out Google Predict. I’m not privy to the details, but this has the potential to be extremely useful, as in many applications simply having an easy mechanism to apply existing learning algorithms can be extremely helpful. This differs goalwise from MLcomp—instead of public comparisons for research purposes, it’s about private utilization of good existing algorithms. It also differs infrastructurally, since a system designed to do this is much less awkward than using Amazon’s cloud computing. The latter implies that datasets several orders of magnitude larger can be handled up to limits imposed by network and storage.

4 0.79688805 361 hunch net-2009-06-24-Interesting papers at UAICMOLT 2009

Introduction: Here’s a list of papers that I found interesting at ICML / COLT / UAI in 2009. Elad Hazan and Comandur Seshadhri, Efficient learning algorithms for changing environments at ICML. This paper shows how to adapt learning algorithms that compete with fixed predictors to compete with changing policies. The definition of regret they deal with seems particularly useful in many situations. Hal Daume, Unsupervised Search-based Structured Prediction at ICML. This paper shows a technique for reducing unsupervised learning to supervised learning which (a) makes a fast unsupervised learning algorithm and (b) makes semisupervised learning both easy and highly effective. There were two papers with similar results on active learning in the KWIK framework for linear regression, both reducing the sample complexity to . One was Nicolo Cesa-Bianchi, Claudio Gentile, and Francesco Orabona, Robust Bounds for Classification via Selective Sampling at ICML and the other was Thoma

5 0.7529062 330 hunch net-2008-12-07-A NIPS paper

Introduction: I’m skipping NIPS this year in favor of Ada, but I wanted to point out this paper by Andriy Mnih and Geoff Hinton. The basic claim of the paper is that by carefully but automatically constructing a binary tree over words, it’s possible to predict words well with huge computational resource savings over unstructured approaches. I’m interested in this beyond the application to word prediction because it is relevant to the general normalization problem: If you want to predict the probability of one of a large number of events, often you must compute a predicted score for all the events and then normalize, a computationally inefficient operation. The problem comes up in many places using probabilistic models, but I’ve run into it with high-dimensional regression. There are a couple workarounds for this computational bug: Approximate. There are many ways. Often the approximations are uncontrolled (i.e. can be arbitrarily bad), and hence finicky in application. Avoid. Y

6 0.75121307 157 hunch net-2006-02-18-Multiplication of Learned Probabilities is Dangerous

7 0.57532299 403 hunch net-2010-07-18-ICML & COLT 2010

8 0.52212119 116 hunch net-2005-09-30-Research in conferences

9 0.50246781 207 hunch net-2006-09-12-Incentive Compatible Reviewing

10 0.48865807 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

11 0.48755658 160 hunch net-2006-03-02-Why do people count for learning?

12 0.48503798 452 hunch net-2012-01-04-Why ICML? and the summer conferences

13 0.48474759 437 hunch net-2011-07-10-ICML 2011 and the future

14 0.48401958 141 hunch net-2005-12-17-Workshops as Franchise Conferences

15 0.48366433 40 hunch net-2005-03-13-Avoiding Bad Reviewing

16 0.48091939 463 hunch net-2012-05-02-ICML: Behind the Scenes

17 0.48006287 423 hunch net-2011-02-02-User preferences for search engines

18 0.4775528 454 hunch net-2012-01-30-ICML Posters and Scope

19 0.47678423 140 hunch net-2005-12-14-More NIPS Papers II

20 0.47185832 395 hunch net-2010-04-26-Compassionate Reviewing