hunch_net hunch_net-2011 hunch_net-2011-448 knowledge-graph by maker-knowledge-mining

448 hunch net-2011-10-24-2011 ML symposium and the bears


meta info for this blog

Source: html

Introduction: The New York ML symposium was last Friday. Attendance was 268, significantly larger than last year. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up. The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. By coincidence all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious thought needs to go in this direction.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Attendance was 268, significantly larger than last year. [sent-2, score-0.218]

2 My impression was that the event mostly still fit the space, although it was crowded. [sent-3, score-0.454]

3 If anyone has suggestions for next year, speak up. [sent-4, score-0.282]

4 The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). [sent-5, score-1.064]

5 Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. [sent-6, score-0.239]

6 By coincidence all the invited talks were (at least potentially) about faster learning algorithms. [sent-7, score-0.276]

7 Leon Bottou spoke on single pass online learning via averaged SGD. [sent-9, score-0.552]

8 Yoav Freund talked about parameter-free hedging. [sent-10, score-0.359]

9 In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious thought needs to go in this direction. [sent-11, score-0.35]

10 Unrelated, I found quite a bit of truth in Paul’s talking bears and Xtranormal always adds a dash of funny. [sent-12, score-0.406]

11 My impression is that the ML job market has only become hotter since 4 years ago. [sent-13, score-0.415]

12 Anyone who is well trained can find work, with the key limiting factor being “well trained”. [sent-14, score-0.433]

13 In this environment, efforts to make ML more automatic and more easily applied are greatly appreciated. [sent-15, score-0.186]
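The list above pairs each extracted sentence with a tf-idf score. A minimal sketch of how such sentence scores could be computed; the actual pipeline's tokenizer and weighting scheme are unknown, so exact numbers will differ:

```python
import math
import re
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Score each sentence by the summed tf-idf weight of its words.

    Each sentence is treated as a 'document': words are weighted by
    how rare they are across sentences, and a sentence's score is the
    total weight of the words it contains.
    """
    tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n = len(tokenized)
    # document frequency: in how many sentences does each word occur?
    df = Counter(w for toks in tokenized for w in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = sum((count / len(toks)) * math.log(n / df[w])
                    for w, count in tf.items())
        scores.append(score)
    return scores

sentences = [
    "Attendance was 268, significantly larger than last year.",
    "Yoav Freund talked about parameter-free hedging.",
    "Leon Bottou spoke on single pass online learning via averaged SGD.",
]
print(tfidf_sentence_scores(sentences))
```

Ranking sentences by these scores and keeping the top few yields an extractive summary of the kind shown above.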


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('trained', 0.241), ('talked', 0.221), ('yoav', 0.221), ('video', 0.201), ('ml', 0.187), ('impression', 0.179), ('mostly', 0.161), ('talks', 0.148), ('hotter', 0.138), ('hiring', 0.138), ('hedging', 0.138), ('averaged', 0.138), ('coincidence', 0.128), ('unrelated', 0.128), ('spoke', 0.128), ('year', 0.125), ('boyd', 0.121), ('stephen', 0.121), ('anyone', 0.114), ('still', 0.114), ('adds', 0.111), ('games', 0.111), ('bears', 0.111), ('sgd', 0.107), ('via', 0.105), ('bottou', 0.103), ('freund', 0.103), ('paul', 0.1), ('find', 0.099), ('market', 0.098), ('efforts', 0.095), ('environment', 0.095), ('leon', 0.095), ('award', 0.095), ('limiting', 0.093), ('pass', 0.093), ('play', 0.093), ('went', 0.093), ('truth', 0.093), ('last', 0.093), ('talking', 0.091), ('submitted', 0.091), ('automatic', 0.091), ('attendance', 0.089), ('online', 0.088), ('speak', 0.086), ('exponential', 0.084), ('cool', 0.083), ('symposium', 0.083), ('suggestions', 0.082)]
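A hedged sketch of how a (wordName, wordTfidf) top-words list like the one above could be produced. The tokenizer and the smoothed idf used below are assumptions, so the weights will not match the listed values:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def top_tfidf_words(doc, corpus, k=5):
    """Rank the words of one document by tf-idf against a corpus.

    The smoothed idf log((1 + n) / (1 + df)) is an assumption; the
    real pipeline's weighting scheme is unknown.
    """
    docs = [tokenize(d) for d in corpus]
    n = len(docs)
    df = Counter(w for toks in docs for w in set(toks))  # document frequency
    toks = tokenize(doc)
    tf = Counter(toks)
    weights = {w: (c / len(toks)) * math.log((1 + n) / (1 + df[w]))
               for w, c in tf.items()}
    return sorted(weights.items(), key=lambda kv: -kv[1])[:k]

corpus = [
    "the symposium talks covered online learning",
    "invited talks about faster learning algorithms",
    "the ml job market has become hotter",
]
print(top_tfidf_words(corpus[0], corpus, k=3))
```

Words that appear in few documents (like "symposium" here) get high weight; words common to most documents get weight near zero.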

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999982 448 hunch net-2011-10-24-2011 ML symposium and the bears


2 0.15643609 447 hunch net-2011-10-10-ML Symposium and ICML details

Introduction: Everyone should have received notice for NY ML Symposium abstracts. Check carefully, as one was lost by our system. The event itself is October 21, next week. Leon Bottou, Stephen Boyd, and Yoav Freund are giving the invited talks this year, and there are many spotlights on local work spread throughout the day. Chris Wiggins has set up 6(!) ML-interested startups to follow the symposium, which should be of substantial interest to the employment interested. I also wanted to give an update on ICML 2012. Unlike last year, our deadline is coordinated with AIStat (which is due this Friday). The paper deadline for ICML has been pushed back to February 24, which should allow significant time for finishing up papers after the winter break. Other details may interest people as well: We settled on using CMT after checking out the possibilities. I wasn’t looking for this, because I’ve often found CMT clunky in terms of easy access to the right information. Nevert

3 0.11894104 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

Introduction: The New York ML symposium was last Friday. There were 303 registrations, up a bit from last year. I particularly enjoyed talks by Bill Freeman on vision and ML, Jon Lenchner on strategy in Jeopardy, and Tara N. Sainath and Brian Kingsbury on deep learning for speech recognition. If anyone has suggestions or thoughts for next year, please speak up. I also attended Strata + Hadoop World for the first time. This is primarily a trade conference rather than an academic conference, but I found it pretty interesting as a first time attendee. This is ground zero for the Big data buzzword, and I see now why. It’s about data, and the word “big” is so ambiguous that everyone can lay claim to it. There were essentially zero academic talks. Instead, the focus was on war stories, product announcements, and education. The general level of education is much lower—explaining Machine Learning to the SQL educated is the primary operating point. Nevertheless that’s happening, a

4 0.11688111 377 hunch net-2009-11-09-NYAS ML Symposium this year.

Introduction: The NYAS ML symposium grew again this year to 170 participants, despite the need to outsmart or otherwise tunnel through a crowd. Perhaps the most distinct talk was by Bob Bell on various aspects of the Netflix prize competition. I also enjoyed several student posters including Matt Hoffman’s cool examples of blind source separation for music. I’m somewhat surprised how much the workshop has grown, as it is now comparable in size to a small conference, although in style more similar to a workshop. At some point as an event grows, it becomes owned by the community rather than the organizers, so if anyone has suggestions on improving it, speak up and be heard.

5 0.10982668 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It has been a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard to match for other similar events (see our WIKI for more information). With this lineup, it is difficult for us as organizers to screw it up too badly. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoy this two-week-long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

6 0.10149378 385 hunch net-2009-12-27-Interesting things at NIPS 2009

7 0.099067137 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium

8 0.098259673 410 hunch net-2010-09-17-New York Area Machine Learning Events

9 0.095214911 77 hunch net-2005-05-29-Maximum Margin Mismatch?

10 0.093174323 270 hunch net-2007-11-02-The Machine Learning Award goes to …

11 0.092305556 334 hunch net-2009-01-07-Interesting Papers at SODA 2009

12 0.090068579 444 hunch net-2011-09-07-KDD and MUCMD 2011

13 0.087785065 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

14 0.08714895 240 hunch net-2007-04-21-Videolectures.net

15 0.08605057 452 hunch net-2012-01-04-Why ICML? and the summer conferences

16 0.085386015 378 hunch net-2009-11-15-The Other Online Learning

17 0.084765166 329 hunch net-2008-11-28-A Bumper Crop of Machine Learning Graduates

18 0.082691275 437 hunch net-2011-07-10-ICML 2011 and the future

19 0.082261018 346 hunch net-2009-03-18-Parallel ML primitives

20 0.079744972 406 hunch net-2010-08-22-KDD 2010
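The simValue rankings above pair each blog with a similarity score. A plausible mechanism, sketched under the assumption that similarity is the cosine between tf-idf vectors (the page's actual computation is not documented, so treat this as illustrative):

```python
import math
import re
from collections import Counter

def tfidf_vectors(corpus):
    """Sparse tf-idf vector (word -> weight dict) for each document."""
    docs = [re.findall(r"[a-z']+", d.lower()) for d in corpus]
    n = len(docs)
    df = Counter(w for toks in docs for w in set(toks))
    return [{w: (c / len(toks)) * math.log((1 + n) / (1 + df[w]))
             for w, c in Counter(toks).items()}
            for toks in docs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(weight * v[w] for w, weight in u.items() if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [
    "new york ml symposium attendance grew this year",
    "ml symposium and icml details for this year",
    "report of the machine learning summer school",
]
vecs = tfidf_vectors(corpus)
# rank the other documents by similarity to document 0
sims = sorted(range(1, len(vecs)), key=lambda i: -cosine(vecs[0], vecs[i]))
print(sims)  # → [1, 2]: the symposium post outranks the unrelated one
```

Ranking every other blog by this score and listing the top 20 would produce exactly the kind of (simValue, blogId) list shown above.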


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.196), (1, -0.053), (2, -0.095), (3, -0.021), (4, 0.021), (5, 0.009), (6, -0.1), (7, -0.058), (8, -0.137), (9, -0.104), (10, 0.077), (11, 0.0), (12, 0.056), (13, -0.032), (14, 0.069), (15, 0.033), (16, -0.001), (17, -0.001), (18, 0.129), (19, 0.027), (20, 0.046), (21, 0.037), (22, 0.057), (23, 0.042), (24, 0.116), (25, -0.091), (26, -0.063), (27, 0.045), (28, 0.099), (29, 0.037), (30, -0.066), (31, -0.069), (32, -0.029), (33, -0.028), (34, -0.023), (35, 0.01), (36, 0.062), (37, 0.044), (38, -0.029), (39, -0.038), (40, -0.048), (41, 0.013), (42, 0.03), (43, -0.006), (44, 0.05), (45, -0.031), (46, 0.014), (47, 0.089), (48, 0.048), (49, 0.014)]
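The 50 (topicId, topicWeight) pairs above are this document's coordinates after latent semantic indexing, i.e., a truncated SVD of a term-document matrix. A toy sketch of that factorization, with a made-up count matrix and k=2 topics instead of the 50 used here:

```python
import numpy as np

# Tiny term-document matrix (rows: terms, columns: documents).
# The values are invented purely to illustrate the decomposition.
A = np.array([
    [2.0, 0.0, 1.0],
    [0.0, 3.0, 0.0],
    [1.0, 1.0, 2.0],
    [0.0, 1.0, 1.0],
])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2  # keep only the top-k "topics" (singular directions)
# Each document's coordinates in the reduced space: one row of
# topic weights per document, analogous to the list above.
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T
print(doc_topics.shape)  # → (3, 2): 3 documents, 2 topic weights each
```

Document similarity under the lsi model is then computed between these low-dimensional rows rather than between raw word vectors, which is why the lsi rankings differ from the tfidf rankings above.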

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96997702 448 hunch net-2011-10-24-2011 ML symposium and the bears


2 0.66199309 415 hunch net-2010-10-28-NY ML Symposium 2010

Introduction: About 200 people attended the 2010 NYAS ML Symposium this year. (It was about 170 last year.) I particularly enjoyed several talks. Yann has a new live demo of (limited) real-time object recognition learning. Sanjoy gave a fairly convincing and comprehensible explanation of why a modified form of single-linkage clustering is consistent in higher dimensions, and why consistency is a critical feature for clustering algorithms. I’m curious how well this algorithm works in practice. Matt Hoffman’s poster covering online LDA seemed pretty convincing to me as an algorithmic improvement. This year, we allocated more time towards posters & poster spotlights. For next year, we are considering some further changes. The format has traditionally been 4 invited Professor speakers, with posters and poster spotlight for students. Demand from other parties to participate is growing, for example from postdocs and startups in the area. Another growing concern is the fa

3 0.65633935 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

Introduction: I’ve been wanting to attend the NYC ML Meetup for some time and hope to make it next week on the 25th. Rob Schapire is talking about “Playing Repeated Games”, which in my experience is far more relevant to machine learning than the title might indicate.

4 0.61051661 447 hunch net-2011-10-10-ML Symposium and ICML details

Introduction: Everyone should have received notice for NY ML Symposium abstracts. Check carefully, as one was lost by our system. The event itself is October 21, next week. Leon Bottou, Stephen Boyd, and Yoav Freund are giving the invited talks this year, and there are many spotlights on local work spread throughout the day. Chris Wiggins has set up 6(!) ML-interested startups to follow the symposium, which should be of substantial interest to the employment interested. I also wanted to give an update on ICML 2012. Unlike last year, our deadline is coordinated with AIStat (which is due this Friday). The paper deadline for ICML has been pushed back to February 24, which should allow significant time for finishing up papers after the winter break. Other details may interest people as well: We settled on using CMT after checking out the possibilities. I wasn’t looking for this, because I’ve often found CMT clunky in terms of easy access to the right information. Nevert

5 0.61009759 377 hunch net-2009-11-09-NYAS ML Symposium this year.

Introduction: The NYAS ML symposium grew again this year to 170 participants, despite the need to outsmart or otherwise tunnel through a crowd. Perhaps the most distinct talk was by Bob Bell on various aspects of the Netflix prize competition. I also enjoyed several student posters including Matt Hoffman’s cool examples of blind source separation for music. I’m somewhat surprised how much the workshop has grown, as it is now comparable in size to a small conference, although in style more similar to a workshop. At some point as an event grows, it becomes owned by the community rather than the organizers, so if anyone has suggestions on improving it, speak up and be heard.

6 0.57533282 489 hunch net-2013-09-20-No NY ML Symposium in 2013, and some good news

7 0.57523358 410 hunch net-2010-09-17-New York Area Machine Learning Events

8 0.56412381 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

9 0.54137546 322 hunch net-2008-10-20-New York’s ML Day

10 0.54056841 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

11 0.52330542 313 hunch net-2008-08-18-Radford Neal starts a blog

12 0.51199061 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

13 0.50275421 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

14 0.48752338 474 hunch net-2012-10-18-7th Annual Machine Learning Symposium

15 0.47496402 378 hunch net-2009-11-15-The Other Online Learning

16 0.46866569 487 hunch net-2013-07-24-ICML 2012 videos lost

17 0.46847063 77 hunch net-2005-05-29-Maximum Margin Mismatch?

18 0.45695052 270 hunch net-2007-11-02-The Machine Learning Award goes to …

19 0.45642963 290 hunch net-2008-02-27-The Stats Handicap

20 0.45626768 316 hunch net-2008-09-04-Fall ML Conferences


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.134), (53, 0.015), (55, 0.693), (94, 0.035), (95, 0.028)]
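The sparse (topicId, topicWeight) list above is this document's LDA topic mixture, with near-zero topics omitted. Similarity under a topic model is often measured between such mixtures; a sketch using Hellinger distance, noting that the page's actual simValue computation is unknown and doc_b below is a hypothetical second document:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two sparse topic distributions.

    Missing topics are treated as having zero weight; 0 means
    identical mixtures, values near 1 mean disjoint topic support.
    """
    topics = set(p) | set(q)
    return math.sqrt(0.5 * sum(
        (math.sqrt(p.get(t, 0.0)) - math.sqrt(q.get(t, 0.0))) ** 2
        for t in topics))

doc_a = {27: 0.134, 53: 0.015, 55: 0.693, 94: 0.035, 95: 0.028}  # from the list above
doc_b = {27: 0.20, 55: 0.60, 94: 0.10}  # hypothetical comparison document
print(1.0 - hellinger(doc_a, doc_b))    # higher = more similar
```

Both documents here put most of their mass on topic 55, so the distance is small; documents dominated by disjoint topics would score near 1.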

similar blogs list:

simIndex simValue blogId blogTitle

1 0.9971537 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

Introduction: The New York Machine Learning Symposium is October 19 with a 2 page abstract deadline due September 13 via email with subject “Machine Learning Poster Submission” sent to physicalscience@nyas.org. Everyone is welcome to submit. Last year’s attendance was 246 and I expect more this year. The primary experiment for ICML 2013 is multiple paper submission deadlines with rolling review cycles. The key dates are October 1, December 15, and February 15. This is an attempt to shift ICML further towards a journal style review process and reduce peak load. The “not for proceedings” experiment from this year’s ICML is not continuing. Edit: Fixed second ICML deadline.

2 0.99417168 446 hunch net-2011-10-03-Monday announcements

Introduction: Various people want to use hunch.net to announce things. I’ve generally resisted this because I feared hunch becoming a pure announcement zone, while I am personally much more interested in contentful posts and discussion. Nevertheless there is clearly some value and announcements are easy, so I’m planning to summarize announcements on Mondays. D. Sculley points out an interesting Semisupervised feature learning competition, with a deadline of October 17. Lihong Li points out the webscope user interaction dataset, which is the first high quality exploration dataset I’m aware of that is publicly available. Seth Rogers points out CrossValidated, which looks similar in conception to metaoptimize, but directly using the stackoverflow interface and with a bit more of a statistics twist.

3 0.99361432 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

Introduction: The results have been posted, with CMU first, Stanford second, and Virginia Tech third. Considering that this was an open event (at least for people in the US), this was a very strong showing for research at universities (instead of defense contractors, for example). Some details should become public at the NIPS workshops. Slashdot has a post with many comments.

4 0.99007577 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

Introduction: Reviewers and students are sometimes greatly concerned by the distinction between: An open set and a closed set. A Supremum and a Maximum. An event which happens with probability 1 and an event that always happens. I don’t appreciate this distinction in machine learning & learning theory. All machine learning takes place (by definition) on a machine where every parameter has finite precision. Consequently, every set is closed, a maximal element always exists, and probability 1 events always happen. The fundamental issue here is that substantial parts of mathematics don’t appear well-matched to computation in the physical world, because the mathematics has concerns which are unphysical. This mismatched mathematics makes irrelevant distinctions. We can ask “what mathematics is appropriate to computation?” Andrej has convinced me that a pretty good answer to this question is constructive mathematics. So, here’s a basic challenge: Can anyone name a situati

same-blog 5 0.98807323 448 hunch net-2011-10-24-2011 ML symposium and the bears


6 0.98756349 20 hunch net-2005-02-15-ESPgame and image labeling

7 0.97975618 326 hunch net-2008-11-11-COLT CFP

8 0.97975618 465 hunch net-2012-05-12-ICML accepted papers and early registration

9 0.96156353 90 hunch net-2005-07-07-The Limits of Learning Theory

10 0.94182819 331 hunch net-2008-12-12-Summer Conferences

11 0.91646242 270 hunch net-2007-11-02-The Machine Learning Award goes to …

12 0.90082556 387 hunch net-2010-01-19-Deadline Season, 2010

13 0.89790434 395 hunch net-2010-04-26-Compassionate Reviewing

14 0.88795257 453 hunch net-2012-01-28-Why COLT?

15 0.85437763 65 hunch net-2005-05-02-Reviewing techniques for conferences

16 0.82954043 356 hunch net-2009-05-24-2009 ICML discussion site

17 0.82439184 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium

18 0.81718647 216 hunch net-2006-11-02-2006 NIPS workshops

19 0.80557466 46 hunch net-2005-03-24-The Role of Workshops

20 0.79904807 452 hunch net-2012-01-04-Why ICML? and the summer conferences