hunch_net hunch_net-2009 hunch_net-2009-377 knowledge-graph by maker-knowledge-mining

377 hunch net-2009-11-09-NYAS ML Symposium this year.


meta info for this blog

Source: html

Introduction: The NYAS ML symposium grew again this year to 170 participants, despite the need to outsmart or otherwise tunnel through a crowd. Perhaps the most distinct talk was by Bob Bell on various aspects of the Netflix prize competition. I also enjoyed several student posters including Matt Hoffman’s cool examples of blind source separation for music. I’m somewhat surprised how much the workshop has grown, as it is now comparable in size to a small conference, although in style more similar to a workshop. At some point as an event grows, it becomes owned by the community rather than the organizers, so if anyone has suggestions on improving it, speak up and be heard.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The NYAS ML symposium grew again this year to 170 participants, despite the need to outsmart or otherwise tunnel through a crowd. [sent-1, score-0.743]

2 Perhaps the most distinct talk was by Bob Bell on various aspects of the Netflix prize competition. [sent-2, score-0.643]

3 I also enjoyed several student posters including Matt Hoffman’s cool examples of blind source separation for music. [sent-3, score-1.13]

4 I’m somewhat surprised how much the workshop has grown, as it is now comparable in size to a small conference, although in style more similar to a workshop. [sent-4, score-1.004]

5 At some point as an event grows, it becomes owned by the community rather than the organizers, so if anyone has suggestions on improving it, speak up and be heard. [sent-5, score-0.929]
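The scores above come from a tf-idf summarizer: each sentence is rated by the tf-idf weight of the words it contains. A minimal sketch of that idea in Python, assuming a scikit-learn pipeline (this page does not document its actual implementation; the corpus and names below are illustrative):

# Sketch: rank sentences by the total tf-idf weight of their terms.
# scikit-learn and the toy corpus are assumptions, not this page's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "The NYAS ML symposium grew again this year to 170 participants.",
    "Perhaps the most distinct talk was by Bob Bell on the Netflix prize competition.",
    "I also enjoyed several student posters on blind source separation for music.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(sentences)  # one row per sentence

# Score each sentence by the sum of its terms' tf-idf weights.
scores = tfidf.sum(axis=1)
for i, sentence in enumerate(sentences):
    print(f"[sent-{i + 1}, score={scores[i, 0]:.3f}]", sentence)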


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('separation', 0.232), ('hoffman', 0.215), ('matt', 0.215), ('nyas', 0.203), ('bell', 0.193), ('crowd', 0.193), ('grows', 0.193), ('grown', 0.179), ('comparable', 0.179), ('heard', 0.179), ('bob', 0.179), ('posters', 0.169), ('distinct', 0.16), ('surprised', 0.153), ('prize', 0.153), ('netflix', 0.15), ('aspects', 0.144), ('speak', 0.144), ('organizers', 0.144), ('despite', 0.142), ('cool', 0.139), ('symposium', 0.139), ('suggestions', 0.137), ('participants', 0.135), ('event', 0.133), ('improving', 0.133), ('blind', 0.133), ('student', 0.133), ('enjoyed', 0.129), ('somewhat', 0.114), ('otherwise', 0.113), ('becomes', 0.111), ('size', 0.111), ('style', 0.111), ('community', 0.106), ('ml', 0.105), ('source', 0.101), ('talk', 0.101), ('anyone', 0.096), ('including', 0.094), ('workshop', 0.09), ('similar', 0.087), ('need', 0.086), ('various', 0.085), ('although', 0.08), ('small', 0.079), ('conference', 0.078), ('perhaps', 0.072), ('year', 0.07), ('point', 0.069)]
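The "similar blogs list" that follows is consistent with cosine similarity between per-post tf-idf vectors like the one above. A minimal sketch under that assumption (the toy corpus and scikit-learn usage are illustrative, not this page's actual code):

# Sketch: rank posts by cosine similarity of their tf-idf vectors.
# The toy corpus and scikit-learn are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The NYAS ML symposium grew again this year to 170 participants...",
    "About 200 people attended the 2010 NYAS ML Symposium this year...",
    "I attended the Netflix prize ceremony this morning...",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
sims = cosine_similarity(tfidf[0], tfidf)[0]  # query post vs. every post

# The query post matches itself with similarity ~1.0, as in the
# same-blog row below; the remaining scores rank the related posts.
for idx in sims.argsort()[::-1]:
    print(idx, round(float(sims[idx]), 8))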

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 377 hunch net-2009-11-09-NYAS ML Symposium this year.

Introduction: The NYAS ML symposium grew again this year to 170 participants, despite the need to outsmart or otherwise tunnel through a crowd. Perhaps the most distinct talk was by Bob Bell on various aspects of the Netflix prize competition. I also enjoyed several student posters including Matt Hoffman’s cool examples of blind source separation for music. I’m somewhat surprised how much the workshop has grown, as it is now comparable in size to a small conference, although in style more similar to a workshop. At some point as an event grows, it becomes owned by the community rather than the organizers, so if anyone has suggestions on improving it, speak up and be heard.

2 0.16344744 415 hunch net-2010-10-28-NY ML Symposium 2010

Introduction: About 200 people attended the 2010 NYAS ML Symposium this year. (It was about 170 last year.) I particularly enjoyed several talks. Yann has a new live demo of (limited) real-time object recognition learning. Sanjoy gave a fairly convincing and comprehensible explanation of why a modified form of single-linkage clustering is consistent in higher dimensions, and why consistency is a critical feature for clustering algorithms. I’m curious how well this algorithm works in practice. Matt Hoffman’s poster covering online LDA seemed pretty convincing to me as an algorithmic improvement. This year, we allocated more time towards posters & poster spotlights. For next year, we are considering some further changes. The format has traditionally been 4 invited Professor speakers, with posters and poster spotlight for students. Demand from other parties to participate is growing, for example from postdocs and startups in the area. Another growing concern is the fa

3 0.15026163 371 hunch net-2009-09-21-Netflix finishes (and starts)

Introduction: I attended the Netflix prize ceremony this morning. The press conference part is covered fine elsewhere, with the basic outcome being that BellKor’s Pragmatic Chaos won over The Ensemble by 15-20 minutes, because they were tied in performance on the ultimate holdout set. I’m sure the individual participants will have many chances to speak about the solution. One of these is Bell at the NYAS ML symposium on Nov. 6. Several additional details may interest ML people. The degree of overfitting exhibited by the difference in performance on the leaderboard test set and the ultimate hold out set was small, but determining at .02 to .03%. A tie was possible, because the rules cut off measurements below the fourth digit based on significance concerns. In actuality, of course, the scores do differ before rounding, but everyone I spoke to claimed not to know how. The complete dataset has been released on UCI, so each team could compute their own score to whatever accu

4 0.12922458 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It has been a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard to match for other similar events (see our WIKI for more information). With this lineup, it is difficult for us as organizers to screw it up too badly. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoy this two-week-long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

5 0.12508316 489 hunch net-2013-09-20-No NY ML Symposium in 2013, and some good news

Introduction: There will be no New York ML Symposium this year. The core issue is that NYAS is disorganized by people leaving, pushing back the date, with the current candidate a spring symposium on March 28. Gunnar and I were outvoted here—we were gung ho on organizing a fall symposium, but the rest of the committee wants to wait. In some good news, most of the ICML 2012 videos have been restored from a deep backup.

6 0.11688111 448 hunch net-2011-10-24-2011 ML symposium and the bears

7 0.11197491 336 hunch net-2009-01-19-Netflix prize within epsilon

8 0.10464884 437 hunch net-2011-07-10-ICML 2011 and the future

9 0.097110815 80 hunch net-2005-06-10-Workshops are not Conferences

10 0.094803661 410 hunch net-2010-09-17-New York Area Machine Learning Events

11 0.09235958 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

12 0.092080548 141 hunch net-2005-12-17-Workshops as Franchise Conferences

13 0.087713152 395 hunch net-2010-04-26-Compassionate Reviewing

14 0.085386537 452 hunch net-2012-01-04-Why ICML? and the summer conferences

15 0.083887912 151 hunch net-2006-01-25-1 year

16 0.08293999 239 hunch net-2007-04-18-$50K Spock Challenge

17 0.082703508 322 hunch net-2008-10-20-New York’s ML Day

18 0.082517639 93 hunch net-2005-07-13-“Sister Conference” presentations

19 0.079045057 430 hunch net-2011-04-11-The Heritage Health Prize

20 0.07684347 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.134), (1, -0.092), (2, -0.084), (3, -0.027), (4, -0.024), (5, 0.046), (6, -0.08), (7, -0.046), (8, -0.06), (9, -0.122), (10, -0.015), (11, 0.101), (12, 0.056), (13, 0.039), (14, 0.024), (15, -0.016), (16, -0.02), (17, 0.07), (18, 0.015), (19, 0.068), (20, -0.125), (21, -0.085), (22, 0.098), (23, 0.034), (24, 0.081), (25, -0.098), (26, -0.149), (27, 0.08), (28, 0.079), (29, 0.074), (30, -0.097), (31, 0.043), (32, -0.058), (33, -0.096), (34, -0.077), (35, -0.064), (36, 0.038), (37, -0.011), (38, -0.043), (39, -0.013), (40, 0.029), (41, -0.056), (42, 0.055), (43, -0.047), (44, 0.013), (45, 0.055), (46, 0.026), (47, 0.13), (48, -0.031), (49, -0.048)]
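These 50 (topicId, topicWeight) pairs are the post's coordinates in a latent semantic space, typically obtained by truncated SVD of the tf-idf matrix; similarity is then computed between topic vectors rather than raw word vectors. A minimal sketch, assuming scikit-learn (the model and corpus behind this page are not documented):

# Sketch: LSI = truncated SVD of the tf-idf matrix; posts are compared
# in the low-dimensional topic space. Assumptions: scikit-learn, toy corpus.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The NYAS ML symposium grew again this year to 170 participants...",
    "About 200 people attended the 2010 NYAS ML Symposium this year...",
    "I attended the Netflix prize ceremony this morning...",
    "A workshop differs from a conference in focus and size...",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)

# This page uses 50 topics; a toy corpus only supports a couple.
lsi = TruncatedSVD(n_components=2, random_state=0)
topic_weights = lsi.fit_transform(tfidf)  # one topic-weight row per post

sims = cosine_similarity(topic_weights[:1], topic_weights)[0]
print(sims)  # entry 0 is the same-blog match, near 1.0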

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99246156 377 hunch net-2009-11-09-NYAS ML Symposium this year.

Introduction: The NYAS ML symposium grew again this year to 170 participants, despite the need to outsmart or otherwise tunnel through a crowd. Perhaps the most distinct talk was by Bob Bell on various aspects of the Netflix prize competition. I also enjoyed several student posters including Matt Hoffman’s cool examples of blind source separation for music. I’m somewhat surprised how much the workshop has grown, as it is now comparable in size to a small conference, although in style more similar to a workshop. At some point as an event grows, it becomes owned by the community rather than the organizers, so if anyone has suggestions on improving it, speak up and be heard.

2 0.61510122 415 hunch net-2010-10-28-NY ML Symposium 2010

Introduction: About 200 people attended the 2010 NYAS ML Symposium this year. (It was about 170 last year.) I particularly enjoyed several talks. Yann has a new live demo of (limited) real-time object recognition learning. Sanjoy gave a fairly convincing and comprehensible explanation of why a modified form of single-linkage clustering is consistent in higher dimensions, and why consistency is a critical feature for clustering algorithms. I’m curious how well this algorithm works in practice. Matt Hoffman’s poster covering online LDA seemed pretty convincing to me as an algorithmic improvement. This year, we allocated more time towards posters & poster spotlights. For next year, we are considering some further changes. The format has traditionally been 4 invited Professor speakers, with posters and poster spotlight for students. Demand from other parties to participate is growing, for example from postdocs and startups in the area. Another growing concern is the fa

3 0.54796612 448 hunch net-2011-10-24-2011 ML symposium and the bears

Introduction: The New York ML symposium was last Friday. Attendance was 268, significantly larger than last year. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up. The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. By coincidence all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious

4 0.54326695 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It has been a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard to match for other similar events (see our WIKI for more information). With this lineup, it is difficult for us as organizers to screw it up too badly. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoy this two-week-long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

5 0.5254854 80 hunch net-2005-06-10-Workshops are not Conferences

Introduction: … and you should use that fact. A workshop differs from a conference in that it is about a focused group of people worrying about a focused topic. It also differs in that a workshop is typically a “one-time affair” rather than a series. (The Snowbird learning workshop counts as a conference in this respect.) A common failure mode of both organizers and speakers at a workshop is to treat it as a conference. This is “ok”, but it is not really taking advantage of the situation. Here are some things I’ve learned: For speakers: A smaller audience means it can be more interactive. Interactive means a better chance to avoid losing your audience and a more interesting presentation (because you can adapt to your audience). Greater focus amongst the participants means you can get to the heart of the matter more easily, and discuss tradeoffs more carefully. Unlike conferences, relevance is more valued than newness. For organizers: Not everything needs to be in a conference st

6 0.47996205 489 hunch net-2013-09-20-No NY ML Symposium in 2013, and some good news

7 0.46989053 336 hunch net-2009-01-19-Netflix prize within epsilon

8 0.46549511 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

9 0.44842795 371 hunch net-2009-09-21-Netflix finishes (and starts)

10 0.44477132 322 hunch net-2008-10-20-New York’s ML Day

11 0.39536947 141 hunch net-2005-12-17-Workshops as Franchise Conferences

12 0.39217398 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

13 0.37803358 362 hunch net-2009-06-26-Netflix nearly done

14 0.37754637 294 hunch net-2008-04-12-Blog compromised

15 0.37589416 234 hunch net-2007-02-22-Create Your Own ICML Workshop

16 0.37302458 410 hunch net-2010-09-17-New York Area Machine Learning Events

17 0.37272444 447 hunch net-2011-10-10-ML Symposium and ICML details

18 0.3671546 93 hunch net-2005-07-13-“Sister Conference” presentations

19 0.36285263 416 hunch net-2010-10-29-To Vidoelecture or not

20 0.34427568 430 hunch net-2011-04-11-The Heritage Health Prize


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(10, 0.013), (17, 0.4), (27, 0.077), (53, 0.056), (55, 0.203), (95, 0.133)]
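Unlike the dense LSI vector, the LDA vector is reported sparsely: only topics carrying non-negligible weight appear, and the weights form (approximately) a probability distribution over topics. A minimal sketch, assuming a scikit-learn LDA (this page's actual topic model and corpus are not documented):

# Sketch: LDA infers a per-post topic distribution from word counts;
# thresholding small weights yields the sparse list above.
# scikit-learn and the toy corpus are assumptions for illustration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "The NYAS ML symposium grew again this year to 170 participants...",
    "About 200 people attended the 2010 NYAS ML Symposium this year...",
    "Radford Neal starts a blog on statistics, ML, CS, and other things...",
]

counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(counts)  # each row sums to ~1

# Keep only the heavier topics, matching the sparse (topicId, topicWeight) list.
sparse = [(t, round(float(w), 3)) for t, w in enumerate(doc_topics[0]) if w > 0.05]
print(sparse)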

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.88717794 377 hunch net-2009-11-09-NYAS ML Symposium this year.

Introduction: The NYAS ML symposium grew again this year to 170 participants, despite the need to outsmart or otherwise tunnel through a crowd. Perhaps the most distinct talk was by Bob Bell on various aspects of the Netflix prize competition. I also enjoyed several student posters including Matt Hoffman’s cool examples of blind source separation for music. I’m somewhat surprised how much the workshop has grown, as it is now comparable in size to a small conference, although in style more similar to a workshop. At some point as an event grows, it becomes owned by the community rather than the organizers, so if anyone has suggestions on improving it, speak up and be heard.

2 0.80693424 313 hunch net-2008-08-18-Radford Neal starts a blog

Introduction: here on statistics, ML, CS, and other things he knows well.

3 0.57656562 366 hunch net-2009-08-03-Carbon in Computer Science Research

Introduction: Al Gore’s film and gradually more assertive and thorough science has managed to mostly shift the debate on climate change from “Is it happening?” to “What should be done?” In that context, it’s worthwhile to think a bit about what can be done within computer science research. There are two things we can think about: Doing Research At a cartoon level, computer science research consists of some combination of commuting to & from work, writing programs, running them on computers, writing papers, and presenting them at conferences. A typical computer has a power usage on the order of 100 Watts, which works out to 2.4 kiloWatt-hours/day. Looking up David MacKay’s reference on power usage per person, it becomes clear that this is a relatively minor part of the lifestyle, although it could become substantial if many more computers are required. Much larger costs are associated with commuting (which is in common with many people) and attending conferences. Since local commuti

4 0.51538044 462 hunch net-2012-04-20-Both new: STOC workshops and NEML

Introduction: May 16 in Cambridge is the New England Machine Learning Day, a first regional workshop/symposium on machine learning. To present a poster, submit an abstract by May 5. May 19 in New York, STOC is coming to town and rather surprisingly having workshops which should be quite a bit of fun. I’ll be speaking at Algorithms for Distributed and Streaming Data.

5 0.47602445 234 hunch net-2007-02-22-Create Your Own ICML Workshop

Introduction: As usual ICML 2007 will be hosting a workshop program to be held this year on June 24th. The success of the program depends on having researchers like you propose interesting workshop topics and then organize the workshops. I’d like to encourage all of you to consider sending a workshop proposal. The proposal deadline has been extended to March 5. See the workshop web-site for details. Organizing a workshop is a unique way to gather an international group of researchers together to focus for an entire day on a topic of your choosing. I’ve always found that the cost of organizing a workshop is not so large, and very low compared to the benefits. The topic and format of a workshop are limited only by your imagination (and the attractiveness to potential participants) and need not follow the usual model of a mini-conference on a particular ML sub-area. Hope to see some interesting proposals rolling in.

6 0.4742147 331 hunch net-2008-12-12-Summer Conferences

7 0.47407469 90 hunch net-2005-07-07-The Limits of Learning Theory

8 0.4695451 253 hunch net-2007-07-06-Idempotent-capable Predictors

9 0.46616197 216 hunch net-2006-11-02-2006 NIPS workshops

10 0.46546263 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

11 0.46530086 448 hunch net-2011-10-24-2011 ML symposium and the bears

12 0.46423748 46 hunch net-2005-03-24-The Role of Workshops

13 0.46234781 270 hunch net-2007-11-02-The Machine Learning Award goes to …

14 0.46153063 395 hunch net-2010-04-26-Compassionate Reviewing

15 0.46119696 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

16 0.46107504 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

17 0.4588767 20 hunch net-2005-02-15-ESPgame and image labeling

18 0.45614228 159 hunch net-2006-02-27-The Peekaboom Dataset

19 0.45607013 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

20 0.45524076 453 hunch net-2012-01-28-Why COLT?