hunch_net hunch_net-2006 hunch_net-2006-203 knowledge-graph by maker-knowledge-mining

203 hunch net-2006-08-18-Report of MLSS 2006 Taipei


meta info for this blog

Source: html

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It was a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard for other similar events to match (see our WIKI for more information). With this lineup, it was difficult for us as organizers to screw it up too badly. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoy this two-week-long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 It was a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. [sent-2, score-0.867]

2 At the end of MLSS we distributed a survey form for participants to fill in. [sent-6, score-0.602]

3 The first question is designed to find out how our participants learned about MLSS 2006 Taipei. [sent-8, score-0.645]

4 Unfortunately, most of the participants learned about MLSS from their advisors and it is difficult for us to track how their advisors learned about MLSS. [sent-9, score-0.819]

5 Asked about why they attended MLSS, as expected, about 2/3 replied that they wanted to use ML and 1/3 replied that they wanted to do ML research. [sent-15, score-0.731]

6 Most of the participants attended all talks, which is consistent with our record. [sent-16, score-0.535]

7 Asked about what makes it difficult for them to understand the talks, about half replied mathematics, about a quarter replied “no examples” and less than a quarter replied English. [sent-18, score-0.953]

8 Finally, all talk topics were mentioned as helpful by our participants, especially the talks of a more introductory nature, such as graphical models by Sam Roweis, SVM by Chih-Jen Lin, and Boosting by Gunnar Ratsch, while talks with many theorems and proofs were less popular. [sent-19, score-0.527]

9 A quick fix for this problem is to provide Web pointers to previous MLSS slides and videos and to urge registered participants to review them before attending MLSS. [sent-23, score-0.736]

10 Our participants would also like the organizers to design more activities that encourage interaction with the speakers and among participants. [sent-28, score-0.98]

11 We could have exposed our speakers to participants more often, rather than keeping them in a cozy VIP lounge. [sent-30, score-0.874]

12 We could also have provided an online and a physical chat board for participants to share their contact IDs. [sent-31, score-0.646]

13 It turned out that our speakers were so good that they covered and adapted to each other's related talks, making the entire program appear like a carefully designed, coherent one. [sent-34, score-0.771]

14 So most participants liked the program and only one complaint was about this part. [sent-35, score-0.548]

15 One cluster of participants is looking for new research topics in ML or trying to enhance their understanding of some advanced topics in ML. [sent-37, score-0.757]

16 If MLSS is designed for them, speakers can present their latest or even ongoing research results. [sent-38, score-0.514]

17 For them, speakers need to present more examples, show applications, and present mature results. [sent-42, score-0.485]

18 We also designed a graduate credit program to give registered students a preview and prerequisite math background. [sent-46, score-0.534]

19 I think we could have done a better job helping our participants understand the nature of the summer school and be prepared. [sent-48, score-0.746]

20 Finally, on behalf of the steering committee, I would like to take this chance to thank Alex Smola, Bernhard Scholkopf, and John Langford for their help in putting together this excellent lineup of speakers and for the positive examples they set in previous MLSS series for us to learn from. [sent-49, score-0.988]
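
The sentScore values above appear to come from tfidf-weighted sentence scoring. Below is a minimal, hypothetical sketch of such an extractive summary, assuming a scikit-learn pipeline that sums term weights per sentence; the exact preprocessing and score normalization used to build this dump are not documented here.

```python
# Hypothetical sketch of tfidf-based extractive summarization; the actual
# pipeline behind the sentNum/sentScore values above is not specified.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "It was a very exciting two weeks for a record crowd of 245 participants.",
    "At the end of MLSS we distributed a survey form for participants to fill in.",
    "Most of the participants attended all talks, which is consistent with our record.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(sentences)          # one row per sentence

# Score each sentence by the total tfidf weight of its terms,
# then list sentences from most to least important.
scores = np.asarray(tfidf.sum(axis=1)).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank} {sentences[idx]} [sent-{idx}, score-{scores[idx]:.3f}]")
```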


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('participants', 0.478), ('speakers', 0.331), ('mlss', 0.273), ('replied', 0.257), ('talks', 0.195), ('lineup', 0.154), ('ml', 0.144), ('organizers', 0.114), ('designed', 0.106), ('expose', 0.103), ('finally', 0.092), ('quarter', 0.091), ('prerequisite', 0.091), ('topics', 0.081), ('advisors', 0.08), ('wanted', 0.08), ('job', 0.08), ('present', 0.077), ('registered', 0.076), ('established', 0.076), ('together', 0.074), ('students', 0.07), ('program', 0.07), ('related', 0.069), ('better', 0.068), ('could', 0.065), ('slides', 0.065), ('survey', 0.065), ('chance', 0.064), ('svm', 0.063), ('series', 0.062), ('international', 0.062), ('math', 0.062), ('learned', 0.061), ('us', 0.059), ('distributed', 0.059), ('advance', 0.059), ('graduate', 0.059), ('cluster', 0.059), ('trying', 0.058), ('previous', 0.058), ('record', 0.058), ('would', 0.057), ('web', 0.057), ('attended', 0.057), ('graphical', 0.056), ('school', 0.055), ('asked', 0.054), ('mostly', 0.053), ('examples', 0.053)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It was a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard for other similar events to match (see our WIKI for more information). With this lineup, it was difficult for us as organizers to screw it up too badly. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoy this two-week-long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

2 0.23825438 75 hunch net-2005-05-28-Running A Machine Learning Summer School

Introduction: We just finished the Chicago 2005 Machine Learning Summer School . The school was 2 weeks long with about 130 (or 140 counting the speakers) participants. For perspective, this is perhaps the largest graduate level machine learning class I am aware of anywhere and anytime (previous MLSS s have been close). Overall, it seemed to go well, although the students are the real authority on this. For those who missed it, DVDs will be available from our Slovenian friends. Email Mrs Spela Sitar of the Jozsef Stefan Institute for details. The following are some notes for future planning and those interested. Good Decisions Acquiring the larger-than-necessary “Assembly Hall” at International House . Our attendance came in well above our expectations, so this was a critical early decision that made a huge difference. The invited speakers were key. They made a huge difference in the quality of the content. Delegating early and often was important. One key difficulty here

3 0.18379641 80 hunch net-2005-06-10-Workshops are not Conferences

Introduction: … and you should use that fact. A workshop differs from a conference in that it is about a focused group of people worrying about a focused topic. It also differs in that a workshop is typically a “one-time affair” rather than a series. (The Snowbird learning workshop counts as a conference in this respect.) A common failure mode of both organizers and speakers at a workshop is to treat it as a conference. This is “ok”, but it is not really taking advantage of the situation. Here are some things I’ve learned: For speakers: A smaller audience means it can be more interactive. Interactive means a better chance to avoid losing your audience and a more interesting presentation (because you can adapt to your audience). Greater focus amongst the participants means you can get to the heart of the matter more easily, and discuss tradeoffs more carefully. Unlike conferences, relevance is more valued than newness. For organizers: Not everything needs to be in a conference st

4 0.15681313 322 hunch net-2008-10-20-New York’s ML Day

Introduction: I’m not as naturally exuberant as Muthu 2 or David about CS/Econ day, but I believe it and ML day were certainly successful. At the CS/Econ day, I particularly enjoyed Toumas Sandholm’s talk which showed a commanding depth of understanding and application in automated auctions. For the machine learning day, I enjoyed several talks and posters (I better, I helped pick them.). What stood out to me was number of people attending: 158 registered, a level qualifying as “scramble to find seats”. My rule of thumb for workshops/conferences is that the number of attendees is often something like the number of submissions. That isn’t the case here, where there were just 4 invited speakers and 30-or-so posters. Presumably, the difference is due to a critical mass of Machine Learning interested people in the area and the ease of their attendance. Are there other areas where a local Machine Learning day would fly? It’s easy to imagine something working out in the San Franci

5 0.13768262 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

Introduction: Chicago ’05 ended a couple of weeks ago. This was the sixth Machine Learning Summer School , and the second one that used a wiki . (The first was Berder ’04, thanks to Gunnar Raetsch.) Wikis are relatively easy to set up, greatly aid social interaction, and should be used a lot more at summer schools and workshops. They can even be used as the meeting’s webpage, as a permanent record of its participants’ collaborations — see for example the wiki/website for last year’s NVO Summer School . A basic wiki is a collection of editable webpages, maintained by software called a wiki engine . The engine used at both Berder and Chicago was TikiWiki — it is well documented and gets you something running fast. It uses PHP and MySQL, but doesn’t require you to know either. Tikiwiki has far more features than most wikis, as it is really a full Content Management System . (My thanks to Sebastian Stark for pointing this out.) Here are the features we found most useful: Bulletin boa

6 0.12922458 377 hunch net-2009-11-09-NYAS ML Symposium this year.

7 0.12460285 261 hunch net-2007-08-28-Live ML Class

8 0.11118764 234 hunch net-2007-02-22-Create Your Own ICML Workshop

9 0.11012911 437 hunch net-2011-07-10-ICML 2011 and the future

10 0.10982668 448 hunch net-2011-10-24-2011 ML symposium and the bears

11 0.10914357 130 hunch net-2005-11-16-MLSS 2006

12 0.10503451 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium

13 0.10500566 369 hunch net-2009-08-27-New York Area Machine Learning Events

14 0.10290992 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS

15 0.099663787 371 hunch net-2009-09-21-Netflix finishes (and starts)

16 0.097937033 357 hunch net-2009-05-30-Many ways to Learn this summer

17 0.095958479 403 hunch net-2010-07-18-ICML & COLT 2010

18 0.090427347 449 hunch net-2011-11-26-Giving Thanks

19 0.089892417 141 hunch net-2005-12-17-Workshops as Franchise Conferences

20 0.084524162 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World
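
A minimal sketch of how the wordTfidf weights and simValue rankings above could be produced, assuming scikit-learn and a toy stand-in corpus; the real corpus of hunch.net posts and the tool's exact settings are not included in this dump.

```python
# Hypothetical sketch: tfidf term weights per blog post and cosine-similarity
# ranking of related posts. The toy corpus below stands in for the real posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {
    "203": "report of mlss 2006 taipei participants speakers survey talks",
    "75":  "running a machine learning summer school speakers participants students",
    "80":  "workshops are not conferences organizers speakers audience",
}

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus.values())
terms = vectorizer.get_feature_names_out()

# Top weighted terms for blog 203 (first row), analogous to the wordTfidf list.
row = tfidf[0].toarray().ravel()
top = row.argsort()[::-1][:5]
print([(terms[i], round(float(row[i]), 3)) for i in top])

# Similarity of blog 203 to every post, analogous to the simValue column.
sims = cosine_similarity(tfidf[0], tfidf).ravel()
for blog_id, sim in sorted(zip(corpus, sims), key=lambda x: -x[1]):
    print(blog_id, round(float(sim), 4))
```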


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.206), (1, -0.082), (2, -0.122), (3, 0.001), (4, -0.034), (5, 0.04), (6, -0.015), (7, -0.026), (8, -0.068), (9, -0.1), (10, 0.039), (11, -0.012), (12, 0.086), (13, 0.037), (14, 0.091), (15, -0.034), (16, -0.016), (17, 0.204), (18, 0.067), (19, 0.157), (20, 0.044), (21, -0.052), (22, 0.097), (23, -0.094), (24, 0.052), (25, 0.019), (26, 0.026), (27, -0.017), (28, 0.054), (29, 0.079), (30, -0.124), (31, -0.046), (32, 0.067), (33, -0.121), (34, -0.01), (35, -0.072), (36, -0.048), (37, -0.023), (38, 0.017), (39, -0.085), (40, 0.013), (41, -0.006), (42, -0.072), (43, 0.029), (44, -0.034), (45, 0.085), (46, 0.052), (47, 0.104), (48, -0.089), (49, 0.013)]
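
The (topicId, topicWeight) pairs above look like a low-dimensional LSI embedding of the post. A minimal sketch, assuming scikit-learn's TruncatedSVD over tfidf vectors and a toy corpus (the actual number of components and the corpus are not specified in this dump):

```python
# Hypothetical sketch of LSI topic weights: tfidf vectors projected onto a
# small number of latent components via truncated SVD.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "report of mlss 2006 taipei participants speakers survey talks",
    "running a machine learning summer school speakers participants students",
    "workshops are not conferences organizers speakers audience",
    "icml colt conference papers reviewing program committee",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
lsi = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)

# Per-document topic weights, analogous to the (topicId, topicWeight) list.
print([(i, round(float(w), 3)) for i, w in enumerate(lsi[0])])
```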

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97345465 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It was a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard for other similar events to match (see our WIKI for more information). With this lineup, it was difficult for us as organizers to screw it up too badly. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoy this two-week-long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

2 0.77830827 75 hunch net-2005-05-28-Running A Machine Learning Summer School

Introduction: We just finished the Chicago 2005 Machine Learning Summer School . The school was 2 weeks long with about 130 (or 140 counting the speakers) participants. For perspective, this is perhaps the largest graduate level machine learning class I am aware of anywhere and anytime (previous MLSS s have been close). Overall, it seemed to go well, although the students are the real authority on this. For those who missed it, DVDs will be available from our Slovenian friends. Email Mrs Spela Sitar of the Jozsef Stefan Institute for details. The following are some notes for future planning and those interested. Good Decisions Acquiring the larger-than-necessary “Assembly Hall” at International House . Our attendance came in well above our expectations, so this was a critical early decision that made a huge difference. The invited speakers were key. They made a huge difference in the quality of the content. Delegating early and often was important. One key difficulty here

3 0.70115989 322 hunch net-2008-10-20-New York’s ML Day

Introduction: I’m not as naturally exuberant as Muthu 2 or David about CS/Econ day, but I believe it and ML day were certainly successful. At the CS/Econ day, I particularly enjoyed Toumas Sandholm’s talk which showed a commanding depth of understanding and application in automated auctions. For the machine learning day, I enjoyed several talks and posters (I better, I helped pick them.). What stood out to me was number of people attending: 158 registered, a level qualifying as “scramble to find seats”. My rule of thumb for workshops/conferences is that the number of attendees is often something like the number of submissions. That isn’t the case here, where there were just 4 invited speakers and 30-or-so posters. Presumably, the difference is due to a critical mass of Machine Learning interested people in the area and the ease of their attendance. Are there other areas where a local Machine Learning day would fly? It’s easy to imagine something working out in the San Franci

4 0.70052266 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

Introduction: Chicago ’05 ended a couple of weeks ago. This was the sixth Machine Learning Summer School , and the second one that used a wiki . (The first was Berder ’04, thanks to Gunnar Raetsch.) Wikis are relatively easy to set up, greatly aid social interaction, and should be used a lot more at summer schools and workshops. They can even be used as the meeting’s webpage, as a permanent record of its participants’ collaborations — see for example the wiki/website for last year’s NVO Summer School . A basic wiki is a collection of editable webpages, maintained by software called a wiki engine . The engine used at both Berder and Chicago was TikiWiki — it is well documented and gets you something running fast. It uses PHP and MySQL, but doesn’t require you to know either. Tikiwiki has far more features than most wikis, as it is really a full Content Management System . (My thanks to Sebastian Stark for pointing this out.) Here are the features we found most useful: Bulletin boa

5 0.59351438 80 hunch net-2005-06-10-Workshops are not Conferences

Introduction: … and you should use that fact. A workshop differs from a conference in that it is about a focused group of people worrying about a focused topic. It also differs in that a workshop is typically a “one-time affair” rather than a series. (The Snowbird learning workshop counts as a conference in this respect.) A common failure mode of both organizers and speakers at a workshop is to treat it as a conference. This is “ok”, but it is not really taking advantage of the situation. Here are some things I’ve learned: For speakers: A smaller audience means it can be more interactive. Interactive means a better chance to avoid losing your audience and a more interesting presentation (because you can adapt to your audience). Greater focus amongst the participants means you can get to the heart of the matter more easily, and discuss tradeoffs more carefully. Unlike conferences, relevance is more valued than newness. For organizers: Not everything needs to be in a conference st

6 0.58854973 377 hunch net-2009-11-09-NYAS ML Symposium this year.

7 0.5095467 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

8 0.50331783 249 hunch net-2007-06-21-Presentation Preparation

9 0.49625167 261 hunch net-2007-08-28-Live ML Class

10 0.48857939 146 hunch net-2006-01-06-MLTV

11 0.48753196 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

12 0.4858942 449 hunch net-2011-11-26-Giving Thanks

13 0.48437968 357 hunch net-2009-05-30-Many ways to Learn this summer

14 0.47209749 448 hunch net-2011-10-24-2011 ML symposium and the bears

15 0.46505141 273 hunch net-2007-11-16-MLSS 2008

16 0.45743579 4 hunch net-2005-01-26-Summer Schools

17 0.44608951 69 hunch net-2005-05-11-Visa Casualties

18 0.4459416 88 hunch net-2005-07-01-The Role of Impromptu Talks

19 0.44191653 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

20 0.43682185 415 hunch net-2010-10-28-NY ML Symposium 2010


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(4, 0.019), (13, 0.02), (27, 0.135), (35, 0.015), (38, 0.043), (48, 0.04), (49, 0.015), (53, 0.062), (55, 0.128), (67, 0.025), (92, 0.29), (94, 0.065), (95, 0.057)]
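
The sparse (topicId, topicWeight) pairs above are consistent with an LDA topic mixture. A minimal sketch, assuming scikit-learn's LatentDirichletAllocation over raw term counts and a toy corpus (the actual topic count and corpus are not specified here):

```python
# Hypothetical sketch of LDA topic weights: a per-document topic distribution
# inferred from term counts. Small weights would be pruned to give the sparse
# (topicId, topicWeight) list shown above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "report of mlss 2006 taipei participants speakers survey talks",
    "running a machine learning summer school speakers participants students",
    "workshops are not conferences organizers speakers audience",
    "icml colt conference papers reviewing program committee",
]

counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)   # rows sum to 1: topic mixture per doc

print([(i, round(float(w), 3)) for i, w in enumerate(doc_topics[0]) if w > 0.05])
```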

similar blogs list:

simIndex simValue blogId blogTitle

1 0.92305219 362 hunch net-2009-06-26-Netflix nearly done

Introduction: A $1M qualifying result was achieved on the public Netflix test set by a 3-way ensemble team . This is just in time for Yehuda ‘s presentation at KDD , which I’m sure will be one of the best attended ever. This isn’t quite over—there are a few days for another super-conglomerate team to come together and there is some small chance that the performance is nonrepresentative of the final test set, but I expect not. Regardless of the final outcome, the biggest lesson for ML from the Netflix contest has been the formidable performance edge of ensemble methods.

2 0.9158321 272 hunch net-2007-11-14-BellKor wins Netflix

Introduction: … but only the little prize. The BellKor team focused on integrating predictions from many different methods. The base methods consist of: Nearest Neighbor Methods Matrix Factorization Methods (asymmetric and symmetric) Linear Regression on various feature spaces Restricted Boltzman Machines The final predictor was an ensemble (as was reasonable to expect), although it’s a little bit more complicated than just a weighted average—it’s essentially a customized learning algorithm. Base approaches (1)-(3) seem like relatively well-known approaches (although I haven’t seen the asymmetric factorization variant before). RBMs are the new approach. The writeup is pretty clear for more details. The contestants are close to reaching the big prize, but the last 1.5% is probably at least as hard as what’s been done. A few new structurally different methods for making predictions may need to be discovered and added into the mixture. In other words, research may be require

3 0.90473294 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept

Introduction: Last year about this time, we received a conditional accept for the searn paper , which asked us to reference a paper that was not reasonable to cite because there was strictly more relevant work by the same authors that we already cited. We wrote a response explaining this, and didn’t cite it in the final draft, giving the SPC an excuse to reject the paper , leading to unhappiness for all. Later, Sanjoy Dasgupta suggested that an alternative was to talk to the PC chair instead, as soon as you see that a conditional accept is unreasonable. William Cohen and I spoke about this by email, the relevant bit of which is: If an SPC asks for a revision that is inappropriate, the correct action is to contact the chairs as soon as the decision is made, clearly explaining what the problem is, so we can decide whether or not to over-rule the SPC. As you say, this is extra work for us chairs, but that’s part of the job, and we’re willing to do that sort of work to improve the ov

same-blog 4 0.87827349 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It was a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard for other similar events to match (see our WIKI for more information). With this lineup, it was difficult for us as organizers to screw it up too badly. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoy this two-week-long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

5 0.67663354 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher , Cliff Lin , Andrew Y. Ng , and Christopher D. Manning Parsing Natural Scenes and Natural Language with Recursive Neural Networks . I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh , which I previously enjoyed visiting in 2005 . This is a huge responsibility, that we hope to accomplish well. A part of this (perhaps the most fun part), is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation . The first thing we looked into was potential colocations. We quickly discovered that many other conferences precomitted their location. For the future, getting a colocation with ACL or SIGI

6 0.63639975 463 hunch net-2012-05-02-ICML: Behind the Scenes

7 0.62506837 75 hunch net-2005-05-28-Running A Machine Learning Summer School

8 0.61007434 461 hunch net-2012-04-09-ICML author feedback is open

9 0.60444117 293 hunch net-2008-03-23-Interactive Machine Learning

10 0.60024887 141 hunch net-2005-12-17-Workshops as Franchise Conferences

11 0.59902668 452 hunch net-2012-01-04-Why ICML? and the summer conferences

12 0.58053046 453 hunch net-2012-01-28-Why COLT?

13 0.57809377 207 hunch net-2006-09-12-Incentive Compatible Reviewing

14 0.57535094 466 hunch net-2012-06-05-ICML acceptance statistics

15 0.57370049 116 hunch net-2005-09-30-Research in conferences

16 0.57150209 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

17 0.56917143 464 hunch net-2012-05-03-Microsoft Research, New York City

18 0.56812352 403 hunch net-2010-07-18-ICML & COLT 2010

19 0.56618118 40 hunch net-2005-03-13-Avoiding Bad Reviewing

20 0.56490958 318 hunch net-2008-09-26-The SODA Program Committee