
285 hunch net-2008-01-23-Why Workshop?


meta info for this blog

Source: html

Introduction: I second the call for workshops at ICML/COLT/UAI. Several times before, details of why and how to run a workshop have been mentioned. There is a simple reason to prefer workshops here: attendance. The Helsinki colocation has placed workshops directly between ICML and COLT/UAI, which is optimal for getting attendees from any conference. In addition, last year ICML had relatively few workshops and NIPS workshops were overloaded. In addition to those that happened, a similar number were rejected. The overload has strange consequences—for example, the best attended workshop wasn’t an official NIPS workshop. Aside from intrinsic interest, the Deep Learning workshop benefited greatly from being off schedule.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I second the call for workshops at ICML/COLT/UAI. [sent-1, score-0.73]

2 Several times before, details of why and how to run a workshop have been mentioned. [sent-2, score-0.532]

3 There is a simple reason to prefer workshops here: attendance. [sent-3, score-0.805]

4 The Helsinki colocation has placed workshops directly between ICML and COLT/UAI, which is optimal for getting attendees from any conference. [sent-4, score-1.305]

5 In addition, last year ICML had relatively few workshops and NIPS workshops were overloaded. [sent-5, score-1.261]

6 In addition to those that happened, a similar number were rejected. [sent-6, score-0.486]

7 The overload has strange consequences—for example, the best attended workshop wasn’t an official NIPS workshop. [sent-7, score-0.95]

8 Aside from intrinsic interest, the Deep Learning workshop benefited greatly from being off schedule. [sent-8, score-0.709]
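To make the sentence scoring concrete, here is a minimal sketch of tfidf-based extractive summarization, assuming scikit-learn is available. The variable names and toy sentence list are illustrative; the printed scores will not reproduce the [sent-N, score] values above, which come from the dataset's own model.

```python
# Minimal sketch: score each sentence by the summed tfidf weight of its
# terms, then rank. Assumes scikit-learn; scores are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "I second the call for workshops at ICML/COLT/UAI.",
    "There is a simple reason to prefer workshops here: attendance.",
    "Last year ICML had relatively few workshops and NIPS workshops were overloaded.",
    "The best attended workshop wasn't an official NIPS workshop.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(sentences)

# Sum of tfidf weights per sentence; .A1 flattens the (n, 1) matrix.
scores = matrix.sum(axis=1).A1

# Highest-scoring sentences first, as in the summary above.
for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent}")
```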


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('workshops', 0.522), ('workshop', 0.245), ('addition', 0.219), ('benefited', 0.195), ('helsinki', 0.195), ('schedule', 0.184), ('official', 0.184), ('nips', 0.17), ('overload', 0.168), ('consequences', 0.168), ('placed', 0.168), ('intrinsic', 0.168), ('attendees', 0.162), ('strange', 0.162), ('colocation', 0.157), ('aside', 0.145), ('happened', 0.139), ('wasn', 0.136), ('attended', 0.131), ('icml', 0.131), ('call', 0.126), ('prefer', 0.122), ('times', 0.107), ('optimal', 0.101), ('greatly', 0.101), ('getting', 0.101), ('deep', 0.096), ('directly', 0.094), ('details', 0.092), ('run', 0.088), ('reason', 0.086), ('relatively', 0.084), ('second', 0.082), ('interest', 0.078), ('similar', 0.078), ('simple', 0.075), ('last', 0.07), ('year', 0.063), ('best', 0.06), ('number', 0.05), ('several', 0.04), ('example', 0.04), ('learning', 0.012)]
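For reference, a hypothetical sketch of how a (word, weight) list like the one above could be extracted with scikit-learn. The toy corpus below is an assumption standing in for the full set of hunch.net posts, so the weights will differ from the dataset's.

```python
# Hypothetical sketch: tfidf weights for one post relative to a corpus.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "I second the call for workshops at ICML/COLT/UAI.",  # this post
    "A good workshop is often far more interesting than conference papers.",
    "NIPS is the big winter conference of learning.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(corpus)

# Pair vocabulary words with their tfidf weight in the first post and
# keep the highest-weighted terms (the topN-words list above).
weights = matrix[0].toarray().ravel()
vocab = vectorizer.get_feature_names_out()
top = sorted(zip(vocab, weights), key=lambda wv: wv[1], reverse=True)[:10]
print(top)
```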

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 285 hunch net-2008-01-23-Why Workshop?


2 0.36352009 46 hunch net-2005-03-24-The Role of Workshops

Introduction: A good workshop is often far more interesting than the papers at a conference. This happens because a workshop has a much tighter focus than a conference. Since you choose the workshops fitting your interest, the increased relevance can greatly enhance the level of your interest and attention. Roughly speaking, a workshop program consists of elements related to a subject of your interest. The main conference program consists of elements related to someone’s interest (which is rarely your own). Workshops are more about doing research while conferences are more about presenting research. Several conferences have associated workshop programs, some with deadlines due shortly: ICML workshops (due April 1), IJCAI workshops (deadlines vary), and KDD workshops (not yet finalized). Anyone going to these conferences should examine the workshops and see if any are of interest. (If none are, then maybe you should organize one next year.)

3 0.32377276 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

Introduction: I’m the workshops chair for ICML this year. As such, I would like to personally encourage people to consider running a workshop. My general view of workshops is that they are excellent as opportunities to discuss and develop research directions—some of my best work has come from collaborations at workshops and several workshops have substantially altered my thinking about various problems. My experience running workshops is that setting them up and making them fly often appears much harder than it actually is, and the workshops often come off much better than expected in the end. Submissions are due January 18, two weeks before papers. Similarly, Ben Taskar is looking for good tutorials, which is complementary. Workshops are about exploring a subject, while a tutorial is about distilling it down into an easily taught essence, a vital part of the research process. Tutorials are due February 13, two weeks after papers.

4 0.29099491 216 hunch net-2006-11-02-2006 NIPS workshops

Introduction: I expect the NIPS 2006 workshops to be quite interesting, and recommend going for anyone interested in machine learning research. (Most or all of the workshop webpages can be found two links deep.)

5 0.25686044 141 hunch net-2005-12-17-Workshops as Franchise Conferences

Introduction: Founding a successful new conference is extraordinarily difficult. As a conference founder, you must manage to attract a significant number of good papers—enough to entice the participants into participating next year and to (generally) grow the conference. For someone choosing to participate in a new conference, there is a very significant decision to make: do you send a paper to some new conference with no guarantee that the conference will work out? Or do you send it to another (possibly less related) conference that you are sure will work? The conference founding problem is a joint agreement problem with a very significant barrier. Workshops are a way around this problem, and workshops attached to conferences are a particularly effective means for this. A workshop at a conference is sure to have people available to speak and attend and is sure to have a large audience available. Presenting work at a workshop is not generally exclusive: it can also be presented at a confe

6 0.22154887 266 hunch net-2007-10-15-NIPS workshops extended to 3 days

7 0.21652168 71 hunch net-2005-05-14-NIPS

8 0.19986349 113 hunch net-2005-09-19-NIPS Workshops

9 0.19812435 264 hunch net-2007-09-30-NIPS workshops are out.

10 0.18916802 375 hunch net-2009-10-26-NIPS workshops

11 0.18639858 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS

12 0.17325449 387 hunch net-2010-01-19-Deadline Season, 2010

13 0.14863358 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials

14 0.14087984 437 hunch net-2011-07-10-ICML 2011 and the future

15 0.13663638 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

16 0.12854683 234 hunch net-2007-02-22-Create Your Own ICML Workshop

17 0.11168655 403 hunch net-2010-07-18-ICML & COLT 2010

18 0.10916367 283 hunch net-2008-01-07-2008 Summer Machine Learning Conference Schedule

19 0.10752184 279 hunch net-2007-12-19-Cool and interesting things seen at NIPS

20 0.1065912 481 hunch net-2013-04-15-NEML II


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.142), (1, -0.19), (2, -0.16), (3, -0.285), (4, 0.062), (5, 0.293), (6, 0.225), (7, 0.078), (8, 0.15), (9, 0.131), (10, -0.071), (11, 0.083), (12, -0.035), (13, -0.071), (14, -0.027), (15, 0.089), (16, -0.045), (17, -0.065), (18, 0.024), (19, 0.034), (20, -0.039), (21, 0.035), (22, 0.008), (23, 0.012), (24, -0.001), (25, 0.001), (26, 0.015), (27, 0.031), (28, 0.057), (29, 0.056), (30, -0.003), (31, 0.022), (32, 0.006), (33, 0.102), (34, -0.043), (35, 0.045), (36, 0.038), (37, -0.018), (38, 0.008), (39, -0.027), (40, -0.029), (41, -0.02), (42, 0.009), (43, -0.023), (44, 0.042), (45, -0.005), (46, -0.05), (47, 0.014), (48, 0.023), (49, 0.012)]
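The topic weights above are plausibly the post's coordinates in a 50-dimensional latent semantic space. A sketch under that assumption, using scikit-learn's TruncatedSVD over tfidf vectors: cosine similarity in the reduced space then yields a simValue-style ranking like the list below. The toy corpus and component count are stand-ins, not the dataset's actual setup.

```python
# Sketch: LSI as truncated SVD over tfidf vectors, followed by cosine
# similarity in the reduced topic space. Toy corpus; illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "I second the call for workshops at ICML/COLT/UAI.",  # this post
    "A good workshop is often far more interesting than conference papers.",
    "NIPS is the big winter conference of learning.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

# The weight list above suggests ~50 topics; 2 suffices for a toy corpus.
lsi = TruncatedSVD(n_components=2).fit_transform(tfidf)

# Rank all posts by cosine similarity to the first one (itself included,
# which is why the same-blog entry scores ~1.0).
sims = cosine_similarity(lsi[:1], lsi).ravel()
for idx in sims.argsort()[::-1]:
    print(f"{sims[idx]:.3f}  doc {idx}")
```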

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99357426 285 hunch net-2008-01-23-Why Workshop?


2 0.89835572 46 hunch net-2005-03-24-The Role of Workshops


3 0.87367576 266 hunch net-2007-10-15-NIPS workshops extended to 3 days

Introduction: (Unofficially, at least.) The Deep Learning Workshop is being held the afternoon before the rest of the workshops in Vancouver, BC. Separate registration is needed, and open. What’s happening fundamentally here is that there are too many interesting workshops to fit into 2 days. Perhaps we can get it officially expanded to 3 days next year.

4 0.84670734 216 hunch net-2006-11-02-2006 NIPS workshops


5 0.81138182 71 hunch net-2005-05-14-NIPS

Introduction: NIPS is the big winter conference of learning. Paper due date: June 3rd. (Tweaked thanks to Fei Sha.) Location: Vancouver (main program) Dec. 5-8 and Whistler (workshops) Dec. 9-10, BC, Canada. NIPS is larger than all of the other learning conferences, partly because it’s the only one at that time of year. I recommend the workshops, which are often quite interesting and energetic.

6 0.80929178 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

7 0.76933616 113 hunch net-2005-09-19-NIPS Workshops

8 0.6737659 264 hunch net-2007-09-30-NIPS workshops are out.

9 0.6428563 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS

10 0.63148957 141 hunch net-2005-12-17-Workshops as Franchise Conferences

11 0.54982245 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

12 0.5496648 375 hunch net-2009-10-26-NIPS workshops

13 0.47947329 387 hunch net-2010-01-19-Deadline Season, 2010

14 0.46076557 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials

15 0.45534623 144 hunch net-2005-12-28-Yet more nips thoughts

16 0.41272101 481 hunch net-2013-04-15-NEML II

17 0.39361814 433 hunch net-2011-04-23-ICML workshops due

18 0.36809382 234 hunch net-2007-02-22-Create Your Own ICML Workshop

19 0.35465991 437 hunch net-2011-07-10-ICML 2011 and the future

20 0.35332409 443 hunch net-2011-09-03-Fall Machine Learning Events


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(27, 0.113), (53, 0.039), (55, 0.247), (82, 0.42), (94, 0.037)]
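A comparable sketch for the lda weights, assuming scikit-learn's LatentDirichletAllocation over raw term counts: the (topicId, topicWeight) pairs above would be the entries of the post's inferred topic distribution that carry noticeable weight. The topic count and corpus below are assumptions.

```python
# Sketch: per-post topic distribution from LDA over term counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "I second the call for workshops at ICML/COLT/UAI.",  # this post
    "A good workshop is often far more interesting than conference papers.",
    "NIPS is the big winter conference of learning.",
]

counts = CountVectorizer(stop_words="english").fit_transform(corpus)

# Topic ids above reach 94, suggesting roughly 100 topics in the real
# model; that choice is an assumption here.
lda = LatentDirichletAllocation(n_components=100, random_state=0)
theta = lda.fit_transform(counts)  # rows: per-post topic distributions

# Keep only the topics with noticeable weight, as in the list above.
post = theta[0]
print([(t, round(w, 3)) for t, w in enumerate(post) if w > 0.01])
```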

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.90003973 285 hunch net-2008-01-23-Why Workshop?


2 0.71155429 439 hunch net-2011-08-01-Interesting papers at COLT 2011

Introduction: Since John did not attend COLT this year, I have been volunteered to report back on the hot stuff at this year’s meeting. The conference seemed to have pretty high quality stuff this year, and I found plenty of interesting papers on all three days. I’m gonna pick some of my favorites going through the program in chronological order. The first session on matrices seemed interesting for two reasons. First, the papers were quite nice. But more interestingly, this is a topic that has had a lot of presence in Statistics and Compressed sensing literature recently. So it was good to see high-dimensional matrices finally make their entry at COLT. The paper of Ohad and Shai on Collaborative Filtering with the Trace Norm: Learning, Bounding, and Transducing provides non-trivial guarantees on trace norm regularization in an agnostic setup, while Rina and Nati show how Rademacher averages can be used to get sharper results for matrix completion problems in their paper Concentr

3 0.54753792 90 hunch net-2005-07-07-The Limits of Learning Theory

Introduction: Suppose we had an infinitely powerful mathematician sitting in a room and proving theorems about learning. Could he solve machine learning? The answer is “no”. This answer is both obvious and sometimes underappreciated. There are several ways to conclude that some bias is necessary in order to successfully learn. For example, suppose we are trying to solve classification. At prediction time, we observe some features X and want to make a prediction of either 0 or 1. Bias is what makes us prefer one answer over the other based on past experience. In order to learn we must: (1) have a bias, since predicting 0 as likely as 1 is useless; and (2) have the “right” bias, since predicting 1 when the answer is 0 is also not helpful. The implication of “have a bias” is that we cannot design effective learning algorithms with “a uniform prior over all possibilities”. The implication of “have the ‘right’ bias” is that our mathematician fails since “right” is defined wi

4 0.54387742 448 hunch net-2011-10-24-2011 ML symposium and the bears

Introduction: The New York ML symposium was last Friday. Attendance was 268, significantly larger than last year. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up. The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. By coincidence all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious

5 0.54245883 20 hunch net-2005-02-15-ESPgame and image labeling

Introduction: Luis von Ahn has been running the espgame for a while now. The espgame provides a picture to two randomly paired people across the web, and asks them to agree on a label. It hasn’t managed to label the web yet, but it has produced a large dataset of (image, label) pairs. I organized the dataset so you could explore the implied bipartite graph (requires much bandwidth). Relative to other image datasets, this one is quite large—67,000 images, 358,000 labels (average of 5/image with variation from 1 to 19), and 22,000 unique labels (one every 3 images). The dataset is also very ‘natural’, consisting of images spidered from the internet. The multiple label characteristic is intriguing because ‘learning to learn’ and metalearning techniques may be applicable. The ‘natural’ quality means that this dataset varies greatly in difficulty from easy (predicting “red”) to hard (predicting “funny”) and potentially more rewarding to tackle. The open problem here is, of course, to make

6 0.54226154 270 hunch net-2007-11-02-The Machine Learning Award goes to …

7 0.54015821 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

8 0.53828806 395 hunch net-2010-04-26-Compassionate Reviewing

9 0.5371809 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

10 0.53676438 446 hunch net-2011-10-03-Monday announcements

11 0.53205204 331 hunch net-2008-12-12-Summer Conferences

12 0.52867758 453 hunch net-2012-01-28-Why COLT?

13 0.51915938 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

14 0.50839514 452 hunch net-2012-01-04-Why ICML? and the summer conferences

15 0.50671053 237 hunch net-2007-04-02-Contextual Scaling

16 0.50179929 387 hunch net-2010-01-19-Deadline Season, 2010

17 0.50133377 65 hunch net-2005-05-02-Reviewing techniques for conferences

18 0.49169102 326 hunch net-2008-11-11-COLT CFP

19 0.49169102 465 hunch net-2012-05-12-ICML accepted papers and early registration

20 0.48987976 40 hunch net-2005-03-13-Avoiding Bad Reviewing