hunch_net hunch_net-2008 hunch_net-2008-331 knowledge-graph by maker-knowledge-mining

331 hunch net-2008-12-12-Summer Conferences


meta info for this blog

Source: html

Introduction: Here’s a handy table for the summer conferences.

Conference         Deadline      Reviewer Targeting  Double Blind  Author Feedback  Location          Date
ICML (wrong ICML)  January 26    Yes                 Yes           Yes              Montreal, Canada  June 14-17
COLT               February 13   No                  No            Yes              Montreal          June 19-21
UAI                March 13      No                  Yes           No               Montreal          June 19-21
KDD                February 2/6  No                  No            No               Paris, France     June 28-July 1

Reviewer targeting is new this year. The idea is that many poor decisions happen because the papers go to reviewers who are unqualified, and the hope is that allowing authors to point out who is qualified results in better decisions. In my experience, this is a reasonable idea to test. Both UAI and COLT are experimenting this year as well with double blind and author feedback, respectively. Of the two, I believe author feedback is more important, as I’ve seen it make a difference. However, I still consider double blind reviewing a net win, as it’s a substantial public commitment to fairness.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The idea is that many poor decisions happen because the papers go to reviewers who are unqualified, and the hope is that allowing authors to point out who is qualified results in better decisions. [sent-3, score-0.799]

2 In my experience, this is a reasonable idea to test. [sent-4, score-0.092]

3 Both UAI and COLT are experimenting this year as well with double blind and author feedback, respectively. [sent-5, score-0.732]

4 Of the two, I believe author feedback is more important, as I’ve seen it make a difference. [sent-6, score-0.478]

5 However, I still consider double blind reviewing a net win, as it’s a substantial public commitment to fairness. [sent-7, score-0.956]
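
The bracketed scores above come from a tfidf model over the post’s sentences. Below is a minimal sketch of how such per-sentence scores can be computed, assuming a scikit-learn pipeline; the sentence list, tokenization, and scoring rule are illustrative, not the actual maker-knowledge-mining code.

from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative input: the post split into sentences (placeholders, not the full text).
sentences = [
    "The idea is that many poor decisions happen because the papers go to reviewers who are unqualified.",
    "In my experience, this is a reasonable idea to test.",
    "However, I still consider double blind reviewing a net win.",
]

# Each row of the tfidf matrix is one sentence's term-weight vector.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(sentences)

# Score each sentence by the total tfidf weight of its terms; rank descending.
scores = tfidf.sum(axis=1).A1
for score, sentence in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sentence}")

Summing term weights favors sentences dense in corpus-distinctive vocabulary, which is consistent with the ranking shown above.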


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('montreal', 0.408), ('yes', 0.363), ('june', 0.359), ('blind', 0.234), ('double', 0.221), ('targeting', 0.218), ('february', 0.188), ('feedback', 0.186), ('author', 0.183), ('uai', 0.147), ('reviewer', 0.138), ('canada', 0.136), ('fairness', 0.136), ('qualified', 0.136), ('commitment', 0.113), ('colt', 0.112), ('table', 0.109), ('net', 0.105), ('win', 0.099), ('handy', 0.096), ('experimenting', 0.094), ('idea', 0.092), ('january', 0.092), ('date', 0.09), ('icml', 0.085), ('march', 0.083), ('location', 0.083), ('poor', 0.082), ('allowing', 0.074), ('summer', 0.073), ('happen', 0.07), ('public', 0.069), ('kdd', 0.069), ('deadline', 0.069), ('authors', 0.068), ('decisions', 0.067), ('reviewers', 0.063), ('reviewing', 0.063), ('wrong', 0.06), ('seen', 0.059), ('still', 0.056), ('experience', 0.054), ('go', 0.052), ('believe', 0.05), ('however', 0.049), ('hope', 0.049), ('substantial', 0.048), ('consider', 0.047), ('results', 0.046), ('conference', 0.046)]
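
A (word, weight) list like the one above can be reproduced by fitting tfidf over the full blog corpus and reading off this post’s highest-weighted terms. A hedged sketch; the corpus contents and document index below are placeholders:

from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical corpus: one string per hunch.net post (contents elided).
corpus = ["text of post 331 ...", "text of post 226 ...", "text of post 326 ..."]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

# Pair this post's tfidf weights with their terms and keep the top 50.
doc = 0  # index of the post of interest
weights = tfidf[doc].toarray().ravel()
terms = vectorizer.get_feature_names_out()
top50 = sorted(zip(weights, terms), reverse=True)[:50]
print([(term, round(weight, 3)) for weight, term in top50])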

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 331 hunch net-2008-12-12-Summer Conferences


2 0.3782765 226 hunch net-2007-01-04-2007 Summer Machine Learning Conferences

Introduction: It’s conference season once again.

Conference  Due?                   When?         Where?                              Double blind?  Author feedback?  Workshops?
AAAI        February 1/6 (and 27)  July 22-26    Vancouver, British Columbia         Yes            Yes               Done
UAI         February 28/March 2    July 19-22    Vancouver, British Columbia         No             No                No
COLT        January 16             June 13-15    San Diego, California (with FCRC)   No             No                No
ICML        February 7/9           June 20-24    Corvallis, Oregon                   Yes            Yes               February 16
KDD         February 23/28         August 12-15  San Jose, California                Yes            No?               February 28

The geowinner this year is the west coast of North America. Last year’s geowinner was the Northeastern US, and the year before it was mostly Europe. It’s notable how tightly the conferences cluster, even when they don’t colocate.

3 0.37249291 326 hunch net-2008-11-11-COLT CFP

Introduction: Adam Klivans points out the COLT call for papers. The important points are: Due Feb 13. Montreal, June 18-21. This year, there is author feedback.

4 0.3096334 387 hunch net-2010-01-19-Deadline Season, 2010

Introduction: Many conference deadlines are coming soon.

Conference  Deadline                                                                                              Double Blind / Author Feedback  Time/Place
ICML        January 18 (Workshops) / February 1 (Papers) / February 13 (Tutorials)                                Y/Y                             Haifa, Israel, June 21-25
KDD         February 1 (Workshops) / February 2&5 (Papers) / February 26 (Tutorials & Panels) / April 17 (Demos)  N/S                             Washington DC, July 25-28
COLT        January 18 (Workshops) / February 19 (Papers)                                                         N/S                             Haifa, Israel, June 25-29
UAI         March 11 (Papers)                                                                                     N?/Y                            Catalina Island, California, July 8-11

ICML continues to experiment with the reviewing process, although perhaps less so than last year. The S “sort-of” for COLT is because author feedback occurs only after decisions are made. KDD is notable for being the most comprehensive in terms of {Tutorials, Workshops, Challenges, Panels, Papers (two tracks), Demos}. The S for KDD is because there is sometimes author feedback at the decision of the SPC. The (past) January 18 de

5 0.29739371 65 hunch net-2005-05-02-Reviewing techniques for conferences

Introduction: The many reviews following the many paper deadlines are just about over. AAAI and ICML in particular were experimenting with several reviewing techniques. Double Blind: AAAI and ICML were both double blind this year. It seemed (overall) beneficial, but two problems arose. For theoretical papers, with a lot to say, authors often leave out the proofs. This is very hard to cope with under a double blind review because (1) you can not trust the authors got the proof right but (2) a blanket “reject” hits many probably-good papers. Perhaps authors should more strongly favor proof-complete papers sent to double blind conferences. On the author side, double blind reviewing is actually somewhat disruptive to research. In particular, it discourages the author from talking about the subject, which is one of the mechanisms of research. This is not a great drawback, but it is one not previously appreciated. Author feedback: AAAI and ICML did author feedback this year. It seem

6 0.27791017 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

7 0.25808507 452 hunch net-2012-01-04-Why ICML? and the summer conferences

8 0.25058186 145 hunch net-2005-12-29-Deadline Season

9 0.24014942 116 hunch net-2005-09-30-Research in conferences

10 0.22403347 11 hunch net-2005-02-02-Paper Deadlines

11 0.21492314 395 hunch net-2010-04-26-Compassionate Reviewing

12 0.19854367 40 hunch net-2005-03-13-Avoiding Bad Reviewing

13 0.19623023 184 hunch net-2006-06-15-IJCAI is out of season

14 0.16810535 437 hunch net-2011-07-10-ICML 2011 and the future

15 0.16434196 71 hunch net-2005-05-14-NIPS

16 0.15568629 484 hunch net-2013-06-16-Representative Reviewing

17 0.14827539 453 hunch net-2012-01-28-Why COLT?

18 0.13040453 357 hunch net-2009-05-30-Many ways to Learn this summer

19 0.1210029 343 hunch net-2009-02-18-Decision by Vetocracy

20 0.11819804 207 hunch net-2006-09-12-Incentive Compatible Reviewing
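
Each entry in the list above pairs a post with a similarity score against this one. A natural reading is cosine similarity between tfidf vectors, which would also explain why the same-blog entry scores essentially 1.0 (a document is maximally similar to itself). A sketch under that assumption, with a placeholder corpus:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus; index 0 is this post.
corpus = ["text of post 331 ...", "text of post 226 ...", "text of post 326 ..."]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

# Cosine similarity of this post against every post, itself included.
sims = cosine_similarity(tfidf[0], tfidf).ravel()

# Highest first; entry 0 (the post itself) comes out at ~1.0.
for idx in sims.argsort()[::-1]:
    print(f"{sims[idx]:.7f}  post #{idx}")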


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.16), (1, -0.326), (2, 0.198), (3, -0.184), (4, -0.031), (5, -0.3), (6, -0.092), (7, 0.015), (8, 0.055), (9, 0.079), (10, -0.098), (11, -0.013), (12, 0.055), (13, -0.005), (14, -0.137), (15, 0.011), (16, 0.005), (17, -0.097), (18, -0.011), (19, -0.096), (20, -0.105), (21, 0.086), (22, 0.023), (23, 0.018), (24, 0.113), (25, -0.087), (26, 0.023), (27, 0.137), (28, -0.068), (29, 0.014), (30, -0.096), (31, -0.07), (32, 0.055), (33, -0.049), (34, 0.048), (35, -0.02), (36, -0.02), (37, -0.049), (38, -0.03), (39, -0.087), (40, 0.022), (41, 0.054), (42, -0.059), (43, 0.0), (44, 0.019), (45, 0.07), (46, 0.058), (47, 0.082), (48, -0.021), (49, -0.0)]
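
The 50 (topicId, topicWeight) pairs above are this post’s coordinates in an LSI space: a truncated SVD of the tfidf matrix compresses the vocabulary into latent dimensions, each of which can weight a document positively or negatively. A minimal sketch, assuming scikit-learn’s TruncatedSVD and a toy stand-in corpus:

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-in corpus with enough vocabulary for a 50-dimensional SVD.
corpus = [f"conference deadline paper review author topic{i % 60} word{i}" for i in range(200)]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

# LSI: project tfidf rows onto the top 50 singular directions.
lsi = TruncatedSVD(n_components=50, random_state=0).fit(tfidf)
doc_topics = lsi.transform(tfidf[0]).ravel()

# One dense (topicId, topicWeight) pair per latent dimension; weights may be negative.
print([(t, round(w, 3)) for t, w in enumerate(doc_topics)])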

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99001366 331 hunch net-2008-12-12-Summer Conferences


2 0.85191911 226 hunch net-2007-01-04-2007 Summer Machine Learning Conferences


3 0.76213109 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

Introduction: Machine learning always welcomes the new year with paper deadlines for summer conferences. This year, we have:

Conference  Paper Deadline  When/Where                                 Double blind?  Author Feedback?  Notes
ICML        February 1      June 28-July 2, Bellevue, Washington, USA  Y              Y                 Weak colocation with ACL
COLT        February 11     July 9-July 11, Budapest, Hungary          N              N                 Colocated with FOCM
KDD         February 11/18  August 21-24, San Diego, California, USA   N              N
UAI         March 18        July 14-17, Barcelona, Spain               Y              N

The larger conferences are on the west coast in the United States, while the smaller ones are in Europe.

4 0.74840951 387 hunch net-2010-01-19-Deadline Season, 2010


5 0.69707006 65 hunch net-2005-05-02-Reviewing techniques for conferences


6 0.65160376 11 hunch net-2005-02-02-Paper Deadlines

7 0.6404599 145 hunch net-2005-12-29-Deadline Season

8 0.6360659 326 hunch net-2008-11-11-COLT CFP

9 0.5812791 184 hunch net-2006-06-15-IJCAI is out of season

10 0.56132287 116 hunch net-2005-09-30-Research in conferences

11 0.53255463 395 hunch net-2010-04-26-Compassionate Reviewing

12 0.50024873 452 hunch net-2012-01-04-Why ICML? and the summer conferences

13 0.47811165 40 hunch net-2005-03-13-Avoiding Bad Reviewing

14 0.42409694 173 hunch net-2006-04-17-Rexa is live

15 0.41746843 283 hunch net-2008-01-07-2008 Summer Machine Learning Conference Schedule

16 0.40691257 453 hunch net-2012-01-28-Why COLT?

17 0.39331681 468 hunch net-2012-06-29-ICML survey and comments

18 0.39166746 437 hunch net-2011-07-10-ICML 2011 and the future

19 0.38350359 130 hunch net-2005-11-16-MLSS 2006

20 0.35657895 484 hunch net-2013-06-16-Representative Reviewing


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(27, 0.117), (36, 0.196), (53, 0.128), (55, 0.418)]
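
Unlike the dense LSI vector, the LDA weights above are sparse: a document’s probability mass concentrates on a few topics, and only those above some threshold appear. A sketch using scikit-learn’s LatentDirichletAllocation; the topic count and the 0.05 reporting threshold are guesses, not the generator’s real settings:

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-in corpus; LDA conventionally runs on raw term counts, not tfidf.
corpus = [f"conference deadline paper review author topic{i % 60} word{i}" for i in range(200)]

counts = CountVectorizer(stop_words="english").fit_transform(corpus)

# Fit LDA (topic count is a guess) and read off one document's topic distribution.
lda = LatentDirichletAllocation(n_components=60, random_state=0).fit(counts)
doc_topics = lda.transform(counts[0]).ravel()

# Keep only topics with noticeable mass, mirroring the short list above.
print([(t, round(w, 3)) for t, w in enumerate(doc_topics) if w > 0.05])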

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9604519 331 hunch net-2008-12-12-Summer Conferences


2 0.89144921 90 hunch net-2005-07-07-The Limits of Learning Theory

Introduction: Suppose we had an infinitely powerful mathematician sitting in a room and proving theorems about learning. Could he solve machine learning? The answer is “no”. This answer is both obvious and sometimes underappreciated. There are several ways to conclude that some bias is necessary in order to successfully learn. For example, suppose we are trying to solve classification. At prediction time, we observe some features X and want to make a prediction of either 0 or 1. Bias is what makes us prefer one answer over the other based on past experience. In order to learn we must: (1) Have a bias. Always predicting 0 is as likely as 1 is useless. (2) Have the “right” bias. Predicting 1 when the answer is 0 is also not helpful. The implication of “have a bias” is that we can not design effective learning algorithms with “a uniform prior over all possibilities”. The implication of “have the ‘right’ bias” is that our mathematician fails since “right” is defined wi

3 0.89050901 20 hunch net-2005-02-15-ESPgame and image labeling

Introduction: Luis von Ahn has been running the espgame for a while now. The espgame provides a picture to two randomly paired people across the web, and asks them to agree on a label. It hasn’t managed to label the web yet, but it has produced a large dataset of (image, label) pairs. I organized the dataset so you could explore the implied bipartite graph (requires much bandwidth). Relative to other image datasets, this one is quite large—67,000 images, 358,000 labels (average of 5/image with variation from 1 to 19), and 22,000 unique labels (one every 3 images). The dataset is also very ‘natural’, consisting of images spidered from the internet. The multiple label characteristic is intriguing because ‘learning to learn’ and metalearning techniques may be applicable. The ‘natural’ quality means that this dataset varies greatly in difficulty from easy (predicting “red”) to hard (predicting “funny”) and potentially more rewarding to tackle. The open problem here is, of course, to make

4 0.88853526 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

Introduction: The results have been posted, with CMU first, Stanford second, and Virginia Tech third. Considering that this was an open event (at least for people in the US), this was a very strong showing for research at universities (instead of defense contractors, for example). Some details should become public at the NIPS workshops. Slashdot has a post with many comments.

5 0.88058144 448 hunch net-2011-10-24-2011 ML symposium and the bears

Introduction: The New York ML symposium was last Friday. Attendance was 268, significantly larger than last year. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up. The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. By coincidence all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious

6 0.87960351 446 hunch net-2011-10-03-Monday announcements

7 0.8774156 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

8 0.87158954 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

9 0.85491765 387 hunch net-2010-01-19-Deadline Season, 2010

10 0.85216177 270 hunch net-2007-11-02-The Machine Learning Award goes to …

11 0.84724134 326 hunch net-2008-11-11-COLT CFP

12 0.84724134 465 hunch net-2012-05-12-ICML accepted papers and early registration

13 0.84348691 395 hunch net-2010-04-26-Compassionate Reviewing

14 0.81752926 356 hunch net-2009-05-24-2009 ICML discussion site

15 0.80796969 453 hunch net-2012-01-28-Why COLT?

16 0.79003251 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium

17 0.7814244 65 hunch net-2005-05-02-Reviewing techniques for conferences

18 0.78076488 452 hunch net-2012-01-04-Why ICML? and the summer conferences

19 0.76371098 71 hunch net-2005-05-14-NIPS

20 0.7615118 159 hunch net-2006-02-27-The Peekaboom Dataset