hunch_net hunch_net-2010 hunch_net-2010-395 knowledge-graph by maker-knowledge-mining

395 hunch net-2010-04-26-Compassionate Reviewing


meta info for this blog

Source: html

Introduction: Most long conversations between academics seem to converge on the topic of reviewing, where almost no one is happy. A basic question is: Should most people be happy? The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By its nature, research is brutal, where the second-best method is worthless, and the second person to discover things typically gets no credit. If you think about this for a moment, it’s very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc… all can manage quite well. If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy. But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong, with a small part being simply malicious. And yet, I’m sure that most reviewers genuinely believe they can predict what will and will not be useful in the longer term.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Most long conversations between academics seem to converge on the topic of reviewing where almost no one is happy. [sent-1, score-0.693]

2 Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. [sent-4, score-0.576]

3 If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy. [sent-8, score-0.411]

4 And yet, I’m sure that most reviewers genuinely believe they can predict what will and will not be useful in the longer term. [sent-11, score-0.225]

5 When academics have conversations about reviewing, the presumption of participants in each conversation is that they all share about the same beliefs about what will be useful, and what will take off. [sent-13, score-0.446]

6 Such conversations rarely go into specifics, because the specifics are boring in particular, technical, and because there is a real chance of disagreement on the specifics themselves. [sent-14, score-0.547]

7 When double blind reviewing was first being considered for ICML, I remember speaking about the experience in the Crypto community, where in my estimate the reviewing was both fairer and less happy. [sent-15, score-1.385]

8 Without double blind reviewing, it is common to have an “in” crowd who everyone respects and whose papers are virtually always accepted. [sent-17, score-0.793]

9 With double blind reviewing, everyone suffers substantial rejections. [sent-19, score-0.609]

10 From a viewpoint external to the community, when the reviewing is poor and the viewpoint of people in the community highly contradictory, nothing good happens. [sent-21, score-0.927]

11 (…most people) viewing the acrimony choose some other way to solve problems, proposals don’t get funded, and the community itself tends to fracture. [sent-24, score-0.288]

12 For example, in cryptography, TCC (not double blind) has started, presumably because the top theory people got tired of having their papers rejected at Crypto (double blind). [sent-25, score-0.476]

13 What seems to be lost with double blind reviewing is some amount of compassion, unfairly allocated. [sent-27, score-1.026]

14 In a double blind system, any given paper is plausibly from someone you don’t know, and since most papers go nowhere, plausibly not going anywhere. [sent-28, score-0.823]

15 Some time ago, I discussed how I thought motivation should be the responsibility of the reviewer. [sent-30, score-0.265]

16 In a healthy community, reviewers will actively understand why a piece of work is or is not important, filling in and extending the motivation as they consider the problem. [sent-35, score-0.4]

17 Reducing reviewing load is certainly helpful, but it is not sufficient alone, because many people naturally interpret a reduced reviewing load as time to work on other things. [sent-39, score-1.184]

18 For example, the two-phase reviewing process that ICML currently uses might save 0.5 reviews/paper, while guaranteeing that for half of the papers, the deciding review is done hastily with no author feedback, a recipe for mistakes. [sent-41, score-0.421] [sent-42, score-0.344]

19 A natural conversation helps (the current method of single round response tends to be very stilted). [sent-45, score-0.277]
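The summary above is extractive: each sentence of the post is scored under a tfidf model and the top-scoring sentences are listed. The exact pipeline behind these scores is not published with this page; the following is a minimal sketch of one standard approach, assuming scikit-learn, with placeholder strings standing in for the blog archive and the post's sentences.

```python
# Minimal sketch of tfidf-based extractive summarization (placeholder data;
# the real scoring pipeline behind the numbers above may differ in detail).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["full text of hunch.net post 1 ...", "full text of post 2 ..."]   # whole archive
sentences = ["Most long conversations between academics seem to converge ...",
             "Anyone who watches the flow of papers realizes ..."]           # target post

vectorizer = TfidfVectorizer(stop_words="english")
vectorizer.fit(corpus)                       # idf statistics come from the whole archive

sent_vecs = vectorizer.transform(sentences)  # one tfidf vector per sentence
scores = np.asarray(sent_vecs.sum(axis=1)).ravel()  # sentence score = summed term weights

# The highest-scoring sentences become the summary, as in the list above.
for rank, i in enumerate(np.argsort(-scores)[:20], start=1):
    print(f"{rank} {sentences[i][:60]} [sent-{i + 1}, score-{scores[i]:.3f}]")
```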


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('reviewing', 0.421), ('blind', 0.279), ('double', 0.264), ('specifics', 0.193), ('community', 0.19), ('conversations', 0.161), ('compassion', 0.157), ('longer', 0.128), ('papers', 0.126), ('viewpoint', 0.115), ('academics', 0.111), ('crypto', 0.111), ('conversation', 0.104), ('happy', 0.104), ('responsibility', 0.104), ('tends', 0.098), ('reviewers', 0.097), ('half', 0.096), ('load', 0.096), ('motivation', 0.09), ('people', 0.086), ('actively', 0.085), ('fair', 0.085), ('plausibly', 0.077), ('helps', 0.075), ('reviewer', 0.071), ('contradictory', 0.07), ('realizes', 0.07), ('witness', 0.07), ('presumption', 0.07), ('disparity', 0.07), ('everyone', 0.066), ('question', 0.065), ('filling', 0.064), ('recipe', 0.064), ('extending', 0.064), ('nowhere', 0.064), ('manager', 0.064), ('flow', 0.064), ('interpret', 0.064), ('author', 0.062), ('amount', 0.062), ('review', 0.061), ('guaranteeing', 0.061), ('aaron', 0.061), ('suboptimal', 0.061), ('disagreed', 0.061), ('career', 0.058), ('grounds', 0.058), ('crowd', 0.058)]
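A hedged sketch of how both the per-word weights above and the simValue rankings below could be produced: build tfidf vectors over the archive, read off the heaviest terms for this post, and rank other posts by cosine similarity. The post texts and ids here are placeholders, assuming scikit-learn.

```python
# Sketch: top tfidf terms for one post, plus its cosine similarity to other posts.
# Placeholder texts; the model that produced the numbers above is not published.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {395: "text of Compassionate Reviewing ...",
         40:  "text of Avoiding Bad Reviewing ...",
         65:  "text of Reviewing techniques for conferences ..."}
ids = list(posts)

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(posts.values())
row = ids.index(395)

# Top-N terms by tfidf weight, as in the wordName/wordTfidf list above.
weights = X[row].toarray().ravel()
terms = vec.get_feature_names_out()
top = np.argsort(-weights)[:50]
print([(terms[i], round(weights[i], 3)) for i in top if weights[i] > 0])

# Cosine similarity gives the simValue ranking; a post scores ~1.0 against
# itself, which is why the same-blog entry below sits at 0.99999988.
sims = cosine_similarity(X[row], X).ravel()
for pid, s in sorted(zip(ids, sims), key=lambda t: -t[1]):
    print(pid, round(s, 8))
```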

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 395 hunch net-2010-04-26-Compassionate Reviewing


2 0.35897681 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a…

3 0.33740813 65 hunch net-2005-05-02-Reviewing techniques for conferences

Introduction: The many reviews following the many paper deadlines are just about over. AAAI and ICML in particular were experimenting with several reviewing techniques. Double Blind: AAAI and ICML were both double blind this year. It seemed (overall) beneficial, but two problems arose. For theoretical papers, with a lot to say, authors often leave out the proofs. This is very hard to cope with under a double blind review because (1) you can not trust the authors got the proof right but (2) a blanket “reject” hits many probably-good papers. Perhaps authors should more strongly favor proof-complete papers sent to double blind conferences. On the author side, double blind reviewing is actually somewhat disruptive to research. In particular, it discourages the author from talking about the subject, which is one of the mechanisms of research. This is not a great drawback, but it is one not previously appreciated. Author feedback: AAAI and ICML did author feedback this year. It seem…

4 0.32001153 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo…

5 0.2846576 452 hunch net-2012-01-04-Why ICML? and the summer conferences

Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date (conference, due date, location, reviewing): KDD, Feb 10, August 12-16, Beijing, China, Single Blind; COLT, Feb 14, June 25-June 27, Edinburgh, Scotland, Single Blind? (historically); ICML, Feb 24, June 26-July 1, Edinburgh, Scotland, Double Blind, author response, zero SPOF; UAI, March 30, August 15-17, Catalina Islands, California, Double Blind, author response. Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS, AIStat, and ICML. This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conf…

6 0.27962169 116 hunch net-2005-09-30-Research in conferences

7 0.24836966 207 hunch net-2006-09-12-Incentive Compatible Reviewing

8 0.23326327 437 hunch net-2011-07-10-ICML 2011 and the future

9 0.21492314 331 hunch net-2008-12-12-Summer Conferences

10 0.20960131 343 hunch net-2009-02-18-Decision by Vetocracy

11 0.2055268 454 hunch net-2012-01-30-ICML Posters and Scope

12 0.20308201 315 hunch net-2008-09-03-Bidding Problems

13 0.177967 318 hunch net-2008-09-26-The SODA Program Committee

14 0.16313802 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

15 0.16193335 461 hunch net-2012-04-09-ICML author feedback is open

16 0.15393205 453 hunch net-2012-01-28-Why COLT?

17 0.1539277 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

18 0.15334915 333 hunch net-2008-12-27-Adversarial Academia

19 0.15228869 468 hunch net-2012-06-29-ICML survey and comments

20 0.14869365 304 hunch net-2008-06-27-Reviewing Horror Stories


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.291), (1, -0.247), (2, 0.272), (3, 0.125), (4, -0.015), (5, 0.012), (6, -0.007), (7, 0.031), (8, 0.041), (9, 0.033), (10, -0.06), (11, -0.002), (12, 0.095), (13, -0.01), (14, -0.083), (15, 0.027), (16, -0.029), (17, -0.053), (18, -0.001), (19, -0.017), (20, 0.005), (21, -0.053), (22, -0.005), (23, 0.025), (24, 0.11), (25, -0.073), (26, -0.036), (27, 0.085), (28, -0.108), (29, 0.066), (30, -0.022), (31, -0.016), (32, -0.012), (33, -0.021), (34, -0.041), (35, -0.069), (36, -0.022), (37, -0.121), (38, -0.045), (39, -0.002), (40, 0.01), (41, -0.055), (42, 0.047), (43, 0.051), (44, 0.043), (45, 0.029), (46, 0.017), (47, -0.0), (48, -0.041), (49, -0.008)]
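The 50 (topicId, topicWeight) pairs above are a dense projection of the post into a latent semantic space. Below is a minimal sketch of an LSI pipeline that produces vectors of this shape, assuming gensim and placeholder texts; the similarity list that follows would then come from cosine similarity in this reduced space.

```python
# Sketch of an LSI (latent semantic indexing) projection, assuming gensim.
# Placeholder corpus; num_topics=50 matches the length of the vector above.
from gensim import corpora, models, similarities

docs = ["text of post one ...", "text of post two ...", "text of post three ..."]
texts = [d.lower().split() for d in docs]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

tfidf = models.TfidfModel(bow)   # LSI is typically run on tfidf-weighted counts
lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=50)

# (topicId, topicWeight) pairs for the first post, as in the list above.
print(lsi[tfidf[bow[0]]])

# Cosine similarity in the LSI space yields a simValue-style ranking.
index = similarities.MatrixSimilarity(lsi[tfidf[bow]])
print(list(index[lsi[tfidf[bow[0]]]]))
```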

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98231459 395 hunch net-2010-04-26-Compassionate Reviewing


2 0.91498899 40 hunch net-2005-03-13-Avoiding Bad Reviewing


3 0.89936447 65 hunch net-2005-05-02-Reviewing techniques for conferences


4 0.88638568 116 hunch net-2005-09-30-Research in conferences

Introduction: Conferences exist as part of the process of doing research. They provide many roles including “announcing research”, “meeting people”, and “point of reference”. Not all conferences are alike so a basic question is: “to what extent do individual conferences attempt to aid research?” This question is very difficult to answer in any satisfying way. What we can do is compare details of the process across multiple conferences. Comments: The average quality of comments across conferences can vary dramatically. At one extreme, the tradition in CS theory conferences is to provide essentially zero feedback. At the other extreme, some conferences have a strong tradition of providing detailed constructive feedback. Detailed feedback can give authors significant guidance about how to improve research. This is the most subjective entry. Blind: Virtually all conferences offer single blind review where authors do not know reviewers. Some also provide double blind review where rev…

5 0.7928012 484 hunch net-2013-06-16-Representative Reviewing


6 0.78089142 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

7 0.77419943 207 hunch net-2006-09-12-Incentive Compatible Reviewing

8 0.77268225 315 hunch net-2008-09-03-Bidding Problems

9 0.74049878 461 hunch net-2012-04-09-ICML author feedback is open

10 0.72692835 382 hunch net-2009-12-09-Future Publication Models @ NIPS

11 0.68264574 437 hunch net-2011-07-10-ICML 2011 and the future

12 0.66552216 468 hunch net-2012-06-29-ICML survey and comments

13 0.65716338 343 hunch net-2009-02-18-Decision by Vetocracy

14 0.65524578 318 hunch net-2008-09-26-The SODA Program Committee

15 0.63919884 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

16 0.63646442 331 hunch net-2008-12-12-Summer Conferences

17 0.62604862 363 hunch net-2009-07-09-The Machine Learning Forum

18 0.61136693 288 hunch net-2008-02-10-Complexity Illness

19 0.61019087 333 hunch net-2008-12-27-Adversarial Academia

20 0.60979122 452 hunch net-2012-01-04-Why ICML? and the summer conferences


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(10, 0.023), (27, 0.189), (38, 0.04), (48, 0.039), (53, 0.055), (55, 0.466), (68, 0.018), (94, 0.063), (95, 0.032)]
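The lda vector above is sparse: only topics whose weight clears a probability floor are listed. A minimal sketch of inferring such a topic mixture, assuming gensim and placeholder texts; similarity between posts could then be computed with cosine (or Hellinger) distance over the topic mixtures, though the metric actually used for this page is not stated.

```python
# Sketch of LDA topic inference, assuming gensim; placeholder corpus.
from gensim import corpora, models

docs = ["text of post one ...", "text of post two ...", "text of post three ..."]
texts = [d.lower().split() for d in docs]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(bow, id2word=dictionary, num_topics=100, passes=10)

# Sparse (topicId, topicWeight) pairs: gensim drops topics whose weight falls
# below minimum_probability, which is why only a few topics appear above.
print(lda[bow[0]])
```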

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98973101 90 hunch net-2005-07-07-The Limits of Learning Theory

Introduction: Suppose we had an infinitely powerful mathematician sitting in a room and proving theorems about learning. Could he solve machine learning? The answer is “no”. This answer is both obvious and sometimes underappreciated. There are several ways to conclude that some bias is necessary in order to successfully learn. For example, suppose we are trying to solve classification. At prediction time, we observe some features X and want to make a prediction of either 0 or 1. Bias is what makes us prefer one answer over the other based on past experience. In order to learn we must: Have a bias. Always predicting 0 is as likely as 1 is useless. Have the “right” bias. Predicting 1 when the answer is 0 is also not helpful. The implication of “have a bias” is that we can not design effective learning algorithms with “a uniform prior over all possibilities”. The implication of “have the ‘right’ bias” is that our mathematician fails since “right” is defined wi…

2 0.98866642 448 hunch net-2011-10-24-2011 ML symposium and the bears

Introduction: The New York ML symposium was last Friday. Attendance was 268, significantly larger than last year. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up. The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. By coincidence all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious…

3 0.98286241 20 hunch net-2005-02-15-ESPgame and image labeling

Introduction: Luis von Ahn has been running the espgame for a while now. The espgame provides a picture to two randomly paired people across the web, and asks them to agree on a label. It hasn’t managed to label the web yet, but it has produced a large dataset of (image, label) pairs. I organized the dataset so you could explore the implied bipartite graph (requires much bandwidth). Relative to other image datasets, this one is quite large—67000 images, 358,000 labels (average of 5/image with variation from 1 to 19), and 22,000 unique labels (one every 3 images). The dataset is also very ‘natural’, consisting of images spidered from the internet. The multiple label characteristic is intriguing because ‘learning to learn’ and metalearning techniques may be applicable. The ‘natural’ quality means that this dataset varies greatly in difficulty from easy (predicting “red”) to hard (predicting “funny”) and potentially more rewarding to tackle. The open problem here is, of course, to make…

4 0.98126423 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

Introduction: Reviewers and students are sometimes greatly concerned by the distinction between: An open set and a closed set. A Supremum and a Maximum. An event which happens with probability 1 and an event that always happens. I don’t appreciate this distinction in machine learning & learning theory. All machine learning takes place (by definition) on a machine where every parameter has finite precision. Consequently, every set is closed, a maximal element always exists, and probability 1 events always happen. The fundamental issue here is that substantial parts of mathematics don’t appear well-matched to computation in the physical world, because the mathematics has concerns which are unphysical. This mismatched mathematics makes irrelevant distinctions. We can ask “what mathematics is appropriate to computation?” Andrej has convinced me that a pretty good answer to this question is constructive mathematics. So, here’s a basic challenge: Can anyone name a situati…

5 0.97552174 270 hunch net-2007-11-02-The Machine Learning Award goes to …

Introduction: Perhaps the biggest CS prize for research is the Turing Award, which has a $0.25M cash prize associated with it. It appears none of the prizes so far have been for anything like machine learning (the closest are perhaps database awards). In CS theory, there is the Gödel Prize which is smaller and newer, offering a $5K prize and perhaps (more importantly) recognition. One such award has been given for Machine Learning, to Robert Schapire and Yoav Freund for Adaboost. In Machine Learning, there seems to be no equivalent of these sorts of prizes. There are several plausible reasons for this: There is no coherent community. People drift in and out of the central conferences all the time. Most of the author names from 10 years ago do not occur in the conferences of today. In addition, the entire subject area is fairly new. There are at least a core group of people who have stayed around. Machine Learning work doesn’t last: Almost every paper is fo…

6 0.97514164 446 hunch net-2011-10-03-Monday announcements

7 0.97506267 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

same-blog 8 0.97466117 395 hunch net-2010-04-26-Compassionate Reviewing

9 0.96029705 331 hunch net-2008-12-12-Summer Conferences

10 0.95954865 453 hunch net-2012-01-28-Why COLT?

11 0.95296156 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

12 0.92413718 65 hunch net-2005-05-02-Reviewing techniques for conferences

13 0.91183913 452 hunch net-2012-01-04-Why ICML? and the summer conferences

14 0.90528214 326 hunch net-2008-11-11-COLT CFP

15 0.90528214 465 hunch net-2012-05-12-ICML accepted papers and early registration

16 0.90526843 387 hunch net-2010-01-19-Deadline Season, 2010

17 0.8780899 40 hunch net-2005-03-13-Avoiding Bad Reviewing

18 0.875045 46 hunch net-2005-03-24-The Role of Workshops

19 0.86754215 116 hunch net-2005-09-30-Research in conferences

20 0.8644461 356 hunch net-2009-05-24-2009 ICML discussion site