Research in conferences (hunch net, 2005-09-30)
Introduction: Conferences exist as part of the process of doing research. They serve many roles, including “announcing research”, “meeting people”, and “point of reference”. Not all conferences are alike, so a basic question is: “to what extent do individual conferences attempt to aid research?” This question is very difficult to answer in any satisfying way. What we can do is compare details of the process across multiple conferences.
Comments: The average quality of comments across conferences can vary dramatically. At one extreme, the tradition in CS theory conferences is to provide essentially zero feedback. At the other extreme, some conferences have a strong tradition of providing detailed constructive feedback. Detailed feedback can give authors significant guidance about how to improve research. This is the most subjective entry.

Blind: Virtually all conferences offer single blind review, where authors do not know reviewers. Some also provide double blind review, where reviewers do not know authors. The intention with double blind reviewing is to make the conference more approachable to first-time authors.

Author Feedback: Author feedback is a mechanism where authors can provide feedback to reviewers (and, to some extent, complain). An author feedback mechanism provides an opportunity for the worst reviewing errors to be corrected.

Conditional Accepts: A conditional accept is some form of “we will accept this paper if conditions X, Y, and Z are met”. A conditional accept allows reviewers to demand different experiments or other details they need in order to make a decision.

Papers/PC member: How many papers can one person actually review well? When there is an incredible load of papers to review, it becomes very tempting to make snap decisions without a thorough attempt at understanding.
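To make that load concrete, here is a back-of-the-envelope sketch of the Reviews/PC member figure that appears in the table below. The submission count, reviews per paper, and committee size are hypothetical numbers chosen for illustration; they are not figures from the post.

```python
# Hypothetical reviewer-load arithmetic (illustrative numbers, not from the post).

def reviews_per_pc_member(submissions: int, reviews_per_paper: int, pc_size: int) -> float:
    """Total review assignments spread evenly over the program committee."""
    return submissions * reviews_per_paper / pc_size

# For example: 600 submissions, 3 reviews each, and a 225-person committee.
print(reviews_per_pc_member(600, 3, 225))  # 8.0 papers per PC member
```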
Speed: There is a tradeoff between time spent working on new research and the speed of the review process itself.
Impact measures: Keep in mind that measurements of “impact” are inherently “trailing indicators”, which are not necessarily relevant to the way the conference is currently run.
Average citations: Citeseer has been used to estimate the average impact of a conference’s papers, using the average number of citations per paper.

Max citations: A number of people believe that the maximum number of citations given to any one paper is a strong indicator of the success of the conference.
Conference | Comments          | Blindness | Author feedback | Conditional accepts | Reviews/PC member | log(average citations per paper + 1) | Max citations
ICML       | Sometimes Helpful | Double    | Yes             | Yes                 | 8                 | 2.…                                  | …
…
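To make the two impact columns concrete, here is a minimal Python sketch of how such numbers could be computed from per-paper citation counts. The citation list is invented for illustration (it is not real Citeseer data), and the natural log is assumed since the post does not specify the base; the +1 keeps the value finite when the average is near zero.

```python
import math

# Invented per-paper citation counts for one conference
# (illustration only, not real Citeseer data).
citations = [0, 1, 1, 2, 3, 5, 8, 40, 120]

average = sum(citations) / len(citations)  # average citations per paper
log_average = math.log(average + 1)        # log(average + 1); natural log assumed
max_citations = max(citations)             # max citations for any one paper

print(f"average citations per paper: {average:.2f}")     # 20.00
print(f"log(average + 1):            {log_average:.2f}")  # 3.04
print(f"max citations:               {max_citations}")    # 120
```

The log transform compresses the heavy-tailed citation distribution, so a single blockbuster paper does not dominate the average-impact column the way it does the max-citations column.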
Keep in mind that the above is a very incomplete list (it only includes the conferences I have interacted with); feel free to add details in the comments.