hunch_net hunch_net-2007-256: Motivation should be the Responsibility of the Reviewer (knowledge-graph by maker-knowledge-mining)
Source: html
Introduction: The prevailing wisdom in machine learning seems to be that motivating a paper is the responsibility of the author. I think this is a harmful view—instead, it’s healthier for the community to regard this as the responsibility of the reviewer. There are lots of reasons to prefer a reviewer-responsibility approach. Authors are the most biased possible source of information about the motivation of the paper. Systems which rely upon very biased sources of information are inherently unreliable. Authors are highly variable in their ability and desire to express motivation for their work. This adds greatly to variance on acceptance of an idea, and it can systematically discriminate or accentuate careers. It’s great if you have a career accentuated by awesome wording choice, but wise decision making by reviewers is important for the field. The motivation section in a paper doesn’t do anything in some sense—it’s there to get the paper in. Reading the motivation of a paper is of little use in helping the reader solve new problems.
sentIndex sentText sentNum sentScore
1 The prevailing wisdom in machine learning seems to be that motivating a paper is the responsibility of the author. [sent-1, score-0.724]
2 I think this is a harmful view—instead, it’s healthier for the community to regard this as the responsibility of the reviewer. [sent-2, score-0.484]
3 Authors are the most biased possible source of information about the motivation of the paper. [sent-4, score-0.585]
4 Authors are highly variable in their ability and desire to express motivation for their work. [sent-6, score-0.546]
5 The motivation section in a paper doesn’t do anything in some sense—it’s there to get the paper in. [sent-9, score-0.949]
6 Reading the motivation of a paper is of little use in helping the reader solve new problems. [sent-10, score-0.728]
7 The 30th paper on a subject should not require a motivation as if it’s the first paper on a subject, and requiring or expecting this of authors is an exercise in busy work by the research community. [sent-12, score-1.172]
8 Some caveats to make sure I’m understood: I’m not advocating the complete removal of a motivation section (motivectomy? [sent-13, score-0.741]
9 ), which would be absurd (and frankly harmful to your career). [sent-14, score-0.319]
10 A paragraph describing common examples where the problem addressed comes up is desirable for readers who are not specialists. [sent-15, score-0.317]
11 I regard discussion of motivations as quite important, and totally unsuited to the paper format. [sent-18, score-0.606]
12 It’s hard to imagine any worse method for discussion than one with a year-size latency where quasi-anonymous people are quasi-randomly paired and each attempts to accomplish several different tasks one of which happens to be a one-sided discussion of motivation. [sent-19, score-0.558]
13 A blog can work much better for this sort of thing, and I definitely invite discussion on motivational questions. [sent-20, score-0.371]
14 As an author, one clever technique is to pass serious discussion of motivation by reference. [sent-23, score-0.731]
15 “For a general discussion and motivation of this problem see [].” [sent-24, score-0.83]
16 Until these alternative (and far better) formats for discussion are developed, the problem of “who motivates” will always exist. [sent-28, score-0.344]
17 Have private discussions about motivation where you can. [sent-29, score-0.486]
18 Learn to take responsibility for motivation as a reviewer. [sent-31, score-0.639]
19 The first step is to disbelieve all the motivational parts of a paper by default. [sent-33, score-0.373]
20 Frankly, all of Machine Learning fails the popularity test in a wider sense, even though many people appreciate the fruits of machine learning on a daily basis. [sent-41, score-0.33]
wordName wordTfidf (topN-words)
[('motivation', 0.486), ('discussion', 0.245), ('paper', 0.179), ('increment', 0.154), ('prevailing', 0.154), ('responsibility', 0.153), ('harmful', 0.137), ('authors', 0.136), ('motivational', 0.126), ('paragraph', 0.126), ('popularity', 0.126), ('regard', 0.119), ('wisdom', 0.119), ('frankly', 0.119), ('career', 0.114), ('section', 0.105), ('biased', 0.099), ('problem', 0.099), ('fall', 0.097), ('addressed', 0.092), ('sure', 0.09), ('appreciate', 0.082), ('solution', 0.076), ('think', 0.075), ('fairly', 0.069), ('paired', 0.068), ('disbelieve', 0.068), ('wise', 0.068), ('discriminate', 0.068), ('expecting', 0.068), ('sneak', 0.068), ('useful', 0.066), ('subject', 0.064), ('arguing', 0.063), ('considerations', 0.063), ('reader', 0.063), ('daily', 0.063), ('waste', 0.063), ('absurd', 0.063), ('awesome', 0.063), ('motivates', 0.063), ('totally', 0.063), ('desire', 0.06), ('systematically', 0.06), ('removal', 0.06), ('motivating', 0.06), ('warning', 0.06), ('exercise', 0.06), ('skip', 0.06), ('machine', 0.059)]
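The (word, weight) pairs above appear to be TF-IDF scores for the post's top terms. The maker-knowledge-mining pipeline itself is not documented here, so the following is a hypothetical sketch only: the tokenizer, stop-word handling, and two-document toy corpus are all assumptions, and the absolute numbers will not match the list above. It only shows how (term, weight) output of this general shape can be produced with scikit-learn:

# Hypothetical sketch of per-post term weights like the wordTfidf list above.
# Settings (tokenizer, stop words, corpus) are assumptions, not the pipeline's.
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "The prevailing wisdom in machine learning seems to be that motivating "
    "a paper is the responsibility of the author.",
    "Most long conversations between academics seem to converge on the "
    "topic of reviewing where almost no one is happy.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(posts)  # one row of term weights per post

# Top-weighted terms for the first post, highest first.
terms = vectorizer.get_feature_names_out()
weights = tfidf[0].toarray().ravel()
top = sorted(zip(terms, weights), key=lambda pair: -pair[1])[:5]
print([(term, round(weight, 3)) for term, weight in top])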
simIndex simValue blogId blogTitle
same-blog 1 0.99999982 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer
2 0.1728583 454 hunch net-2012-01-30-ICML Posters and Scope
Introduction: Normally, I don’t indulge in posters for ICML, but this year is naturally an exception for me. If you want one, there are a small number left here, if you sign up before February. It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. At ICML, the (very large) program committee does the reviewing which informs final decisions by area chairs on most papers. Program chairs set up the process, deal with exceptions or disagreements, and provide advice for the reviewing process. Providing advice is tricky (and easily misleading) because a conference is a community, and in the end the aggregate interests of the community determine the conference. Nevertheless, as a program chair this year it seems worthwhile to state the overall philosophy I have and what I plan to encourage (and occasionally discourage). At the highest level, I believe ICML exists to further research into machine learning, which I gene
3 0.16313802 395 hunch net-2010-04-26-Compassionate Reviewing
Introduction: Most long conversations between academics seem to converge on the topic of reviewing where almost no one is happy. A basic question is: Should most people be happy? The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By its nature research is brutal, where the second-best method is worthless, and the second person to discover things typically gets no credit. If you think about this for a moment, it’s very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc… all can manage quite well. If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy. But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong with a small part being simply malicious. And yet, I’m sure that most reviewers genuine
4 0.1514075 484 hunch net-2013-06-16-Representative Reviewing
Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo
5 0.14372452 343 hunch net-2009-02-18-Decision by Vetocracy
Introduction: Few would mistake the process of academic paper review for a fair process, but sometimes the unfairness seems particularly striking. This is most easily seen by comparison:

Paper | Banditron | Offset Tree | Notes
Problem Scope | Multiclass problems where only the loss of one choice can be probed. | Strictly greater: Cost sensitive multiclass problems where only the loss of one choice can be probed. | Often generalizations don’t matter. That’s not the case here, since every plausible application I’ve thought of involves loss functions substantially different from 0/1.
What’s new | Analysis and Experiments | Algorithm, Analysis, and Experiments | As far as I know, the essence of the more general problem was first stated and analyzed with the EXP4 algorithm (page 16) (1998). It’s also the time horizon 1 simplification of the Reinforcement Learning setting for the random trajectory method (page 15) (2002). The Banditron algorithm itself is functionally identi
6 0.12685062 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?
7 0.12102854 304 hunch net-2008-06-27-Reviewing Horror Stories
8 0.12052813 98 hunch net-2005-07-27-Not goal metrics
9 0.11962032 453 hunch net-2012-01-28-Why COLT?
10 0.11797222 461 hunch net-2012-04-09-ICML author feedback is open
11 0.11192355 22 hunch net-2005-02-18-What it means to do research.
12 0.10937299 134 hunch net-2005-12-01-The Webscience Future
13 0.10877401 30 hunch net-2005-02-25-Why Papers?
14 0.10801256 332 hunch net-2008-12-23-Use of Learning Theory
15 0.10430016 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms
16 0.10304461 437 hunch net-2011-07-10-ICML 2011 and the future
17 0.10281704 452 hunch net-2012-01-04-Why ICML? and the summer conferences
18 0.10097772 65 hunch net-2005-05-02-Reviewing techniques for conferences
19 0.10071651 468 hunch net-2012-06-29-ICML survey and comments
20 0.10051665 333 hunch net-2008-12-27-Adversarial Academia
topicId topicWeight
[(0, 0.268), (1, -0.07), (2, 0.08), (3, 0.112), (4, 0.009), (5, 0.017), (6, 0.02), (7, -0.032), (8, 0.011), (9, -0.012), (10, 0.005), (11, -0.021), (12, -0.035), (13, 0.06), (14, 0.007), (15, 0.002), (16, -0.002), (17, 0.039), (18, 0.013), (19, 0.031), (20, -0.009), (21, 0.059), (22, -0.077), (23, -0.01), (24, 0.01), (25, -0.002), (26, 0.005), (27, 0.044), (28, -0.063), (29, -0.076), (30, -0.067), (31, -0.036), (32, -0.045), (33, 0.012), (34, -0.077), (35, -0.034), (36, 0.033), (37, -0.025), (38, 0.024), (39, 0.062), (40, 0.078), (41, -0.007), (42, -0.019), (43, 0.06), (44, 0.005), (45, -0.051), (46, -0.043), (47, -0.011), (48, -0.013), (49, -0.036)]
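The simValue column in the listings is consistent with a cosine-style similarity between per-post feature vectors (TF-IDF or topic weights such as the vector above); the same-blog entry scoring essentially 1.0 against itself fits that reading. A minimal sketch under that assumption, using illustrative stand-in vectors rather than the pipeline's actual ones:

# Assumed similarity: cosine between per-post feature vectors.
# Both vectors below are illustrative stand-ins, not the pipeline's values.
import numpy as np

def cosine(u, v):
    # 1.0 for identical directions, near 0.0 for nearly orthogonal vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

this_post = np.array([0.268, -0.07, 0.08, 0.112, 0.009])
other_post = np.array([0.05, 0.12, -0.03, 0.02, 0.15])

print(cosine(this_post, this_post))   # ~1.0, like the same-blog row
print(cosine(this_post, other_post))  # ~0.10 here, the scale of the simValue column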
simIndex simValue blogId blogTitle
same-blog 1 0.97475755 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer
2 0.7566275 454 hunch net-2012-01-30-ICML Posters and Scope
3 0.75191736 288 hunch net-2008-02-10-Complexity Illness
Introduction: One of the enduring stereotypes of academia is that people spend a great deal of intelligence, time, and effort finding complexity rather than simplicity. This is at least anecdotally true in my experience.

Math++ Several people have found that adding useless math makes their paper more publishable as evidenced by a reject-add-accept sequence.

8 page minimum Who submitted a paper to ICML violating the 8 page minimum? Every author fears that the reviewers won’t take their work seriously unless the allowed length is fully used. The best minimum violation I know is Adam’s paper at SODA on generating random factored numbers, but this is deeply exceptional. It’s a fair bet that 90% of papers submitted are exactly at the page limit. We could imagine that this is because papers naturally take more space, but few people seem to be clamoring for more space.

Journalong Has anyone been asked to review a 100 page journal paper? I have. Journal papers can be nice, becaus
4 0.73677444 98 hunch net-2005-07-27-Not goal metrics
Introduction: One of the confusing things about research is that progress is very hard to measure. One of the consequences of being in a hard-to-measure environment is that the wrong things are often measured.

Lines of Code The classical example of this phenomenon is the old lines-of-code-produced metric for programming. It is easy to imagine systems for producing many lines of code with very little work that accomplish very little.

Paper count In academia, a “paper count” is an analog of “lines of code”, and it suffers from the same failure modes. The obvious failure mode here is that we end up with a large number of uninteresting papers since people end up spending a lot of time optimizing this metric.

Complexity Another metric is “complexity” (in the eye of a reviewer) of a paper. There is a common temptation to make a method appear more complex than it is in order for reviewers to judge it worthy of publication. The failure mode here is unclean thinking. Simple effective m
5 0.7110545 343 hunch net-2009-02-18-Decision by Vetocracy
6 0.70507908 30 hunch net-2005-02-25-Why Papers?
7 0.69248956 52 hunch net-2005-04-04-Grounds for Rejection
8 0.68988508 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?
9 0.6891681 233 hunch net-2007-02-16-The Forgetting
10 0.68321204 358 hunch net-2009-06-01-Multitask Poisoning
11 0.68215066 333 hunch net-2008-12-27-Adversarial Academia
12 0.68127042 395 hunch net-2010-04-26-Compassionate Reviewing
13 0.68075883 91 hunch net-2005-07-10-Thinking the Unthought
14 0.67130303 334 hunch net-2009-01-07-Interesting Papers at SODA 2009
15 0.66301769 42 hunch net-2005-03-17-Going all the Way, Sometimes
16 0.66216725 304 hunch net-2008-06-27-Reviewing Horror Stories
17 0.66191447 202 hunch net-2006-08-10-Precision is not accuracy
18 0.65530056 370 hunch net-2009-09-18-Necessary and Sufficient Research
19 0.65235287 315 hunch net-2008-09-03-Bidding Problems
20 0.65167552 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making
topicId topicWeight
[(10, 0.034), (27, 0.164), (38, 0.062), (41, 0.261), (42, 0.017), (53, 0.083), (55, 0.151), (64, 0.02), (83, 0.02), (94, 0.082), (95, 0.021)]
simIndex simValue blogId blogTitle
same-blog 1 0.84491104 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer
2 0.68743384 437 hunch net-2011-07-10-ICML 2011 and the future
Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists.

the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI
3 0.68723935 454 hunch net-2012-01-30-ICML Posters and Scope
4 0.67723024 452 hunch net-2012-01-04-Why ICML? and the summer conferences
Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date:

Conference | Due date | Location | Reviewing
KDD | Feb 10 | August 12-16, Beijing, China | Single Blind
COLT | Feb 14 | June 25-June 27, Edinburgh, Scotland | Single Blind? (historically)
ICML | Feb 24 | June 26-July 1, Edinburgh, Scotland | Double Blind, author response, zero SPOF
UAI | March 30 | August 15-17, Catalina Islands, California | Double Blind, author response

Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS, AIStat, and ICML. This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conf
5 0.67687523 40 hunch net-2005-03-13-Avoiding Bad Reviewing
Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer.

The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding.
Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement.
If you feel biased for some other reason, then you should avoid reviewing. For example…
Feeling angry or threatened by a paper is a form of bias. See above.
Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a
6 0.67306948 116 hunch net-2005-09-30-Research in conferences
7 0.6704796 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006
8 0.66751951 423 hunch net-2011-02-02-User preferences for search engines
9 0.66353011 297 hunch net-2008-04-22-Taking the next step
10 0.65975463 343 hunch net-2009-02-18-Decision by Vetocracy
11 0.6581561 89 hunch net-2005-07-04-The Health of COLT
12 0.65813559 207 hunch net-2006-09-12-Incentive Compatible Reviewing
13 0.65519601 403 hunch net-2010-07-18-ICML & COLT 2010
14 0.65425783 44 hunch net-2005-03-21-Research Styles in Machine Learning
15 0.6529817 395 hunch net-2010-04-26-Compassionate Reviewing
16 0.65240616 484 hunch net-2013-06-16-Representative Reviewing
17 0.6518181 98 hunch net-2005-07-27-Not goal metrics
18 0.65139264 22 hunch net-2005-02-18-What it means to do research.
19 0.65124607 225 hunch net-2007-01-02-Retrospective
20 0.64956868 464 hunch net-2012-05-03-Microsoft Research, New York City