hunch_net hunch_net-2008 hunch_net-2008-288 knowledge-graph by maker-knowledge-mining

288 hunch net-2008-02-10-Complexity Illness


meta info for this blog

Source: html

Introduction: One of the enduring stereotypes of academia is that people spend a great deal of intelligence, time, and effort finding complexity rather than simplicity. This is at least anecdotally true in my experience. Math++ Several people have found that adding useless math makes their paper more publishable as evidenced by a reject-add-accept sequence. 8 page minimum Who submitted a paper to ICML violating the 8 page minimum? Every author fears that the reviewers won’t take their work seriously unless the allowed length is fully used. The best minimum violation I know is Adam’s paper at SODA on generating random factored numbers, but this is deeply exceptional. It’s a fair bet that 90% of papers submitted are exactly at the page limit. We could imagine that this is because papers naturally take more space, but few people seem to be clamoring for more space. Journalong Has anyone been asked to review a 100 page journal paper? I have. Journal papers can be nice, because they give an author the opportunity to write without sharp deadlines or page limit constraints, but this can and does go awry.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 One of the enduring stereotypes of academia is that people spend a great deal of intelligence, time, and effort finding complexity rather than simplicity. [sent-1, score-0.792]

2 This is at least anecdotally true in my experience. [sent-2, score-0.185]

3 Math++ Several people have found that adding useless math makes their paper more publishable as evidenced by a reject-add-accept sequence. [sent-3, score-0.449]

4 8 page minimum Who submitted a paper to ICML violating the 8 page minimum? [sent-4, score-1.019]

5 Every author fears that the reviewers won’t take their work seriously unless the allowed length is fully used. [sent-5, score-0.452]

6 The best minimum violation I know is Adam’s paper at SODA on generating random factored numbers, but this is deeply exceptional. [sent-6, score-0.398]

7 It’s a fair bet that 90% of papers submitted are exactly at the page limit. [sent-7, score-0.772]

8 We could imagine that this is because papers naturally take more space, but few people seem to be clamoring for more space. [sent-8, score-0.204]

9 Journalong Has anyone been asked to review a 100 page journal paper? [sent-9, score-0.442]

10 Journal papers can be nice, because they give an author the opportunity to write without sharp deadlines or page limit constraints, but this can and does go awry. [sent-11, score-0.669]

11 Complexity illness is a burden on the community. [sent-12, score-0.351]

12 It means authors spend more time filling out papers, reviewers spend more time reviewing, and (most importantly) effort is misplaced on complex solutions over simple solutions, ultimately slowing (sometimes crippling) the long term impact of an academic community. [sent-13, score-1.165]

13 It’s difficult to imagine an author-driven solution to complexity illness, because the incentives are simply wrong. [sent-14, score-0.469]

14 Reviewing based on solution value rather than complexity is a good way for individual people to reduce the problem. [sent-15, score-0.287]

15 More generally, it would be great to have a system which explicitly encourages research without excessive complexity. [sent-16, score-0.458]

16 The best example of this seems to be education—it’s the great decomplexifier. [sent-17, score-0.099]

17 The process of teaching something greatly encourages teaching the simple solution, because that is what can be understood. [sent-18, score-0.46]

18 This seems to be true both of traditional education and less conventional means such as wikipedia articles. [sent-19, score-0.42]

19 I’m not sure exactly how to use this observation—Is there some way we can shift conference formats towards the process of creating teachable material? [sent-20, score-0.317]
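The per-sentence scores above come from a tfidf model. A minimal sketch of this kind of extractive scoring, assuming scikit-learn and a sum-of-tfidf-weights scoring rule (the actual pipeline's vectorizer settings and scoring rule are not documented here, and the corpus below is an illustrative stand-in):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus standing in for the post's sentences (illustrative only).
sentences = [
    "people spend effort finding complexity rather than simplicity",
    "adding useless math makes a paper more publishable",
    "education is the great decomplexifier",
]

# Fit tfidf over the sentences, then score each sentence by the sum of
# the tfidf weights of its terms -- one plausible extractive-summary rule.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(sentences)
scores = tfidf.sum(axis=1).A1  # dense row sums, one score per sentence

# Rank sentence indices from highest to lowest score.
ranking = sorted(range(len(sentences)), key=lambda i: -scores[i])
```

The highest-ranked sentences are then emitted, as in the list above, with their score attached.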


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('page', 0.289), ('illness', 0.243), ('spend', 0.219), ('minimum', 0.195), ('encourages', 0.18), ('complexity', 0.167), ('journal', 0.153), ('education', 0.153), ('math', 0.146), ('submitted', 0.143), ('teaching', 0.14), ('exactly', 0.134), ('solution', 0.12), ('papers', 0.112), ('solutions', 0.111), ('misplaced', 0.108), ('burden', 0.108), ('crippling', 0.108), ('slowing', 0.108), ('stereotypes', 0.108), ('paper', 0.103), ('reviewers', 0.101), ('reviewing', 0.101), ('filling', 0.1), ('violation', 0.1), ('evidenced', 0.1), ('teachable', 0.1), ('ultimately', 0.1), ('enduring', 0.1), ('publishable', 0.1), ('great', 0.099), ('effort', 0.099), ('author', 0.097), ('bet', 0.094), ('excessive', 0.094), ('anecdotally', 0.094), ('fears', 0.094), ('imagine', 0.092), ('true', 0.091), ('conventional', 0.09), ('incentives', 0.09), ('soda', 0.086), ('sharp', 0.086), ('wikipedia', 0.086), ('without', 0.085), ('formats', 0.083), ('length', 0.081), ('material', 0.081), ('intelligence', 0.079), ('seriously', 0.079)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 288 hunch net-2008-02-10-Complexity Illness


2 0.17044406 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

3 0.14729328 304 hunch net-2008-06-27-Reviewing Horror Stories

Introduction: Essentially everyone who writes research papers suffers rejections. They always sting immediately, but upon further reflection many of these rejections come to seem reasonable. Maybe the equations had too many typos or maybe the topic just isn’t as important as was originally thought. A few rejections do not come to seem acceptable, and these form the basis of reviewing horror stories, great material for conversations. I’ve decided to share three of mine, now all safely a bit distant in the past. Prediction Theory for Classification Tutorial. This is a tutorial about tight sample complexity bounds for classification that I submitted to JMLR. The first decision I heard was a reject which appeared quite unjust to me—for example one of the reviewers appeared to claim that all the content was in standard statistics books. Upon further inquiry, several citations were given, none of which actually covered the content. Later, I was shocked to hear the paper was accepted. App

4 0.14627956 134 hunch net-2005-12-01-The Webscience Future

Introduction: The internet has significantly affected the way we do research but its capabilities have not yet been fully realized. First, let’s acknowledge some known effects. Self-publishing By default, all researchers in machine learning (and more generally computer science and physics) place their papers online for anyone to download. The exact mechanism differs—physicists tend to use a central repository ( Arxiv ) while computer scientists tend to place the papers on their webpage. Arxiv has been slowly growing in subject breadth so it is now sometimes used by computer scientists. Collaboration Email has enabled working remotely with coauthors. This has allowed collaborations which would not otherwise have been possible and generally speeds research. Now, let’s look at attempts to go further. Blogs (like this one) allow public discussion about topics which are not easily categorized as “a new idea in machine learning” (like this topic). Organization of some subfield

5 0.13477285 395 hunch net-2010-04-26-Compassionate Reviewing

Introduction: Most long conversations between academics seem to converge on the topic of reviewing where almost no one is happy. A basic question is: Should most people be happy? The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By its nature research is brutal, where the second-best method is worthless, and the second person to discover things typically gets no credit. If you think about this for a moment, it’s very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc… all can manage quite well. If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy. But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong with a small part being simply malicious. And yet, I’m sure that most reviewers genuine

6 0.13346207 343 hunch net-2009-02-18-Decision by Vetocracy

7 0.12966375 318 hunch net-2008-09-26-The SODA Program Committee

8 0.12830928 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

9 0.1273711 207 hunch net-2006-09-12-Incentive Compatible Reviewing

10 0.12619171 98 hunch net-2005-07-27-Not goal metrics

11 0.125571 315 hunch net-2008-09-03-Bidding Problems

12 0.12456862 437 hunch net-2011-07-10-ICML 2011 and the future

13 0.1199778 116 hunch net-2005-09-30-Research in conferences

14 0.11749064 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

15 0.11700009 233 hunch net-2007-02-16-The Forgetting

16 0.11344875 123 hunch net-2005-10-16-Complexity: It’s all in your head

17 0.11311243 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

18 0.11226729 454 hunch net-2012-01-30-ICML Posters and Scope

19 0.11147686 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

20 0.11078567 225 hunch net-2007-01-02-Retrospective


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.229), (1, -0.118), (2, 0.127), (3, 0.1), (4, -0.028), (5, 0.018), (6, 0.018), (7, -0.007), (8, -0.012), (9, 0.026), (10, 0.018), (11, 0.005), (12, 0.005), (13, -0.035), (14, 0.014), (15, -0.005), (16, -0.035), (17, 0.018), (18, 0.049), (19, 0.047), (20, -0.007), (21, 0.085), (22, -0.071), (23, 0.045), (24, 0.086), (25, -0.039), (26, 0.045), (27, 0.037), (28, -0.023), (29, -0.044), (30, 0.085), (31, 0.003), (32, 0.074), (33, 0.028), (34, -0.039), (35, -0.05), (36, 0.055), (37, 0.031), (38, 0.095), (39, 0.118), (40, 0.017), (41, 0.014), (42, 0.074), (43, -0.051), (44, -0.033), (45, -0.018), (46, 0.009), (47, -0.0), (48, 0.018), (49, 0.013)]
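The topic weights above are coordinates in a latent semantic (LSI) space. A hedged sketch of how such similarities are typically computed, assuming scikit-learn's TruncatedSVD over a tfidf matrix with cosine similarity in the reduced space; the real model's corpus (all hunch.net posts) and dimensionality are not reproduced here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative corpus; the real model was fit over all hunch.net posts.
docs = [
    "complexity illness reviewing papers page limits",
    "reviewing papers reviewers conference decisions",
    "online learning dual representation weight vector",
    "machine learning classifiers loss functions",
]

# tfidf, then truncated SVD projects each document into a low-dimensional
# topic space (the topicId/topicWeight pairs listed above).
tfidf = TfidfVectorizer().fit_transform(docs)
svd = TruncatedSVD(n_components=2, random_state=0)
topic_vecs = svd.fit_transform(tfidf)  # one row of topic weights per doc

# Cosine similarity of document 0 against every document, as in the
# simValue column of the similar-blogs list.
sims = cosine_similarity(topic_vecs[:1], topic_vecs)[0]
```

A document's similarity to itself is 1 by construction, which is why the same-blog entry tops each list with a simValue near 1.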

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97718972 288 hunch net-2008-02-10-Complexity Illness


2 0.74387169 98 hunch net-2005-07-27-Not goal metrics

Introduction: One of the confusing things about research is that progress is very hard to measure. One of the consequences of being in a hard-to-measure environment is that the wrong things are often measured. Lines of Code The classical example of this phenomenon is the old lines-of-code-produced metric for programming. It is easy to imagine systems for producing many lines of code with very little work that accomplish very little. Paper count In academia, a “paper count” is an analog of “lines of code”, and it suffers from the same failure modes. The obvious failure mode here is that we end up with a large number of uninteresting papers since people end up spending a lot of time optimizing this metric. Complexity Another metric is “complexity” (in the eye of a reviewer) of a paper. There is a common temptation to make a method appear more complex than it is in order for reviewers to judge it worthy of publication. The failure mode here is unclean thinking. Simple effective m

3 0.73303866 233 hunch net-2007-02-16-The Forgetting

Introduction: How many papers do you remember from 2006? 2005? 2002? 1997? 1987? 1967? One way to judge this would be to look at the citations of the papers you write—how many came from which year? For myself, the answers on recent papers are: year 2006 2005 2002 1997 1987 1967 count 4 10 5 1 0 0 This spectrum is fairly typical of papers in general. There are many reasons that citations are focused on recent papers. The number of papers being published continues to grow. This is not a very significant effect, because the rate of publication has not grown nearly as fast. Dead men don’t reject your papers for not citing them. This reason seems lame, because it’s a distortion from the ideal of science. Nevertheless, it must be stated because the effect can be significant. In 1997, I started as a PhD student. Naturally, papers after 1997 are better remembered because they were absorbed in real time. A large fraction of people writing papers and a

4 0.72511065 30 hunch net-2005-02-25-Why Papers?

Introduction: Makc asked a good question in comments—”Why bother to make a paper, at all?” There are several reasons for writing papers which may not be immediately obvious to people not in academia. The basic idea is that papers have considerably more utility than the obvious “present an idea”. Papers are formalized units of work. Academics (especially young ones) are often judged on the number of papers they produce. Papers have a formalized method of citing and crediting others—the bibliography. Academics (especially older ones) are often judged on the number of citations they receive. Papers enable a “more fair” anonymous review. Conferences receive many papers, from which a subset are selected. Discussion forums are inherently not anonymous for anyone who wants to build a reputation for good work. Papers are an excuse to meet your friends. Papers are the content of conferences, but much of what you do is talk to friends about interesting problems while there. Sometimes yo

5 0.6952002 315 hunch net-2008-09-03-Bidding Problems

Introduction: One way that many conferences in machine learning assign reviewers to papers is via bidding, which has steps something like: Invite people to review Accept papers Reviewers look at title and abstract and state the papers they are interested in reviewing. Some massaging happens, but reviewers often get approximately the papers they bid for. At the ICML business meeting, Andrew McCallum suggested getting rid of bidding for papers. A couple reasons were given: Privacy The title and abstract of the entire set of papers is visible to every participating reviewer. Some authors might be uncomfortable about this for submitted papers. I’m not sympathetic to this reason: the point of submitting a paper to review is to publish it, so the value (if any) of not publishing a part of it a little bit earlier seems limited. Cliques A bidding system is gameable. If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers a

6 0.66868889 134 hunch net-2005-12-01-The Webscience Future

7 0.64527816 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

8 0.64048958 318 hunch net-2008-09-26-The SODA Program Committee

9 0.63698155 333 hunch net-2008-12-27-Adversarial Academia

10 0.63410181 484 hunch net-2013-06-16-Representative Reviewing

11 0.63296562 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

12 0.63112724 395 hunch net-2010-04-26-Compassionate Reviewing

13 0.624982 304 hunch net-2008-06-27-Reviewing Horror Stories

14 0.62389541 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

15 0.6187616 343 hunch net-2009-02-18-Decision by Vetocracy

16 0.60177237 40 hunch net-2005-03-13-Avoiding Bad Reviewing

17 0.60175967 208 hunch net-2006-09-18-What is missing for online collaborative research?

18 0.59491795 468 hunch net-2012-06-29-ICML survey and comments

19 0.59140342 231 hunch net-2007-02-10-Best Practices for Collaboration

20 0.58441794 485 hunch net-2013-06-29-The Benefits of Double-Blind Review


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.788), (38, 0.019), (53, 0.016), (55, 0.084)]
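The LDA weights above form a per-document distribution over topics (note the listed weights sum to roughly 1). A minimal sketch assuming scikit-learn's LatentDirichletAllocation over bag-of-words counts; the actual model's topic count and priors are unknown:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny illustrative corpus (the real model was fit over the full blog).
docs = [
    "papers reviewing complexity page limits journal",
    "reviewing reviewers papers conference program committee",
    "learning classifiers loss regression classification",
]

# Unlike LSI, LDA works on raw term counts rather than tfidf weights.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # each row is a topic distribution
```

Because each row is a probability distribution, similarity between documents can then be measured over these topic mixtures, producing the simValue ranking above.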

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99969971 288 hunch net-2008-02-10-Complexity Illness


2 0.99870825 247 hunch net-2007-06-14-Interesting Papers at COLT 2007

Introduction: Here are two papers that seem particularly interesting at this year’s COLT. Gilles Blanchard and François Fleuret, Occam’s Hammer. When we are interested in very tight bounds on the true error rate of a classifier, it is tempting to use a PAC-Bayes bound which can (empirically) be quite tight. A disadvantage of the PAC-Bayes bound is that it applies to a classifier which is randomized over a set of base classifiers rather than a single classifier. This paper shows that a similar bound can be proved which holds for a single classifier drawn from the set. The ability to safely use a single classifier is very nice. This technique applies generically to any base bound, so it has other applications covered in the paper. Adam Tauman Kalai. Learning Nested Halfspaces and Uphill Decision Trees. Classification PAC-learning, where you prove that any problem amongst some set is polytime learnable with respect to any distribution over the input X is extraordinarily ch

3 0.99765301 274 hunch net-2007-11-28-Computational Consequences of Classification

Introduction: In the regression vs classification debate , I’m adding a new “pro” to classification. It seems there are computational shortcuts available for classification which simply aren’t available for regression. This arises in several situations. In active learning it is sometimes possible to find an ε error classifier with just log(1/ε) labeled samples. Only much more modest improvements appear to be achievable for squared loss regression. The essential reason is that the loss function on many examples is flat with respect to large variations in the parameter spaces of a learned classifier, which implies that many of these classifiers do not need to be considered. In contrast, for squared loss regression, most substantial variations in the parameter space influence the loss at most points. In budgeted learning, where there is either a computational time constraint or a feature cost constraint, a classifier can sometimes be learned to very high accuracy under the constraints

4 0.99661905 308 hunch net-2008-07-06-To Dual or Not

Introduction: Yoram and Shai’s online learning tutorial at ICML brings up a question for me, “Why use the dual?” The basic setting is learning a weight vector w_i so that the function f(x) = sum_i w_i x_i optimizes some convex loss function. The functional view of the dual is that instead of (or in addition to) keeping track of w_i over the feature space, you keep track of a vector a_j over the examples and define w_i = sum_j a_j x_ji. The above view of duality makes operating in the dual appear unnecessary, because in the end a weight vector is always used. The tutorial suggests that thinking about the dual gives a unified algorithmic font for deriving online learning algorithms. I haven’t worked with the dual representation much myself, but I have seen a few examples where it appears helpful. Noise When doing online optimization (i.e. online learning where you are allowed to look at individual examples multiple times), the dual representation may be helpfu

5 0.99657279 172 hunch net-2006-04-14-JMLR is a success

Introduction: In 2001, the “Journal of Machine Learning Research” was created in reaction to unadaptive publisher policies at MLJ. Essentially, with the creation of the internet, the bottleneck in publishing research shifted from publishing to research. The declaration of independence accompanying this move expresses the reasons why in greater detail. MLJ has strongly changed its policy in reaction to this. In particular, there is no longer an assignment of copyright to the publisher (*), and MLJ regularly sponsors many student “best paper awards” across several conferences with cash prizes. This is an advantage of MLJ over JMLR: MLJ can afford to sponsor cash prizes for the machine learning community. The remaining disadvantage is that reading papers in MLJ sometimes requires searching for the author’s website where the free version is available. In contrast, JMLR articles are freely available to everyone off the JMLR website. Whether or not this disadvantage cancels the advantage i

6 0.9944005 245 hunch net-2007-05-12-Loss Function Semantics

7 0.9938646 166 hunch net-2006-03-24-NLPers

8 0.9938646 246 hunch net-2007-06-13-Not Posting

9 0.9938646 418 hunch net-2010-12-02-Traffic Prediction Problem

10 0.99082237 400 hunch net-2010-06-13-The Good News on Exploration and Learning

11 0.98817801 45 hunch net-2005-03-22-Active learning

12 0.98073334 9 hunch net-2005-02-01-Watchword: Loss

13 0.98037577 304 hunch net-2008-06-27-Reviewing Horror Stories

14 0.97989875 352 hunch net-2009-05-06-Machine Learning to AI

15 0.97777146 341 hunch net-2009-02-04-Optimal Proxy Loss for Classification

16 0.9701733 196 hunch net-2006-07-13-Regression vs. Classification as a Primitive

17 0.95336348 293 hunch net-2008-03-23-Interactive Machine Learning

18 0.95050967 483 hunch net-2013-06-10-The Large Scale Learning class notes

19 0.94615459 244 hunch net-2007-05-09-The Missing Bound

20 0.93970096 67 hunch net-2005-05-06-Don’t mix the solution into the problem