hunch_net hunch_net-2007 hunch_net-2007-233 knowledge-graph by maker-knowledge-mining

233 hunch net-2007-02-16-The Forgetting


meta info for this blog

Source: html

Introduction: How many papers do you remember from 2006? 2005? 2002? 1997? 1987? 1967? One way to judge this would be to look at the citations of the papers you write—how many came from which year? For myself, the answers on recent papers are:

year   2006  2005  2002  1997  1987  1967
count     4    10     5     1     0     0

This spectrum is fairly typical of papers in general. There are many reasons that citations are focused on recent papers. The number of papers being published continues to grow. This is not a very significant effect, because the rate of publication has not grown nearly as fast. Dead men don’t reject your papers for not citing them. This reason seems lame, because it’s a distortion from the ideal of science. Nevertheless, it must be stated because the effect can be significant. In 1997, I started as a PhD student. Naturally, papers after 1997 are better remembered because they were absorbed in real time. A large fraction of people writing papers and a


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 One way to judge this would be to look at the citations of the papers you write—how many came from which year? [sent-7, score-0.514]

2 For myself, the answers on recent papers are: year 2006 2005 2002 1997 1987 1967 count 4 10 5 1 0 0 This spectrum is fairly typical of papers in general. [sent-8, score-0.882]

3 There are many reasons that citations are focused on recent papers. [sent-9, score-0.349]

4 The number of papers being published continues to grow. [sent-10, score-0.324]

5 Dead men don’t reject your papers for not citing them. [sent-12, score-0.504]

6 Nevertheless, it must be stated because the effect can be significant. [sent-14, score-0.174]

7 Naturally, papers after 1997 are better remembered because they were absorbed in real time. [sent-16, score-0.324]

8 A large fraction of people writing papers and attending conferences haven’t been doing it for 10 years. [sent-17, score-0.407]

9 This is a huge effect for any papers prior to 1995 (or so). [sent-19, score-0.498]

10 The ease of examining a paper greatly influences the ability of an author to read and understand it. [sent-20, score-0.349]

11 For example, when people forget, they reinvent, and sometimes they reinvent better. [sent-27, score-0.285]

12 Nevertheless, it seems like the effect of forgetting is bad overall, because it causes wasted effort. [sent-28, score-0.763]

13 There are two implications: For paper writers, it is very common to overestimate the value of a paper, even though we know that the impact of most papers is bounded in time. [sent-29, score-0.609]

14 Perhaps by looking at those older papers, we can get an idea of what is important in the long term. [sent-30, score-0.213]

15 For example, looking at my own older citations, simplicity is it. [sent-31, score-0.282]

16 Are the review criteria promoting the papers with a hope of survival? [sent-37, score-0.414]

17 Then, you merely had to stand on the shoulders of giants to succeed. [sent-40, score-0.56]

18 Now, it seems that even the ability to peer over the shoulders of people standing on the shoulders of giants might be helpful. [sent-41, score-1.174]

19 Nevertheless, it seems that much of this effort is getting wasted in forgetting, because we do not have the right mechanisms to remember the information. [sent-43, score-0.342]

20 Which is going to be the first conference to switch away from an ordered list of papers to something with structure? [sent-44, score-0.643]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('papers', 0.324), ('shoulders', 0.303), ('forgetting', 0.269), ('reinvent', 0.202), ('citations', 0.19), ('giants', 0.179), ('effect', 0.174), ('recent', 0.159), ('wasted', 0.157), ('older', 0.121), ('nevertheless', 0.114), ('remember', 0.112), ('paper', 0.107), ('looking', 0.092), ('conference', 0.09), ('citing', 0.09), ('examining', 0.09), ('overestimate', 0.09), ('standing', 0.09), ('causes', 0.09), ('distortion', 0.09), ('forgotten', 0.09), ('men', 0.09), ('privileged', 0.09), ('promoting', 0.09), ('impact', 0.088), ('people', 0.083), ('writers', 0.083), ('ordered', 0.083), ('giant', 0.083), ('promote', 0.083), ('teachable', 0.083), ('teach', 0.078), ('originally', 0.078), ('influences', 0.078), ('stand', 0.078), ('going', 0.077), ('spectrum', 0.075), ('survival', 0.075), ('ability', 0.074), ('seems', 0.073), ('dead', 0.072), ('phd', 0.072), ('stuck', 0.072), ('publication', 0.072), ('switch', 0.069), ('simplicity', 0.069), ('wouldn', 0.069), ('grown', 0.069), ('peer', 0.069)]
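The weights above come from a standard tf-idf scheme: a term scores high when it is frequent in this post but rare across the corpus, and post-to-post similarity is then a cosine between these weight vectors. The exact weighting and normalization used by this pipeline are not stated, so the following is only a minimal stdlib sketch of the generic tf × log(N/df) variant with cosine similarity over sparse vectors (the corpus here is a hypothetical toy example):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Sparse tf-idf vectors for tokenized documents.
    tf = raw term count in the document; idf = log(N / df),
    where df counts the documents containing the term."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [
    "papers citations forgetting papers".split(),
    "papers conference review".split(),
    "cooking recipes dinner".split(),
]
vecs = tfidf_vectors(docs)
print(round(cosine(vecs[0], vecs[0]), 4))  # → 1.0 (identical documents)
```

Terms appearing in every document get idf = log(1) = 0, which is why only discriminative words like those in the list above carry weight.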

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999976 233 hunch net-2007-02-16-The Forgetting


2 0.25182289 30 hunch net-2005-02-25-Why Papers?

Introduction: Makc asked a good question in comments—”Why bother to make a paper, at all?” There are several reasons for writing papers which may not be immediately obvious to people not in academia. The basic idea is that papers have considerably more utility than the obvious “present an idea”. Papers are formalized units of work. Academics (especially young ones) are often judged on the number of papers they produce. Papers have a formalized method of citing and crediting others—the bibliography. Academics (especially older ones) are often judged on the number of citations they receive. Papers enable a “more fair” anonymous review. Conferences receive many papers, from which a subset are selected. Discussion forums are inherently not anonymous for anyone who wants to build a reputation for good work. Papers are an excuse to meet your friends. Papers are the content of conferences, but much of what you do is talk to friends about interesting problems while there. Sometimes yo

3 0.14952493 116 hunch net-2005-09-30-Research in conferences

Introduction: Conferences exist as part of the process of doing research. They provide many roles including “announcing research”, “meeting people”, and “point of reference”. Not all conferences are alike so a basic question is: “to what extent do individual conferences attempt to aid research?” This question is very difficult to answer in any satisfying way. What we can do is compare details of the process across multiple conferences. Comments The average quality of comments across conferences can vary dramatically. At one extreme, the tradition in CS theory conferences is to provide essentially zero feedback. At the other extreme, some conferences have a strong tradition of providing detailed constructive feedback. Detailed feedback can give authors significant guidance about how to improve research. This is the most subjective entry. Blind Virtually all conferences offer single blind review where authors do not know reviewers. Some also provide double blind review where rev

4 0.139098 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to fulfill well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences had precommitted their locations. For the future, getting a colocation with ACL or SIGI

5 0.13801695 315 hunch net-2008-09-03-Bidding Problems

Introduction: One way that many conferences in machine learning assign reviewers to papers is via bidding, which has steps something like: Invite people to review Accept papers Reviewers look at title and abstract and state the papers they are interested in reviewing. Some massaging happens, but reviewers often get approximately the papers they bid for. At the ICML business meeting, Andrew McCallum suggested getting rid of bidding for papers. A couple reasons were given: Privacy The title and abstract of the entire set of papers is visible to every participating reviewer. Some authors might be uncomfortable about this for submitted papers. I’m not sympathetic to this reason: the point of submitting a paper to review is to publish it, so the value (if any) of not publishing a part of it a little bit earlier seems limited. Cliques A bidding system is gameable. If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers a

6 0.13781886 98 hunch net-2005-07-27-Not goal metrics

7 0.13488646 325 hunch net-2008-11-10-ICML Reviewing Criteria

8 0.13421443 318 hunch net-2008-09-26-The SODA Program Committee

9 0.12898287 454 hunch net-2012-01-30-ICML Posters and Scope

10 0.12892784 52 hunch net-2005-04-04-Grounds for Rejection

11 0.12400945 343 hunch net-2009-02-18-Decision by Vetocracy

12 0.12255856 134 hunch net-2005-12-01-The Webscience Future

13 0.1220498 40 hunch net-2005-03-13-Avoiding Bad Reviewing

14 0.1194403 452 hunch net-2012-01-04-Why ICML? and the summer conferences

15 0.11923906 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

16 0.11779077 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

17 0.11718131 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

18 0.11700009 288 hunch net-2008-02-10-Complexity Illness

19 0.11176561 225 hunch net-2007-01-02-Retrospective

20 0.11089239 51 hunch net-2005-04-01-The Producer-Consumer Model of Research


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.244), (1, -0.13), (2, 0.11), (3, 0.099), (4, 0.011), (5, 0.018), (6, 0.017), (7, -0.006), (8, 0.041), (9, 0.021), (10, 0.027), (11, 0.038), (12, -0.043), (13, -0.029), (14, 0.051), (15, -0.028), (16, -0.005), (17, 0.084), (18, 0.049), (19, 0.002), (20, 0.049), (21, 0.002), (22, -0.028), (23, -0.031), (24, 0.029), (25, -0.043), (26, -0.039), (27, 0.024), (28, -0.084), (29, -0.133), (30, -0.007), (31, 0.086), (32, 0.041), (33, -0.051), (34, 0.103), (35, -0.07), (36, 0.024), (37, 0.059), (38, 0.074), (39, 0.105), (40, 0.009), (41, 0.065), (42, 0.037), (43, 0.033), (44, -0.035), (45, 0.058), (46, -0.046), (47, -0.043), (48, 0.073), (49, 0.006)]
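Each post is represented above as a dense vector of LSI topic weights, and the simValue column in the list below is presumably a cosine similarity between such vectors (this pipeline does not state its similarity measure, so that is an assumption). A minimal sketch, using the first four topic weights from the list above and a hypothetical second vector for comparison:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense topic-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# First four LSI topic weights of this post, copied from the list above.
this_post = [0.244, -0.13, 0.11, 0.099]
identical = list(this_post)          # the "same-blog" entry
unrelated = [0.0, 0.3, -0.2, 0.05]   # hypothetical other post

print(round(cosine(this_post, identical), 4))  # → 1.0
```

This explains why the same-blog entry scores ≈1.0 (a vector compared with itself) while genuinely different posts score strictly below it.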

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98663634 233 hunch net-2007-02-16-The Forgetting


2 0.92534566 30 hunch net-2005-02-25-Why Papers?

Introduction: Makc asked a good question in comments—”Why bother to make a paper, at all?” There are several reasons for writing papers which may not be immediately obvious to people not in academia. The basic idea is that papers have considerably more utility than the obvious “present an idea”. Papers are formalized units of work. Academics (especially young ones) are often judged on the number of papers they produce. Papers have a formalized method of citing and crediting others—the bibliography. Academics (especially older ones) are often judged on the number of citations they receive. Papers enable a “more fair” anonymous review. Conferences receive many papers, from which a subset are selected. Discussion forums are inherently not anonymous for anyone who wants to build a reputation for good work. Papers are an excuse to meet your friends. Papers are the content of conferences, but much of what you do is talk to friends about interesting problems while there. Sometimes yo

3 0.77628547 98 hunch net-2005-07-27-Not goal metrics

Introduction: One of the confusing things about research is that progress is very hard to measure. One of the consequences of being in a hard-to-measure environment is that the wrong things are often measured. Lines of Code The classical example of this phenomenon is the old lines-of-code-produced metric for programming. It is easy to imagine systems for producing many lines of code with very little work that accomplish very little. Paper count In academia, a “paper count” is an analog of “lines of code”, and it suffers from the same failure modes. The obvious failure mode here is that we end up with a large number of uninteresting papers since people end up spending a lot of time optimizing this metric. Complexity Another metric, is “complexity” (in the eye of a reviewer) of a paper. There is a common temptation to make a method appear more complex than it is in order for reviewers to judge it worthy of publication. The failure mode here is unclean thinking. Simple effective m

4 0.76865423 288 hunch net-2008-02-10-Complexity Illness

Introduction: One of the enduring stereotypes of academia is that people spend a great deal of intelligence, time, and effort finding complexity rather than simplicity. This is at least anecdotally true in my experience. Math++ Several people have found that adding useless math makes their paper more publishable as evidenced by a reject-add-accept sequence. 8 page minimum Who submitted a paper to ICML violating the 8 page minimum? Every author fears that the reviewers won’t take their work seriously unless the allowed length is fully used. The best minimum violation I know is Adam’s paper at SODA on generating random factored numbers, but this is deeply exceptional. It’s a fair bet that 90% of papers submitted are exactly at the page limit. We could imagine that this is because papers naturally take more space, but few people seem to be clamoring for more space. Journalong Has anyone been asked to review a 100 page journal paper? I have. Journal papers can be nice, becaus

5 0.72551852 52 hunch net-2005-04-04-Grounds for Rejection

Introduction: It’s reviewing season right now, so I thought I would list (at a high level) the sorts of problems which I see in papers. Hopefully, this will help us all write better papers. The following flaws are fatal to any paper: Incorrect theorem or lemma statements A typo might be “ok”, if it can be understood. Any theorem or lemma which indicates an incorrect understanding of reality must be rejected. Not doing so would severely harm the integrity of the conference. A paper rejected for this reason must be fixed. Lack of Understanding If a paper is understood by none of the (typically 3) reviewers then it must be rejected for the same reason. This is more controversial than it sounds because there are some people who maximize paper complexity in the hope of impressing the reviewer. The tactic sometimes succeeds with some reviewers (but not with me). As a reviewer, I sometimes get lost for stupid reasons. This is why an anonymized communication channel with the author can

6 0.69281816 1 hunch net-2005-01-19-Why I decided to run a weblog.

7 0.68336713 325 hunch net-2008-11-10-ICML Reviewing Criteria

8 0.67885894 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

9 0.67500758 318 hunch net-2008-09-26-The SODA Program Committee

10 0.6594336 134 hunch net-2005-12-01-The Webscience Future

11 0.65003258 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

12 0.64831597 146 hunch net-2006-01-06-MLTV

13 0.62853581 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

14 0.6259703 315 hunch net-2008-09-03-Bidding Problems

15 0.61398315 363 hunch net-2009-07-09-The Machine Learning Forum

16 0.61222929 304 hunch net-2008-06-27-Reviewing Horror Stories

17 0.60825741 116 hunch net-2005-09-30-Research in conferences

18 0.59840983 454 hunch net-2012-01-30-ICML Posters and Scope

19 0.59201825 208 hunch net-2006-09-18-What is missing for online collaborative research?

20 0.59166509 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(10, 0.011), (27, 0.166), (38, 0.427), (42, 0.017), (53, 0.072), (55, 0.079), (94, 0.095), (95, 0.055)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97714496 125 hunch net-2005-10-20-Machine Learning in the News

Introduction: The New York Times had a short interview about machine learning in datamining being used pervasively by the IRS and large corporations to predict who to audit and who to target for various marketing campaigns. This is a big application area of machine learning. It can be harmful (learning + databases = another way to invade privacy) or beneficial (as google demonstrates, better targeting of marketing campaigns is far less annoying). This is yet more evidence that we can not rely upon “I’m just another fish in the school” logic for our expectations about treatment by government and large corporations.

2 0.96384609 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS

Introduction: Manik and I are organizing the extreme classification workshop at NIPS this year. We have a number of good speakers lined up, but I would further encourage anyone working in the area to submit an abstract by October 9. I believe this is an idea whose time has now come. The NIPS website doesn’t have other workshops listed yet, but I expect several others to be of significant interest.

3 0.95757675 339 hunch net-2009-01-27-Key Scientific Challenges

Introduction: Yahoo released the Key Scientific Challenges program. There is a Machine Learning list I worked on and a Statistics list which Deepak worked on. I’m hoping this is taken quite seriously by graduate students. The primary value is that it gave us a chance to sit down and publicly specify directions of research which would be valuable to make progress on. A good strategy for a beginning graduate student is to pick one of these directions, pursue it, and make substantial advances for a PhD. The directions are sufficiently general that I’m sure any serious advance has applications well beyond Yahoo. A secondary point (which I’m sure is primary for many) is that there is money for graduate students here. It’s unrestricted, so you can use it for any reasonable travel, supplies, etc…

4 0.95742357 181 hunch net-2006-05-23-What is the best regret transform reduction from multiclass to binary?

Introduction: This post is about an open problem in learning reductions. Background A reduction might transform a multiclass prediction problem where there are k possible labels into a binary learning problem where there are only 2 possible labels. On this induced binary problem we might learn a binary classifier with some error rate e. After subtracting the minimum possible (Bayes) error rate b, we get a regret r = e − b. The PECOC (Probabilistic Error Correcting Output Code) reduction has the property that binary regret r implies multiclass regret at most 4r^0.5. The problem This is not the “rightest” answer. Consider the k=2 case, where we reduce binary to binary. There exists a reduction (the identity) with the property that regret r implies regret r. This is substantially superior to the transform given by the PECOC reduction, which suggests that a better reduction may exist for general k. For example, we cannot rule out the possibility that a reduction

5 0.95489156 83 hunch net-2005-06-18-Lower Bounds for Learning Reductions

Introduction: Learning reductions transform a solver of one type of learning problem into a solver of another type of learning problem. When we analyze these for robustness we can make statements of the form “Reduction R has the property that regret r (or loss) on subproblems of type A implies regret at most f(r) on the original problem of type B”. A lower bound for a learning reduction would have the form “for all reductions R, there exists a learning problem of type B and a learning algorithm for problems of type A where regret r on induced problems implies at least regret f(r) for B”. The pursuit of lower bounds is often questionable because, unlike upper bounds, they do not yield practical algorithms. Nevertheless, they may be helpful as a tool for thinking about what is learnable and how learnable it is. This has already come up here and here. At the moment, there is no coherent theory of lower bounds for learning reductions, and we have little understa

6 0.9294771 170 hunch net-2006-04-06-Bounds greater than 1

same-blog 7 0.91323537 233 hunch net-2007-02-16-The Forgetting

8 0.88883072 353 hunch net-2009-05-08-Computability in Artificial Intelligence

9 0.84548718 236 hunch net-2007-03-15-Alternative Machine Learning Reductions Definitions

10 0.80417651 251 hunch net-2007-06-24-Interesting Papers at ICML 2007

11 0.721605 72 hunch net-2005-05-16-Regret minimizing vs error limiting reductions

12 0.70783371 26 hunch net-2005-02-21-Problem: Cross Validation

13 0.69216603 19 hunch net-2005-02-14-Clever Methods of Overfitting

14 0.68449485 239 hunch net-2007-04-18-$50K Spock Challenge

15 0.68435884 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?

16 0.67816275 82 hunch net-2005-06-17-Reopening RL->Classification

17 0.67228973 131 hunch net-2005-11-16-The Everything Ensemble Edge

18 0.65703595 162 hunch net-2006-03-09-Use of Notation

19 0.63617831 306 hunch net-2008-07-02-Proprietary Data in Academic Research?

20 0.63512504 439 hunch net-2011-08-01-Interesting papers at COLT 2011