hunch_net hunch_net-2008 hunch_net-2008-333 knowledge-graph by maker-knowledge-mining

333 hunch net-2008-12-27-Adversarial Academia


meta info for this blog

Source: html

Introduction: One viewpoint on academia is that it is inherently adversarial: there are finite research dollars, positions, and students to work with, implying a zero-sum game between different participants. This is not a viewpoint that I want to promote, as I consider it flawed. However, I know several people believe strongly in this viewpoint, and I have found it to have substantial explanatory power. For example: It explains why your paper was rejected based on poor logic. The reviewer wasn’t concerned with research quality, but rather with rejecting a competitor. It explains why professors rarely work together. The goal of a non-tenured professor (at least) is to get tenure, and a case for tenure comes from a portfolio of work that is indisputably yours. It explains why new research programs are not quickly adopted. Adopting a competitor’s program is impossible if your career is based on the competitor being wrong. Different academic groups subscribe to the adversarial viewpoint in different degrees.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 One viewpoint on academia is that it is inherently adversarial: there are finite research dollars, positions, and students to work with, implying a zero-sum game between different participants. [sent-1, score-0.457]

2 This is not a viewpoint that I want to promote, as I consider it flawed. [sent-2, score-0.326]

3 However, I know several people believe strongly in this viewpoint, and I have found it to have substantial explanatory power. [sent-3, score-0.298]

4 For example: It explains why your paper was rejected based on poor logic. [sent-4, score-0.394]

5 The reviewer wasn’t concerned with research quality, but rather with rejecting a competitor. [sent-5, score-0.412]

6 It explains why new research programs are not quickly adopted. [sent-8, score-0.303]

7 Adopting a competitor’s program is impossible if your career is based on the competitor being wrong. [sent-9, score-0.272]

8 Different academic groups subscribe to the adversarial viewpoint in different degrees. [sent-10, score-0.638]

9 There are substantial flaws in the adversarial viewpoint. [sent-17, score-0.39]

10 Contorting your viewpoint enough to make this true damages your ability to conduct research. [sent-20, score-0.483]

11 The previous two disadvantages apply even more strongly for a community—good ideas are more likely to be missed, change comes slowly, and often with steps backward. [sent-24, score-0.263]

12 Despite these disadvantages, there is a substantial advantage as well: you can materially protect and aid your career by rejecting papers, preventing grants, and generally discriminating against key people doing interesting but competitive work. [sent-27, score-0.539]

13 The adversarial viewpoint has a validity in proportion to the number of people subscribing to it. [sent-28, score-0.776]

14 For those of us who would like to deemphasize the adversarial viewpoint, what’s unclear is: how? [sent-29, score-0.397]

15 Arxiv functions as a universal timestamp which decreases the power of an adversarial reviewer. [sent-32, score-0.469]

16 In my experience as an author, if an anonymous reviewer wants to kill a paper they usually succeed. [sent-37, score-0.525]

17 Most area chairs or program chairs are more interested in avoiding conflict with the reviewer (who they picked and may consider a friend) than reading the paper to determine the illogic of the review (which is a difficult task that simply cannot be done for all papers). [sent-38, score-0.507]

18 NIPS experimented with a reputation system for reviewers last year, but I’m unclear on how well it worked, as an author’s score for a review and a reviewer’s score for the paper may be deeply correlated, revealing little additional information. [sent-39, score-0.601]

19 Public discussion of research can help with this, because very poor logic simply doesn’t stand up under public scrutiny. [sent-40, score-0.378]

20 While I hope to nudge people in this direction, it’s clear that most people aren’t yet comfortable with public discussion. [sent-41, score-0.316]
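
These scores come from a tfidf model, though the page does not document the exact scoring rule. Below is a minimal sketch of tfidf-based sentence scoring, assuming each sentence is ranked by the mean tfidf weight of its terms; the toy sentences and the averaging convention are assumptions, not the pipeline actually used.

```python
# Minimal sketch of tfidf-based extractive summarization. Assumption: each
# sentence is scored by the mean tfidf weight of its terms; the page does not
# document the actual scoring rule or preprocessing.
from sklearn.feature_extraction.text import TfidfVectorizer

def score_sentences(sentences):
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    totals = tfidf.sum(axis=1).A1                    # summed tfidf per sentence
    counts = (tfidf > 0).sum(axis=1).A1.clip(min=1)  # terms per sentence
    return sorted(enumerate(totals / counts, start=1), key=lambda p: -p[1])

sentences = [
    "One viewpoint on academia is that it is inherently adversarial.",
    "This is not a viewpoint that I want to promote, as I consider it flawed.",
    "It explains why your paper was rejected based on poor logic.",
]
for idx, score in score_sentences(sentences):
    print(f"[sent-{idx}, score-{score:.3f}]")
```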


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('viewpoint', 0.326), ('adversarial', 0.312), ('arxiv', 0.195), ('disadvantages', 0.186), ('explains', 0.172), ('reviewer', 0.151), ('competitor', 0.148), ('experience', 0.147), ('poor', 0.134), ('research', 0.131), ('rejecting', 0.13), ('career', 0.124), ('nips', 0.12), ('tenure', 0.119), ('public', 0.113), ('score', 0.098), ('review', 0.098), ('paper', 0.088), ('unclear', 0.085), ('chairs', 0.085), ('enough', 0.083), ('power', 0.083), ('substantial', 0.078), ('strongly', 0.077), ('monotonically', 0.074), ('conduct', 0.074), ('adopting', 0.074), ('expecting', 0.074), ('imls', 0.074), ('crippling', 0.074), ('kill', 0.074), ('adversarially', 0.074), ('decreases', 0.074), ('explanatory', 0.074), ('icml', 0.069), ('people', 0.069), ('proportion', 0.069), ('inherited', 0.069), ('mccallum', 0.069), ('promote', 0.069), ('preventing', 0.069), ('revealing', 0.069), ('protect', 0.069), ('author', 0.066), ('anonymous', 0.065), ('reputation', 0.065), ('activities', 0.065), ('dollars', 0.065), ('comfortable', 0.065), ('promotes', 0.065)]
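
The (wordName, wordTfidf) pairs above and the simValue rankings below can both be reproduced in outline with a tfidf vectorizer plus cosine similarity. A hedged sketch follows, where the posts mapping is a hypothetical stand-in for the real hunch.net corpus and the preprocessing is guessed.

```python
# Sketch of tfidf term weights and document similarity. Assumptions: the
# corpus is the full set of hunch.net posts (stubbed here) and similarity is
# plain cosine over tfidf vectors; neither is documented on this page.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {  # hypothetical {blogId: text} stand-in for the real corpus
    333: "One viewpoint on academia is that it is inherently adversarial.",
    343: "Few would mistake the process of academic paper review for fair.",
    484: "When thinking about how best to review papers, what is good reviewing?",
}
ids = list(posts)
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(posts.values())

# Top-weighted terms for post 333, analogous to the (wordName, wordTfidf) list.
row = matrix[ids.index(333)].toarray().ravel()
terms = vectorizer.get_feature_names_out()
print(sorted(zip(terms, row.round(3)), key=lambda p: -p[1])[:5])

# Cosine similarity of post 333 to every post, analogous to simValue.
sims = cosine_similarity(matrix[ids.index(333)], matrix).ravel()
print(sorted(zip(ids, sims.round(3)), key=lambda p: -p[1]))
```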

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000004 333 hunch net-2008-12-27-Adversarial Academia

2 0.22328602 343 hunch net-2009-02-18-Decision by Vetocracy

Introduction: Few would mistake the process of academic paper review for a fair process, but sometimes the unfairness seems particularly striking. This is most easily seen by comparison. Problem scope: the Banditron handles multiclass problems where only the loss of one choice can be probed, while the Offset Tree handles the strictly greater class of cost-sensitive multiclass problems where only the loss of one choice can be probed. Often generalizations don’t matter; that’s not the case here, since every plausible application I’ve thought of involves loss functions substantially different from 0/1. What’s new: the Banditron paper offers analysis and experiments, while the Offset Tree paper offers the algorithm, analysis, and experiments. As far as I know, the essence of the more general problem was first stated and analyzed with the EXP4 algorithm (page 16) (1998). It’s also the time horizon 1 simplification of the Reinforcement Learning setting for the random trajectory method (page 15) (2002). The Banditron algorithm itself is functionally identi

3 0.19567579 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

4 0.19005296 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [NIPS] Theorem A is uninteresting because Theorem B is uninteresting. [UAI] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on its merits, rather than the merits of another theorem. Learning by memorization requires an expon

5 0.17847928 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

6 0.17316076 315 hunch net-2008-09-03-Bidding Problems

7 0.154569 304 hunch net-2008-06-27-Reviewing Horror Stories

8 0.1539274 38 hunch net-2005-03-09-Bad Reviewing

9 0.15334915 395 hunch net-2010-04-26-Compassionate Reviewing

10 0.14255801 452 hunch net-2012-01-04-Why ICML? and the summer conferences

11 0.14146233 382 hunch net-2009-12-09-Future Publication Models @ NIPS

12 0.14030291 454 hunch net-2012-01-30-ICML Posters and Scope

13 0.13674425 318 hunch net-2008-09-26-The SODA Program Committee

14 0.13587786 6 hunch net-2005-01-27-Learning Complete Problems

15 0.13119128 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

16 0.12893675 207 hunch net-2006-09-12-Incentive Compatible Reviewing

17 0.1281677 134 hunch net-2005-12-01-The Webscience Future

18 0.12799132 39 hunch net-2005-03-10-Breaking Abstractions

19 0.12285724 461 hunch net-2012-04-09-ICML author feedback is open

20 0.11848444 40 hunch net-2005-03-13-Avoiding Bad Reviewing


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.278), (1, -0.154), (2, 0.101), (3, 0.11), (4, -0.011), (5, 0.039), (6, 0.025), (7, 0.049), (8, -0.015), (9, 0.058), (10, 0.069), (11, -0.015), (12, -0.007), (13, -0.018), (14, -0.004), (15, 0.005), (16, -0.029), (17, -0.011), (18, -0.016), (19, -0.036), (20, -0.026), (21, -0.023), (22, -0.01), (23, -0.011), (24, -0.01), (25, -0.0), (26, 0.022), (27, -0.021), (28, 0.06), (29, 0.042), (30, 0.067), (31, -0.007), (32, -0.019), (33, 0.012), (34, -0.062), (35, 0.028), (36, -0.048), (37, -0.002), (38, 0.01), (39, 0.018), (40, 0.019), (41, 0.004), (42, -0.019), (43, -0.042), (44, 0.049), (45, -0.137), (46, 0.008), (47, -0.023), (48, -0.023), (49, 0.083)]
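
The (topicId, topicWeight) pairs above are this post's coordinates in a 50-dimensional latent semantic space, conventionally obtained by truncated SVD of the tfidf matrix. A sketch under that assumption, scaled down to a toy corpus:

```python
# Sketch of LSI topic weights via truncated SVD of a tfidf matrix. The list
# above implies 50 components; this toy corpus only supports 2, so the count
# is scaled down. Corpus and preprocessing are assumptions.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

texts = [
    "one viewpoint on academia is that it is inherently adversarial",
    "few would mistake academic paper review for a fair process",
    "good reviewing needs some conception of general principles",
]
matrix = TfidfVectorizer().fit_transform(texts)
doc_topics = TruncatedSVD(n_components=2).fit_transform(matrix)
print(list(enumerate(doc_topics[0].round(3))))  # (topicId, topicWeight) pairs

# Cosine similarity in the latent space, analogous to simValue below.
print(cosine_similarity(doc_topics[:1], doc_topics).ravel().round(3))
```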

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96759558 333 hunch net-2008-12-27-Adversarial Academia

2 0.81186283 343 hunch net-2009-02-18-Decision by Vetocracy

3 0.80125988 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

Introduction: This post is a (near) transcript of a talk that I gave at the ICML 2013 Workshop on Peer Review and Publishing Models. Although there’s a PDF available on my website, I’ve chosen to post a slightly modified version here as well in order to better facilitate discussion. Disclaimers and Context I want to start with a couple of disclaimers and some context. First, I want to point out that although I’ve read a lot about double-blind review, this isn’t my research area and the research discussed in this post is not my own. As a result, I probably can’t answer super detailed questions about these studies. I also want to note that I’m not opposed to open peer review — I was a free and open source software developer for over ten years and I care a great deal about openness and transparency. Rather, my motivation in writing this post is simply to create awareness of and to initiate discussion about the benefits of double-blind review. Lastly, and most importantly, I think it’s e

4 0.76997542 315 hunch net-2008-09-03-Bidding Problems

Introduction: One way that many conferences in machine learning assign reviewers to papers is via bidding, which has steps something like: invite people to review; accept papers; reviewers look at title and abstract and state the papers they are interested in reviewing; some massaging happens, but reviewers often get approximately the papers they bid for. At the ICML business meeting, Andrew McCallum suggested getting rid of bidding for papers. A couple of reasons were given. Privacy: the title and abstract of the entire set of papers are visible to every participating reviewer. Some authors might be uncomfortable about this for submitted papers. I’m not sympathetic to this reason: the point of submitting a paper to review is to publish it, so the value (if any) of not publishing a part of it a little bit earlier seems limited. Cliques: a bidding system is gameable. If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers a

5 0.76102751 484 hunch net-2013-06-16-Representative Reviewing

6 0.74276423 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

7 0.72240746 38 hunch net-2005-03-09-Bad Reviewing

8 0.72002697 382 hunch net-2009-12-09-Future Publication Models @ NIPS

9 0.71804339 461 hunch net-2012-04-09-ICML author feedback is open

10 0.70936102 437 hunch net-2011-07-10-ICML 2011 and the future

11 0.69960916 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

12 0.68807596 463 hunch net-2012-05-02-ICML: Behind the Scenes

13 0.68643337 318 hunch net-2008-09-26-The SODA Program Committee

14 0.67203021 304 hunch net-2008-06-27-Reviewing Horror Stories

15 0.65284973 454 hunch net-2012-01-30-ICML Posters and Scope

16 0.652089 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept

17 0.64792138 395 hunch net-2010-04-26-Compassionate Reviewing

18 0.64468086 288 hunch net-2008-02-10-Complexity Illness

19 0.64417732 207 hunch net-2006-09-12-Incentive Compatible Reviewing

20 0.6239593 40 hunch net-2005-03-13-Avoiding Bad Reviewing


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.012), (10, 0.023), (27, 0.177), (34, 0.018), (38, 0.043), (42, 0.019), (48, 0.018), (51, 0.019), (53, 0.082), (55, 0.121), (90, 0.237), (94, 0.087), (95, 0.08)]
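
The lda weights above are a per-post distribution over topics, listed sparsely (topics below some threshold are omitted). A sketch of how such weights might be produced, assuming LDA fit on raw term counts and a display threshold; the actual topic count and toolkit are unknown:

```python
# Sketch of LDA topic weights. Assumptions: the model is fit on raw term
# counts (standard for LDA) and weights below a small threshold are dropped,
# which would explain the sparse (topicId, topicWeight) list above.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

texts = [
    "viewpoint academia adversarial research tenure competitor",
    "paper review reviewer reject logic program chairs",
    "conference icml nips workshop papers talks",
]
counts = CountVectorizer().fit_transform(texts)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows are per-post topic mixtures

# Keep only topics above a threshold, matching the sparse display above.
print([(t, round(w, 3)) for t, w in enumerate(doc_topics[0]) if w > 0.01])
```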

similar blogs list:

simIndex simValue blogId blogTitle

1 0.94796616 32 hunch net-2005-02-27-Antilearning: When proximity goes bad

Introduction: Joel Predd mentioned “Antilearning” by Adam Kowalczyk, which is interesting from a foundational intuitions viewpoint. There is a pervasive intuition that “nearby things tend to have the same label”. This intuition is instantiated in SVMs, nearest neighbor classifiers, decision trees, and neural networks. It turns out there are natural problems where this intuition is the opposite of the truth. One natural situation where this occurs is in competition. For example, when Intel fails to meet its earnings estimate, is this evidence that AMD is doing badly also? Or evidence that AMD is doing well? This violation of the proximity intuition means that when the number of examples is small, negating a classifier which attempts to exploit proximity can provide predictive power (thus, the term “antilearning”).

2 0.93465763 144 hunch net-2005-12-28-Yet more nips thoughts

Introduction: I only managed to make it out to the NIPS workshops this year so I’ll give my comments on what I saw there. The Learning and Robotics workshop lives again. I hope it continues and gets more high quality papers in the future. The most interesting talk for me was Larry Jackel’s on the LAGR program (see John’s previous post on said program). I got some ideas as to what progress has been made. Larry really explained the types of benchmarks and the tradeoffs that had to be made to make the goals achievable but challenging. Hal Daume gave a very interesting talk about structured prediction using RL techniques, something near and dear to my own heart. He achieved rather impressive results using only a very greedy search. The non-parametric Bayes workshop was great. I enjoyed the entire morning session I spent there, and particularly (the usually desultory) discussion periods. One interesting topic was the Gibbs/Variational inference divide. I won’t try to summarize espe

same-blog 3 0.89073968 333 hunch net-2008-12-27-Adversarial Academia

4 0.88320136 139 hunch net-2005-12-11-More NIPS Papers

Introduction: Let me add to John’s post with a few of my own favourites from this year’s conference. First, let me say that Sanjoy’s talk, Coarse Sample Complexity Bounds for Active Learning, was also one of my favourites, as was the Forgettron paper. I also really enjoyed the last third of Christos’ talk on the complexity of finding Nash equilibria. And, speaking of tagging, I think the U.Mass Citeseer replacement system Rexa from the demo track is very cool. Finally, let me add my recommendations for specific papers: Z. Ghahramani, K. Heller: Bayesian Sets [no preprint] (A very elegant probabilistic information retrieval style model of which objects are “most like” a given subset of objects.) T. Griffiths, Z. Ghahramani: Infinite Latent Feature Models and the Indian Buffet Process [preprint] (A Dirichlet style prior over infinite binary matrices with beautiful exchangeability properties.) K. Weinberger, J. Blitzer, L. Saul: Distance Metric Lea

5 0.86181295 340 hunch net-2009-01-28-Nielsen’s talk

Introduction: I wanted to point to Michael Nielsen’s talk about blogging science, which I found interesting.

6 0.82496148 323 hunch net-2008-11-04-Rise of the Machines

7 0.77624917 239 hunch net-2007-04-18-$50K Spock Challenge

8 0.71791691 134 hunch net-2005-12-01-The Webscience Future

9 0.71643317 437 hunch net-2011-07-10-ICML 2011 and the future

10 0.71192044 95 hunch net-2005-07-14-What Learning Theory might do

11 0.71053249 466 hunch net-2012-06-05-ICML acceptance statistics

12 0.70998776 464 hunch net-2012-05-03-Microsoft Research, New York City

13 0.70911282 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

14 0.70484769 423 hunch net-2011-02-02-User preferences for search engines

15 0.70484608 343 hunch net-2009-02-18-Decision by Vetocracy

16 0.70379144 22 hunch net-2005-02-18-What it means to do research.

17 0.70293385 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

18 0.70201087 141 hunch net-2005-12-17-Workshops as Franchise Conferences

19 0.70197845 454 hunch net-2012-01-30-ICML Posters and Scope

20 0.70071518 40 hunch net-2005-03-13-Avoiding Bad Reviewing