hunch_net hunch_net-2009 hunch_net-2009-363 knowledge-graph by maker-knowledge-mining

363 hunch net-2009-07-09-The Machine Learning Forum


meta info for this blog

Source: html

Introduction: Dear Fellow Machine Learners, For the past year or so I have become increasingly frustrated with the peer review system in our field. I constantly get asked to review papers in which I have no interest. At the same time, as an action editor in JMLR, I constantly have to harass people to review papers. When I send papers to conferences and to journals I often get rejected with reviews that, at least in my mind, make no sense. Finally, I have a very hard time keeping up with the best new work, because I don’t know where to look for it… I decided to try and do something to improve the situation. I started a new web site, which I decided to call “The machine learning forum”; the URL is http://themachinelearningforum.org. The main idea behind this web site is to remove anonymity from the review process. In this site, all opinions are attributed to the actual person that expressed them. I expect that this will improve the quality of the reviews. An obvious other effect is that there will be fewer negative reviews, weak papers will tend not to get reviewed at all, but then again, is that such a bad thing?


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Dear Fellow Machine Learners, For the past year or so I have become increasingly frustrated with the peer review system in our field. [sent-1, score-0.408]

2 I constantly get asked to review papers in which I have no interest. [sent-2, score-0.512]

3 At the same time, as an action editor in JMLR, I constantly have to harass people to review papers. [sent-3, score-0.399]

4 When I send papers to conferences and to journals I often get rejected with reviews that, at least in my mind, make no sense. [sent-4, score-0.46]

5 Finally, I have a very hard time keeping up with the best new work, because I don’t know where to look for it… I decided to try and do something to improve the situation. [sent-5, score-0.286]

6 I started a new web site, which I decided to call “The machine learning forum”; the URL is http://themachinelearningforum.org [sent-6, score-0.341]

7 The main idea behind this web site is to remove anonymity from the review process. [sent-7, score-0.964]

8 In this site, all opinions are attributed to the actual person that expressed them. [sent-8, score-0.155]

9 I expect that this will improve the quality of the reviews. [sent-9, score-0.095]

10 An obvious other effect is that there will be fewer negative reviews, weak papers will tend not to get reviewed at all, but then again, is that such a bad thing? [sent-10, score-0.324]

11 If you have any interest in this endeavor, please register to the web site and please submit a photo of yourself. [sent-11, score-0.791]

12 Based on the information on your web site I will decide whether to grant you “author” privileges that would allow you to write reviews and overviews. [sent-12, score-0.755]

13 Anybody can submit pointers to publications that they would like somebody to review. [sent-13, score-0.545]

14 Anybody can participate in the discussion forum, which is a fancy message board with threads etc. [sent-14, score-0.342]

15 Right now the main contribution I am looking for are “overviews”. [sent-15, score-0.246]

16 Overviews are pages written by somebody who is an authority in some area (for example, Kamalika Chaudhuri is an authority on mixture models) in which they list the main papers in the area and give a high-level description of how the papers relate. [sent-16, score-1.24]

17 These overviews are intended to serve as an entry point for somebody who wants to learn about that subfield. [sent-17, score-0.926]

18 Overviews *can* reference the work of the author of the overview. [sent-18, score-0.121]

19 This is unlike reviews, in which the reviewer cannot be the author of the reviewed paper. [sent-19, score-0.249]


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('overviews', 0.406), ('somebody', 0.27), ('site', 0.266), ('web', 0.224), ('reviews', 0.198), ('authority', 0.18), ('forum', 0.18), ('anybody', 0.18), ('main', 0.163), ('review', 0.158), ('constantly', 0.158), ('reviewed', 0.128), ('author', 0.121), ('decided', 0.117), ('submit', 0.117), ('papers', 0.116), ('improve', 0.095), ('please', 0.092), ('endeavor', 0.09), ('dear', 0.09), ('increasingly', 0.09), ('chaudhuri', 0.09), ('frustrated', 0.09), ('intended', 0.09), ('kamalika', 0.09), ('serve', 0.09), ('board', 0.083), ('editor', 0.083), ('contribution', 0.083), ('anonymity', 0.083), ('opinions', 0.083), ('get', 0.08), ('message', 0.079), ('pointers', 0.079), ('publications', 0.079), ('fellow', 0.079), ('jmlr', 0.075), ('try', 0.074), ('area', 0.074), ('expressed', 0.072), ('learners', 0.072), ('yoav', 0.072), ('peer', 0.07), ('entry', 0.07), ('behind', 0.07), ('freund', 0.067), ('decide', 0.067), ('http', 0.067), ('mixture', 0.067), ('journals', 0.066)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 363 hunch net-2009-07-09-The Machine Learning Forum

2 0.16752152 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers or areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

3 0.15746367 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a

4 0.14746477 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [ NIPS ] Theorem A is uninteresting because Theorem B is uninteresting. [ UAI ] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on it’s merits, rather than the merits of another theorem. Learning by memorization requires an expon

5 0.13940327 461 hunch net-2012-04-09-ICML author feedback is open

Introduction: as of last night, late. When the reviewing deadline passed Wednesday night 15% of reviews were still missing, much higher than I expected. Between late reviews coming in, ACs working overtime through the weekend, and people willing to help in the pinch another ~390 reviews came in, reducing the missing mass to 0.2%. Nailing that last bit and a similar quantity of papers with uniformly low confidence reviews is what remains to be done in terms of basic reviews. We are trying to make all of those happen this week so authors have some chance to respond. I was surprised by the quantity of late reviews, and I think that’s an area where ICML needs to improve in future years. Good reviews are not done in a rush—they are done by setting aside time (like an afternoon), and carefully reading the paper while thinking about implications. Many reviewers do this well but a significant minority aren’t good at scheduling their personal time. In this situation there are several ways to fail:

6 0.1253214 318 hunch net-2008-09-26-The SODA Program Committee

7 0.12306202 315 hunch net-2008-09-03-Bidding Problems

8 0.11845271 437 hunch net-2011-07-10-ICML 2011 and the future

9 0.11420681 343 hunch net-2009-02-18-Decision by Vetocracy

10 0.10907771 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

11 0.10890584 468 hunch net-2012-06-29-ICML survey and comments

12 0.10743877 466 hunch net-2012-06-05-ICML acceptance statistics

13 0.10704068 207 hunch net-2006-09-12-Incentive Compatible Reviewing

14 0.10555124 65 hunch net-2005-05-02-Reviewing techniques for conferences

15 0.099904001 453 hunch net-2012-01-28-Why COLT?

16 0.097385153 116 hunch net-2005-09-30-Research in conferences

17 0.095515773 233 hunch net-2007-02-16-The Forgetting

18 0.093512468 409 hunch net-2010-09-13-AIStats

19 0.092213169 463 hunch net-2012-05-02-ICML: Behind the Scenes

20 0.090776622 208 hunch net-2006-09-18-What is missing for online collaborative research?


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.158), (1, -0.155), (2, 0.12), (3, 0.065), (4, 0.005), (5, 0.092), (6, -0.007), (7, -0.011), (8, -0.016), (9, -0.042), (10, -0.014), (11, -0.026), (12, 0.029), (13, 0.023), (14, -0.028), (15, -0.004), (16, -0.052), (17, -0.018), (18, 0.007), (19, 0.051), (20, 0.037), (21, -0.015), (22, -0.054), (23, -0.094), (24, -0.03), (25, 0.004), (26, -0.023), (27, 0.023), (28, -0.007), (29, 0.011), (30, 0.023), (31, 0.069), (32, 0.066), (33, -0.073), (34, 0.011), (35, 0.023), (36, -0.063), (37, -0.055), (38, 0.01), (39, -0.03), (40, -0.1), (41, 0.019), (42, 0.033), (43, 0.048), (44, -0.123), (45, 0.014), (46, 0.061), (47, -0.056), (48, 0.09), (49, -0.057)]
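The `simValue` numbers in the lists below are plausibly cosine similarities between per-document topic-weight vectors like the one printed above; that interpretation is an assumption, since the tool's scoring code is not shown. A minimal sketch of cosine similarity over such vectors (the vectors here are shortened and partly made up for illustration):

```python
# Hedged sketch: cosine similarity between two topic-weight vectors,
# assumed to be how simValue is computed. Vectors are illustrative only.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

this_blog = [0.158, -0.155, 0.12, 0.065, 0.005]   # first few lsi weights above
other_blog = [0.12, -0.10, 0.09, 0.02, 0.01]      # hypothetical other post
print(cosine(this_blog, other_blog))
```

A document compared with itself scores 1.0, which is why the `same-blog` entries below sit near the top of each list with values close to 1.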

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97379458 363 hunch net-2009-07-09-The Machine Learning Forum

2 0.66918302 463 hunch net-2012-05-02-ICML: Behind the Scenes

Introduction: This is a rather long post, detailing the ICML 2012 review process. The goal is to make the process more transparent, help authors understand how we came to a decision, and discuss the strengths and weaknesses of this process for future conference organizers. Microsoft’s Conference Management Toolkit (CMT) We chose to use CMT over other conference management software mainly because of its rich toolkit. The interface is sub-optimal (to say the least!) but it has extensive capabilities (to handle bids, author response, resubmissions, etc.), good import/export mechanisms (to process the data elsewhere), excellent technical support (to answer late night emails, add new functionalities). Overall, it was the right choice, although we hope a designer will look at that interface sometime soon! Toronto Matching System (TMS) TMS is now being used by many major conferences in our field (including NIPS and UAI). It is an automated system (developed by Laurent Charlin and Rich Ze

3 0.66570842 315 hunch net-2008-09-03-Bidding Problems

Introduction: One way that many conferences in machine learning assign reviewers to papers is via bidding, which has steps something like: Invite people to review Accept papers Reviewers look at title and abstract and state the papers they are interested in reviewing. Some massaging happens, but reviewers often get approximately the papers they bid for. At the ICML business meeting, Andrew McCallum suggested getting rid of bidding for papers. A couple reasons were given: Privacy The title and abstract of the entire set of papers is visible to every participating reviewer. Some authors might be uncomfortable about this for submitted papers. I’m not sympathetic to this reason: the point of submitting a paper to review is to publish it, so the value (if any) of not publishing a part of it a little bit earlier seems limited. Cliques A bidding system is gameable. If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers a

4 0.63826746 40 hunch net-2005-03-13-Avoiding Bad Reviewing

5 0.62461257 484 hunch net-2013-06-16-Representative Reviewing

6 0.60891008 318 hunch net-2008-09-26-The SODA Program Committee

7 0.60821062 116 hunch net-2005-09-30-Research in conferences

8 0.6076318 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

9 0.60474956 461 hunch net-2012-04-09-ICML author feedback is open

10 0.59991765 207 hunch net-2006-09-12-Incentive Compatible Reviewing

11 0.56151682 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

12 0.55773586 395 hunch net-2010-04-26-Compassionate Reviewing

13 0.54880828 437 hunch net-2011-07-10-ICML 2011 and the future

14 0.54306877 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

15 0.53320229 297 hunch net-2008-04-22-Taking the next step

16 0.52858919 466 hunch net-2012-06-05-ICML acceptance statistics

17 0.52077675 172 hunch net-2006-04-14-JMLR is a success

18 0.50758332 354 hunch net-2009-05-17-Server Update

19 0.50387299 233 hunch net-2007-02-16-The Forgetting

20 0.49634013 468 hunch net-2012-06-29-ICML survey and comments


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(3, 0.039), (10, 0.054), (27, 0.149), (38, 0.011), (42, 0.015), (53, 0.092), (55, 0.115), (93, 0.356), (94, 0.014), (95, 0.063)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.87056565 363 hunch net-2009-07-09-The Machine Learning Forum

2 0.77568549 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

Introduction: I’m releasing version 4.0 ( tarball ) of Vowpal Wabbit . The biggest change (by far) in this release is experimental support for cluster parallelism, with notable help from Daniel Hsu . I also took advantage of the major version number to introduce some incompatible changes, including switching to murmurhash 2 , and other alterations to cachefiles. You’ll need to delete and regenerate them. In addition, the precise specification for a “tag” (i.e. string that can be used to identify an example) changed—you can’t have a space between the tag and the ‘|’ at the beginning of the feature namespace. And, of course, we made it faster. For the future, I put up my todo list outlining the major future improvements I want to see in the code. I’m planning to discuss the current mechanism and results of the cluster parallel implementation at the large scale machine learning workshop at NIPS later this week. Several people have asked me to do a tutorial/walkthrough of VW, wh

3 0.77060211 112 hunch net-2005-09-14-The Predictionist Viewpoint

Introduction: Virtually every discipline of significant human endeavor has a way explaining itself as fundamental and important. In all the cases I know of, they are both right (they are vital) and wrong (they are not solely vital). Politics. This is the one that everyone is familiar with at the moment. “What could be more important than the process of making decisions?” Science and Technology. This is the one that we-the-academics are familiar with. “The loss of modern science and technology would be catastrophic.” Military. “Without the military, a nation will be invaded and destroyed.” (insert your favorite here) Within science and technology, the same thing happens again. Mathematics. “What could be more important than a precise language for establishing truths?” Physics. “Nothing is more fundamental than the laws which govern the universe. Understanding them is the key to understanding everything else.” Biology. “Without life, we wouldn’t be here, so clearly the s

4 0.50521141 454 hunch net-2012-01-30-ICML Posters and Scope

Introduction: Normally, I don’t indulge in posters for ICML , but this year is naturally an exception for me. If you want one, there are a small number left here , if you sign up before February. It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. At ICML, the (very large) program committee does the reviewing which informs final decisions by area chairs on most papers. Program chairs setup the process, deal with exceptions or disagreements, and provide advice for the reviewing process. Providing advice is tricky (and easily misleading) because a conference is a community, and in the end the aggregate interests of the community determine the conference. Nevertheless, as a program chair this year it seems worthwhile to state the overall philosophy I have and what I plan to encourage (and occasionally discourage). At the highest level, I believe ICML exists to further research into machine learning, which I gene

5 0.50507015 484 hunch net-2013-06-16-Representative Reviewing

6 0.50366271 466 hunch net-2012-06-05-ICML acceptance statistics

7 0.50165641 437 hunch net-2011-07-10-ICML 2011 and the future

8 0.49589828 89 hunch net-2005-07-04-The Health of COLT

9 0.49556753 225 hunch net-2007-01-02-Retrospective

10 0.49494785 40 hunch net-2005-03-13-Avoiding Bad Reviewing

11 0.49385342 151 hunch net-2006-01-25-1 year

12 0.49049199 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

13 0.48999354 207 hunch net-2006-09-12-Incentive Compatible Reviewing

14 0.48953342 452 hunch net-2012-01-04-Why ICML? and the summer conferences

15 0.48900691 464 hunch net-2012-05-03-Microsoft Research, New York City

16 0.48871881 134 hunch net-2005-12-01-The Webscience Future

17 0.48667222 116 hunch net-2005-09-30-Research in conferences

18 0.48610744 343 hunch net-2009-02-18-Decision by Vetocracy

19 0.48479399 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

20 0.48478076 461 hunch net-2012-04-09-ICML author feedback is open