hunch_net-2007-238 knowledge-graph by maker-knowledge-mining

238 hunch net-2007-04-13-What to do with an unreasonable conditional accept


meta info for this blog

Source: html

Introduction: Last year about this time, we received a conditional accept for the searn paper, which asked us to reference a paper that was not reasonable to cite because there was strictly more relevant work by the same authors that we already cited. We wrote a response explaining this, and didn’t cite it in the final draft, giving the SPC an excuse to reject the paper, leading to unhappiness for all. Later, Sanjoy Dasgupta suggested that an alternative was to talk to the PC chair instead, as soon as you see that a conditional accept is unreasonable. William Cohen and I spoke about this by email, the relevant bit of which is: If an SPC asks for a revision that is inappropriate, the correct action is to contact the chairs as soon as the decision is made, clearly explaining what the problem is, so we can decide whether or not to over-rule the SPC. As you say, this is extra work for us chairs, but that’s part of the job, and we’re willing to do that sort of work to improve the overall quality of the reviewing process and the conference.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Last year about this time, we received a conditional accept for the searn paper, which asked us to reference a paper that was not reasonable to cite because there was strictly more relevant work by the same authors that we already cited. [sent-1, score-1.759]

2 We wrote a response explaining this, and didn’t cite it in the final draft, giving the SPC an excuse to reject the paper, leading to unhappiness for all. [sent-2, score-1.215]

3 Later, Sanjoy Dasgupta suggested that an alternative was to talk to the PC chair instead, as soon as you see that a conditional accept is unreasonable. [sent-3, score-0.982]

4 As you say, this is extra work for us chairs, but that’s part of the job, and we’re willing to do that sort of work to improve the overall quality of the reviewing process and the conference. [sent-5, score-0.434]

5 At the time, I operated under the belief that the PC chair’s job was simply too heavy to bother with something like this, but that was wrong. [sent-7, score-0.404]

6 William invited me to post this, and I hope we all learn a little bit from it. [sent-8, score-0.156]

7 Obviously, this should only be used if there is a real flaw in the conditions for a conditional accept paper. [sent-9, score-0.652]
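
A minimal sketch of how tfidf-based sentence scoring like the list above could be produced. The exact pipeline behind these scores is not published, so the scoring rule here (sentence score = sum of its terms' tfidf weights) and the sample sentences are assumptions.

```python
# Hypothetical sketch only: assumes "sentence score = sum of its tfidf term weights".
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_sentences(sentences):
    """Return (sentence number, score, text) sorted by summed tfidf weight."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = tfidf.sum(axis=1).A1          # one score per sentence
    order = scores.argsort()[::-1]         # highest score first
    return [(int(i) + 1, float(scores[i]), sentences[i]) for i in order]

sentences = [
    "Last year about this time, we received a conditional accept for the searn paper.",
    "We wrote a response explaining this, and didn't cite it in the final draft.",
    "William invited me to post this, and I hope we all learn a little bit from it.",
]
for sent_num, score, text in rank_sentences(sentences):
    print(f"[sent-{sent_num}, score-{score:.3f}] {text}")
```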


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('cite', 0.253), ('spc', 0.253), ('william', 0.239), ('conditional', 0.231), ('accept', 0.222), ('explaining', 0.193), ('sanjoy', 0.18), ('pc', 0.18), ('soon', 0.18), ('chair', 0.161), ('job', 0.159), ('chairs', 0.156), ('cohen', 0.136), ('unhappiness', 0.136), ('paper', 0.13), ('bother', 0.126), ('excuse', 0.126), ('spoke', 0.126), ('wrote', 0.119), ('strictly', 0.119), ('heavy', 0.119), ('relevant', 0.119), ('received', 0.114), ('asks', 0.109), ('contact', 0.105), ('suggested', 0.105), ('flaw', 0.102), ('decide', 0.102), ('dasgupta', 0.099), ('conditions', 0.097), ('searn', 0.097), ('draft', 0.094), ('leading', 0.094), ('willing', 0.094), ('re', 0.09), ('us', 0.089), ('work', 0.087), ('didn', 0.087), ('reference', 0.087), ('alternative', 0.083), ('reject', 0.083), ('bit', 0.082), ('action', 0.082), ('later', 0.081), ('email', 0.081), ('asked', 0.081), ('response', 0.081), ('obviously', 0.077), ('extra', 0.077), ('invited', 0.074)]
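
The (word, weight) pairs above are per-term tfidf weights for this post. A minimal sketch of one way such a list could be computed, assuming a corpus of all hunch.net post texts; the `corpus` variable and `doc_index` below are illustrative placeholders, not the actual data.

```python
# Hypothetical sketch: fit tfidf over the whole blog corpus, then read off the
# highest-weighted terms for one post. `corpus` and `doc_index` are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer

def top_terms(corpus, doc_index, n=50):
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(corpus)           # rows = posts, cols = vocabulary
    weights = tfidf[doc_index].toarray().ravel()       # this post's tfidf row
    vocab = vectorizer.get_feature_names_out()
    best = weights.argsort()[::-1][:n]
    return [(vocab[i], round(float(weights[i]), 3)) for i in best if weights[i] > 0]

# corpus = [text_of_post_0, text_of_post_1, ...]   # all posts (not included here)
# print(top_terms(corpus, doc_index=237))          # would resemble the list above
```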

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept

Introduction: Last year about this time, we received a conditional accept for the searn paper, which asked us to reference a paper that was not reasonable to cite because there was strictly more relevant work by the same authors that we already cited. We wrote a response explaining this, and didn’t cite it in the final draft, giving the SPC an excuse to reject the paper, leading to unhappiness for all. Later, Sanjoy Dasgupta suggested that an alternative was to talk to the PC chair instead, as soon as you see that a conditional accept is unreasonable. William Cohen and I spoke about this by email, the relevant bit of which is: If an SPC asks for a revision that is inappropriate, the correct action is to contact the chairs as soon as the decision is made, clearly explaining what the problem is, so we can decide whether or not to over-rule the SPC. As you say, this is extra work for us chairs, but that’s part of the job, and we’re willing to do that sort of work to improve the ov

2 0.19490629 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

Introduction: This is a very difficult post to write, because it is about a perennially touchy subject. Nevertheless, it is an important one which needs to be thought about carefully. There are a few things which should be understood: The system is changing and responsive. We-the-authors are we-the-reviewers, we-the-PC, and even we-the-NIPS-board. NIPS has implemented ‘secondary program chairs’, ‘author response’, and ‘double blind reviewing’ in the last few years to help with the decision process, and more changes may happen in the future. Agreement creates a perception of correctness. When any PC meets and makes a group decision about a paper, there is a strong tendency for the reinforcement inherent in a group decision to create the perception of correctness. For the many people who have been on the NIPS PC it’s reasonable to entertain a healthy skepticism in the face of this reinforcing certainty. This post is about structural problems. What problems arise because of the structure

3 0.15815605 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [NIPS] Theorem A is uninteresting because Theorem B is uninteresting. [UAI] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on its merits, rather than the merits of another theorem. Learning by memorization requires an expon

4 0.14564919 177 hunch net-2006-05-05-An ICML reject

Introduction: Hal, Daniel, and I have been working on the algorithm Searn for structured prediction. This was just conditionally accepted and then rejected from ICML, and we were quite surprised. By any reasonable criteria, it seems this is an interesting algorithm. Prediction Performance: Searn performed better than any other algorithm on all the problems we tested against using the same feature set. This is true even using the numbers reported by authors in their papers. Theoretical underpinning. Searn is a reduction which comes with a reduction guarantee: good performance on a base classifier implies good performance for the overall system. No other theorem of this type has been made for other structured prediction algorithms, as far as we know. Speed. Searn has no problem handling much larger datasets than other algorithms we tested against. Simplicity. Given code for a binary classifier and a problem-specific search algorithm, only a few tens of lines are necessary to

5 0.13031702 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

6 0.12679435 318 hunch net-2008-09-26-The SODA Program Committee

7 0.12635177 461 hunch net-2012-04-09-ICML author feedback is open

8 0.12423392 116 hunch net-2005-09-30-Research in conferences

9 0.11215939 466 hunch net-2012-06-05-ICML acceptance statistics

10 0.10983254 437 hunch net-2011-07-10-ICML 2011 and the future

11 0.10875335 304 hunch net-2008-06-27-Reviewing Horror Stories

12 0.10613771 463 hunch net-2012-05-02-ICML: Behind the Scenes

13 0.10335551 454 hunch net-2012-01-30-ICML Posters and Scope

14 0.10015623 343 hunch net-2009-02-18-Decision by Vetocracy

15 0.091989525 468 hunch net-2012-06-29-ICML survey and comments

16 0.083684318 40 hunch net-2005-03-13-Avoiding Bad Reviewing

17 0.081920341 325 hunch net-2008-11-10-ICML Reviewing Criteria

18 0.081574894 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

19 0.077402145 315 hunch net-2008-09-03-Bidding Problems

20 0.073769189 23 hunch net-2005-02-19-Loss Functions for Discriminative Training of Energy-Based Models
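
A minimal sketch of how a similarity list like the one above could be computed, assuming the score is cosine similarity between tfidf vectors; `corpus`, `titles`, and `query_index` are placeholders, not the actual data.

```python
# Hypothetical sketch: cosine similarity between tfidf vectors of posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_posts(corpus, titles, query_index, n=20):
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    sims = cosine_similarity(tfidf[query_index], tfidf).ravel()   # 1 x num_posts
    order = sims.argsort()[::-1][:n + 1]    # the query post itself ranks first (~1.0)
    return [(titles[i], float(sims[i])) for i in order]
```

Under this reading, the "same-blog" row is simply the post matched against itself; the reported self-similarity of 1.0000001 rather than exactly 1.0 is floating-point rounding.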


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.159), (1, -0.101), (2, 0.128), (3, 0.054), (4, 0.032), (5, 0.049), (6, -0.028), (7, 0.016), (8, -0.04), (9, -0.061), (10, 0.048), (11, -0.045), (12, -0.032), (13, 0.008), (14, -0.022), (15, -0.013), (16, -0.039), (17, 0.054), (18, 0.019), (19, -0.004), (20, -0.002), (21, 0.03), (22, 0.024), (23, -0.048), (24, -0.055), (25, 0.029), (26, -0.013), (27, -0.056), (28, 0.141), (29, -0.025), (30, 0.009), (31, -0.055), (32, -0.048), (33, 0.017), (34, 0.072), (35, 0.066), (36, -0.01), (37, 0.063), (38, -0.013), (39, 0.034), (40, 0.017), (41, 0.018), (42, -0.033), (43, -0.002), (44, 0.017), (45, -0.049), (46, -0.045), (47, 0.064), (48, 0.025), (49, 0.043)]
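
The 50 (topicId, topicWeight) pairs above are this post's coordinates in a latent semantic space. A minimal sketch of an LSI-style embedding via truncated SVD of the tfidf matrix; the component count, corpus, and index are assumptions.

```python
# Hypothetical sketch: LSI as truncated SVD of the tfidf matrix. Each post gets
# a 50-dimensional vector; similarity lists would then come from cosine
# similarity between these vectors. `corpus` is an assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def lsi_embeddings(corpus, n_topics=50):
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    svd = TruncatedSVD(n_components=n_topics, random_state=0)
    return svd.fit_transform(tfidf)         # shape: (num_posts, n_topics)

# embeddings = lsi_embeddings(corpus)
# print(embeddings[237].round(3))           # plays the role of the list above
```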

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97845292 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept

Introduction: Last year about this time, we received a conditional accept for the searn paper, which asked us to reference a paper that was not reasonable to cite because there was strictly more relevant work by the same authors that we already cited. We wrote a response explaining this, and didn’t cite it in the final draft, giving the SPC an excuse to reject the paper, leading to unhappiness for all. Later, Sanjoy Dasgupta suggested that an alternative was to talk to the PC chair instead, as soon as you see that a conditional accept is unreasonable. William Cohen and I spoke about this by email, the relevant bit of which is: If an SPC asks for a revision that is inappropriate, the correct action is to contact the chairs as soon as the decision is made, clearly explaining what the problem is, so we can decide whether or not to over-rule the SPC. As you say, this is extra work for us chairs, but that’s part of the job, and we’re willing to do that sort of work to improve the ov

2 0.73542428 304 hunch net-2008-06-27-Reviewing Horror Stories

Introduction: Essentially everyone who writes research papers suffers rejections. They always sting immediately, but upon further reflection many of these rejections come to seem reasonable. Maybe the equations had too many typos or maybe the topic just isn’t as important as was originally thought. A few rejections do not come to seem acceptable, and these form the basis of reviewing horror stories, great material for conversations. I’ve decided to share three of mine, now all safely a bit distant in the past. Prediction Theory for Classification Tutorial. This is a tutorial about tight sample complexity bounds for classification that I submitted to JMLR. The first decision I heard was a reject which appeared quite unjust to me—for example one of the reviewers appeared to claim that all the content was in standard statistics books. Upon further inquiry, several citations were given, none of which actually covered the content. Later, I was shocked to hear the paper was accepted. App

3 0.73262006 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

Introduction: This is a very difficult post to write, because it is about a perennially touchy subject. Nevertheless, it is an important one which needs to be thought about carefully. There are a few things which should be understood: The system is changing and responsive. We-the-authors are we-the-reviewers, we-the-PC, and even we-the-NIPS-board. NIPS has implemented ‘secondary program chairs’, ‘author response’, and ‘double blind reviewing’ in the last few years to help with the decision process, and more changes may happen in the future. Agreement creates a perception of correctness. When any PC meets and makes a group decision about a paper, there is a strong tendency for the reinforcement inherent in a group decision to create the perception of correctness. For the many people who have been on the NIPS PC it’s reasonable to entertain a healthy skepticism in the face of this reinforcing certainty. This post is about structural problems. What problems arise because of the structure

4 0.7295711 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [NIPS] Theorem A is uninteresting because Theorem B is uninteresting. [UAI] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on its merits, rather than the merits of another theorem. Learning by memorization requires an expon

5 0.72630334 463 hunch net-2012-05-02-ICML: Behind the Scenes

Introduction: This is a rather long post, detailing the ICML 2012 review process. The goal is to make the process more transparent, help authors understand how we came to a decision, and discuss the strengths and weaknesses of this process for future conference organizers. Microsoft’s Conference Management Toolkit (CMT) We chose to use CMT over other conference management software mainly because of its rich toolkit. The interface is sub-optimal (to say the least!) but it has extensive capabilities (to handle bids, author response, resubmissions, etc.), good import/export mechanisms (to process the data elsewhere), excellent technical support (to answer late night emails, add new functionalities). Overall, it was the right choice, although we hope a designer will look at that interface sometime soon! Toronto Matching System (TMS) TMS is now being used by many major conferences in our field (including NIPS and UAI). It is an automated system (developed by Laurent Charlin and Rich Ze

6 0.7069217 38 hunch net-2005-03-09-Bad Reviewing

7 0.66509587 318 hunch net-2008-09-26-The SODA Program Committee

8 0.63822794 461 hunch net-2012-04-09-ICML author feedback is open

9 0.63631612 315 hunch net-2008-09-03-Bidding Problems

10 0.63128942 484 hunch net-2013-06-16-Representative Reviewing

11 0.608302 52 hunch net-2005-04-04-Grounds for Rejection

12 0.59974939 333 hunch net-2008-12-27-Adversarial Academia

13 0.58796632 343 hunch net-2009-02-18-Decision by Vetocracy

14 0.53459668 180 hunch net-2006-05-21-NIPS paper evaluation criteria

15 0.52594334 437 hunch net-2011-07-10-ICML 2011 and the future

16 0.52229118 468 hunch net-2012-06-29-ICML survey and comments

17 0.51663798 466 hunch net-2012-06-05-ICML acceptance statistics

18 0.51315808 98 hunch net-2005-07-27-Not goal metrics

19 0.49880448 207 hunch net-2006-09-12-Incentive Compatible Reviewing

20 0.49611694 177 hunch net-2006-05-05-An ICML reject


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(10, 0.011), (27, 0.165), (53, 0.089), (55, 0.129), (92, 0.372), (94, 0.08), (95, 0.053)]
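
The sparse (topicId, topicWeight) list above is this post's LDA topic distribution: most topics carry negligible weight, so only a handful appear. A minimal sketch, assuming an LDA model fit on raw term counts; the topic count, threshold, corpus, and index are assumptions.

```python
# Hypothetical sketch: LDA operates on term counts rather than tfidf. Each row
# of `theta` is a probability distribution over topics; keeping only entries
# above a threshold yields a sparse list like the one shown above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def lda_topic_weights(corpus, n_topics=100):
    counts = CountVectorizer(stop_words="english").fit_transform(corpus)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    return lda.fit_transform(counts)        # rows sum to 1

# theta = lda_topic_weights(corpus)
# print([(t, round(w, 3)) for t, w in enumerate(theta[237]) if w > 0.01])
```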

similar blogs list:

simIndex simValue blogId blogTitle

1 0.93153107 272 hunch net-2007-11-14-BellKor wins Netflix

Introduction: … but only the little prize. The BellKor team focused on integrating predictions from many different methods. The base methods consist of: Nearest Neighbor Methods Matrix Factorization Methods (asymmetric and symmetric) Linear Regression on various feature spaces Restricted Boltzmann Machines The final predictor was an ensemble (as was reasonable to expect), although it’s a little bit more complicated than just a weighted average—it’s essentially a customized learning algorithm. Base approaches (1)-(3) seem like relatively well-known approaches (although I haven’t seen the asymmetric factorization variant before). RBMs are the new approach. The writeup is pretty clear for more details. The contestants are close to reaching the big prize, but the last 1.5% is probably at least as hard as what’s been done. A few new structurally different methods for making predictions may need to be discovered and added into the mixture. In other words, research may be require

2 0.9231773 362 hunch net-2009-06-26-Netflix nearly done

Introduction: A $1M qualifying result was achieved on the public Netflix test set by a 3-way ensemble team. This is just in time for Yehuda’s presentation at KDD, which I’m sure will be one of the best attended ever. This isn’t quite over—there are a few days for another super-conglomerate team to come together and there is some small chance that the performance is nonrepresentative of the final test set, but I expect not. Regardless of the final outcome, the biggest lesson for ML from the Netflix contest has been the formidable performance edge of ensemble methods.

same-blog 3 0.90043366 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept

Introduction: Last year about this time, we received a conditional accept for the searn paper, which asked us to reference a paper that was not reasonable to cite because there was strictly more relevant work by the same authors that we already cited. We wrote a response explaining this, and didn’t cite it in the final draft, giving the SPC an excuse to reject the paper, leading to unhappiness for all. Later, Sanjoy Dasgupta suggested that an alternative was to talk to the PC chair instead, as soon as you see that a conditional accept is unreasonable. William Cohen and I spoke about this by email, the relevant bit of which is: If an SPC asks for a revision that is inappropriate, the correct action is to contact the chairs as soon as the decision is made, clearly explaining what the problem is, so we can decide whether or not to over-rule the SPC. As you say, this is extra work for us chairs, but that’s part of the job, and we’re willing to do that sort of work to improve the ov

4 0.83025873 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It has been a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard to match up for other similar events (see our WIKI for more information). With this lineup, it is difficult for us as organizers to screw it up too bad. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoy this two-week long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

5 0.63141257 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility, that we hope to accomplish well. A part of this (perhaps the most fun part), is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

6 0.5901463 293 hunch net-2008-03-23-Interactive Machine Learning

7 0.5722965 463 hunch net-2012-05-02-ICML: Behind the Scenes

8 0.56944728 461 hunch net-2012-04-09-ICML author feedback is open

9 0.56881332 141 hunch net-2005-12-17-Workshops as Franchise Conferences

10 0.56045103 75 hunch net-2005-05-28-Running A Machine Learning Summer School

11 0.54582036 207 hunch net-2006-09-12-Incentive Compatible Reviewing

12 0.53928888 452 hunch net-2012-01-04-Why ICML? and the summer conferences

13 0.53535068 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

14 0.52777714 80 hunch net-2005-06-10-Workshops are not Conferences

15 0.52264237 370 hunch net-2009-09-18-Necessary and Sufficient Research

16 0.52123886 116 hunch net-2005-09-30-Research in conferences

17 0.5190599 410 hunch net-2010-09-17-New York Area Machine Learning Events

18 0.51551366 371 hunch net-2009-09-21-Netflix finishes (and starts)

19 0.51544052 453 hunch net-2012-01-28-Why COLT?

20 0.51476967 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?