hunch_net hunch_net-2006 hunch_net-2006-172 knowledge-graph by maker-knowledge-mining

172 hunch net-2006-04-14-JMLR is a success


meta info for this blog

Source: html

Introduction: In 2001, the “Journal of Machine Learning Research” was created in reaction to unadaptive publisher policies at MLJ. Essentially, with the creation of the internet, the bottleneck in publishing research shifted from publishing to research. The declaration of independence accompanying this move expresses the reasons why in greater detail. MLJ has strongly changed its policy in reaction to this. In particular, there is no longer an assignment of copyright to the publisher (*), and MLJ regularly sponsors many student “best paper awards” across several conferences with cash prizes. This is an advantage of MLJ over JMLR: MLJ can afford to sponsor cash prizes for the machine learning community. The remaining disadvantage is that reading papers in MLJ sometimes requires searching for the author’s website where the free version is available. In contrast, JMLR articles are freely available to everyone off the JMLR website. Whether or not this disadvantage cancels the advantage is debatable, but essentially no one working on machine learning argues with the following: the changes brought by the creation of JMLR have been positive for the general machine learning community.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In 2001, the “Journal of Machine Learning Research” was created in reaction to unadaptive publisher policies at MLJ. [sent-1, score-0.438]

2 Essentially, with the creation of the internet, the bottleneck in publishing research shifted from publishing to research. [sent-2, score-0.542]

3 The declaration of independence accompanying this move expresses the reasons why in greater detail. [sent-3, score-0.307]

4 MLJ has strongly changed its policy in reaction to this. [sent-4, score-0.186]

5 In particular, there is no longer an assignment of copyright to the publisher (*), and MLJ regularly sponsors many student “best paper awards” across several conferences with cash prizes. [sent-5, score-0.673]

6 This is an advantage of MLJ over JMLR: MLJ can afford to sponsor cash prizes for the machine learning community. [sent-6, score-0.5]

7 The remaining disadvantage is that reading papers in MLJ sometimes requires searching for the author’s website where the free version is available. [sent-7, score-0.387]

8 In contrast, JMLR articles are freely available to everyone off the JMLR website. [sent-8, score-0.159]

9 Whether or not this disadvantage cancels the advantage is debatable, but essentially no one working on machine learning argues with the following: the changes brought by the creation of JMLR have been positive for the general machine learning community. [sent-9, score-0.551]

10 This model can and should be emulated in other areas of research where publishers are not behaving in a sufficiently constructive manner. [sent-10, score-0.222]

11 Doing so requires two vital ingredients: a consensus of leaders to support a new journal and the willingness to spend the time and effort setting it up. [sent-11, score-0.535]

12 Presumably, some lessons on how to do this have been learned by the editors of JMLR and they are willing to share them. [sent-12, score-0.238]

13 (*) Back in the day, it was typical to be forced to sign over all rights to your journal paper, then ignore this and place it on your homepage. [sent-13, score-0.476]

14 The natural act of placing your paper on your webpage is no longer illegal. [sent-14, score-0.392]
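For readers curious how such a summary is produced, here is a minimal sketch of tfidf sentence scoring, assuming scikit-learn; the scoring rule (sum of tfidf weights per sentence) and the toy sentences are illustrative assumptions, not the exact pipeline behind the scores above.

```python
# Minimal sketch of tfidf-based extractive summarization (assumes scikit-learn).
# The scoring rule and the toy sentences are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_sentences(sentences, top_k=3):
    """Return the top_k sentences ranked by total tfidf weight."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = tfidf.sum(axis=1).A1          # one total weight per sentence
    order = scores.argsort()[::-1][:top_k]
    return [(round(float(scores[i]), 3), sentences[i]) for i in order]

sentences = [
    "In 2001, the Journal of Machine Learning Research was created in reaction to publisher policies at MLJ.",
    "Essentially, with the creation of the internet, the bottleneck in publishing research shifted from publishing to research.",
    "The natural act of placing your paper on your webpage is no longer illegal.",
]
for score, sentence in rank_sentences(sentences, top_k=2):
    print(score, sentence)
```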


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('mlj', 0.559), ('jmlr', 0.389), ('journal', 0.198), ('reaction', 0.186), ('publisher', 0.186), ('creation', 0.163), ('cash', 0.155), ('disadvantage', 0.136), ('publishing', 0.114), ('longer', 0.114), ('advantage', 0.095), ('requires', 0.095), ('expresses', 0.093), ('behaving', 0.093), ('editors', 0.093), ('accompanying', 0.086), ('brought', 0.086), ('leaders', 0.086), ('rights', 0.086), ('afford', 0.086), ('sponsor', 0.086), ('regularly', 0.081), ('articles', 0.081), ('ingredients', 0.081), ('bottleneck', 0.081), ('awards', 0.081), ('consensus', 0.081), ('lessons', 0.081), ('searching', 0.078), ('remaining', 0.078), ('prizes', 0.078), ('freely', 0.078), ('vital', 0.075), ('placing', 0.075), ('essentially', 0.071), ('shifted', 0.07), ('assignment', 0.07), ('act', 0.068), ('presumably', 0.068), ('webpage', 0.068), ('paper', 0.067), ('constructive', 0.066), ('policies', 0.066), ('ignore', 0.064), ('forced', 0.064), ('independence', 0.064), ('willing', 0.064), ('move', 0.064), ('sign', 0.064), ('sufficiently', 0.063)]
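The word-weight list above can be reproduced in spirit with a sketch like the following, assuming scikit-learn; the three-post corpus is a stand-in for the full hunch.net archive, so the weights will not match the table.

```python
# Minimal sketch of the "wordName wordTfidf" list: extract the highest-weight
# tfidf terms for one post (assumes scikit-learn; toy corpus only).
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "JMLR was created in reaction to publisher policies at MLJ ...",   # post 172
    "ICML 2011, program chairs, colocation with other conferences ...",
    "The Turing Award has a cash prize; machine learning has no equivalent ...",
]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

doc = 0                                            # index of the post of interest
weights = tfidf[doc].toarray().ravel()
terms = vectorizer.get_feature_names_out()
top = sorted(zip(terms, weights), key=lambda t: t[1], reverse=True)[:10]
print([(term, round(w, 3)) for term, w in top if w > 0])
```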

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999976 172 hunch net-2006-04-14-JMLR is a success


2 0.11815628 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

3 0.080775939 270 hunch net-2007-11-02-The Machine Learning Award goes to …

Introduction: Perhaps the biggest CS prize for research is the Turing Award, which has a $0.25M cash prize associated with it. It appears none of the prizes so far have been for anything like machine learning (the closest are perhaps database awards). In CS theory, there is the Gödel Prize, which is smaller and newer, offering a $5K prize along with (perhaps more importantly) recognition. One such award has been given for Machine Learning, to Robert Schapire and Yoav Freund for Adaboost. In Machine Learning, there seems to be no equivalent of these sorts of prizes. There are several plausible reasons for this: There is no coherent community. People drift in and out of the central conferences all the time. Most of the author names from 10 years ago do not occur in the conferences of today. In addition, the entire subject area is fairly new. There are at least a core group of people who have stayed around. Machine Learning work doesn’t last Almost every paper is fo

4 0.078303352 13 hunch net-2005-02-04-JMLG

Introduction: The Journal of Machine Learning Gossip has some fine satire about learning research. In particular, the guides are amusing and remarkably true. As in all things, it’s easy to criticize the way things are and harder to make them better.

5 0.072279662 304 hunch net-2008-06-27-Reviewing Horror Stories

Introduction: Essentially everyone who writes research papers suffers rejections. They always sting immediately, but upon further reflection many of these rejections come to seem reasonable. Maybe the equations had too many typos or maybe the topic just isn’t as important as was originally thought. A few rejections do not come to seem acceptable, and these form the basis of reviewing horror stories, great material for conversations. I’ve decided to share three of mine, now all safely a bit distant in the past. Prediction Theory for Classification Tutorial. This is a tutorial about tight sample complexity bounds for classification that I submitted to JMLR. The first decision I heard was a reject which appeared quite unjust to me—for example one of the reviewers appeared to claim that all the content was in standard statistics books. Upon further inquiry, several citations were given, none of which actually covered the content. Later, I was shocked to hear the paper was accepted. App

6 0.072048739 427 hunch net-2011-03-20-KDD Cup 2011

7 0.070720099 382 hunch net-2009-12-09-Future Publication Models @ NIPS

8 0.067578554 288 hunch net-2008-02-10-Complexity Illness

9 0.066089392 86 hunch net-2005-06-28-The cross validation problem: cash reward

10 0.06363868 146 hunch net-2006-01-06-MLTV

11 0.063013762 38 hunch net-2005-03-09-Bad Reviewing

12 0.062344197 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

13 0.052768275 343 hunch net-2009-02-18-Decision by Vetocracy

14 0.052351311 194 hunch net-2006-07-11-New Models

15 0.051887043 388 hunch net-2010-01-24-Specializations of the Master Problem

16 0.049893826 395 hunch net-2010-04-26-Compassionate Reviewing

17 0.049440842 378 hunch net-2009-11-15-The Other Online Learning

18 0.047482625 406 hunch net-2010-08-22-KDD 2010

19 0.045764089 344 hunch net-2009-02-22-Effective Research Funding

20 0.045691825 318 hunch net-2008-09-26-The SODA Program Committee
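The ranking above comes from comparing tfidf vectors of posts. A minimal sketch, assuming scikit-learn and cosine similarity, is below; the toy corpus means the simValue numbers will not be reproduced.

```python
# Minimal sketch of the "similar blogs" ranking: represent each post as a
# tfidf vector and sort other posts by cosine similarity to the current one.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "172 JMLR is a success": "JMLR was created in reaction to publisher policies at MLJ ...",
    "437 ICML 2011 and the future": "ICML program chairs, colocation, conferences ...",
    "270 The Machine Learning Award goes to": "Turing Award, cash prize, Goedel Prize ...",
}
titles = list(posts)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts.values())
sims = cosine_similarity(tfidf[0], tfidf).ravel()   # similarity of post 172 to every post
for title, sim in sorted(zip(titles, sims), key=lambda t: t[1], reverse=True):
    print(f"{sim:.3f}  {title}")
```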


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.108), (1, -0.05), (2, 0.0), (3, 0.046), (4, -0.028), (5, 0.002), (6, 0.014), (7, 0.002), (8, -0.025), (9, 0.023), (10, 0.017), (11, 0.016), (12, 0.017), (13, 0.016), (14, 0.012), (15, 0.009), (16, 0.018), (17, 0.019), (18, 0.022), (19, -0.003), (20, 0.002), (21, 0.012), (22, 0.006), (23, -0.032), (24, 0.012), (25, 0.058), (26, 0.008), (27, -0.001), (28, -0.035), (29, 0.009), (30, 0.006), (31, 0.02), (32, 0.022), (33, -0.025), (34, 0.027), (35, 0.037), (36, -0.032), (37, 0.016), (38, 0.051), (39, -0.013), (40, -0.057), (41, -0.032), (42, -0.026), (43, 0.046), (44, 0.029), (45, -0.057), (46, 0.052), (47, -0.088), (48, 0.057), (49, 0.007)]
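A minimal sketch of how the topic weights above and the similarity list that follows could be computed with an LSI-style model, assuming scikit-learn: project tfidf vectors onto latent topics with truncated SVD and compare posts in that topic space. The component count and corpus are placeholders.

```python
# Minimal sketch of the lsi model (assumes scikit-learn; placeholder corpus,
# so the topicWeight vector and simValues above will not be reproduced).
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "JMLR journal publisher copyright machine learning research ...",
    "videolectures lectures slides video paper ...",
    "NIPS publication models reviewing proposals ...",
    "machine learning forum peer review anonymity ...",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0)   # 2 topics only because the corpus is tiny
topic_weights = lsi.fit_transform(tfidf)             # one row of topic weights per post
sims = cosine_similarity(topic_weights[:1], topic_weights).ravel()
print([round(float(s), 3) for s in sims])            # post 0 vs. every post, in LSI space
```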

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.90348613 172 hunch net-2006-04-14-JMLR is a success


2 0.56170368 240 hunch net-2007-04-21-Videolectures.net

Introduction: Davor has been working to set up videolectures.net, which is the new site for the many lectures mentioned here. (Tragically, they seem to only be available in windows media format.) I went through my own projects and added a few links to the videos. The day when every result is a set of {paper, slides, video} isn’t quite here yet, but it’s within sight. (For many papers, of course, code is a 4th component.)

3 0.54763806 382 hunch net-2009-12-09-Future Publication Models @ NIPS

Introduction: Yesterday, there was a discussion about future publication models at NIPS. Yann and Zoubin have specific detailed proposals which I’ll add links to when I get them (Yann’s proposal and Zoubin’s proposal). What struck me about the discussion is that there are many simultaneous concerns as well as many simultaneous proposals, which makes it difficult to keep all the distinctions straight in a verbal conversation. It also seemed like people were serious enough about this that we may see some real movement. Certainly, my personal experience motivates that as I’ve posted many times about the substantial flaws in our review process, including some very poor personal experiences. Concerns include the following: (Several) Reviewers are overloaded, boosting the noise in decision making. (Yann) A new system should run with as little built-in delay and friction to the process of research as possible. (Hanna Wallach (updated)) Double-blind review is particularly impor

4 0.52006835 363 hunch net-2009-07-09-The Machine Learning Forum

Introduction: Dear Fellow Machine Learners, For the past year or so I have become increasingly frustrated with the peer review system in our field. I constantly get asked to review papers in which I have no interest. At the same time, as an action editor in JMLR, I constantly have to harass people to review papers. When I send papers to conferences and to journals I often get rejected with reviews that, at least in my mind, make no sense. Finally, I have a very hard time keeping up with the best new work, because I don’t know where to look for it… I decided to try and do something to improve the situation. I started a new web site, which I decided to call “The machine learning forum”; the URL is http://themachinelearningforum.org . The main idea behind this web site is to remove anonymity from the review process. In this site, all opinions are attributed to the actual person that expressed them. I expect that this will improve the quality of the reviews. An obvious other effect is that there wil

5 0.51504976 270 hunch net-2007-11-02-The Machine Learning Award goes to …

Introduction: Perhaps the biggest CS prize for research is the Turing Award, which has a $0.25M cash prize associated with it. It appears none of the prizes so far have been for anything like machine learning (the closest are perhaps database awards). In CS theory, there is the Gödel Prize, which is smaller and newer, offering a $5K prize along with (perhaps more importantly) recognition. One such award has been given for Machine Learning, to Robert Schapire and Yoav Freund for Adaboost. In Machine Learning, there seems to be no equivalent of these sorts of prizes. There are several plausible reasons for this: There is no coherent community. People drift in and out of the central conferences all the time. Most of the author names from 10 years ago do not occur in the conferences of today. In addition, the entire subject area is fairly new. There are at least a core group of people who have stayed around. Machine Learning work doesn’t last Almost every paper is fo

6 0.51478589 333 hunch net-2008-12-27-Adversarial Academia

7 0.49417096 146 hunch net-2006-01-06-MLTV

8 0.48873192 208 hunch net-2006-09-18-What is missing for online collaborative research?

9 0.48035187 51 hunch net-2005-04-01-The Producer-Consumer Model of Research

10 0.47654617 1 hunch net-2005-01-19-Why I decided to run a weblog.

11 0.46073031 437 hunch net-2011-07-10-ICML 2011 and the future

12 0.4536103 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

13 0.44605318 483 hunch net-2013-06-10-The Large Scale Learning class notes

14 0.44303918 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling

15 0.43906194 343 hunch net-2009-02-18-Decision by Vetocracy

16 0.43706819 304 hunch net-2008-06-27-Reviewing Horror Stories

17 0.4361783 414 hunch net-2010-10-17-Partha Niyogi has died

18 0.43428561 107 hunch net-2005-09-05-Site Update

19 0.43402442 463 hunch net-2012-05-02-ICML: Behind the Scenes

20 0.43034658 269 hunch net-2007-10-24-Contextual Bandits


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.724), (55, 0.054), (80, 0.04), (94, 0.022), (95, 0.042)]
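A minimal sketch of how such (topicId, topicWeight) pairs could be produced, assuming scikit-learn: fit LDA on word counts and read off the per-post topic distribution. The topic count and corpus below are placeholders, so the numbers above will not be reproduced.

```python
# Minimal sketch of the lda model (assumes scikit-learn; placeholder corpus).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "JMLR journal publisher copyright machine learning community ...",
    "classification regression loss functions active learning ...",
    "ICML COLT NIPS reviewing conferences papers ...",
]
counts = CountVectorizer(stop_words="english").fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(counts)               # rows sum to 1: topic weights per post
weights = [(i, round(float(w), 3)) for i, w in enumerate(doc_topics[0]) if w > 0.01]
print(weights)                                       # the dominant entry plays the role of topic 27 above
```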

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99940085 172 hunch net-2006-04-14-JMLR is a success


2 0.99713224 247 hunch net-2007-06-14-Interesting Papers at COLT 2007

Introduction: Here are two papers that seem particularly interesting at this year’s COLT. Gilles Blanchard and François Fleuret, Occam’s Hammer. When we are interested in very tight bounds on the true error rate of a classifier, it is tempting to use a PAC-Bayes bound which can (empirically) be quite tight. A disadvantage of the PAC-Bayes bound is that it applies to a classifier which is randomized over a set of base classifiers rather than a single classifier. This paper shows that a similar bound can be proved which holds for a single classifier drawn from the set. The ability to safely use a single classifier is very nice. This technique applies generically to any base bound, so it has other applications covered in the paper. Adam Tauman Kalai. Learning Nested Halfspaces and Uphill Decision Trees. Classification PAC-learning, where you prove that any problem amongst some set is polytime learnable with respect to any distribution over the input X is extraordinarily ch

3 0.99562299 274 hunch net-2007-11-28-Computational Consequences of Classification

Introduction: In the regression vs classification debate, I’m adding a new “pro” to classification. It seems there are computational shortcuts available for classification which simply aren’t available for regression. This arises in several situations. In active learning it is sometimes possible to find an ε-error classifier with just log(1/ε) labeled samples. Only much more modest improvements appear to be achievable for squared loss regression. The essential reason is that the loss function on many examples is flat with respect to large variations in the parameter space of a learned classifier, which implies that many of these classifiers do not need to be considered. In contrast, for squared loss regression, most substantial variations in the parameter space influence the loss at most points. In budgeted learning, where there is either a computational time constraint or a feature cost constraint, a classifier can sometimes be learned to very high accuracy under the constraints

4 0.99511331 288 hunch net-2008-02-10-Complexity Illness

Introduction: One of the enduring stereotypes of academia is that people spend a great deal of intelligence, time, and effort finding complexity rather than simplicity. This is at least anecdotally true in my experience. Math++: Several people have found that adding useless math makes their paper more publishable as evidenced by a reject-add-accept sequence. 8 page minimum: Who submitted a paper to ICML violating the 8 page minimum? Every author fears that the reviewers won’t take their work seriously unless the allowed length is fully used. The best minimum violation I know is Adam’s paper at SODA on generating random factored numbers, but this is deeply exceptional. It’s a fair bet that 90% of papers submitted are exactly at the page limit. We could imagine that this is because papers naturally take more space, but few people seem to be clamoring for more space. Journalong: Has anyone been asked to review a 100 page journal paper? I have. Journal papers can be nice, becaus

5 0.9938103 308 hunch net-2008-07-06-To Dual or Not

Introduction: Yoram and Shai’s online learning tutorial at ICML brings up a question for me, “Why use the dual?” The basic setting is learning a weight vector w_i so that the function f(x) = sum_i w_i x_i optimizes some convex loss function. The functional view of the dual is that instead of (or in addition to) keeping track of w_i over the feature space, you keep track of a vector a_j over the examples and define w_i = sum_j a_j x_ji. The above view of duality makes operating in the dual appear unnecessary, because in the end a weight vector is always used. The tutorial suggests that thinking about the dual gives a unified algorithmic font for deriving online learning algorithms. I haven’t worked with the dual representation much myself, but I have seen a few examples where it appears helpful. Noise: When doing online optimization (i.e. online learning where you are allowed to look at individual examples multiple times), the dual representation may be helpfu

6 0.99361807 166 hunch net-2006-03-24-NLPers

7 0.99361807 246 hunch net-2007-06-13-Not Posting

8 0.99361807 418 hunch net-2010-12-02-Traffic Prediction Problem

9 0.99288279 400 hunch net-2010-06-13-The Good News on Exploration and Learning

10 0.99206793 245 hunch net-2007-05-12-Loss Function Semantics

11 0.99090606 45 hunch net-2005-03-22-Active learning

12 0.97953171 9 hunch net-2005-02-01-Watchword: Loss

13 0.97865605 352 hunch net-2009-05-06-Machine Learning to AI

14 0.97700256 341 hunch net-2009-02-04-Optimal Proxy Loss for Classification

15 0.97664309 304 hunch net-2008-06-27-Reviewing Horror Stories

16 0.96813178 196 hunch net-2006-07-13-Regression vs. Classification as a Primitive

17 0.95502895 483 hunch net-2013-06-10-The Large Scale Learning class notes

18 0.9510603 293 hunch net-2008-03-23-Interactive Machine Learning

19 0.94897586 244 hunch net-2007-05-09-The Missing Bound

20 0.94043607 8 hunch net-2005-02-01-NIPS: Online Bayes