hunch_net hunch_net-2010 hunch_net-2010-401 knowledge-graph by maker-knowledge-mining

401 hunch net-2010-06-20-2010 ICML discussion site


meta info for this blog

Source: html

Introduction: A substantial difficulty with the 2009 and 2008 ICML discussion system was a communication vacuum, where authors were not informed of comments, and commenters were not informed of responses to their comments without explicit monitoring. Mark Reid has set up a new discussion system for 2010 with the goal of addressing this. Mark didn’t want to make it too intrusive, so you must opt in. As an author, find your paper and “Subscribe by email” to the comments. As a commenter, you have the option of providing an email for follow-up notification.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 A substantial difficulty with the 2009 and 2008 ICML discussion system was a communication vacuum, where authors were not informed of comments, and commenters were not informed of responses to their comments without explicit monitoring. [sent-1, score-2.638]

2 Mark Reid has set up a new discussion system for 2010 with the goal of addressing this. [sent-2, score-0.842]

3 Mark didn’t want to make it too intrusive, so you must opt in. [sent-3, score-0.209]

4 As an author, find your paper and “Subscribe by email” to the comments. [sent-4, score-0.152]

5 As a commenter, you have the option of providing an email for follow-up notification. [sent-5, score-0.637]


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('informed', 0.394), ('mark', 0.352), ('email', 0.302), ('comments', 0.266), ('commenters', 0.236), ('subscribe', 0.223), ('responses', 0.223), ('discussion', 0.203), ('reid', 0.197), ('option', 0.191), ('addressing', 0.181), ('communication', 0.176), ('system', 0.165), ('didn', 0.162), ('explicit', 0.148), ('providing', 0.144), ('setup', 0.142), ('authors', 0.127), ('difficulty', 0.116), ('author', 0.114), ('goal', 0.106), ('without', 0.1), ('find', 0.091), ('substantial', 0.09), ('must', 0.079), ('icml', 0.079), ('want', 0.074), ('paper', 0.061), ('make', 0.056), ('new', 0.045)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 401 hunch net-2010-06-20-2010 ICML discussion site

Introduction: A substantial difficulty with the 2009 and 2008 ICML discussion system was a communication vacuum, where authors were not informed of comments, and commenters were not informed of responses to their comments without explicit monitoring. Mark Reid has set up a new discussion system for 2010 with the goal of addressing this. Mark didn’t want to make it too intrusive, so you must opt in. As an author, find your paper and “Subscribe by email” to the comments. As a commenter, you have the option of providing an email for follow-up notification.

2 0.28168201 254 hunch net-2007-07-12-ICML Trends

Introduction: Mark Reid did a post on ICML trends that I found interesting.

3 0.23840877 305 hunch net-2008-06-30-ICML has a comment system

Introduction: Mark Reid has stepped up and created a comment system for ICML papers which Greger Linden has tightly integrated. My understanding is that Mark spent quite a bit of time on the details, and there are some cool features like a working LaTeX math mode. This is an excellent chance for the ICML community to experiment with making ICML year-round, so I hope it works out. Please do consider experimenting with it.

4 0.18833491 356 hunch net-2009-05-24-2009 ICML discussion site

Introduction: Mark Reid has set up a discussion site for ICML papers again this year, and Monica Dinculescu has linked it in from the ICML site. Last year’s attempt appears to have been an acceptable but not wild success, as a little bit of fruitful discussion occurred. I’m hoping this year will be a bit more of a success—please don’t be shy. I’d also like to point out that ICML’s early registration deadline has a few hours left, while UAI’s and COLT’s are in a week.

5 0.18121721 468 hunch net-2012-06-29-ICML survey and comments

Introduction: Just about nothing could keep me from attending ICML, except for Dora, who arrived on Monday. Consequently, I have only secondhand reports that the conference is going well. For those who are remote (like me) or after the conference (like everyone), Mark Reid has set up the ICML discussion site where you can comment on any paper or subscribe to papers. Authors are automatically subscribed to their own papers, so it should be possible to have a discussion significantly after the fact, as people desire. We also conducted a survey before the conference and have the survey results now. This can be compared with the ICML 2010 survey results. Looking at the comparable questions, we can sometimes order the answers to have scores ranging from 0 to 3 or 0 to 4, with 3 or 4 being best and 0 worst, then compute the average difference between 2012 and 2010. Glancing through them, I see: Most people found the papers they reviewed a good fit for their expertise (-.037 w.r.t 20

6 0.15044774 367 hunch net-2009-08-16-Centmail comments

7 0.12103103 403 hunch net-2010-07-18-ICML & COLT 2010

8 0.11789598 122 hunch net-2005-10-13-Site tweak

9 0.10496303 454 hunch net-2012-01-30-ICML Posters and Scope

10 0.10272529 354 hunch net-2009-05-17-Server Update

11 0.097279526 65 hunch net-2005-05-02-Reviewing techniques for conferences

12 0.096715234 195 hunch net-2006-07-12-Who is having visa problems reaching US conferences?

13 0.091322109 116 hunch net-2005-09-30-Research in conferences

14 0.09007898 265 hunch net-2007-10-14-NIPS workshp: Learning Problem Design

15 0.08496958 25 hunch net-2005-02-20-At One Month

16 0.084388286 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

17 0.080435961 461 hunch net-2012-04-09-ICML author feedback is open

18 0.080189392 297 hunch net-2008-04-22-Taking the next step

19 0.080011442 117 hunch net-2005-10-03-Not ICML

20 0.073941208 40 hunch net-2005-03-13-Avoiding Bad Reviewing


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.11), (1, -0.113), (2, 0.085), (3, 0.039), (4, 0.021), (5, -0.002), (6, -0.018), (7, -0.072), (8, -0.044), (9, -0.002), (10, -0.054), (11, -0.037), (12, -0.199), (13, 0.121), (14, 0.173), (15, 0.002), (16, -0.273), (17, -0.179), (18, -0.023), (19, 0.128), (20, -0.005), (21, -0.096), (22, -0.102), (23, 0.036), (24, -0.042), (25, -0.082), (26, 0.071), (27, -0.101), (28, 0.085), (29, 0.023), (30, -0.058), (31, -0.007), (32, 0.02), (33, 0.066), (34, -0.005), (35, 0.051), (36, -0.004), (37, -0.097), (38, 0.022), (39, -0.015), (40, 0.072), (41, -0.043), (42, -0.037), (43, 0.001), (44, 0.099), (45, -0.014), (46, -0.004), (47, 0.068), (48, 0.074), (49, -0.039)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99211001 401 hunch net-2010-06-20-2010 ICML discussion site

Introduction: A substantial difficulty with the 2009 and 2008 ICML discussion system was a communication vacuum, where authors were not informed of comments, and commenters were not informed of responses to their comments without explicit monitoring. Mark Reid has set up a new discussion system for 2010 with the goal of addressing this. Mark didn’t want to make it too intrusive, so you must opt in. As an author, find your paper and “Subscribe by email” to the comments. As a commenter, you have the option of providing an email for follow-up notification.

2 0.87026197 254 hunch net-2007-07-12-ICML Trends

Introduction: Mark Reid did a post on ICML trends that I found interesting.

3 0.83848649 305 hunch net-2008-06-30-ICML has a comment system

Introduction: Mark Reid has stepped up and created a comment system for ICML papers which Greger Linden has tightly integrated. My understanding is that Mark spent quite a bit of time on the details, and there are some cool features like a working LaTeX math mode. This is an excellent chance for the ICML community to experiment with making ICML year-round, so I hope it works out. Please do consider experimenting with it.

4 0.71965218 356 hunch net-2009-05-24-2009 ICML discussion site

Introduction: Mark Reid has set up a discussion site for ICML papers again this year, and Monica Dinculescu has linked it in from the ICML site. Last year’s attempt appears to have been an acceptable but not wild success, as a little bit of fruitful discussion occurred. I’m hoping this year will be a bit more of a success—please don’t be shy. I’d also like to point out that ICML’s early registration deadline has a few hours left, while UAI’s and COLT’s are in a week.

5 0.68645245 117 hunch net-2005-10-03-Not ICML

Introduction: Alex Smola showed me this ICML 2006 webpage. This is NOT the ICML we know, but rather some people at “Enformatika”. Investigation shows that they registered with an anonymous Yahoo email account from dotregistrar.com, the “Home of the $6.79 wholesale domain!”, and their nameservers are by Turkticaret, a Turkish internet company. It appears the website has since been altered to “ICNL” (the above link uses the Google cache). They say that imitation is the sincerest form of flattery, so the organizers of the real ICML 2006 must feel quite flattered.

6 0.5829252 468 hunch net-2012-06-29-ICML survey and comments

7 0.45860365 367 hunch net-2009-08-16-Centmail comments

8 0.45732489 354 hunch net-2009-05-17-Server Update

9 0.44074258 122 hunch net-2005-10-13-Site tweak

10 0.39594448 246 hunch net-2007-06-13-Not Posting

11 0.3863188 297 hunch net-2008-04-22-Taking the next step

12 0.36119244 223 hunch net-2006-12-06-The Spam Problem

13 0.36055171 37 hunch net-2005-03-08-Fast Physics for Learning

14 0.35605898 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

15 0.35413468 403 hunch net-2010-07-18-ICML & COLT 2010

16 0.33906785 364 hunch net-2009-07-11-Interesting papers at KDD

17 0.33610108 278 hunch net-2007-12-17-New Machine Learning mailing list

18 0.33029938 65 hunch net-2005-05-02-Reviewing techniques for conferences

19 0.32324412 452 hunch net-2012-01-04-Why ICML? and the summer conferences

20 0.32239637 107 hunch net-2005-09-05-Site Update


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(3, 0.601), (27, 0.104), (55, 0.129)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94273984 401 hunch net-2010-06-20-2010 ICML discussion site

Introduction: A substantial difficulty with the 2009 and 2008 ICML discussion system was a communication vacuum, where authors were not informed of comments, and commenters were not informed of responses to their comments without explicit monitoring. Mark Reid has set up a new discussion system for 2010 with the goal of addressing this. Mark didn’t want to make it too intrusive, so you must opt in. As an author, find your paper and “Subscribe by email” to the comments. As a commenter, you have the option of providing an email for follow-up notification.

2 0.86094207 243 hunch net-2007-05-08-Conditional Tournaments for Multiclass to Binary

Introduction: This problem has been cracked (but not quite completely solved) by Alina, Pradeep, and me. The problem is essentially finding a better way to reduce multiclass classification to binary classification. The solution is to use a carefully crafted tournament, the simplest version of which is a single-elimination tournament where the “players” are the different classes. An example of the structure is here: For the single-elimination tournament, we can prove that: For all multiclass problems D, for all learned binary classifiers c, the regret of an induced multiclass classifier is bounded by the regret of the binary classifier times log_2 k. Restated: reg_multiclass(D, Filter_tree_test(c)) <= reg_binary(Filter_tree_train(D), c) * log_2 k. Here: Filter_tree_train(D) is the induced binary classification problem, Filter_tree_test(c) is the induced multiclass classifier, and reg_multiclass is the multiclass regret (= difference between error rate and minim
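The test-time side of the single-elimination reduction described above can be sketched briefly. This is a minimal illustration, not the post's actual filter tree code: `binary_predict(a, b)` stands in for a hypothetical learned pairwise classifier that returns the winning class.

```python
def filter_tree_predict(classes, binary_predict):
    """Single-elimination tournament over k classes.

    binary_predict(a, b) -> a or b is a hypothetical learned pairwise
    classifier. The last surviving "player" is the multiclass prediction;
    the number of rounds is at most ceil(log2 k), matching the log_2 k
    factor in the regret bound quoted above.
    """
    players = list(classes)
    rounds = 0
    while len(players) > 1:
        winners = []
        for i in range(0, len(players) - 1, 2):
            winners.append(binary_predict(players[i], players[i + 1]))
        if len(players) % 2 == 1:
            winners.append(players[-1])  # odd class out gets a bye
        players = winners
        rounds += 1
    return players[0], rounds
```

With a perfect pairwise comparator (here simply `max` over integer labels), eight classes resolve in log2(8) = 3 rounds.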

3 0.76234168 298 hunch net-2008-04-26-Eliminating the Birthday Paradox for Universal Features

Introduction: I want to expand on this post, which describes one of the core tricks for making Vowpal Wabbit fast and easy to use when learning from text. The central trick is converting a word (or any other parseable quantity) into a number via a hash function. Kishore tells me this is a relatively old trick in NLP land, but it has some added advantages when doing online learning, because you can learn directly from the existing data without preprocessing the data to create features (destroying the online property) or using an expensive hashtable lookup (slowing things down). A central concern for this approach is collisions, which create a loss of information. If you use m features in an index space of size n, the birthday paradox suggests a collision if m > n^0.5, essentially because there are m^2 pairs. This is pretty bad, because it says that with a vocabulary of 10^5 features, you might need to have 10^10 entries in your table. It turns out that redundancy is gr
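A rough sketch of the hashing trick and the birthday-paradox estimate discussed above. Function names are illustrative, and md5 is used only for a stable, reproducible hash; Vowpal Wabbit itself uses a much faster hash (murmurhash):

```python
import hashlib

def hash_feature(word, n):
    # Map a token directly to an index in [0, n) -- no dictionary, no
    # preprocessing pass, so the online property is preserved.
    return int(hashlib.md5(word.encode()).hexdigest(), 16) % n

def hashed_features(tokens, n):
    # Build a sparse count vector of dimension n from raw tokens.
    vec = {}
    for t in tokens:
        i = hash_feature(t, n)
        vec[i] = vec.get(i, 0) + 1
    return vec

def expected_collision_pairs(m, n):
    # Birthday-paradox estimate: m features hashed into n slots give
    # m*(m-1)/2 pairs, each colliding with probability 1/n, so roughly
    # m^2/(2n) colliding pairs -- noticeable once m exceeds n^0.5.
    return m * (m - 1) / (2 * n)
```

For the numbers in the text, 10^5 features in a 10^10-slot table give about half an expected colliding pair, which is why the table looks like it must be so large before redundancy arguments come to the rescue.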

4 0.66473085 213 hunch net-2006-10-08-Incompatibilities between classical confidence intervals and learning.

Introduction: Classical confidence intervals satisfy a theorem of the form: For some data sources D, Pr_{S ~ D}(f(D) > g(S)) > 1-d, where f is some function of the distribution (such as the mean) and g is some function of the observed sample S. The constraints on D can vary between “Independent and identically distributed (IID) samples from a Gaussian with an unknown mean” and “IID samples from an arbitrary distribution D”. There are even some confidence intervals which do not require IID samples. Classical confidence intervals often confuse people. They do not say “with high probability, for my observed sample, the bound holds”. Instead, they tell you that if you reason according to the confidence interval in the future (and the constraints on D are satisfied), then you are not often wrong. Restated, they tell you something about what a safe procedure is in a stochastic world where d is the safety parameter. There are a number of results in theoretical machine learn
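The “safe procedure” reading above can be checked empirically: over repeated samples S ~ D, the interval g(S) should capture f(D) about a 1-d fraction of the time. A minimal simulation sketch, assuming Gaussian data and a standard z-interval for the mean (names and parameters are illustrative):

```python
import random
import statistics

def mean_ci(sample, z=1.96):
    # Classical z-interval for the mean; z = 1.96 targets d = 0.05,
    # i.e. about 95% coverage over repeated samples.
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - z * se, m + z * se

def coverage(true_mean=0.0, n=50, trials=2000, seed=0):
    # Empirical frequency with which g(S) captures f(D) = true_mean:
    # this is the quantity the theorem bounds, not a statement about
    # any single observed sample.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(true_mean, 1.0) for _ in range(n)]
        lo, hi = mean_ci(sample)
        if lo <= true_mean <= hi:
            hits += 1
    return hits / trials
```

Running `coverage()` yields a hit rate near 0.95, illustrating that the guarantee is about the long-run behavior of the procedure, exactly as the text says.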

5 0.66075087 31 hunch net-2005-02-26-Problem: Reductions and Relative Ranking Metrics

Introduction: This, again, is something of a research direction rather than a single problem. There are several metrics people care about which depend upon the relative ranking of examples, and there are sometimes good reasons to care about such metrics. Examples include AROC, “F1”, the proportion of the time that the top-ranked element is in some class, the proportion of the top 10 examples in some class (Google’s problem), the lowest-ranked example of some class, and the “sort distance” from a predicted ranking to a correct ranking. See here for an example of some of these. Problem: What does the ability to classify well imply about performance under these metrics? Past Work: Probabilistic classification under squared error can be solved with a classifier. A counterexample shows this does not imply a good AROC. Sample complexity bounds for AROC (and here). A paper on “Learning to Order Things”. Difficulty: Several of these may be easy. Some of them may be h

6 0.60975826 391 hunch net-2010-03-15-The Efficient Robust Conditional Probability Estimation Problem

7 0.52941233 289 hunch net-2008-02-17-The Meaning of Confidence

8 0.49298245 484 hunch net-2013-06-16-Representative Reviewing

9 0.44570956 183 hunch net-2006-06-14-Explorations of Exploration

10 0.39144817 40 hunch net-2005-03-13-Avoiding Bad Reviewing

11 0.3901968 461 hunch net-2012-04-09-ICML author feedback is open

12 0.37474334 34 hunch net-2005-03-02-Prior, “Prior” and Bias

13 0.35434186 65 hunch net-2005-05-02-Reviewing techniques for conferences

14 0.33345187 78 hunch net-2005-06-06-Exact Online Learning for Classification

15 0.32957193 129 hunch net-2005-11-07-Prediction Competitions

16 0.32890087 102 hunch net-2005-08-11-Why Manifold-Based Dimension Reduction Techniques?

17 0.32780525 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

18 0.31997728 223 hunch net-2006-12-06-The Spam Problem

19 0.31818748 77 hunch net-2005-05-29-Maximum Margin Mismatch?

20 0.31509787 463 hunch net-2012-05-02-ICML: Behind the Scenes