hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions
Introduction: Sasha is the open problems chair for both COLT and ICML. Open problems will be presented in a joint session on the evening of the COLT/ICML overlap day. COLT has a history of open problem sessions, but this is new for ICML. If you have a difficult, theoretically definable problem in machine learning, consider submitting it for review by the March 16 deadline. You'll benefit in three ways:

1. The effort of writing down a precise formulation of what you want often helps you understand the nature of the problem.
2. Your problem will be officially published and citable.
3. You might have it solved by some very intelligent bored people.

The general idea could easily be applied to any problem that can be crisply stated and has an easily verifiable solution, and we may consider expanding this in later years, but for this year all problems need to be of a theoretical variety.

Joelle and I (and Mahdi, and Laurent) have finished an initial assignment of Program Committee members and Area Chairs to papers. We'll be updating instructions for the PC and ACs as we field questions. Feel free to comment here on things of plausible general interest, but email us directly with specific concerns.
Related posts for hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions (ordered by decreasing similarity score):

1. hunch net-2012-01-28-Why COLT?
2. hunch net-2013-06-16-Representative Reviewing
3. hunch net-2008-03-15-COLT Open Problems
4. hunch net-2011-07-10-ICML 2011 and the future
5. hunch net-2012-01-04-Why ICML? and the summer conferences
6. hunch net-2006-12-04-Structural Problems in NIPS Decision Making
7. hunch net-2012-01-30-ICML Posters and Scope
8. hunch net-2005-06-28-The cross validation problem: cash reward
9. hunch net-2005-07-04-The Health of COLT
10. hunch net-2008-10-14-Who is Responsible for a Bad Review?
11. hunch net-2005-08-23-(Dis)similarities between academia and open source programmers
12. hunch net-2007-11-16-MLSS 2008
13. hunch net-2011-10-10-ML Symposium and ICML details
14. hunch net-2012-05-02-ICML: Behind the Scenes
15. hunch net-2005-12-17-Workshops as Franchise Conferences
16. hunch net-2011-04-06-COLT open questions
17. hunch net-2005-08-04-Why Reinforcement Learning is Important
18. hunch net-2005-02-02-Paper Deadlines
19. hunch net-2007-12-19-Cool and interesting things seen at NIPS