hunch_net hunch_net-2005 hunch_net-2005-21 knowledge-graph by maker-knowledge-mining

21 hunch net-2005-02-17-Learning Research Programs


meta info for this blog

Source: html

Introduction: This is an attempt to organize the broad research programs related to machine learning currently underway. This isn’t easy—this map is partial, the categories often overlap, and there are many details left out. Nevertheless, it is (perhaps) helpful to have some map of what is happening where. The word ‘typical’ should not be construed narrowly here.

Learning Theory: Focuses on analyzing mathematical models of learning, essentially no experiments. Typical conference: COLT.

Bayesian Learning: Bayes’ law is always used. Focus on methods of speeding up or approximating integration, new probabilistic models, and practical applications. Typical conferences: NIPS, UAI.

Structured Learning: Predicting complex structured outputs, some applications. Typical conferences: NIPS, UAI, others.

Reinforcement Learning: Focused on ‘agent-in-the-world’ learning problems where the goal is optimizing reward. Typical conferences: ICML.

Unsupervised Learning/Clustering/Dimensionality Reduc


Summary: the most important sentences generated by the tfidf model (a sketch of this scoring follows the list)

sentIndex sentText sentNum sentScore

1 This is an attempt to organize the broad research programs related to machine learning currently underway. [sent-1, score-0.533]

2 This isn’t easy—this map is partial, the categories often overlap, and there are many details left out. [sent-2, score-0.402]

3 Nevertheless, it is (perhaps) helpful to have some map of what is happening where. [sent-3, score-0.304]

4 The word ‘typical’ should not be construed narrowly here. [sent-4, score-0.229]

5 Learning Theory Focuses on analyzing mathematical models of learning, essentially no experiments. [sent-5, score-0.303]

6 Focus on methods of speeding up or approximating integration, new probabilistic models, and practical applications. [sent-8, score-0.493]

7 Typical conferences: NIPS, UAI Structured learning Predicting complex structured outputs, some applications. [sent-9, score-0.25]

8 Typical conferences: NIPS, UAI, others Reinforcement Learning Focused on ‘agent-in-the-world’ learning problems where the goal is optimizing reward. [sent-10, score-0.17]

9 Typical conferences: Many (each with a somewhat different viewpoint) Applied Learning Worries about cost sensitive learning, what to do on very large datasets, applications, etc. [sent-12, score-0.093]

10 Typical conference: KDD Supervised Learning Chief concern is making practical algorithms for simpler predictions. [sent-14, score-0.352]

11 Typical conference: ICML Please comment on any missing pieces—it would be good to build up a better understanding of what the focuses are and where they are. [sent-16, score-0.496]
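
The sentScore values above come from an unspecified tfidf summarizer. A minimal sketch of this kind of extractive scoring, assuming scikit-learn and stand-in sentences (not the pipeline that produced these exact numbers):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in sentences from the post; the real scores above come from an
# unspecified pipeline, so this only illustrates the general approach.
sentences = [
    "This is an attempt to organize the broad research programs "
    "related to machine learning currently underway.",
    "The word 'typical' should not be construed narrowly here.",
    "Learning Theory focuses on analyzing mathematical models of learning.",
]

vectorizer = TfidfVectorizer(stop_words="english")
weights = vectorizer.fit_transform(sentences)  # (n_sentences, n_terms), sparse

# Score each sentence by its total tfidf mass and rank; the top-ranked
# sentences form the extractive summary, as in the sentScore column.
scores = weights.sum(axis=1).A1
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}  {sentences[idx][:60]}...  [score {scores[idx]:.3f}]")
```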


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('typical', 0.438), ('focuses', 0.25), ('conferences', 0.23), ('map', 0.203), ('focused', 0.178), ('structured', 0.17), ('conference', 0.151), ('worries', 0.15), ('practical', 0.144), ('chief', 0.139), ('leanring', 0.139), ('narrowly', 0.139), ('outputs', 0.139), ('speeding', 0.139), ('approximating', 0.131), ('models', 0.123), ('simpler', 0.116), ('categories', 0.112), ('overlap', 0.109), ('organize', 0.106), ('integration', 0.106), ('analyzing', 0.104), ('unsupervised', 0.104), ('happening', 0.101), ('partial', 0.099), ('law', 0.097), ('broad', 0.095), ('sensitive', 0.093), ('icml', 0.093), ('concern', 0.092), ('word', 0.09), ('optimizing', 0.09), ('pieces', 0.089), ('left', 0.087), ('programs', 0.087), ('currently', 0.084), ('missing', 0.084), ('viewpoint', 0.082), ('bayes', 0.082), ('focus', 0.082), ('comment', 0.081), ('attempt', 0.081), ('uai', 0.081), ('build', 0.081), ('learning', 0.08), ('probabilistic', 0.079), ('datasets', 0.077), ('kdd', 0.076), ('please', 0.076), ('mathematical', 0.076)]
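
A hedged sketch of how per-word tfidf weights like those above, and the simValue rankings in the list below, could be computed. It assumes scikit-learn and a hypothetical three-post corpus; the actual maker-knowledge-mining pipeline is not shown on this page:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-corpus; in the real pipeline each document would be
# a full hunch.net post, with this post at index 0.
posts = [
    "learning theory typical conference colt bayesian learning nips uai",
    "summer machine learning conferences sorted by due date and location",
    "research styles in machine learning engineering scientific mathematical",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(posts)

# Top-weighted terms for post 0, like the (wordName, wordTfidf) pairs above.
row = matrix[0].toarray().ravel()
terms = vectorizer.get_feature_names_out()
print(sorted(zip(terms, row.round(3)), key=lambda t: -t[1])[:5])

# Cosine similarity of post 0 against every post, like the simValue column
# below; the self-similarity is ~1.0, matching the "same-blog" row.
sims = cosine_similarity(matrix[0], matrix).ravel()
print(sorted(enumerate(sims.round(4)), key=lambda t: -t[1]))
```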

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 21 hunch net-2005-02-17-Learning Research Programs


2 0.16939621 452 hunch net-2012-01-04-Why ICML? and the summer conferences

Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date:

Conference  Due date  Location                                     Reviewing
KDD         Feb 10    August 12-16, Beijing, China                 Single Blind
COLT        Feb 14    June 25-June 27, Edinburgh, Scotland         Single Blind? (historically)
ICML        Feb 24    June 26-July 1, Edinburgh, Scotland          Double Blind, author response, zero SPOF
UAI         March 30  August 15-17, Catalina Islands, California   Double Blind, author response

Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS, AIStat, and ICML. This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conf

3 0.15376213 44 hunch net-2005-03-21-Research Styles in Machine Learning

Introduction: Machine Learning is a field with an impressively diverse set of research styles. Understanding this may be important in appreciating what you see at a conference. Engineering. How can I solve this problem? People in the engineering research style try to solve hard problems directly by any means available and then describe how they did it. This is typical of problem-specific conferences and communities. Scientific. What are the principles for solving learning problems? People in this research style test techniques on many different problems. This is fairly common at ICML and NIPS. Mathematical. How can the learning problem be mathematically understood? People in this research style prove theorems with implications for learning but often do not implement (or test) algorithms. COLT is a typical conference for this style. Many people manage to cross these styles, and that is often beneficial. Whenever we list a set of alternatives, it becomes natural to think “wh

4 0.14214717 89 hunch net-2005-07-04-The Health of COLT

Introduction: The health of COLT (Conference on Learning Theory or Computational Learning Theory depending on who you ask) has been questioned over the last few years. Low points for the conference occurred when EuroCOLT merged with COLT in 2001, and the attendance at the 2002 Sydney COLT fell to a new low. This occurred in the general context of machine learning conferences rising in both number and size over the last decade. Any discussion of why COLT has had difficulties is inherently controversial as is any story about well-intentioned people making the wrong decisions. Nevertheless, this may be worth discussing in the hope of avoiding problems in the future and general understanding. In any such discussion there is a strong tendency to identify with a conference/community in a patriotic manner that is detrimental to thinking. Keep in mind that conferences exist to further research. My understanding (I wasn’t around) is that COLT started as a subcommunity of the computer science

5 0.13671103 416 hunch net-2010-10-29-To Vidoelecture or not

Introduction: (update: cross-posted on CACM) For the first time in several years, ICML 2010 did not have videolectures attending. Luckily, the tutorial on exploration and learning which Alina and I put together can be viewed, since we also presented at KDD 2010, which included videolecture support. ICML didn’t cover the cost of a videolecture, because PASCAL didn’t provide a grant for it this year. On the other hand, KDD covered it out of registration costs. The cost of videolectures isn’t cheap. For a workshop the baseline quote we have is 270 euro per hour, plus a similar cost for the cameraman’s travel and accommodation. This can be reduced substantially by having a volunteer with a camera handle the cameraman duties, uploading the video and slides to be processed for a quoted 216 euro per hour. YouTube is the most predominant free video site with a cost of $0, but it turns out to be a poor alternative. 15-minute upload limits do not match typical talk lengths.

6 0.12042181 183 hunch net-2006-06-14-Explorations of Exploration

7 0.11107171 116 hunch net-2005-09-30-Research in conferences

8 0.10904435 235 hunch net-2007-03-03-All Models of Learning have Flaws

9 0.10742541 454 hunch net-2012-01-30-ICML Posters and Scope

10 0.10530731 46 hunch net-2005-03-24-The Role of Workshops

11 0.10440481 103 hunch net-2005-08-18-SVM Adaptability

12 0.1038985 141 hunch net-2005-12-17-Workshops as Franchise Conferences

13 0.10335962 456 hunch net-2012-02-24-ICML+50%

14 0.096718356 93 hunch net-2005-07-13-“Sister Conference” presentations

15 0.096049726 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials

16 0.094156705 12 hunch net-2005-02-03-Learning Theory, by assumption

17 0.093587048 212 hunch net-2006-10-04-Health of Conferences Wiki

18 0.092725039 95 hunch net-2005-07-14-What Learning Theory might do

19 0.090827435 345 hunch net-2009-03-08-Prediction Science

20 0.089955866 351 hunch net-2009-05-02-Wielding a New Abstraction


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.209), (1, -0.047), (2, -0.023), (3, -0.059), (4, 0.029), (5, -0.062), (6, 0.035), (7, 0.004), (8, 0.077), (9, 0.027), (10, -0.015), (11, -0.028), (12, -0.055), (13, 0.097), (14, 0.094), (15, 0.014), (16, 0.046), (17, 0.006), (18, -0.044), (19, -0.072), (20, 0.1), (21, -0.17), (22, 0.044), (23, 0.013), (24, -0.056), (25, -0.066), (26, -0.08), (27, 0.032), (28, -0.047), (29, 0.032), (30, 0.018), (31, 0.142), (32, -0.063), (33, -0.077), (34, 0.068), (35, -0.002), (36, -0.007), (37, -0.064), (38, -0.004), (39, 0.056), (40, -0.069), (41, -0.049), (42, 0.033), (43, 0.131), (44, -0.118), (45, -0.011), (46, -0.028), (47, -0.031), (48, -0.047), (49, 0.04)]
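
LSI here is presumably a truncated SVD of the tfidf document-term matrix, giving each post one weight per latent topic (the 50 topicWeight entries above). A minimal sketch under that assumption, using scikit-learn on a toy corpus:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "learning theory colt proofs mathematical models of learning",
    "bayesian learning probabilistic models nips uai integration",
    "reinforcement learning agents optimizing reward icml",
    "conference deadlines locations reviewing and workshops",
]

tfidf = TfidfVectorizer().fit_transform(posts)

# LSI: truncated SVD of the tfidf matrix. The 50 topicWeights above suggest
# n_components=50 in the real model; a toy corpus only supports a couple.
lsi = TruncatedSVD(n_components=2, random_state=0)
topic_weights = lsi.fit_transform(tfidf)  # one weight per (post, topic)

# Similar posts are nearest neighbors in the latent topic space.
sims = cosine_similarity(topic_weights[:1], topic_weights).ravel()
print(sorted(enumerate(sims.round(4)), key=lambda t: -t[1]))
```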

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96687448 21 hunch net-2005-02-17-Learning Research Programs


2 0.64862967 335 hunch net-2009-01-08-Predictive Analytics World

Introduction: Carla Vicens and Eric Siegel contacted me about Predictive Analytics World in San Francisco February 18-19, which I wasn’t familiar with. A quick look at the agenda reveals several people I know working on applications of machine learning in businesses, covering deployed-application topics. It’s interesting to see a business-focused machine learning conference, as it says that we are succeeding as a field. If you are interested in deployed applications, you might attend. Eric and I did a quick interview by email. John > I’ve mostly published and participated in academic machine learning conferences like ICML, COLT, and NIPS. When I look at the set of speakers and subjects for your conference I think “machine learning for business”. Is that your understanding of things? What I’m trying to ask is: what do you view as the primary goal for this conference? Eric > You got it. This is the business event focused on the commercial deployment of technology developed at

3 0.63855112 452 hunch net-2012-01-04-Why ICML? and the summer conferences

Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date:

Conference  Due date  Location                                     Reviewing
KDD         Feb 10    August 12-16, Beijing, China                 Single Blind
COLT        Feb 14    June 25-June 27, Edinburgh, Scotland         Single Blind? (historically)
ICML        Feb 24    June 26-July 1, Edinburgh, Scotland          Double Blind, author response, zero SPOF
UAI         March 30  August 15-17, Catalina Islands, California   Double Blind, author response

Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS, AIStat, and ICML. This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conf

4 0.6258586 93 hunch net-2005-07-13-“Sister Conference” presentations

Introduction: Some of the “sister conference” presentations at AAAI have been great. Roughly speaking, the conference organizers asked other conference organizers to come give a summary of their conference. Many different AI-related conferences accepted. The presenters typically discuss some of the background and goals of the conference then mention the results from a few papers they liked. This is great because it provides a mechanism to get a digested overview of the work of several thousand researchers—something which is simply available nowhere else. Based on these presentations, it looks like there is a significant component of (and opportunity for) applied machine learning in AIIDE, IUI, and ACL. There was also some discussion of having a super-colocation event similar to FCRC, but centered on AI & Learning. This seems like a fine idea. The field is fractured across so many different conferences that the mixing of a super-colocation seems likely to be helpful for research.

5 0.6164906 232 hunch net-2007-02-11-24

Introduction: To commemorate the Twenty Fourth Annual International Conference on Machine Learning (ICML-07), the FOX Network has decided to launch a new spin-off series in prime time. Through unofficial sources, I have obtained the story arc for the first season, which appears frighteningly realistic.

6 0.57709211 44 hunch net-2005-03-21-Research Styles in Machine Learning

7 0.53763103 416 hunch net-2010-10-29-To Vidoelecture or not

8 0.53482348 89 hunch net-2005-07-04-The Health of COLT

9 0.52269953 146 hunch net-2006-01-06-MLTV

10 0.51353019 456 hunch net-2012-02-24-ICML+50%

11 0.50028044 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning

12 0.49081701 103 hunch net-2005-08-18-SVM Adaptability

13 0.49050504 95 hunch net-2005-07-14-What Learning Theory might do

14 0.48895285 1 hunch net-2005-01-19-Why I decided to run a weblog.

15 0.48688772 384 hunch net-2009-12-24-Top graduates this season

16 0.48556378 403 hunch net-2010-07-18-ICML & COLT 2010

17 0.4807792 250 hunch net-2007-06-23-Machine Learning Jobs are Growing on Trees

18 0.47605196 255 hunch net-2007-07-13-The View From China

19 0.47416705 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials

20 0.47413826 228 hunch net-2007-01-15-The Machine Learning Department


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.013), (27, 0.121), (37, 0.238), (38, 0.109), (53, 0.25), (55, 0.095), (89, 0.052), (95, 0.02)]
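
The sparse (topicId, topicWeight) pairs above read as a document-topic distribution from an LDA model with near-zero topics dropped. A minimal sketch under that assumption, using scikit-learn on raw term counts:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "learning theory colt mathematical models",
    "bayesian learning probabilistic models nips uai",
    "reinforcement learning agents rewards icml",
]

# LDA is fit on raw term counts rather than tfidf weights.
counts = CountVectorizer().fit_transform(posts)

# The topicIds above run to 95, suggesting ~100 topics in the real model;
# a toy corpus only supports a handful.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-post topic distributions

# Keep only topics above a threshold, mirroring the sparse
# (topicId, topicWeight) pairs shown above.
print([(t, round(w, 3)) for t, w in enumerate(doc_topics[0]) if w > 0.05])
```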

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.91207087 21 hunch net-2005-02-17-Learning Research Programs


2 0.88631701 431 hunch net-2011-04-18-A paper not at Snowbird

Introduction: Unfortunately, a scheduling failure meant I missed all of AIStat and most of the learning workshop, otherwise known as Snowbird, when it’s at Snowbird. At Snowbird, the talk on Sum-Product networks by Hoifung Poon stood out to me (Pedro Domingos is a coauthor). The basic point was that by appropriately constructing networks based on sums and products, the normalization problem in probabilistic models is eliminated, yielding a highly tractable yet flexible representation+learning algorithm. As an algorithm, this is noticeably cleaner than deep belief networks, with a claim to being an order of magnitude faster and working better on an image completion task. Snowbird doesn’t have real papers—just the abstract above. I look forward to seeing the paper. (added: Rodrigo points out the deep learning workshop draft.)

3 0.72816563 367 hunch net-2009-08-16-Centmail comments

Introduction: Centmail is a scheme which makes charity donations have a secondary value, as a stamp for email. When discussed on newscientist, slashdot, and others, some of the comments make the academic review process appear thoughtful. Some prominent fallacies are: Costing money fallacy. Some commenters appear to believe the system charges money per email. Instead, the basic idea is that users get an extra benefit from donations to a charity and participation is strictly voluntary. The solution to this fallacy is simply reading the details. Single solution fallacy. Some commenters seem to think this is proposed as a complete solution to spam, and since not everyone will opt to participate, it won’t work. But a complete solution is not at all necessary or even possible given the flag-day problem. Deployed machine learning systems for fighting spam are great at taking advantage of a partial solution. The solution to this fallacy is learning about machine learning. In the

4 0.71760428 2 hunch net-2005-01-24-Holy grails of machine learning?

Introduction: Let me kick things off by posing this question to ML researchers: What do you think are some important holy grails of machine learning? For example: – “A classifier with SVM-level performance but much more scalable” – “Practical confidence bounds (or learning bounds) for classification” – “A reinforcement learning algorithm that can handle the ___ problem” – “Understanding theoretically why ___ works so well in practice” etc. I pose this question because I believe that when goals are stated explicitly and well (thus providing clarity as well as opening up the problems to more people), rather than left implicit, they are likely to be achieved much more quickly. I would also like to know more about the internal goals of the various machine learning sub-areas (theory, kernel methods, graphical models, reinforcement learning, etc) as stated by people in these respective areas. This could help people cross sub-areas.

5 0.71672529 16 hunch net-2005-02-09-Intuitions from applied learning

Introduction: Since learning is far from an exact science, it’s good to pay attention to basic intuitions of applied learning. Here are a few I’ve collected. Integration. In Bayesian learning, the posterior is computed by an integral, and the optimal thing to do is to predict according to this integral. This phenomenon seems to be far more general. Bagging, Boosting, SVMs, and Neural Networks all take advantage of this idea to some extent. The phenomenon is more general: you can average over many different classification predictors to improve performance. Sources: Zoubin, Caruana. Differentiation. Different pieces of an average should differentiate to achieve good performance by different methods. This is known as the ‘symmetry breaking’ problem for neural networks, and it’s why weights are initialized randomly. Boosting explicitly attempts to achieve good differentiation by creating new, different, learning problems. Sources: Yann LeCun, Phil Long. Deep Representation Ha

6 0.71498728 91 hunch net-2005-07-10-Thinking the Unthought

7 0.70976734 63 hunch net-2005-04-27-DARPA project: LAGR

8 0.70753694 56 hunch net-2005-04-14-Families of Learning Theory Statements

9 0.69890153 145 hunch net-2005-12-29-Deadline Season

10 0.69873238 107 hunch net-2005-09-05-Site Update

11 0.69344538 6 hunch net-2005-01-27-Learning Complete Problems

12 0.68061632 368 hunch net-2009-08-26-Another 10-year paper in Machine Learning

13 0.67955196 1 hunch net-2005-01-19-Why I decided to run a weblog.

14 0.63725662 138 hunch net-2005-12-09-Some NIPS papers

15 0.61597008 151 hunch net-2006-01-25-1 year

16 0.61505777 191 hunch net-2006-07-08-MaxEnt contradicts Bayes Rule?

17 0.61402458 201 hunch net-2006-08-07-The Call of the Deep

18 0.6129334 60 hunch net-2005-04-23-Advantages and Disadvantages of Bayesian Learning

19 0.60327584 416 hunch net-2010-10-29-To Vidoelecture or not

20 0.60308677 194 hunch net-2006-07-11-New Models