hunch_net hunch_net-2005 hunch_net-2005-44 knowledge-graph by maker-knowledge-mining

44 hunch net-2005-03-21-Research Styles in Machine Learning


meta information for this blog

Source: html

Introduction: Machine Learning is a field with an impressively diverse set of research styles. Understanding this may be important in appreciating what you see at a conference. Engineering. How can I solve this problem? People in the engineering research style try to solve hard problems directly by any means available and then describe how they did it. This is typical of problem-specific conferences and communities. Scientific. What are the principles for solving learning problems? People in this research style test techniques on many different problems. This is fairly common at ICML and NIPS. Mathematical. How can the learning problem be mathematically understood? People in this research style prove theorems with implications for learning but often do not implement (or test algorithms). COLT is a typical conference for this style. Many people manage to cross these styles, and that is often beneficial. Whenever we list a set of alternatives, it becomes natural to think “wh


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Machine Learning is a field with an impressively diverse set of research styles. [sent-1, score-0.332]

2 Understanding this may be important in appreciating what you see at a conference. [sent-2, score-0.309]

3 People in the engineering research style try to solve hard problems directly by any means available and then describe how they did it. [sent-5, score-1.377]

4 This is typical of problem-specific conferences and communities. [sent-6, score-0.273]

5 People in this research style test techniques on many different problems. [sent-9, score-0.677]

6 People in this research style prove theorems with implications for learning but often do not implement (or test algorithms). [sent-13, score-1.175]

7 Many people manage to cross these styles, and that is often beneficial. [sent-15, score-0.469]

8 Whenever we list a set of alternatives, it becomes natural to think “which is best?” [sent-16, score-0.367]

9 In the case of learning it seems that each of these styles is useful, and can lead to new useful discoveries. [sent-18, score-0.676]

10 I sometimes see failures to appreciate the other approaches, which is a shame. [sent-19, score-0.384]
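The sentScore values above come from a tf-idf model over the blog corpus; the page does not state the exact scoring rule, but summing the tf-idf weights of a sentence's terms is one plausible reading. Below is a minimal sketch under that assumption, using gensim; the variables posts and post_text are hypothetical stand-ins for the crawled corpus and this post's raw text.

# Extractive summarization by tf-idf sentence scoring (sketch, not the
# actual pipeline behind this page). `posts` and `post_text` are assumed.
from gensim import corpora, models
from gensim.utils import simple_preprocess

tokenized = [simple_preprocess(p) for p in posts]
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(t) for t in tokenized]
tfidf = models.TfidfModel(bow_corpus)

def sentence_score(sentence):
    """Sum the tf-idf weights of the sentence's in-vocabulary terms."""
    bow = dictionary.doc2bow(simple_preprocess(sentence))
    return sum(weight for _, weight in tfidf[bow])

sentences = post_text.split(". ")   # crude sentence split, for illustration
summary = sorted(sentences, key=sentence_score, reverse=True)[:10]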


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('styles', 0.353), ('style', 0.291), ('engineering', 0.267), ('appreciating', 0.202), ('shame', 0.202), ('typical', 0.196), ('test', 0.169), ('mathematically', 0.168), ('principles', 0.168), ('failures', 0.156), ('scientific', 0.147), ('implement', 0.143), ('solve', 0.141), ('manage', 0.133), ('implications', 0.133), ('research', 0.133), ('useful', 0.13), ('cross', 0.128), ('people', 0.125), ('alternative', 0.123), ('describe', 0.123), ('diverse', 0.123), ('lead', 0.121), ('appreciate', 0.121), ('theorems', 0.116), ('understood', 0.111), ('see', 0.107), ('prove', 0.107), ('set', 0.106), ('fairly', 0.103), ('mathematical', 0.103), ('field', 0.103), ('becomes', 0.097), ('directly', 0.09), ('problems', 0.088), ('list', 0.088), ('available', 0.088), ('techniques', 0.084), ('try', 0.083), ('colt', 0.083), ('often', 0.083), ('approaches', 0.079), ('conferences', 0.077), ('natural', 0.076), ('solving', 0.075), ('problem', 0.073), ('means', 0.073), ('case', 0.072), ('understanding', 0.071), ('conference', 0.068)]
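The (word, weight) pairs above are the top tf-idf terms for this post, and the tfidf similarity list that follows ranks other posts by cosine similarity of their tf-idf vectors. A minimal gensim sketch of how such numbers might be produced; posts (a list of raw post strings) and the use of index 0 for this post are assumptions.

# tf-idf word weights and tf-idf similarity ranking (sketch; the exact
# preprocessing behind this page is unknown). `posts` is assumed.
from gensim import corpora, models, similarities
from gensim.utils import simple_preprocess

tokenized = [simple_preprocess(p) for p in posts]
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(t) for t in tokenized]
tfidf = models.TfidfModel(bow_corpus)

# Top-weighted words for this post, comparable to the list above.
weights = tfidf[bow_corpus[0]]
top_words = sorted(((dictionary[i], w) for i, w in weights),
                   key=lambda t: t[1], reverse=True)[:50]

# Cosine similarity over tf-idf vectors gives a "similar blogs" ranking.
index = similarities.MatrixSimilarity(tfidf[bow_corpus],
                                      num_features=len(dictionary))
sims = sorted(enumerate(index[weights]), key=lambda t: t[1], reverse=True)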

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 44 hunch net-2005-03-21-Research Styles in Machine Learning

Introduction: Machine Learning is a field with an impressively diverse set of research styles. Understanding this may be important in appreciating what you see at a conference. Engineering. How can I solve this problem? People in the engineering research style try to solve hard problems directly by any means available and then describe how they did it. This is typical of problem-specific conferences and communities. Scientific. What are the principles for solving learning problems? People in this research style test techniques on many different problems. This is fairly common at ICML and NIPS. Mathematical. How can the learning problem be mathematically understood? People in this research style prove theorems with implications for learning but often do not implement (or test algorithms). COLT is a typical conference for this style. Many people manage to cross these styles, and that is often beneficial. Whenever we list a set of alternatives, it becomes natural to think “wh

2 0.15376213 21 hunch net-2005-02-17-Learning Research Programs

Introduction: This is an attempt to organize the broad research programs related to machine learning currently underway. This isn’t easy—this map is partial, the categories often overlap, and there are many details left out. Nevertheless, it is (perhaps) helpful to have some map of what is happening where. The word ‘typical’ should not be construed narrowly here. Learning Theory Focuses on analyzing mathematical models of learning, essentially no experiments. Typical conference: COLT. Bayesian Learning Bayes law is always used. Focus on methods of speeding up or approximating integration, new probabilistic models, and practical applications. Typical conferences: NIPS, UAI Structured learning Predicting complex structured outputs, some applications. Typical conferences: NIPS, UAI, others Reinforcement Learning Focused on ‘agent-in-the-world’ learning problems where the goal is optimizing reward. Typical conferences: ICML Unsupervised Learning/Clustering/Dimensionality Reduc

3 0.15326475 22 hunch net-2005-02-18-What it means to do research.

Introduction: I want to try to describe what doing research means, especially from the point of view of an undergraduate. The shift from a class-taking mentality to a research mentality is very significant and not easy. Problem Posing Posing the right problem is often as important as solving it. Many people can get by in research by solving problems others have posed, but that’s not sufficient for really inspiring research. For learning in particular, there is a strong feeling that we just haven’t figured out which questions are the right ones to ask. You can see this, because the answers we have do not seem convincing. Gambling your life When you do research, you think very hard about new ways of solving problems, new problems, and new solutions. Many conversations are of the form “I wonder what would happen if…” These processes can be short (days or weeks) or years-long endeavours. The worst part is that you’ll only know if you were successful at the end of the process (and some

4 0.11990934 347 hunch net-2009-03-26-Machine Learning is too easy

Introduction: One of the remarkable things about machine learning is how diverse it is. The viewpoints of Bayesian learning, reinforcement learning, graphical models, supervised learning, unsupervised learning, genetic programming, etc… share little enough overlap that many people can and do make their careers within one without touching, or even necessarily understanding the others. There are two fundamental reasons why this is possible. For many problems, many approaches work in the sense that they do something useful. This is true empirically, where for many problems we can observe that many different approaches yield better performance than any constant predictor. It’s also true in theory, where we know that for any set of predictors representable in a finite amount of RAM, minimizing training error over the set of predictors does something nontrivial when there are a sufficient number of examples. There is nothing like a unifying problem defining the field. In many other areas there

5 0.11805895 454 hunch net-2012-01-30-ICML Posters and Scope

Introduction: Normally, I don’t indulge in posters for ICML, but this year is naturally an exception for me. If you want one, there are a small number left here, if you sign up before February. It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. At ICML, the (very large) program committee does the reviewing which informs final decisions by area chairs on most papers. Program chairs set up the process, deal with exceptions or disagreements, and provide advice for the reviewing process. Providing advice is tricky (and easily misleading) because a conference is a community, and in the end the aggregate interests of the community determine the conference. Nevertheless, as a program chair this year it seems worthwhile to state the overall philosophy I have and what I plan to encourage (and occasionally discourage). At the highest level, I believe ICML exists to further research into machine learning, which I gene

6 0.11693382 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

7 0.11085439 19 hunch net-2005-02-14-Clever Methods of Overfitting

8 0.10951069 26 hunch net-2005-02-21-Problem: Cross Validation

9 0.10764377 67 hunch net-2005-05-06-Don’t mix the solution into the problem

10 0.10435769 86 hunch net-2005-06-28-The cross validation problem: cash reward

11 0.1018981 89 hunch net-2005-07-04-The Health of COLT

12 0.10102537 452 hunch net-2012-01-04-Why ICML? and the summer conferences

13 0.10042904 115 hunch net-2005-09-26-Prediction Bounds as the Mathematics of Science

14 0.099788383 332 hunch net-2008-12-23-Use of Learning Theory

15 0.098267354 194 hunch net-2006-07-11-New Models

16 0.096640244 235 hunch net-2007-03-03-All Models of Learning have Flaws

17 0.095262185 36 hunch net-2005-03-05-Funding Research

18 0.094475247 42 hunch net-2005-03-17-Going all the Way, Sometimes

19 0.090468891 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

20 0.088316873 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.223), (1, -0.007), (2, -0.036), (3, 0.049), (4, -0.048), (5, -0.072), (6, 0.041), (7, 0.065), (8, 0.041), (9, 0.033), (10, 0.002), (11, 0.063), (12, -0.003), (13, 0.109), (14, 0.113), (15, 0.03), (16, 0.098), (17, 0.011), (18, -0.135), (19, -0.006), (20, 0.063), (21, -0.037), (22, 0.011), (23, -0.054), (24, 0.038), (25, -0.009), (26, -0.066), (27, -0.004), (28, -0.131), (29, -0.008), (30, -0.069), (31, 0.107), (32, -0.052), (33, 0.023), (34, 0.007), (35, 0.108), (36, 0.028), (37, 0.03), (38, -0.038), (39, 0.027), (40, 0.0), (41, -0.002), (42, -0.043), (43, 0.024), (44, -0.031), (45, 0.046), (46, -0.1), (47, -0.035), (48, 0.042), (49, -0.088)]
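The 50 numbers above are this post's coordinates in a 50-topic LSI (latent semantic indexing) space, essentially a truncated SVD of the tf-idf matrix, and the list that follows ranks posts by cosine similarity in that space. A minimal gensim sketch follows; posts is an assumed list of raw post strings, and num_topics=50 simply matches the length of the vector shown.

# LSI topic vector and LSI similarity ranking (sketch). `posts` is assumed;
# num_topics=50 matches the 50-dimensional vector shown above.
from gensim import corpora, models, similarities
from gensim.utils import simple_preprocess

tokenized = [simple_preprocess(p) for p in posts]
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(t) for t in tokenized]
tfidf = models.TfidfModel(bow_corpus)

lsi = models.LsiModel(tfidf[bow_corpus], id2word=dictionary, num_topics=50)
doc_vec = lsi[tfidf[bow_corpus[0]]]   # (topicId, topicWeight) pairs, as above

# Cosine similarity in LSI space produces the "similar blogs" list below.
index = similarities.MatrixSimilarity(lsi[tfidf[bow_corpus]], num_features=50)
sims = sorted(enumerate(index[doc_vec]), key=lambda t: t[1], reverse=True)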

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96229327 44 hunch net-2005-03-21-Research Styles in Machine Learning

Introduction: Machine Learning is a field with an impressively diverse set of research styles. Understanding this may be important in appreciating what you see at a conference. Engineering. How can I solve this problem? People in the engineering research style try to solve hard problems directly by any means available and then describe how they did it. This is typical of problem-specific conferences and communities. Scientific. What are the principles for solving learning problems? People in this research style test techniques on many different problems. This is fairly common at ICML and NIPS. Mathematical. How can the learning problem be mathematically understood? People in this research style prove theorems with implications for learning but often do not implement (or test algorithms). COLT is a typical conference for this style. Many people manage to cross these styles, and that is often beneficial. Whenever we list a set of alternatives, it becomes natural to think “wh

2 0.70094812 22 hunch net-2005-02-18-What it means to do research.

Introduction: I want to try to describe what doing research means, especially from the point of view of an undergraduate. The shift from a class-taking mentality to a research mentality is very significant and not easy. Problem Posing Posing the right problem is often as important as solving it. Many people can get by in research by solving problems others have posed, but that’s not sufficient for really inspiring research. For learning in particular, there is a strong feeling that we just haven’t figured out which questions are the right ones to ask. You can see this, because the answers we have do not seem convincing. Gambling your life When you do research, you think very hard about new ways of solving problems, new problems, and new solutions. Many conversations are of the form “I wonder what would happen if…” These processes can be short (days or weeks) or years-long endeavours. The worst part is that you’ll only know if you were successful at the end of the process (and some

3 0.65559375 307 hunch net-2008-07-04-More Presentation Preparation

Introduction: We’ve discussed presentation preparation before, but I have one more thing to add: transitioning. For a research presentation, it is substantially helpful for the audience if transitions are clear. A common outline for a research presentation in machine learning is: The problem. Presentations which don’t describe the problem almost immediately lose people, because the context is missing to understand the detail. Prior relevant work. In many cases, a paper builds on some previous bit of work which must be understood in order to understand what the paper does. A common failure mode seems to be spending too much time on prior work. Discuss just the relevant aspects of prior work in the language of your work. Sometimes this is missing when unneeded. What we did. For theory papers in particular, it is often not possible to really cover the details. Prioritizing what you present can be very important. How it worked. Many papers in Machine Learning have some sor

4 0.62428117 42 hunch net-2005-03-17-Going all the Way, Sometimes

Introduction: At many points in research, you face a choice: should I keep on improving some old piece of technology or should I do something new? For example: Should I refine bounds to make them tighter? Should I take some learning theory and turn it into a learning algorithm? Should I implement the learning algorithm? Should I test the learning algorithm widely? Should I release the algorithm as source code? Should I go see what problems people actually need to solve? The universal temptation of people attracted to research is doing something new. That is sometimes the right decision, but is also often not. I’d like to discuss some reasons why not. Expertise Once expertise is developed on some subject, you are the right person to refine it. What is the real problem? Continually improving a piece of technology is a mechanism forcing you to confront this question. In many cases, this confrontation is uncomfortable because you discover that your method has fundamen

5 0.62025613 347 hunch net-2009-03-26-Machine Learning is too easy

Introduction: One of the remarkable things about machine learning is how diverse it is. The viewpoints of Bayesian learning, reinforcement learning, graphical models, supervised learning, unsupervised learning, genetic programming, etc… share little enough overlap that many people can and do make their careers within one without touching, or even necessarily understanding the others. There are two fundamental reasons why this is possible. For many problems, many approaches work in the sense that they do something useful. This is true empirically, where for many problems we can observe that many different approaches yield better performance than any constant predictor. It’s also true in theory, where we know that for any set of predictors representable in a finite amount of RAM, minimizing training error over the set of predictors does something nontrivial when there are a sufficient number of examples. There is nothing like a unifying problem defining the field. In many other areas there

6 0.61127329 370 hunch net-2009-09-18-Necessary and Sufficient Research

7 0.60870409 21 hunch net-2005-02-17-Learning Research Programs

8 0.59448659 249 hunch net-2007-06-21-Presentation Preparation

9 0.58926672 31 hunch net-2005-02-26-Problem: Reductions and Relative Ranking Metrics

10 0.5877611 162 hunch net-2006-03-09-Use of Notation

11 0.5857783 89 hunch net-2005-07-04-The Health of COLT

12 0.57336509 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study

13 0.57331431 104 hunch net-2005-08-22-Do you believe in induction?

14 0.56893677 358 hunch net-2009-06-01-Multitask Poisoning

15 0.56869638 146 hunch net-2006-01-06-MLTV

16 0.56396788 187 hunch net-2006-06-25-Presentation of Proofs is Hard.

17 0.56167847 148 hunch net-2006-01-13-Benchmarks for RL

18 0.55518234 202 hunch net-2006-08-10-Precision is not accuracy

19 0.54925644 2 hunch net-2005-01-24-Holy grails of machine learning?

20 0.54452544 67 hunch net-2005-05-06-Don’t mix the solution into the problem


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.229), (38, 0.086), (53, 0.072), (55, 0.156), (70, 0.233), (94, 0.108)]
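The sparse vector above is this post's LDA topic mixture (only topics with non-negligible weight are listed), and the list that follows ranks posts by similarity of their topic mixtures. A minimal gensim sketch follows; posts and the choice of 100 topics are assumptions, since the page does not state the model size.

# LDA topic mixture and similarity ranking (sketch). `posts` and
# num_topics=100 are assumptions; only non-negligible topics appear above.
from gensim import corpora, models, similarities
from gensim.utils import simple_preprocess

tokenized = [simple_preprocess(p) for p in posts]
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(t) for t in tokenized]

lda = models.LdaModel(bow_corpus, id2word=dictionary, num_topics=100)
doc_topics = lda[bow_corpus[0]]   # (topicId, topicWeight), weights sum to ~1

# Ranking posts by cosine similarity of topic mixtures gives the list below.
index = similarities.MatrixSimilarity(lda[bow_corpus], num_features=100)
sims = sorted(enumerate(index[doc_topics]), key=lambda t: t[1], reverse=True)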

similar blogs list:

simIndex simValue blogId blogTitle

1 0.92829245 440 hunch net-2011-08-06-Interesting thing at UAI 2011

Introduction: I had a chance to attend UAI this year, where several papers interested me, including: Hoifung Poon and Pedro Domingos, Sum-Product Networks: A New Deep Architecture. We’ve already discussed this one, but in a nutshell, they identify a large class of efficiently normalizable distributions and do learning with it. Yao-Liang Yu and Dale Schuurmans, Rank/norm regularization with closed-form solutions: Application to subspace clustering. This paper is about matrices, and in particular they prove that certain matrices are the solution of matrix optimizations. I’m not matrix inclined enough to fully appreciate this one, but I believe many others may be, and anytime closed-form solutions come into play, you get 2 orders of magnitude speedups, as they show experimentally. Laurent Charlin, Richard Zemel and Craig Boutilier, A Framework for Optimizing Paper Matching. This is about what works in matching papers to reviewers, as has been tested at several previous

same-blog 2 0.89551198 44 hunch net-2005-03-21-Research Styles in Machine Learning

Introduction: Machine Learning is a field with an impressively diverse set of research styles. Understanding this may be important in appreciating what you see at a conference. Engineering. How can I solve this problem? People in the engineering research style try to solve hard problems directly by any means available and then describe how they did it. This is typical of problem-specific conferences and communities. Scientific. What are the principles for solving learning problems? People in this research style test techniques on many different problems. This is fairly common at ICML and NIPS. Mathematical. How can the learning problem be mathematically understood? People in this research style prove theorems with implications for learning but often do not implement (or test algorithms). COLT is a typical conference for this style. Many people manage to cross these styles, and that is often beneficial. Whenever we list a set of alternatives, it becomes natural to think “wh

3 0.77909952 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

4 0.77451944 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

Introduction: Bob Williamson and I are the learning theory PC members at NIPS this year. This is some attempt to state the standards and tests I applied to the papers. I think it is a good idea to talk about this for two reasons: Making community standards a matter of public record seems healthy. It gives us a chance to debate what is and is not the right standard. It might even give us a bit more consistency across the years. It may save us all time. There are a number of papers submitted which just aren’t there yet. Avoiding submitting is the right decision in this case. There are several criteria for judging a paper. All of these were active this year. Some criteria are uncontroversial while others may not be. The paper must have a theorem establishing something new for which it is possible to derive high confidence in the correctness of the results. A surprising number of papers fail this test. This criterion seems essential to the definition of “theory”. Missing theo

5 0.76619458 297 hunch net-2008-04-22-Taking the next step

Introduction: At the last ICML, Tom Dietterich asked me to look into systems for commenting on papers. I’ve been slow getting to this, but it’s relevant now. The essential observation is that we now have many tools for online collaboration, but they are not yet much used in academic research. If we can find the right way to use them, then perhaps great things might happen, with extra kudos to the first conference that manages to really create an online community. Various conferences have been poking at this. For example, UAI has set up a wiki, COLT has started using Joomla, with some dynamic content, and AAAI has been setting up a “student blog”. Similarly, Dinoj Surendran set up a twiki for the Chicago Machine Learning Summer School, which was quite useful for coordinating events and other things. I believe the most important thing is a willingness to experiment. A good place to start seems to be enhancing existing conference websites. For example, the ICML 2007 papers pag

6 0.76610237 95 hunch net-2005-07-14-What Learning Theory might do

7 0.76528013 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

8 0.76253706 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

9 0.76182044 343 hunch net-2009-02-18-Decision by Vetocracy

10 0.76145387 116 hunch net-2005-09-30-Research in conferences

11 0.76135349 452 hunch net-2012-01-04-Why ICML? and the summer conferences

12 0.76059818 423 hunch net-2011-02-02-User preferences for search engines

13 0.75948066 40 hunch net-2005-03-13-Avoiding Bad Reviewing

14 0.7592544 403 hunch net-2010-07-18-ICML & COLT 2010

15 0.75893319 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

16 0.75743091 454 hunch net-2012-01-30-ICML Posters and Scope

17 0.75739151 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

18 0.75584453 98 hunch net-2005-07-27-Not goal metrics

19 0.75525576 96 hunch net-2005-07-21-Six Months

20 0.75380689 325 hunch net-2008-11-10-ICML Reviewing Criteria