hunch_net-2006-187: knowledge-graph by maker-knowledge-mining

187 hunch net-2006-06-25-Presentation of Proofs is Hard.


meta info for this blog

Source: html

Introduction: When presenting part of the Reinforcement Learning theory tutorial at ICML 2006, I was forcibly reminded of this. There are several difficulties. When creating the presentation, the correct level of detail is tricky. With too much detail, the proof takes too much time and people may be lost to boredom. With too little detail, the steps of the proof involve too great a jump. This is very difficult to judge. What may be an easy step in the careful thought of a quiet room is not so easy when you are occupied by the process of presentation. What may be easy after having gone over this (and other) proofs is not so easy to follow in the first pass by a viewer. These problems seem only correctable by a process of repeated test-and-revise. When presenting the proof, simply speaking with sufficient precision is substantially harder than in normal conversation (where precision is not so critical). Practice can help here. When presenting the proof, going at the right pace for understanding is difficult.


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 When presenting part of the Reinforcement Learning theory tutorial at ICML 2006, I was forcibly reminded of this. [sent-1, score-0.417]

2 When creating the presentation, the correct level of detail is tricky. [sent-3, score-0.495]

3 With too much detail, the proof takes too much time and people may be lost to boredom. [sent-4, score-0.566]

4 With too little detail, the steps of the proof involve too great a jump. [sent-5, score-0.574]

5 What may be an easy step in the careful thought of a quiet room is not so easy when you are occupied by the process of presentation. [sent-7, score-0.507]

6 What may be easy after having gone over this (and other) proofs is not so easy to follow in the first pass by a viewer. [sent-8, score-0.726]

7 These problems seem only correctable by a process of repeated test-and-revise. [sent-9, score-0.305]

8 When presenting the proof, simply speaking with sufficient precision is substantially harder than in normal conversation (where precision is not so critical). [sent-10, score-0.898]

9 When presenting the proof, going at the right pace for understanding is difficult. [sent-12, score-0.633]

10 When we use a blackboard/whiteboard, a natural, reasonable pace is imposed by the process of writing. [sent-13, score-0.588]

11 Unfortunately, writing doesn’t scale well to large audiences for vision reasons, losing this natural pacing mechanism. [sent-14, score-0.351]

12 It is difficult to entertain with a proof—there is nothing particularly funny about it. [sent-15, score-0.432]

13 This particularly matters for a large audience which tends to naturally develop an expectation of being entertained. [sent-16, score-0.466]

14 Given all these difficulties, it is very tempting to avoid presenting proofs. [sent-17, score-0.392]

15 Avoiding the proof in any serious detail is fairly reasonable in a conference presentation—the time is too short and the people viewing are too heavily overloaded to follow the logic well. [sent-18, score-1.387]

16 The “right” level of detail is often the theorem statement. [sent-19, score-0.495]

17 Nevertheless, avoidance is not always possible because the proof is one of the more powerful mechanisms we have for doing research. [sent-20, score-0.677]
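As an illustration of how such sentence scores can arise, here is a minimal Python sketch of tf-idf extractive summarization, scoring each sentence by the total tf-idf weight of its terms. This uses scikit-learn and toy stand-in sentences; the actual mining pipeline's scoring method and preprocessing are assumptions, not documented here.

from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for the post's sentences (the real corpus and
# preprocessing used by the mining pipeline are unknown).
sentences = [
    "When creating the presentation, the correct level of detail is tricky.",
    "With too much detail, the proof takes too much time.",
    "With too little detail, the steps of the proof involve too great a jump.",
]

# One tf-idf row per sentence.
tfidf = TfidfVectorizer().fit_transform(sentences)

# Score each sentence by the sum of its tf-idf term weights, then rank.
scores = tfidf.sum(axis=1).A1
for score, sentence in sorted(zip(scores, sentences), reverse=True):
    print(f"[score-{score:.3f}] {sentence}")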


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('proof', 0.429), ('detail', 0.4), ('presenting', 0.312), ('pace', 0.255), ('precision', 0.169), ('follow', 0.141), ('easy', 0.135), ('presentation', 0.124), ('funny', 0.113), ('avoidance', 0.113), ('audiences', 0.113), ('correctable', 0.113), ('entertain', 0.105), ('reminded', 0.105), ('process', 0.098), ('level', 0.095), ('repeated', 0.094), ('overloaded', 0.094), ('normal', 0.094), ('proofs', 0.094), ('viewing', 0.091), ('natural', 0.085), ('conversation', 0.085), ('losing', 0.085), ('matters', 0.085), ('heavily', 0.082), ('tempting', 0.08), ('develop', 0.08), ('tends', 0.08), ('gone', 0.078), ('avoiding', 0.078), ('pass', 0.076), ('imposed', 0.076), ('audience', 0.076), ('logic', 0.076), ('reasonable', 0.074), ('steps', 0.073), ('particularly', 0.073), ('difficult', 0.072), ('careful', 0.072), ('expectation', 0.072), ('involve', 0.072), ('lost', 0.07), ('nothing', 0.069), ('speaking', 0.069), ('powerful', 0.069), ('vision', 0.068), ('may', 0.067), ('right', 0.066), ('mechanisms', 0.066)]
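The similar-blogs ranking that follows is consistent with cosine similarity between tf-idf document vectors like the one above (note that the same-blog entry scores essentially 1.0 against itself). A hedged sketch, assuming scikit-learn and placeholder post texts:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder texts keyed by blogId; the real corpus is the full archive.
posts = {
    "187": "presenting the proof detail pace precision presentation",
    "204": "learning theory standards theorem papers nips",
    "412": "machine learning research in public reductions",
}
ids = list(posts)
vectors = TfidfVectorizer().fit_transform(posts.values())

# Similarity of every post to post 187; it ranks itself first at ~1.0.
sims = cosine_similarity(vectors[0], vectors).ravel()
for sim, blog_id in sorted(zip(sims, ids), reverse=True):
    print(f"{sim:.8f} {blog_id}")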

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999982 187 hunch net-2006-06-25-Presentation of Proofs is Hard.


2 0.19260126 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

Introduction: Bob Williamson and I are the learning theory PC members at NIPS this year. This is some attempt to state the standards and tests I applied to the papers. I think it is a good idea to talk about this for two reasons: Making community standards a matter of public record seems healthy. It gives us a chance to debate what is and is not the right standard. It might even give us a bit more consistency across the years. It may save us all time. There are a number of papers submitted which just aren’t there yet. Avoiding submitting is the right decision in this case. There are several criteria for judging a paper. All of these were active this year. Some criteria are uncontroversial while others may not be. The paper must have a theorem establishing something new for which it is possible to derive high confidence in the correctness of the results. A surprising number of papers fail this test. This criterion seems essential to the definition of “theory”. Missing theo

3 0.13750234 412 hunch net-2010-09-28-Machined Learnings

Introduction: Paul Mineiro has started Machined Learnings where he’s seriously attempting to do ML research in public. I personally need to read through in greater detail, as much of it is learning reduction related, trying to deal with the sorts of complex source problems that come up in practice.

4 0.13556869 104 hunch net-2005-08-22-Do you believe in induction?

Introduction: Foster Provost gave a talk at the ICML metalearning workshop on “metalearning” and the “no free lunch theorem” which seems worth summarizing. As a review: the no free lunch theorem is the most complicated way we know of to say that a bias is required in order to learn. The simplest way to see this is in a nonprobabilistic setting. If you are given examples of the form (x,y) and you wish to predict y from x then any prediction mechanism errs half the time in expectation over all sequences of examples. The proof of this is very simple: on every example a predictor must make some prediction and by symmetry over the set of sequences it will be wrong half the time and right half the time. The basic idea of this proof has been applied to many other settings. The simplistic interpretation of this theorem which many people jump to is “machine learning is dead” since there can be no single learning algorithm which can solve all learning problems. This is the wrong way to thi

5 0.10819178 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [NIPS] Theorem A is uninteresting because Theorem B is uninteresting. [UAI] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on its merits, rather than the merits of another theorem. Learning by memorization requires an expon

6 0.090655915 307 hunch net-2008-07-04-More Presentation Preparation

7 0.089603886 202 hunch net-2006-08-10-Precision is not accuracy

8 0.082320847 249 hunch net-2007-06-21-Presentation Preparation

9 0.081253305 22 hunch net-2005-02-18-What it means to do research.

10 0.080633417 454 hunch net-2012-01-30-ICML Posters and Scope

11 0.079262927 38 hunch net-2005-03-09-Bad Reviewing

12 0.074661613 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

13 0.074504182 142 hunch net-2005-12-22-Yes, I am applying

14 0.073908471 395 hunch net-2010-04-26-Compassionate Reviewing

15 0.073490568 343 hunch net-2009-02-18-Decision by Vetocracy

16 0.073239677 27 hunch net-2005-02-23-Problem: Reinforcement Learning with Classification

17 0.072478853 75 hunch net-2005-05-28-Running A Machine Learning Summer School

18 0.072323635 327 hunch net-2008-11-16-Observations on Linearity for Reductions to Regression

19 0.071304679 321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning

20 0.070794635 80 hunch net-2005-06-10-Workshops are not Conferences


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.16), (1, -0.03), (2, 0.002), (3, 0.056), (4, -0.025), (5, 0.013), (6, 0.049), (7, 0.012), (8, -0.002), (9, 0.021), (10, -0.023), (11, -0.037), (12, 0.046), (13, 0.056), (14, 0.055), (15, -0.013), (16, 0.054), (17, 0.043), (18, -0.044), (19, -0.001), (20, -0.019), (21, -0.037), (22, -0.002), (23, -0.049), (24, -0.006), (25, -0.012), (26, -0.022), (27, -0.117), (28, -0.116), (29, -0.009), (30, -0.044), (31, -0.052), (32, -0.099), (33, 0.068), (34, -0.006), (35, -0.076), (36, 0.032), (37, 0.077), (38, -0.008), (39, -0.017), (40, 0.01), (41, 0.021), (42, -0.031), (43, -0.024), (44, -0.051), (45, 0.037), (46, 0.016), (47, 0.09), (48, 0.051), (49, -0.009)]
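The 50-entry vector above is a dense per-document topic embedding. A standard way to produce LSI weights of this kind is a truncated SVD of the tf-idf matrix; below is a minimal sketch, assuming scikit-learn. The component count (50 in the vector above, 3 here for the toy corpus) and the corpus itself are assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy documents standing in for the full blog archive.
corpus = [
    "proof detail presentation pace precision",
    "theorem papers standards reviewing",
    "reinforcement learning tutorial icml",
    "workshop conference machine learning",
]

tfidf = TfidfVectorizer().fit_transform(corpus)

# LSI: project each tf-idf vector onto the top singular directions.
lsi = TruncatedSVD(n_components=3, random_state=0)
doc_topics = lsi.fit_transform(tfidf)

# (topicId, topicWeight) pairs for the first document.
print(list(enumerate(doc_topics[0].round(3))))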

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9772768 187 hunch net-2006-06-25-Presentation of Proofs is Hard.


2 0.73827964 249 hunch net-2007-06-21-Presentation Preparation

Introduction: A big part of doing research is presenting it at a conference. Since many people start out shy of public presentations, this can be a substantial challenge. Here are a few notes which might be helpful when thinking about preparing a presentation on research. Motivate. Talks which don’t start by describing the problem to solve cause many people to zone out. Prioritize. It is typical that you have more things to say than time to say them, and many presenters fall into the failure mode of trying to say too much. This is an easy-to-understand failure mode as it’s very natural to want to include everything. A basic fact is: you can’t. Examples of this are: Your slides are so densely full of equations and words that you can’t cover them. Your talk runs over and a moderator prioritizes for you by cutting you off. You motor-mouth through the presentation, and the information absorption rate of the audience prioritizes in some uncontrolled fashion. The rate of flow of c

3 0.72415614 162 hunch net-2006-03-09-Use of Notation

Introduction: For most people, a mathematical notation is like a language: you learn it and stick with it. For people doing mathematical research, however, this is not enough: they must design new notations for new problems. The design of good notation is both hard and worthwhile since a bad initial notation can retard a line of research greatly. Before we had mathematical notation, equations were all written out in language. Since words have multiple meanings and variable precedences, long equations written out in language can be extraordinarily difficult and sometimes fundamentally ambiguous. A good representative example of this is the legalese in the tax code. Since we want greater precision and clarity, we adopt mathematical notation. One fundamental thing to understand about mathematical notation is that humans, as logic verifiers, are barely capable. This is the fundamental reason why one notation can be much better than another. This observation is easier to miss than you might

4 0.7080552 202 hunch net-2006-08-10-Precision is not accuracy

Introduction: In my experience, there are two different groups of people who believe the same thing: the mathematics encountered in typical machine learning conference papers is often of questionable value. The two groups who agree on this are applied machine learning people who have given up on math, and mature theoreticians who understand the limits of theory. Partly, this is just a statement about where we are with respect to machine learning. In particular, we have no mechanism capable of generating a prescription for how to solve all learning problems. In the absence of such certainty, people try to come up with formalisms that partially describe and motivate how and why they do things. This is natural and healthy—we might hope that it will eventually lead to just such a mechanism. But, part of this is simply an emphasis on complexity over clarity. A very natural and simple theoretical statement is often obscured by complexifications. Common sources of complexification include:

5 0.66677922 307 hunch net-2008-07-04-More Presentation Preparation

Introduction: We’ve discussed presentation preparation before, but I have one more thing to add: transitioning. For a research presentation, it is substantially helpful for the audience if transitions are clear. A common outline for a research presentation in machine learning is: The problem. Presentations which don’t describe the problem almost immediately lose people, because the context is missing to understand the detail. Prior relevant work. In many cases, a paper builds on some previous bit of work which must be understood in order to understand what the paper does. A common failure mode seems to be spending too much time on prior work. Discuss just the relevant aspects of prior work in the language of your work. Sometimes this is missing when unneeded. What we did. For theory papers in particular, it is often not possible to really cover the details. Prioritizing what you present can be very important. How it worked. Many papers in Machine Learning have some sor

6 0.6080184 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

7 0.60399967 91 hunch net-2005-07-10-Thinking the Unthought

8 0.60187006 22 hunch net-2005-02-18-What it means to do research.

9 0.5812853 76 hunch net-2005-05-29-Bad ideas

10 0.57498366 104 hunch net-2005-08-22-Do you believe in induction?

11 0.54031885 44 hunch net-2005-03-21-Research Styles in Machine Learning

12 0.52213174 454 hunch net-2012-01-30-ICML Posters and Scope

13 0.51934695 126 hunch net-2005-10-26-Fallback Analysis is a Secret to Useful Algorithms

14 0.51184428 57 hunch net-2005-04-16-Which Assumptions are Reasonable?

15 0.50962913 231 hunch net-2007-02-10-Best Practices for Collaboration

16 0.50827366 257 hunch net-2007-07-28-Asking questions

17 0.48985958 42 hunch net-2005-03-17-Going all the Way, Sometimes

18 0.48456082 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study

19 0.47084263 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

20 0.46741021 147 hunch net-2006-01-08-Debugging Your Brain


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.134), (38, 0.018), (53, 0.072), (55, 0.144), (56, 0.38), (94, 0.081), (95, 0.065)]
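Unlike the dense 50-topic LSI vector, the LDA vector above lists only a handful of topics, which matches LDA's tendency toward sparse per-document topic mixtures (presumably combined with a reporting threshold). A minimal sketch, assuming scikit-learn over raw term counts; the topic count and threshold here are illustrative guesses.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy documents standing in for the full blog archive.
corpus = [
    "proof detail presentation pace precision",
    "theorem papers standards reviewing",
    "reinforcement learning tutorial icml",
    "workshop conference machine learning",
]

# LDA operates on raw term counts rather than tf-idf weights.
counts = CountVectorizer().fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=4, random_state=0)
doc_topics = lda.fit_transform(counts)

# Keep only topics with non-negligible weight, like the list above.
print([(t, round(w, 3)) for t, w in enumerate(doc_topics[0]) if w > 0.05])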

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.91846651 187 hunch net-2006-06-25-Presentation of Proofs is Hard.


2 0.80420363 250 hunch net-2007-06-23-Machine Learning Jobs are Growing on Trees

Introduction: The consensus of several discussions at ICML is that the number of jobs for people knowing machine learning well substantially exceeds supply. This is my experience as well. Demand comes from many places, but I’ve seen particularly strong demand from trading companies and internet startups. Like all interest bursts, this one will probably pass because of economic recession or other distractions. Nevertheless, the general outlook for machine learning in business seems to be good. Machine learning is all about optimization when there is uncertainty and lots of data. The quantity of data available is growing quickly as computer-run processes and sensors become more common, and the quality of the data is dropping since there is little editorial control in its collection. Machine Learning is a difficult subject to master (*), so those who do should remain in demand over the long term. (*) In fact, it would be reasonable to claim that no one has mastered it—there are just some peo

3 0.79008037 307 hunch net-2008-07-04-More Presentation Preparation

Introduction: We’ve discussed presentation preparation before, but I have one more thing to add: transitioning. For a research presentation, it is substantially helpful for the audience if transitions are clear. A common outline for a research presentation in machine learning is: The problem. Presentations which don’t describe the problem almost immediately lose people, because the context is missing to understand the detail. Prior relevant work. In many cases, a paper builds on some previous bit of work which must be understood in order to understand what the paper does. A common failure mode seems to be spending too much time on prior work. Discuss just the relevant aspects of prior work in the language of your work. Sometimes this is missing when unneeded. What we did. For theory papers in particular, it is often not possible to really cover the details. Prioritizing what you present can be very important. How it worked. Many papers in Machine Learning have some sor

4 0.75979412 356 hunch net-2009-05-24-2009 ICML discussion site

Introduction: Mark Reid has set up a discussion site for ICML papers again this year and Monica Dinculescu has linked it in from the ICML site. Last year’s attempt appears to have been an acceptable but not wild success, as a little bit of fruitful discussion occurred. I’m hoping this year will be a bit more of a success—please don’t be shy. I’d like to also point out that ICML’s early registration deadline has a few hours left, while UAI’s and COLT’s are in a week.

5 0.72086519 460 hunch net-2012-03-24-David Waltz

Introduction: David Waltz has died. He lived a full life. I knew him personally as a founder of the Center for Computational Learning Systems and the New York Machine Learning Symposium, both of which have sheltered and promoted the advancement of machine learning. I expect much of the New York area machine learning community will miss him, as well as many others around the world.

6 0.71998388 202 hunch net-2006-08-10-Precision is not accuracy

7 0.54261583 249 hunch net-2007-06-21-Presentation Preparation

8 0.53271699 416 hunch net-2010-10-29-To Vidoelecture or not

9 0.53072 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

10 0.52669466 452 hunch net-2012-01-04-Why ICML? and the summer conferences

11 0.52587575 141 hunch net-2005-12-17-Workshops as Franchise Conferences

12 0.52227604 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

13 0.51884288 464 hunch net-2012-05-03-Microsoft Research, New York City

14 0.50445718 453 hunch net-2012-01-28-Why COLT?

15 0.50024474 40 hunch net-2005-03-13-Avoiding Bad Reviewing

16 0.49905956 437 hunch net-2011-07-10-ICML 2011 and the future

17 0.49854651 116 hunch net-2005-09-30-Research in conferences

18 0.49534288 423 hunch net-2011-02-02-User preferences for search engines

19 0.49072778 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

20 0.48743856 454 hunch net-2012-01-30-ICML Posters and Scope