hunch_net-2008-307 knowledge-graph by maker-knowledge-mining

307 hunch net-2008-07-04-More Presentation Preparation


meta information for this blog

Source: html

Introduction: We’ve discussed presentation preparation before, but I have one more thing to add: transitioning. For a research presentation, it is substantially helpful for the audience if transitions are clear. A common outline for a research presentation in machine learning is: The problem. Presentations which don’t describe the problem almost immediately lose people, because the context is missing to understand the detail. Prior relevant work. In many cases, a paper builds on some previous bit of work which must be understood in order to understand what the paper does. A common failure mode seems to be spending too much time on prior work. Discuss just the relevant aspects of prior work in the language of your work. Sometimes this is missing when unneeded. What we did. For theory papers in particular, it is often not possible to really cover the details. Prioritizing what you present can be very important. How it worked. Many papers in Machine Learning have some sort of experimental test of the algorithm.


Summary: the most important sentences, as scored by the tfidf model (a sketch of this style of scoring follows the list)

sentIndex sentText sentNum sentScore

1 We’ve discussed presentation preparation before, but I have one more thing to add: transitioning. [sent-1, score-0.694]

2 For a research presentation, it is substantially helpful for the audience if transitions are clear. [sent-2, score-0.401]

3 A common outline for a research presentation in machine learning is: The problem. [sent-3, score-0.943]

4 Presentations which don’t describe the problem almost immediately lose people, because the context is missing to understand the detail. [sent-4, score-0.685]

5 In many cases, a paper builds on some previous bit of work which must be understood in order to understand what the paper does. [sent-6, score-0.376]

6 A common failure mode seems to be spending too much time on prior work. [sent-7, score-0.498]

7 Discuss just the relevant aspects of prior work in the language of your work. [sent-8, score-0.464]

8 For theory papers in particular, it is often not possible to really cover the details. [sent-11, score-0.215]

9 Prioritizing what you present can be very important. [sent-12, score-0.071]

10 Many papers in Machine Learning have some sort of experimental test of the algorithm. [sent-14, score-0.196]

11 Sometimes this is missing when the work is theoretical. [sent-15, score-0.316]

12 What seems to often happen is that there is no transitioning in the presentation. [sent-16, score-0.359]

13 This can happen in one of two ways: Content Confusion. [sent-17, score-0.129]

14 Sometimes the problem description is merged into (2) and (3). [sent-18, score-0.293]

15 The solution is to rewrite to isolate the presentation components. [sent-21, score-0.537]

16 Sometimes the presentation does have a reasonable structure as above, but there are just no transitions in the delivery, creating apparent content confusion. [sent-23, score-0.894]

17 An approach I often use is to just have an outline slide with the next subject highlighted between pieces of the transition. [sent-25, score-0.63]

18 The delivery of the presentation can also handle this well. [sent-26, score-0.737]

19 For example, have an extra long pause after stating the problem and check to see if the audience has questions. [sent-27, score-0.502]
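The scores above come from a tfidf model over the post’s sentences. Below is a minimal sketch of how such sentence scoring might work; the function name, tokenization, and weighting scheme are assumptions for illustration and will not reproduce the exact scores listed.

```python
# Hedged sketch: rank sentences by summed tf-idf weight, treating each
# sentence as its own document. Names and weighting are illustrative only.
import math
import re
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Return one score per sentence: the sum of tf-idf weights of its terms."""
    tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n = len(tokenized)
    # Document frequency: in how many sentences does each term appear?
    df = Counter(term for tokens in tokenized for term in set(tokens))
    scores = []
    for tokens in tokenized:
        if not tokens:
            scores.append(0.0)
            continue
        tf = Counter(tokens)
        score = sum((count / len(tokens)) * math.log(n / df[term])
                    for term, count in tf.items())
        scores.append(score)
    return scores

if __name__ == "__main__":
    sentences = [
        "We've discussed presentation preparation before.",
        "For a research presentation, clear transitions help the audience.",
        "A common outline for a research presentation is: the problem.",
    ]
    for score, sent in sorted(zip(tfidf_sentence_scores(sentences), sentences),
                              reverse=True):
        print(f"{score:.3f}  {sent}")
```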


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('presentation', 0.412), ('transitioning', 0.282), ('delivery', 0.25), ('outline', 0.25), ('transitions', 0.232), ('missing', 0.209), ('sometimes', 0.196), ('prior', 0.17), ('audience', 0.169), ('content', 0.141), ('happen', 0.129), ('merged', 0.125), ('prioritizing', 0.125), ('highlighted', 0.125), ('rewrite', 0.125), ('leanring', 0.116), ('builds', 0.116), ('apparent', 0.109), ('relevant', 0.109), ('work', 0.107), ('spending', 0.104), ('slide', 0.104), ('stating', 0.097), ('problem', 0.091), ('confusion', 0.086), ('mode', 0.086), ('lose', 0.084), ('understand', 0.084), ('presentations', 0.081), ('aspects', 0.078), ('often', 0.077), ('description', 0.077), ('immediately', 0.077), ('describe', 0.077), ('handle', 0.075), ('pieces', 0.074), ('check', 0.074), ('common', 0.074), ('cover', 0.073), ('discuss', 0.072), ('present', 0.071), ('extra', 0.071), ('experimental', 0.069), ('understood', 0.069), ('add', 0.065), ('papers', 0.065), ('failure', 0.064), ('context', 0.063), ('sort', 0.062), ('worked', 0.061)]
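The weights above form the post’s tfidf vector; the simValue numbers in the lists below are plausibly cosine similarities between such vectors. A minimal sketch under that assumption follows; the posts dictionary and its excerpt strings are hypothetical placeholders, not the real archive.

```python
# Hedged sketch: find similar posts via cosine similarity of tfidf vectors.
# The corpus here is a tiny hypothetical stand-in for the blog archive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "307 More Presentation Preparation": "We've discussed presentation preparation before ...",
    "249 Presentation Preparation": "A big part of doing research is presenting it at a conference ...",
    "80 Workshops are not Conferences": "A workshop differs from a conference ...",
}
titles = list(posts)
X = TfidfVectorizer(stop_words="english").fit_transform(posts.values())
sims = cosine_similarity(X)  # (n_posts, n_posts) pairwise similarity matrix

query = 0  # index of the post we want neighbors for
ranking = sorted(((sims[query, j], titles[j])
                  for j in range(len(titles)) if j != query), reverse=True)
for value, title in ranking:
    print(f"{value:.3f}  {title}")
```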

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 307 hunch net-2008-07-04-More Presentation Preparation


2 0.16484819 80 hunch net-2005-06-10-Workshops are not Conferences

Introduction: … and you should use that fact. A workshop differs from a conference in that it is about a focused group of people worrying about a focused topic. It also differs in that a workshop is typically a “one-time affair” rather than a series. (The Snowbird learning workshop counts as a conference in this respect.) A common failure mode of both organizers and speakers at a workshop is to treat it as a conference. This is “ok”, but it is not really taking advantage of the situation. Here are some things I’ve learned: For speakers: A smaller audience means it can be more interactive. Interactive means a better chance to avoid losing your audience and a more interesting presentation (because you can adapt to your audience). Greater focus amongst the participants means you can get to the heart of the matter more easily, and discuss tradeoffs more carefully. Unlike conferences, relevance is more valued than newness. For organizers: Not everything needs to be in a conference st

3 0.16151333 249 hunch net-2007-06-21-Presentation Preparation

Introduction: A big part of doing research is presenting it at a conference. Since many people start out shy of public presentations, this can be a substantial challenge. Here are a few notes which might be helpful when thinking about preparing a presentation on research. Motivate. Talks which don’t start by describing the problem to solve cause many people to zone out. Prioritize. It is typical that you have more things to say than time to say them, and many presenters fall into the failure mode of trying to say too much. This is an easy-to-understand failure mode as it’s very natural to want to include everything. A basic fact is: you can’t. Examples of this are: Your slides are so densely full of equations and words that you can’t cover them. Your talk runs over and a moderator prioritizes for you by cutting you off. You motor-mouth through the presentation, and the information absorption rate of the audience prioritizes in some uncontrolled fashion. The rate of flow of c

4 0.13673426 86 hunch net-2005-06-28-The cross validation problem: cash reward

Introduction: I just presented the cross validation problem at COLT. The problem now has a cash prize (up to $500) associated with it—see the presentation for details. The write-up for colt.

5 0.10304974 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

Introduction: Bob Williamson and I are the learning theory PC members at NIPS this year. This is some attempt to state the standards and tests I applied to the papers. I think it is a good idea to talk about this for two reasons: Making community standards a matter of public record seems healthy. It gives us a chance to debate what is and is not the right standard. It might even give us a bit more consistency across the years. It may save us all time. There are a number of papers submitted which just aren’t there yet. Avoiding submitting is the right decision in this case. There are several criteria for judging a paper. All of these were active this year. Some criteria are uncontroversial while others may be so. The paper must have a theorem establishing something new for which it is possible to derive high confidence in the correctness of the results. A surprising number of papers fail this test. This criterion seems essential to the definition of “theory”. Missing theo

6 0.098702006 237 hunch net-2007-04-02-Contextual Scaling

7 0.09556289 54 hunch net-2005-04-08-Fast SVMs

8 0.094081625 454 hunch net-2012-01-30-ICML Posters and Scope

9 0.093855903 208 hunch net-2006-09-18-What is missing for online collaborative research?

10 0.093625315 452 hunch net-2012-01-04-Why ICML? and the summer conferences

11 0.093317859 165 hunch net-2006-03-23-The Approximation Argument

12 0.093287781 22 hunch net-2005-02-18-What it means to do research.

13 0.090655915 187 hunch net-2006-06-25-Presentation of Proofs is Hard.

14 0.086221188 30 hunch net-2005-02-25-Why Papers?

15 0.083295919 235 hunch net-2007-03-03-All Models of Learning have Flaws

16 0.083164513 91 hunch net-2005-07-10-Thinking the Unthought

17 0.081772149 53 hunch net-2005-04-06-Structured Regret Minimization

18 0.079618357 98 hunch net-2005-07-27-Not goal metrics

19 0.078275129 233 hunch net-2007-02-16-The Forgetting

20 0.077396989 132 hunch net-2005-11-26-The Design of an Optimal Research Environment


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.191), (1, -0.023), (2, -0.0), (3, 0.069), (4, -0.018), (5, 0.021), (6, 0.038), (7, 0.025), (8, 0.066), (9, 0.017), (10, -0.028), (11, -0.023), (12, 0.007), (13, 0.088), (14, 0.098), (15, -0.09), (16, 0.062), (17, 0.096), (18, -0.061), (19, -0.035), (20, -0.045), (21, -0.023), (22, 0.036), (23, -0.032), (24, 0.092), (25, 0.091), (26, -0.0), (27, -0.014), (28, -0.064), (29, 0.049), (30, -0.074), (31, 0.011), (32, -0.05), (33, 0.051), (34, 0.124), (35, -0.05), (36, 0.021), (37, 0.067), (38, -0.061), (39, 0.073), (40, 0.081), (41, 0.003), (42, -0.035), (43, 0.022), (44, -0.023), (45, 0.047), (46, -0.071), (47, 0.058), (48, 0.196), (49, -0.017)]
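The lsi topic weights above are plausibly the post’s projection onto latent semantic components obtained by a truncated SVD of a tfidf matrix. A minimal sketch under that assumption follows; the corpus list and component count are placeholders.

```python
# Hedged sketch: lsi-style topic weights via truncated SVD of a tfidf matrix.
# 'corpus' is a hypothetical stand-in; the page above lists 50 components,
# but a toy corpus only supports a couple.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = [
    "We've discussed presentation preparation before ...",
    "A big part of doing research is presenting it at a conference ...",
    "A workshop differs from a conference; use that fact ...",
]
X = TfidfVectorizer(stop_words="english").fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0)
topic_weights = lsi.fit_transform(X)  # shape: (n_posts, n_components)
print(topic_weights[0])               # lsi weights for the first post
```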

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96198678 307 hunch net-2008-07-04-More Presentation Preparation


2 0.79756182 249 hunch net-2007-06-21-Presentation Preparation

Introduction: A big part of doing research is presenting it at a conference. Since many people start out shy of public presentations, this can be a substantial challenge. Here are a few notes which might be helpful when thinking about preparing a presentation on research. Motivate. Talks which don’t start by describing the problem to solve cause many people to zone out. Prioritize. It is typical that you have more things to say than time to say them, and many presenters fall into the failure mode of trying to say too much. This is an easy-to-understand failure mode as it’s very natural to want to include everything. A basic fact is: you can’t. Examples of this are: Your slides are so densely full of equations and words that you can’t cover them. Your talk runs over and a moderator prioritizes for you by cutting you off. You motor-mouth through the presentation, and the information absorption rate of the audience prioritizes in some uncontrolled fashion. The rate of flow of c

3 0.63425505 91 hunch net-2005-07-10-Thinking the Unthought

Introduction: One thing common to much research is that the researcher must be the first person ever to have some thought. How do you think of something that has never been thought of? There seems to be no methodical manner of doing this, but there are some tricks. The easiest method is to just have some connection come to you. There is a trick here however: you should write it down and fill out the idea immediately because it can just as easily go away. A harder method is to set aside a block of time and simply think about an idea. Distraction elimination is essential here because thinking about the unthought is hard work which your mind will avoid. Another common method is in conversation. Sometimes the process of verbalizing implies new ideas come up and sometimes whoever you are talking to replies just the right way. This method is dangerous though—you must speak to someone who helps you think rather than someone who occupies your thoughts. Try to rephrase the problem so the a

4 0.63210791 187 hunch net-2006-06-25-Presentation of Proofs is Hard.

Introduction: When presenting part of the Reinforcement Learning theory tutorial at ICML 2006, I was forcibly reminded of this. There are several difficulties. When creating the presentation, the correct level of detail is tricky. With too much detail, the proof takes too much time and people may be lost to boredom. With too little detail, the steps of the proof involve too-great a jump. This is very difficult to judge. What may be an easy step in the careful thought of a quiet room is not so easy when you are occupied by the process of presentation. What may be easy after having gone over this (and other) proofs is not so easy to follow in the first pass by a viewer. These problems seem only correctable by process of repeated test-and-revise. When presenting the proof, simply speaking with sufficient precision is substantially harder than in normal conversation (where precision is not so critical). Practice can help here. When presenting the proof, going at the right p

5 0.61209166 54 hunch net-2005-04-08-Fast SVMs

Introduction: There was a presentation at snowbird about parallelized support vector machines. In many cases, people parallelize by ignoring serial operations, but that is not what happened here—they parallelize with optimizations. Consequently, this seems to be the fastest SVM in existence. There is a related paper here.

6 0.57241684 44 hunch net-2005-03-21-Research Styles in Machine Learning

7 0.56768119 162 hunch net-2006-03-09-Use of Notation

8 0.55860656 80 hunch net-2005-06-10-Workshops are not Conferences

9 0.55857676 22 hunch net-2005-02-18-What it means to do research.

10 0.53460914 88 hunch net-2005-07-01-The Role of Impromptu Talks

11 0.52737147 231 hunch net-2007-02-10-Best Practices for Collaboration

12 0.52685398 147 hunch net-2006-01-08-Debugging Your Brain

13 0.49869972 114 hunch net-2005-09-20-Workshop Proposal: Atomic Learning

14 0.48851484 202 hunch net-2006-08-10-Precision is not accuracy

15 0.47493383 98 hunch net-2005-07-27-Not goal metrics

16 0.46679446 233 hunch net-2007-02-16-The Forgetting

17 0.46368662 370 hunch net-2009-09-18-Necessary and Sufficient Research

18 0.46249479 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

19 0.45752841 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

20 0.45552453 386 hunch net-2010-01-13-Sam Roweis died


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.182), (38, 0.068), (53, 0.097), (56, 0.388), (94, 0.087), (95, 0.07)]
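The lda topic weights above are plausibly per-post topic proportions from a latent Dirichlet allocation model fit on word counts. A minimal sketch under that assumption follows; the corpus and topic count are placeholders.

```python
# Hedged sketch: lda-style topic proportions via LatentDirichletAllocation
# over raw word counts. 'corpus' is a hypothetical stand-in for the archive.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "We've discussed presentation preparation before ...",
    "A big part of doing research is presenting it at a conference ...",
    "A workshop differs from a conference; use that fact ...",
]
counts = CountVectorizer(stop_words="english").fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_weights = lda.fit_transform(counts)  # rows are topic proportions (sum to 1)
print(topic_weights[0])                    # dominant topics for the first post
```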

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.87289798 307 hunch net-2008-07-04-More Presentation Preparation


2 0.86079216 187 hunch net-2006-06-25-Presentation of Proofs is Hard.

Introduction: When presenting part of the Reinforcement Learning theory tutorial at ICML 2006, I was forcibly reminded of this. There are several difficulties. When creating the presentation, the correct level of detail is tricky. With too much detail, the proof takes too much time and people may be lost to boredom. With too little detail, the steps of the proof involve too-great a jump. This is very difficult to judge. What may be an easy step in the careful thought of a quiet room is not so easy when you are occupied by the process of presentation. What may be easy after having gone over this (and other) proofs is not so easy to follow in the first pass by a viewer. These problems seem only correctable by process of repeated test-and-revise. When presenting the proof, simply speaking with sufficient precision is substantially harder than in normal conversation (where precision is not so critical). Practice can help here. When presenting the proof, going at the right p

3 0.81345016 250 hunch net-2007-06-23-Machine Learning Jobs are Growing on Trees

Introduction: The consensus of several discussions at ICML is that the number of jobs for people knowing machine learning well substantially exceeds supply. This is my experience as well. Demand comes from many places, but I’ve seen particularly strong demand from trading companies and internet startups. Like all interest bursts, this one will probably pass because of economic recession or other distractions. Nevertheless, the general outlook for machine learning in business seems to be good. Machine learning is all about optimization when there is uncertainty and lots of data. The quantity of data available is growing quickly as computer-run processes and sensors become more common, and the quality of the data is dropping since there is little editorial control in its collection. Machine Learning is a difficult subject to master (*), so those who do should remain in demand over the long term. (*) In fact, it would be reasonable to claim that no one has mastered it—there are just some peo

4 0.72474849 202 hunch net-2006-08-10-Precision is not accuracy

Introduction: In my experience, there are two different groups of people who believe the same thing: the mathematics encountered in typical machine learning conference papers is often of questionable value. The two groups who agree on this are applied machine learning people who have given up on math, and mature theoreticians who understand the limits of theory. Partly, this is just a statement about where we are with respect to machine learning. In particular, we have no mechanism capable of generating a prescription for how to solve all learning problems. In the absence of such certainty, people try to come up with formalisms that partially describe and motivate how and why they do things. This is natural and healthy—we might hope that it will eventually lead to just such a mechanism. But, part of this is simply an emphasis on complexity over clarity. A very natural and simple theoretical statement is often obscured by complexifications. Common sources of complexification include:

5 0.69241041 460 hunch net-2012-03-24-David Waltz

Introduction: has died. He lived a full life. I know him personally as a founder of the Center for Computational Learning Systems and the New York Machine Learning Symposium, both of which have sheltered and promoted the advancement of machine learning. I expect much of the New York area machine learning community will miss him, as well as many others around the world.

6 0.55903262 356 hunch net-2009-05-24-2009 ICML discussion site

7 0.55739558 249 hunch net-2007-06-21-Presentation Preparation

8 0.50373846 380 hunch net-2009-11-29-AI Safety

9 0.5030269 12 hunch net-2005-02-03-Learning Theory, by assumption

10 0.50162631 141 hunch net-2005-12-17-Workshops as Franchise Conferences

11 0.50134534 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

12 0.50019121 416 hunch net-2010-10-29-To Vidoelecture or not

13 0.49723396 131 hunch net-2005-11-16-The Everything Ensemble Edge

14 0.49574292 14 hunch net-2005-02-07-The State of the Reduction

15 0.49518722 259 hunch net-2007-08-19-Choice of Metrics

16 0.49496362 370 hunch net-2009-09-18-Necessary and Sufficient Research

17 0.49441239 143 hunch net-2005-12-27-Automated Labeling

18 0.49355611 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

19 0.49225932 227 hunch net-2007-01-10-A Deep Belief Net Learning Problem

20 0.49210089 19 hunch net-2005-02-14-Clever Methods of Overfitting