hunch_net-2005-114 knowledge-graph by maker-knowledge-mining

114 hunch net-2005-09-20-Workshop Proposal: Atomic Learning


meta info for this blog

Source: html

Introduction: This is a proposal for a workshop. It may or may not happen depending on the level of interest. If you are interested, feel free to indicate so (by email or comments). Description: Assume(*) that any system for solving large difficult learning problems must decompose into repeated use of basic elements (i.e. atoms). There are many basic questions which remain: What are the viable basic elements? What makes a basic element viable? What are the viable principles for the composition of these basic elements? What are the viable principles for learning in such systems? What problems can this approach handle? Hal Daume adds: Can composition of atoms be (semi-) automatically constructed[?] When atoms are constructed through reductions, is there some notion of the “naturalness” of the created learning problems? Other than Markov fields/graphical models/Bayes nets, is there a good language for representing atoms and their compositions? The answer to these and related questions remains unclear to me. A workshop gives us a chance to pool what we have learned from some very different approaches to tackling this same basic goal. (*) As a general principle, it’s very difficult to conceive of any system for solving any large problem which does not decompose. Plan Sketch: A two day workshop with unhurried presentations and discussion seems appropriate, especially given the diversity of approaches. […] The above two points suggest having a workshop on a {Friday, Saturday} or {Saturday, Sunday} at TTI-Chicago.


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 It may or may not happen depending on the level of interest. [sent-2, score-0.209]

2 If you are interested, feel free to indicate so (by email or comments). [sent-3, score-0.167]

3 Description: Assume(*) that any system for solving large difficult learning problems must decompose into repeated use of basic elements (i.e. atoms). [sent-4, score-0.903]

4 There are many basic questions which remain: What are the viable basic elements? [sent-7, score-0.835]

5 What are the viable principles for the composition of these basic elements? [sent-9, score-0.962]

6 What are the viable principles for learning in such systems? [sent-10, score-0.559]

7 Hal Daume adds: Can composition of atoms be (semi-) automatically constructed[?] [sent-12, score-0.759]

8 When atoms are constructed through reductions, is there some notion of the “naturalness” of the created learning problems? [sent-13, score-0.722]

9 Other than Markov fields/graphical models/Bayes nets, is there a good language for representing atoms and their compositions? [sent-14, score-0.579]

10 The answer to these and related questions remains unclear to me. [sent-15, score-0.318]

11 A workshop gives us a chance to pool what we have learned from some very different approaches to tackling this same basic goal. [sent-16, score-0.577]

12 (*) As a general principle, it’s very difficult to conceive of any system for solving any large problem which does not decompose. [sent-17, score-0.238]

13 Plan Sketch: A two day workshop with unhurried presentations and discussion seems appropriate, especially given the diversity of approaches. [sent-18, score-0.383]

14 The above two points suggest having a workshop on a {Friday, Saturday} or {Saturday, Sunday} at TTI-Chicago. [sent-20, score-0.205]
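How such sentence scores can arise: fit TF-IDF over the post's sentences and rank each sentence by its total term weight. The sketch below is a hypothetical reconstruction in Python with scikit-learn, not the actual maker-knowledge-mining pipeline, and the sentence list is abbreviated.

from sklearn.feature_extraction.text import TfidfVectorizer

# Sentences of the post (abbreviated; the real input is every sentence).
sentences = [
    "It may or may not happen depending on the level of interest.",
    "If you are interested, feel free to indicate so (by email or comments).",
    "What are the viable principles for the composition of these basic elements?",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(sentences)  # shape: (n_sentences, n_terms)

# Score each sentence by its total TF-IDF mass, then print in rank order.
scores = matrix.sum(axis=1).A1  # .A1 flattens the numpy matrix to 1-D
for idx in scores.argsort()[::-1]:
    print(f"[sent-{idx}, score-{scores[idx]:.3f}] {sentences[idx]}")

Variants (mean instead of sum, length normalization) change the ranking; the scores shown above do not pin down which was used.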


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('atoms', 0.47), ('viable', 0.363), ('saturday', 0.235), ('composition', 0.218), ('elements', 0.202), ('principles', 0.196), ('constructed', 0.188), ('basic', 0.185), ('remain', 0.149), ('workshop', 0.137), ('naturalness', 0.117), ('sunday', 0.117), ('representing', 0.109), ('diversity', 0.109), ('decompose', 0.103), ('tackling', 0.103), ('questions', 0.102), ('indicate', 0.098), ('nets', 0.098), ('repeated', 0.098), ('sketch', 0.094), ('adds', 0.094), ('daume', 0.094), ('friday', 0.094), ('proposal', 0.091), ('solving', 0.087), ('pool', 0.085), ('principle', 0.083), ('element', 0.079), ('problems', 0.077), ('system', 0.076), ('presentations', 0.076), ('hal', 0.076), ('difficult', 0.075), ('assume', 0.072), ('description', 0.072), ('handle', 0.071), ('automatically', 0.071), ('plan', 0.071), ('may', 0.07), ('email', 0.069), ('depending', 0.069), ('markov', 0.069), ('suggest', 0.068), ('unclear', 0.067), ('gives', 0.067), ('appropriate', 0.065), ('created', 0.064), ('comments', 0.061), ('day', 0.061)]
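Both this word list and the similarity ranking below follow from one TF-IDF vector per post: the list reads off this post's top-weighted terms, and the ranking is cosine similarity against every other post (which is why the same-blog entry scores 1.0 under tfidf). A minimal sketch with placeholder post texts:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus keyed by blogId; the real corpus is every hunch.net post.
posts = {
    "114": "proposal for a workshop on atomic learning with viable atoms",
    "166": "hal daume started the nlpers blog for language problems",
    "277": "summary of the workshop on principles of learning problem design",
}
ids = list(posts)

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(posts.values())

# Top-weighted (wordName, wordTfidf) pairs for post 114 (row 0).
row = X[0].toarray().ravel()
terms = vec.get_feature_names_out()
print(sorted(zip(terms, row.round(3)), key=lambda tw: -tw[1])[:10])

# Rank all posts by cosine similarity to post 114; the post itself comes first.
sims = cosine_similarity(X[0], X).ravel()
for i in sims.argsort()[::-1]:
    print(ids[i], round(float(sims[i]), 3))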

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 114 hunch net-2005-09-20-Workshop Proposal: Atomic Learning


2 0.11438545 166 hunch net-2006-03-24-NLPers

Introduction: Hal Daume has started the NLPers blog to discuss learning for language problems.

3 0.092401378 277 hunch net-2007-12-12-Workshop Summary—Principles of Learning Problem Design

Introduction: This is a summary of the workshop on Learning Problem Design which Alina and I ran at NIPS this year. The first question many people have is “What is learning problem design?” This workshop is about admitting that solving learning problems does not start with labeled data, but rather somewhere before. When humans are hired to produce labels, this is usually not a serious problem because you can tell them precisely what semantics you want the labels to have, and we can fix some set of features in advance. However, when other methods are used this becomes more problematic. This focus is important for Machine Learning because there are very large quantities of data which are not labeled by a hired human. The title of the workshop was a bit ambitious, because a workshop is not long enough to synthesize a diversity of approaches into a coherent set of principles. For me, the posters at the end of the workshop were quite helpful in getting approaches to gel. Here are some an

4 0.086370572 234 hunch net-2007-02-22-Create Your Own ICML Workshop

Introduction: As usual ICML 2007 will be hosting a workshop program to be held this year on June 24th. The success of the program depends on having researchers like you propose interesting workshop topics and then organize the workshops. I’d like to encourage all of you to consider sending a workshop proposal. The proposal deadline has been extended to March 5. See the workshop web-site for details. Organizing a workshop is a unique way to gather an international group of researchers together to focus for an entire day on a topic of your choosing. I’ve always found that the cost of organizing a workshop is not so large, and very low compared to the benefits. The topic and format of a workshop are limited only by your imagination (and the attractiveness to potential participants) and need not follow the usual model of a mini-conference on a particular ML sub-area. Hope to see some interesting proposals rolling in.

5 0.086085066 46 hunch net-2005-03-24-The Role of Workshops

Introduction: A good workshop is often far more interesting than the papers at a conference. This happens because a workshop has a much tighter focus than a conference. Since you choose the workshops fitting your interest, the increased relevance can greatly enhance the level of your interest and attention. Roughly speaking, a workshop program consists of elements related to a subject of your interest. The main conference program consists of elements related to someone’s interest (which is rarely your own). Workshops are more about doing research while conferences are more about presenting research. Several conferences have associated workshop programs, some with deadlines due shortly: ICML workshops (due April 1), IJCAI workshops (deadlines vary), KDD workshops (not yet finalized). Anyone going to these conferences should examine the workshops and see if any are of interest. (If none are, then maybe you should organize one next year.)

6 0.083765492 330 hunch net-2008-12-07-A NIPS paper

7 0.078657828 79 hunch net-2005-06-08-Question: “When is the right time to insert the loss function?”

8 0.074380204 84 hunch net-2005-06-22-Languages of Learning

9 0.072221413 265 hunch net-2007-10-14-NIPS workshop: Learning Problem Design

10 0.06878072 108 hunch net-2005-09-06-A link

11 0.067419283 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

12 0.063213274 444 hunch net-2011-09-07-KDD and MUCMD 2011

13 0.063044138 14 hunch net-2005-02-07-The State of the Reduction

14 0.062183402 401 hunch net-2010-06-20-2010 ICML discussion site

15 0.061201952 424 hunch net-2011-02-17-What does Watson mean?

16 0.060634598 343 hunch net-2009-02-18-Decision by Vetocracy

17 0.05937615 454 hunch net-2012-01-30-ICML Posters and Scope

18 0.05914994 380 hunch net-2009-11-29-AI Safety

19 0.058919191 276 hunch net-2007-12-10-Learning Track of International Planning Competition

20 0.058473893 432 hunch net-2011-04-20-The End of the Beginning of Active Learning


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.135), (1, 0.001), (2, -0.067), (3, -0.006), (4, 0.004), (5, 0.043), (6, 0.041), (7, 0.001), (8, 0.009), (9, -0.011), (10, -0.045), (11, -0.095), (12, -0.046), (13, 0.071), (14, -0.018), (15, -0.044), (16, -0.025), (17, 0.013), (18, -0.069), (19, -0.004), (20, -0.076), (21, -0.032), (22, -0.029), (23, -0.012), (24, 0.042), (25, 0.055), (26, 0.013), (27, -0.061), (28, -0.006), (29, 0.073), (30, 0.006), (31, -0.006), (32, 0.026), (33, 0.012), (34, -0.004), (35, 0.025), (36, -0.052), (37, 0.013), (38, -0.037), (39, -0.019), (40, 0.02), (41, -0.033), (42, -0.058), (43, 0.016), (44, 0.075), (45, -0.026), (46, -0.04), (47, -0.005), (48, 0.058), (49, 0.002)]
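The 50 (topicId, topicWeight) pairs above are consistent with an LSI projection using num_topics=50. A minimal gensim sketch under that assumption; the toy three-document corpus stands in for the full blog archive and is far too small to support 50 real topics.

from gensim import corpora, models

texts = [
    "proposal for a workshop on atomic learning".split(),
    "summary of the workshop on learning problem design".split(),
    "the nlpers blog discusses learning for language problems".split(),
]
dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

# LSI is conventionally run over TF-IDF-weighted bags of words.
tfidf = models.TfidfModel(bow)
lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=50)

# Dense (topicId, topicWeight) pairs for this post, as in the list above.
print(lsi[tfidf[bow[0]]])

Similarity under LSI is then cosine similarity between these projected vectors, exactly as in the TF-IDF case.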

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96474934 114 hunch net-2005-09-20-Workshop Proposal: Atomic Learning


2 0.67781222 265 hunch net-2007-10-14-NIPS workshop: Learning Problem Design

Introduction: Alina and I are organizing a workshop on Learning Problem Design at NIPS. What is learning problem design? It’s about being clever in creating learning problems from otherwise unlabeled data. Read the webpage above for examples. I want to participate! Email us before Nov. 1 with a description of what you want to talk about.

3 0.63243723 277 hunch net-2007-12-12-Workshop Summary—Principles of Learning Problem Design

Introduction: This is a summary of the workshop on Learning Problem Design which Alina and I ran at NIPS this year. The first question many people have is “What is learning problem design?” This workshop is about admitting that solving learning problems does not start with labeled data, but rather somewhere before. When humans are hired to produce labels, this is usually not a serious problem because you can tell them precisely what semantics you want the labels to have, and we can fix some set of features in advance. However, when other methods are used this becomes more problematic. This focus is important for Machine Learning because there are very large quantities of data which are not labeled by a hired human. The title of the workshop was a bit ambitious, because a workshop is not long enough to synthesize a diversity of approaches into a coherent set of principles. For me, the posters at the end of the workshop were quite helpful in getting approaches to gel. Here are some an

4 0.56854057 234 hunch net-2007-02-22-Create Your Own ICML Workshop

Introduction: As usual ICML 2007 will be hosting a workshop program to be held this year on June 24th. The success of the program depends on having researchers like you propose interesting workshop topics and then organize the workshops. I’d like to encourage all of you to consider sending a workshop proposal. The proposal deadline has been extended to March 5. See the workshop web-site for details. Organizing a workshop is a unique way to gather an international group of researchers together to focus for an entire day on a topic of your choosing. I’ve always found that the cost of organizing a workshop is not so large, and very low compared to the benefits. The topic and format of a workshop are limited only by your imagination (and the attractiveness to potential participants) and need not follow the usual model of a mini-conference on a particular ML sub-area. Hope to see some interesting proposals rolling in.

5 0.55204958 459 hunch net-2012-03-13-The Submodularity workshop and Lucca Professorship

Introduction: Nina points out the Submodularity Workshop March 19-20 next week at Georgia Tech. Many people want to make Submodularity the new Convexity in machine learning, and it certainly seems worth exploring. Sara Olson also points out a tenured faculty position at IMT Lucca with a deadline of May 15th. Lucca happens to be the ancestral home of 1/4 of my heritage.

6 0.54915601 349 hunch net-2009-04-21-Interesting Presentations at Snowbird

7 0.53853935 68 hunch net-2005-05-10-Learning Reductions are Reductionist

8 0.52715778 307 hunch net-2008-07-04-More Presentation Preparation

9 0.50988477 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

10 0.50702572 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

11 0.49865207 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

12 0.49427518 82 hunch net-2005-06-17-Reopening RL->Classification

13 0.48734123 431 hunch net-2011-04-18-A paper not at Snowbird

14 0.46942452 103 hunch net-2005-08-18-SVM Adaptability

15 0.46874616 37 hunch net-2005-03-08-Fast Physics for Learning

16 0.46836358 198 hunch net-2006-07-25-Upcoming conference

17 0.46590963 358 hunch net-2009-06-01-Multitask Poisoning

18 0.46155715 370 hunch net-2009-09-18-Necessary and Sufficient Research

19 0.46116266 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

20 0.46079671 192 hunch net-2006-07-08-Some recent papers


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(22, 0.461), (27, 0.217), (38, 0.029), (53, 0.056), (55, 0.03), (94, 0.015), (95, 0.078)]
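The LDA list above is the same idea with an inferred topic mixture, which is sparse (seven topics here) where the LSI vector was dense. A hypothetical gensim sketch; num_topics=100 is a guess consistent with topic ids running up to 95.

from gensim import corpora, models

texts = [
    "proposal for a workshop on atomic learning".split(),
    "summary of the workshop on learning problem design".split(),
    "the nlpers blog discusses learning for language problems".split(),
]
dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(bow, id2word=dictionary, num_topics=100,
                      passes=5, random_state=0)

# Sparse (topicId, topicWeight) mixture, like [(22, 0.461), (27, 0.217), ...].
print(lda.get_document_topics(bow[0], minimum_probability=0.01))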

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.88291919 114 hunch net-2005-09-20-Workshop Proposal: Atomic Learning


2 0.73710978 113 hunch net-2005-09-19-NIPS Workshops

Introduction: Attendance at the NIPS workshops is highly recommended for both research and learning. Unfortunately, there does not yet appear to be a public list of workshops. However, I found the following workshop webpages of interest: Machine Learning in Finance, Learning to Rank, Foundations of Active Learning, and Machine Learning Based Robotics in Unstructured Environments. There are many more workshops. In fact, there are so many that it is not plausible anyone can attend every workshop they are interested in. Maybe in future years the organizers can spread them out over more days to reduce overlap. Many of these workshops are accepting presentation proposals (due mid-October).

3 0.71586674 410 hunch net-2010-09-17-New York Area Machine Learning Events

Introduction: On Sept 21, there is another machine learning meetup where I’ll be speaking. Although the topic is contextual bandits, I think of it as “the future of machine learning”. In particular, it’s all about how to learn in an interactive environment, such as for ad display, trading, news recommendation, etc… On Sept 24, abstracts for the New York Machine Learning Symposium are due. This is the largest Machine Learning event in the area, so it’s a great way to have a conversation with other people. On Oct 22, the NY ML Symposium actually happens. This year, we are expanding the spotlights, and trying to have more time for posters. In addition, we have a strong set of invited speakers: David Blei, Sanjoy Dasgupta, Tommi Jaakkola, and Yann LeCun. After the meeting, a late hackNY related event is planned where students and startups can meet. I’d also like to point out the related CS/Econ symposium as I have interests there as well.

4 0.62253636 358 hunch net-2009-06-01-Multitask Poisoning

Introduction: There are many ways that interesting research gets done. For example it’s common at a conference for someone to discuss a problem with a partial solution, and for someone else to know how to solve a piece of it, resulting in a paper. In some sense, these are the easiest results we can achieve, so we should ask: Can all research be this easy? The answer is certainly no for fields where research inherently requires experimentation to discover how the real world works. However, mathematics, including parts of physics, computer science, statistics, etc… which are effectively mathematics don’t require experimentation. In effect, a paper can be simply a pure expression of thinking. Can all mathematical-style research be this easy? What’s going on here is research-by-communication. Someone knows something, someone knows something else, and as soon as someone knows both things, a problem is solved. The interesting thing about research-by-communication is that it is becoming radic

5 0.55471939 79 hunch net-2005-06-08-Question: “When is the right time to insert the loss function?”

Introduction: Hal asks a very good question: “When is the right time to insert the loss function?” In particular, should it be used at testing time or at training time? When the world imposes a loss on us, the standard Bayesian recipe is to predict the (conditional) probability of each possibility and then choose the possibility which minimizes the expected loss. In contrast, as the confusion over “loss = money lost” or “loss = the thing you optimize” might indicate, many people ignore the Bayesian approach and simply optimize their loss (or a close proxy for their loss) over the representation on the training set. The best answer I can give is “it’s unclear, but I prefer optimizing the loss at training time”. My experience is that optimizing the loss in the most direct manner possible typically yields best performance. This question is related to a basic principle which both Yann LeCun (applied) and Vladimir Vapnik (theoretical) advocate: “solve the simplest prediction problem that s

6 0.45144323 483 hunch net-2013-06-10-The Large Scale Learning class notes

7 0.44996712 12 hunch net-2005-02-03-Learning Theory, by assumption

8 0.44984347 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

9 0.44469115 360 hunch net-2009-06-15-In Active Learning, the question changes

10 0.44129002 432 hunch net-2011-04-20-The End of the Beginning of Active Learning

11 0.44014895 370 hunch net-2009-09-18-Necessary and Sufficient Research

12 0.43957117 194 hunch net-2006-07-11-New Models

13 0.43956298 220 hunch net-2006-11-27-Continuizing Solutions

14 0.43881726 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”

15 0.43786997 9 hunch net-2005-02-01-Watchword: Loss

16 0.4371025 352 hunch net-2009-05-06-Machine Learning to AI

17 0.4370791 196 hunch net-2006-07-13-Regression vs. Classification as a Primitive

18 0.43676978 14 hunch net-2005-02-07-The State of the Reduction

19 0.43652076 8 hunch net-2005-02-01-NIPS: Online Bayes

20 0.43614033 41 hunch net-2005-03-15-The State of Tight Bounds