hunch_net hunch_net-2009 hunch_net-2009-345 knowledge-graph by maker-knowledge-mining

345 hunch net-2009-03-08-Prediction Science


meta info for this blog

Source: html

Introduction: One view of machine learning is that it’s about how to program computers to predict well. This suggests a broader research program centered around the more pervasive goal of simply predicting well. There are many distinct strands of this broader research program which are only partially unified. Here are the ones that I know of: Learning Theory. Learning theory focuses on several topics related to the dynamics and process of prediction. Convergence bounds like the VC bound give an intellectual foundation to many learning algorithms. Online learning algorithms like Weighted Majority provide an alternate purely game theoretic foundation for learning. Boosting algorithms yield algorithms for purifying prediction ability. Reduction algorithms provide means for changing esoteric problems into well known ones. Machine Learning. A great deal of experience has accumulated in practical algorithm design from a mixture of paradigms, including Bayesian, biological, opt…
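As a concrete illustration of the online learning strand mentioned above, here is a minimal sketch of the deterministic Weighted Majority algorithm. The post gives no code, so the function name, the binary 0/1 prediction setting, and the penalty factor beta = 0.5 are all illustrative assumptions.

```python
# A minimal sketch of the deterministic Weighted Majority algorithm.
# Assumptions (not from the post): binary 0/1 predictions, a fixed pool
# of experts, and a halving penalty beta = 0.5.

def weighted_majority(expert_predictions, outcomes, beta=0.5):
    """expert_predictions[t][i]: expert i's 0/1 prediction in round t;
    outcomes[t]: the true 0/1 label. Returns the master's predictions."""
    n_experts = len(expert_predictions[0])
    weights = [1.0] * n_experts
    master = []
    for preds, y in zip(expert_predictions, outcomes):
        # Predict with the weighted vote of the experts.
        vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_zero = sum(w for w, p in zip(weights, preds) if p == 0)
        master.append(1 if vote_one >= vote_zero else 0)
        # Multiplicatively shrink the weight of every expert that erred.
        weights = [w * (beta if p != y else 1.0)
                   for w, p in zip(weights, preds)]
    return master
```

With beta = 1/2, the master makes at most about 2.41(m + log2 n) mistakes, where m is the mistake count of the best of the n experts; bounds of this form, holding against arbitrary sequences, are the game theoretic flavor of guarantee the introduction alludes to.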


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Online learning algorithms like Weighted Majority provide an alternate purely game theoretic foundation for learning. [sent-7, score-0.401]

2 The core focus in game theory is on equilibria, most typically Nash equilibria, but also many other kinds of equilibria. [sent-13, score-0.397]

3 When this is employed well, principally in mechanism design for auctions, it can be a very powerful concept. [sent-15, score-0.436]

4 The basic idea in a prediction market is that commodities can be designed so that their buy/sell price reflects a form of wealth-weighted consensus estimate of the probability of some event. [sent-17, score-0.382] (see the pricing sketch after this list)

5 This is not simply mechanism design, because (a) the thin market problem must be dealt with and (b) the structure of plausible guarantees is limited. [sent-18, score-0.326]

6 I have yet to find an example of robust search which isn’t useful—and there are several varieties. [sent-24, score-0.559]

7 This includes active learning, robust min finding, and (more generally) compressed sensing and error correcting codes. [sent-25, score-0.39]

8 The concept of mechanism design is mostly missing from learning theory, but it is sure to be essential when interactive agents are learning. [sent-28, score-0.723]

9 We’ve found several applications for robust search as well as new settings for robust search such as active learning, and error correcting tournaments, but there are surely others. [sent-29, score-1.346]

10 There is a strong relationship between incentive compatibility and choice of loss functions, both for choosing proxy losses and approximating the real loss function imposed by the world. [sent-32, score-0.341]

11 It’s easy to imagine designer loss functions from the study of incentive compatibility mechanisms giving learning algorithms an edge. [sent-33, score-0.54]

12 Since machine learning and information markets share a design goal, are there hybrid approaches which can outperform either? [sent-35, score-0.893]

13 For example, there are papers about learning on permutations and pricing in combinatorial markets. [sent-38, score-0.449]

14 In general, the idea of using mechanism design with context information (as is done in machine learning) could also be extremely powerful. [sent-40, score-0.771]

15 Prediction markets are partly an empirical field and partly a mechanism design field. [sent-42, score-0.894]

16 There seems to be relatively little understanding about how well and how exactly information from multiple agents is supposed to interact to derive a good probability estimate. [sent-43, score-0.347]

17 Can an information market be designed with the guarantee that an imperfect but best player decides the vote after not-too-many rounds? [sent-46, score-0.376]

18 Investigations into robust search are extremely diverse, essentially only unified in a mathematically based analysis. [sent-49, score-0.626]

19 For people interested in robust search, machine learning and information markets provide a fertile ground for empirical application and new settings. [sent-50, score-1.117]

20 Can all mechanisms for robust search be done with context information, as is common in learning? [sent-51, score-0.688]
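Sentence 4 above describes market prices that act as consensus probability estimates, and sentence 5 raises the thin market problem. One standard construction with both properties is Hanson's logarithmic market scoring rule (LMSR); the post does not name a specific mechanism, so using LMSR as the example is my assumption, as is the liquidity parameter b.

```python
import math

# A sketch of Hanson's logarithmic market scoring rule (LMSR); the
# liquidity parameter b is an illustrative choice.

def lmsr_cost(q, b=100.0):
    """Cost function C(q) = b * log(sum_i exp(q_i / b)), where q_i is
    the number of shares sold so far on outcome i."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices; they sum to 1 and read as the market's
    consensus probability estimate for each outcome."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def trade_cost(q, outcome, shares, b=100.0):
    """What a trader pays to buy `shares` of `outcome` in state q."""
    q_after = list(q)
    q_after[outcome] += shares
    return lmsr_cost(q_after, b) - lmsr_cost(q, b)
```

Because the automated market maker always quotes a price, this construction also addresses the thin market problem of sentence 5, at the cost of a bounded subsidy (at most b * ln(k) for k outcomes).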


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('markets', 0.32), ('robust', 0.306), ('search', 0.253), ('design', 0.236), ('mechanism', 0.2), ('equilibria', 0.165), ('agents', 0.148), ('fertile', 0.133), ('market', 0.126), ('information', 0.122), ('prediction', 0.11), ('foundation', 0.11), ('compatibility', 0.11), ('branches', 0.099), ('focuses', 0.099), ('statistics', 0.095), ('wealth', 0.095), ('broader', 0.092), ('theory', 0.089), ('incentive', 0.089), ('machine', 0.086), ('correcting', 0.084), ('provide', 0.08), ('probability', 0.077), ('majority', 0.075), ('game', 0.074), ('surely', 0.074), ('study', 0.072), ('loss', 0.071), ('learning', 0.07), ('topics', 0.07), ('mechanisms', 0.069), ('designed', 0.069), ('weighted', 0.069), ('partly', 0.069), ('mostly', 0.069), ('program', 0.069), ('algorithms', 0.067), ('extremely', 0.067), ('predictive', 0.067), ('research', 0.065), ('suggests', 0.062), ('context', 0.06), ('varieties', 0.059), ('imperfect', 0.059), ('territory', 0.059), ('hybrid', 0.059), ('pricing', 0.059), ('investigations', 0.059), ('designer', 0.059)]
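The word weights above, and the sentence scores in the summary, come from a tf-idf style model. Here is a minimal sketch of how such weights are typically computed; the exact normalization used by the pipeline that generated this page is not stated, so raw term frequency and log(N/df) are assumptions.

```python
import math
from collections import Counter

# A minimal sketch of tf-idf weighting and tf-idf sentence scoring.

def tfidf_weights(docs):
    """docs: list of token lists. Returns one {word: weight} dict per doc,
    using raw term frequency times inverse document frequency."""
    n = len(docs)
    df = Counter(word for doc in docs for word in set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({w: (c / len(doc)) * math.log(n / df[w])
                    for w, c in tf.items()})
    return out

def sentence_score(sentence_tokens, weights):
    """Score a sentence as the total tf-idf weight of its words, as in
    the extractive summary above."""
    return sum(weights.get(w, 0.0) for w in sentence_tokens)
```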

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 345 hunch net-2009-03-08-Prediction Science

Introduction: (same as above)

2 0.28912106 214 hunch net-2006-10-13-David Pennock starts Oddhead

Introduction: his blog on information markets and other research topics.

3 0.17137507 120 hunch net-2005-10-10-Predictive Search is Coming

Introduction: “Search” is the other branch of AI research which has been successful. Concrete examples include Deep Blue, which beat the world chess champion, and Chinook, the champion checkers program. A set of core search techniques exist, including A*, alpha-beta pruning, and others that can be applied to any of many different search problems. Given this, it may be surprising to learn that there has been relatively little successful work on combining prediction and search. Given also that humans typically solve search problems using a number of predictive heuristics to narrow in on a solution, we might be surprised again. However, the big successful search-based systems have typically not used “smart” search algorithms. Instead they have optimized for very fast search. This is not for lack of trying… many people have tried to synthesize search and prediction to various degrees of success. For example, Knightcap achieves good-but-not-stellar chess playing performance, and TD-gammon…
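Since the introduction above name-drops A*, here is a minimal sketch of it. The generic neighbors/heuristic interface and all names are illustrative assumptions, not anything from the post.

```python
import heapq
import itertools

# A minimal sketch of A*, one of the core search techniques named above.

def a_star(start, goal, neighbors, heuristic):
    """neighbors(s) yields (next_state, step_cost); heuristic(s) must not
    overestimate the remaining cost, or the result may be suboptimal."""
    counter = itertools.count()  # tie-breaker so the heap never compares states
    frontier = [(heuristic(start), next(counter), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, cost
        for nxt, step in neighbors(state):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nxt),
                                          next(counter), new_cost, nxt,
                                          path + [nxt]))
    return None, float("inf")
```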

4 0.15072063 332 hunch net-2008-12-23-Use of Learning Theory

Introduction: I’ve had serious conversations with several people who believe that the theory in machine learning is “only useful for getting papers published”. That’s a compelling statement, as I’ve seen many papers where the algorithm clearly came first, and the theoretical justification for it came second, purely as a perceived means to improve the chance of publication. Naturally, I disagree and believe that learning theory has much more substantial applications. Even in core learning algorithm design, I’ve found learning theory to be useful, although its application is more subtle than many realize. The most straightforward applications can fail, because (as expectation suggests) worst case bounds tend to be loose in practice (*). In my experience, considering learning theory when designing an algorithm has two important effects in practice: It can help make your algorithm behave right at a crude level of analysis, leaving finer details to tuning or common sense. The best example…
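For one concrete instance of the worst-case bounds the post calls loose: for a finite hypothesis class H and n i.i.d. samples, with probability at least 1 - delta every h in H has true error at most its training error plus sqrt(ln(|H|/delta) / (2n)). The class size and sample counts in the quick computation below are made up for illustration.

```python
import math

# Uniform-convergence gap for a finite hypothesis class: with probability
# at least 1 - delta, true_error <= train_error + occam_gap(...).

def occam_gap(class_size, n_samples, delta=0.05):
    return math.sqrt(math.log(class_size / delta) / (2 * n_samples))

# Even a modest class needs many samples before the guarantee is tight:
print(occam_gap(class_size=1e6, n_samples=1_000))    # about 0.092
print(occam_gap(class_size=1e6, n_samples=100_000))  # about 0.0092
```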

5 0.14629894 454 hunch net-2012-01-30-ICML Posters and Scope

Introduction: Normally, I don’t indulge in posters for ICML, but this year is naturally an exception for me. If you want one, there are a small number left here, if you sign up before February. It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. At ICML, the (very large) program committee does the reviewing which informs final decisions by area chairs on most papers. Program chairs set up the process, deal with exceptions or disagreements, and provide advice for the reviewing process. Providing advice is tricky (and easily misleading) because a conference is a community, and in the end the aggregate interests of the community determine the conference. Nevertheless, as a program chair this year it seems worthwhile to state the overall philosophy I have and what I plan to encourage (and occasionally discourage). At the highest level, I believe ICML exists to further research into machine learning, which I gene…

6 0.13841213 397 hunch net-2010-05-02-What’s the difference between gambling and rewarding good prediction?

7 0.13648254 48 hunch net-2005-03-29-Academic Mechanism Design

8 0.13264336 375 hunch net-2009-10-26-NIPS workshops

9 0.13103493 235 hunch net-2007-03-03-All Models of Learning have Flaws

10 0.13039866 360 hunch net-2009-06-15-In Active Learning, the question changes

11 0.12764443 423 hunch net-2011-02-02-User preferences for search engines

12 0.12363747 420 hunch net-2010-12-26-NIPS 2010

13 0.12353871 351 hunch net-2009-05-02-Wielding a New Abstraction

14 0.11905227 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

15 0.11597265 156 hunch net-2006-02-11-Yahoo’s Learning Problems.

16 0.11257093 112 hunch net-2005-09-14-The Predictionist Viewpoint

17 0.11182039 9 hunch net-2005-02-01-Watchword: Loss

18 0.11171436 237 hunch net-2007-04-02-Contextual Scaling

19 0.11046513 264 hunch net-2007-09-30-NIPS workshops are out.

20 0.10702597 370 hunch net-2009-09-18-Necessary and Sufficient Research


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.299), (1, 0.074), (2, -0.06), (3, 0.012), (4, -0.062), (5, 0.002), (6, 0.012), (7, -0.015), (8, 0.039), (9, 0.022), (10, 0.067), (11, 0.051), (12, -0.014), (13, 0.052), (14, -0.077), (15, 0.067), (16, 0.033), (17, 0.001), (18, 0.054), (19, -0.055), (20, 0.125), (21, -0.075), (22, 0.038), (23, 0.08), (24, -0.059), (25, 0.019), (26, 0.143), (27, 0.042), (28, 0.054), (29, -0.048), (30, -0.082), (31, 0.062), (32, 0.034), (33, -0.144), (34, -0.083), (35, -0.069), (36, -0.147), (37, -0.07), (38, 0.073), (39, 0.022), (40, -0.002), (41, 0.064), (42, -0.071), (43, -0.051), (44, -0.021), (45, -0.026), (46, -0.054), (47, 0.023), (48, 0.051), (49, 0.107)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93952394 345 hunch net-2009-03-08-Prediction Science

Introduction: (same as above)

2 0.67901498 120 hunch net-2005-10-10-Predictive Search is Coming

Introduction: (same as entry 3 in the tfidf list above)

3 0.63415349 423 hunch net-2011-02-02-User preferences for search engines

Introduction: I want to comment on the “Bing copies Google” discussion here, here, and here, because there are data-related issues which the general public may not understand, and some of the framing seems substantially misleading to me. As a not-distant-outsider, let me mention the sources of bias I may have. I work at Yahoo!, which has started using Bing. This might predispose me towards Bing, but on the other hand I’m still at Yahoo!, and have been using Linux exclusively as an OS for many years, including even a couple minor kernel patches. And, on the gripping hand, I’ve spent quite a bit of time thinking about the basic principles of incorporating user feedback in machine learning. Also note, this post is not related to official Yahoo! policy, it’s just my personal view. The issue: Google engineers inserted synthetic responses to synthetic queries on google.com, then executed the synthetic searches on google.com using Internet Explorer with the Bing toolbar and later…

4 0.61634386 411 hunch net-2010-09-21-Regretting the dead

Introduction: Nikos pointed out this New York Times article about poor clinical design killing people. For those of us who study learning from exploration information this is a reminder that low regret algorithms are particularly important, as regret in clinical trials is measured by patient deaths. Two obvious improvements on the experimental design are: With reasonable record keeping of existing outcomes for the standard treatments, there is no need to explicitly assign people to a control group with the standard treatment, as that approach is effectively explored with great certainty. Asserting otherwise would imply that the nature of effective treatments for cancer has changed between now and a year ago, which denies the value of any clinical trial. An optimal experimental design will smoothly phase between exploration and exploitation as evidence for a new treatment shows that it can be effective. This is old tech, for example in the EXP3.P algorithm (page 12 aka 59), although…
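The post cites EXP3.P; below is a sketch of its simpler cousin EXP3 (a deliberate simplification on my part, since EXP3.P adds confidence terms to get high-probability regret guarantees). The exploration rate gamma and all names are illustrative assumptions.

```python
import math
import random

# A sketch of EXP3 for adversarial multi-armed bandits.

def exp3(n_arms, n_rounds, pull, gamma=0.1):
    """pull(arm) returns a reward in [0, 1], e.g. a treatment outcome."""
    weights = [1.0] * n_arms
    for _ in range(n_rounds):
        total = sum(weights)
        # Mix the exponential-weights distribution with uniform exploration.
        probs = [(1 - gamma) * w / total + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        reward = pull(arm)
        # Importance-weighted reward estimate keeps the update unbiased.
        estimate = reward / probs[arm]
        weights[arm] *= math.exp(gamma * estimate / n_arms)
    return weights
```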

5 0.61036265 179 hunch net-2006-05-16-The value of the orthodox view of Boosting

Introduction: The term “boosting” comes from the idea of using a meta-algorithm which takes “weak” learners (that may be able to only barely predict slightly better than random) and turns them into strongly capable learners (which predict very well). Adaboost in 1995 was the first widely used (and useful) boosting algorithm, although there were theoretical boosting algorithms floating around since 1990 (see the bottom of this page). Since then, many different interpretations of why boosting works have arisen. There is significant discussion about these different views in the Annals of Statistics, including a response by Yoav Freund and Robert Schapire. I believe there is a great deal of value to be found in the original view of boosting (meta-algorithm for creating a strong learner from a weak learner). This is not a claim that one particular viewpoint obviates the value of all others, but rather that no other viewpoint seems to really capture important properties. Comparing wit…
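A minimal sketch of the Adaboost weight-update loop described above. The {-1, +1} label convention and the finite pool of candidate stumps are illustrative assumptions.

```python
import math

def adaboost(X, y, stumps, n_rounds):
    """X: examples; y: labels in {-1, +1}; stumps: callables x -> {-1, +1}.
    Returns a list of (alpha, stump) pairs forming the strong learner."""
    n = len(X)
    d = [1.0 / n] * n  # distribution over training examples
    ensemble = []
    for _ in range(n_rounds):
        # Pick the weak learner with the lowest weighted error.
        h, err = min(((h, sum(w for w, x, t in zip(d, X, y) if h(x) != t))
                      for h in stumps), key=lambda pair: pair[1])
        if err >= 0.5:  # no hypothesis beats random guessing: stop
            break
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, h))
        # Upweight the examples this stump got wrong, then renormalize.
        d = [w * math.exp(-alpha * t * h(x)) for w, x, t in zip(d, X, y)]
        z = sum(d)
        d = [w / z for w in d]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```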

6 0.59945756 112 hunch net-2005-09-14-The Predictionist Viewpoint

7 0.59361541 312 hunch net-2008-08-04-Electoralmarkets.com

8 0.59305519 397 hunch net-2010-05-02-What’s the difference between gambling and rewarding good prediction?

9 0.58328682 420 hunch net-2010-12-26-NIPS 2010

10 0.5818069 205 hunch net-2006-09-07-Objective and subjective interpretations of probability

11 0.56019354 237 hunch net-2007-04-02-Contextual Scaling

12 0.55562979 193 hunch net-2006-07-09-The Stock Prediction Machine Learning Problem

13 0.54879206 106 hunch net-2005-09-04-Science in the Government

14 0.52786613 293 hunch net-2008-03-23-Interactive Machine Learning

15 0.52464551 241 hunch net-2007-04-28-The Coming Patent Apocalypse

16 0.52375954 348 hunch net-2009-04-02-Asymmophobia

17 0.52272242 235 hunch net-2007-03-03-All Models of Learning have Flaws

18 0.5170204 351 hunch net-2009-05-02-Wielding a New Abstraction

19 0.51578093 1 hunch net-2005-01-19-Why I decided to run a weblog.

20 0.51290393 200 hunch net-2006-08-03-AOL’s data drop


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.021), (9, 0.024), (27, 0.219), (37, 0.01), (38, 0.058), (51, 0.025), (53, 0.042), (55, 0.076), (64, 0.016), (68, 0.225), (77, 0.019), (83, 0.03), (94, 0.118), (95, 0.043)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.96173674 48 hunch net-2005-03-29-Academic Mechanism Design

Introduction: From game theory, there is a notion of “mechanism design”: setting up the structure of the world so that participants have some incentive to do sane things (rather than obviously counterproductive things). Application of this principle to academic research may be fruitful. What is misdesigned about academic research? The JMLG guides give many hints. The common nature of bad reviewing also suggests the system isn’t working optimally. There are many ways to experimentally “cheat” in machine learning. Funding Prisoner’s Dilemma. Good researchers often write grant proposals for funding rather than doing research. Since the pool of grant money is finite, this means that grant proposals are often rejected, implying that more must be written. This is essentially a “prisoner’s dilemma”: anyone not writing grant proposals loses, but the entire process of doing research is slowed by distraction. If everyone wrote 1/2 as many grant proposals, roughly the same distribution…
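The grant-writing dilemma above, rendered as a payoff matrix. The payoffs (research time left over) are made-up numbers chosen only to exhibit the dilemma structure.

```python
# Illustrative payoff matrix for the grant-writing prisoner's dilemma.
PAYOFFS = {
    # (row action, col action): (row payoff, col payoff)
    ("few", "few"):   (3, 3),  # everyone writes less; funding odds unchanged
    ("few", "many"):  (0, 4),  # the prolific writer captures the funding
    ("many", "few"):  (4, 0),
    ("many", "many"): (1, 1),  # mutual defection: less research for all
}

def best_response(opponent_action):
    """'many' strictly dominates, so (many, many) is the unique equilibrium
    even though (few, few) would leave both players better off."""
    return max(("few", "many"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

assert best_response("few") == "many" and best_response("many") == "many"
```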

2 0.93808401 110 hunch net-2005-09-10-“Failure” is an option

Introduction: This is about the hard choices that graduate students must make. The cultural definition of success in academic research is to: Produce good research which many other people appreciate. Produce many students who go on to do the same. There are fundamental reasons why this is success in the local culture. Good research appreciated by others means access to jobs. Many students successful in the same way implies that there are a number of people who think in a similar way and appreciate your work. In order to graduate, a PhD student must live in an academic culture for a period of several years. It is common to adopt the culture’s definition of success during this time. It’s also common for many PhD students to discover they are not suited to an academic research lifestyle. This collision of values and abilities naturally results in depression. The most fundamental advice when this happens is: change something. Pick a new advisor. Pick a new research topic. Or leave th…

3 0.93057102 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

Introduction: I just created version 5.1 of vowpal wabbit. This is almost entirely a bugfix release, so it’s an easy upgrade from v5.0. In addition: There is now a mailing list, which I and several other developers are subscribed to. The main website has shifted to the wiki on github. This means that anyone with a github account can now edit it. I’m planning to give a tutorial tomorrow on it at eHarmony / the LA machine learning meetup at 10am. Drop by if you’re interested. The status of VW amongst other open source projects has changed. When VW first came out, it was relatively unique amongst existing projects in terms of features. At this point, many other projects have started to appreciate the value of the design choices here. This includes: Mahout, which now has an SGD implementation. Shogun, where Soeren is keen on incorporating features. LibLinear, where they won the KDD best paper award for out-of-core learning. This is expected—any open sourc…
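The post mentions SGD and out-of-core learning together; as a sketch of why that pairing is natural, here is a single-pass SGD logistic-regression loop that never stores the data. This is a generic illustration, not VW's actual implementation; the learning rate, sparse-feature format, and all names are assumptions.

```python
import math

def sgd_logistic(stream, n_features, lr=0.1):
    """Out-of-core style learning: stream yields ({feature_index: value},
    label) pairs with label in {0, 1}; each example is seen exactly once."""
    w = [0.0] * n_features
    for features, label in stream:
        margin = sum(w[i] * v for i, v in features.items())
        margin = max(min(margin, 30.0), -30.0)  # avoid overflow in exp
        pred = 1.0 / (1.0 + math.exp(-margin))
        # Gradient of the log loss for this single example.
        grad = pred - label
        for i, v in features.items():
            w[i] -= lr * grad * v
    return w
```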

4 0.92941356 156 hunch net-2006-02-11-Yahoo’s Learning Problems.

Introduction: I just visited Yahoo Research, which has several fundamental learning problems near to (or beyond) the set of problems we know how to solve well. Here are 3 of them. Ranking: This is the canonical problem of all search engines. It is made extra difficult for several reasons. There is relatively little “good” supervised learning data and a great deal of data with some signal (such as click through rates). The learning must occur in a partially adversarial environment. Many people very actively attempt to place themselves at the top of rankings. It is not even quite clear whether the problem should be posed as ‘ranking’ or as ‘regression’ which is then used to produce a ranking. Collaborative filtering: Yahoo has a large number of recommendation systems for music, movies, etc… In these sorts of systems, users specify how they liked a set of things, and then the system can (hopefully) find some more examples of things they might like by reasoning across multiple…
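The collaborative filtering problem above ("users specify how they liked a set of things...") is commonly attacked with low-rank matrix factorization. A minimal SGD sketch follows; the rank k, learning rate, regularization strength, and all names are illustrative assumptions, not anything from the post.

```python
import random

def factorize(ratings, n_users, n_items, k=10, lr=0.01, reg=0.05, epochs=20):
    """ratings: list of (user, item, value) triples. Learns user and item
    factors so that dot(p[u], q[i]) approximates the observed rating."""
    p = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        random.shuffle(ratings)
        for u, i, r in ratings:
            err = r - sum(pu * qi for pu, qi in zip(p[u], q[i]))
            # SGD step on the squared error with L2 regularization.
            for f in range(k):
                pu, qi = p[u][f], q[i][f]
                p[u][f] += lr * (err * qi - reg * pu)
                q[i][f] += lr * (err * pu - reg * qi)
    return p, q  # predict a missing rating as dot(p[u], q[i])
```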

same-blog 5 0.87783134 345 hunch net-2009-03-08-Prediction Science

Introduction: (same as above)

6 0.76206553 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

7 0.75909972 65 hunch net-2005-05-02-Reviewing techniques for conferences

8 0.75643229 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

9 0.75559127 235 hunch net-2007-03-03-All Models of Learning have Flaws

10 0.753474 98 hunch net-2005-07-27-Not goal metrics

11 0.75086802 95 hunch net-2005-07-14-What Learning Theory might do

12 0.7504546 225 hunch net-2007-01-02-Retrospective

13 0.74954748 351 hunch net-2009-05-02-Wielding a New Abstraction

14 0.74852073 358 hunch net-2009-06-01-Multitask Poisoning

15 0.74752587 360 hunch net-2009-06-15-In Active Learning, the question changes

16 0.74660254 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

17 0.74477226 449 hunch net-2011-11-26-Giving Thanks

18 0.74407643 237 hunch net-2007-04-02-Contextual Scaling

19 0.74341446 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

20 0.74319696 371 hunch net-2009-09-21-Netflix finishes (and starts)