hunch_net hunch_net-2006 hunch_net-2006-190 knowledge-graph by maker-knowledge-mining

190 hunch net-2006-07-06-Branch Prediction Competition


meta info for this blog

Source: html

Introduction: Alan Fern points out the second branch prediction challenge (due September 29), which is a follow-up to the first branch prediction competition. Branch prediction is one of the fundamental learning problems of the computer age: without it our computers might run an order of magnitude slower. This is a tough problem since there are sharp constraints on time and space complexity in an online environment. For machine learning, the “idealistic track” may fit well. Essentially, they remove these constraints to gain a weak upper bound on what might be done.
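
The online constraints mentioned above are what make the learning side interesting: a hardware predictor must answer within roughly a cycle using kilobytes of state. As a concrete illustration (not the competition's required interface), here is a minimal sketch of a perceptron-style branch predictor in the spirit of Jimenez and Lin's work; the table size, history length, and training threshold are illustrative choices, not values from the competition.

# Minimal sketch of a perceptron branch predictor. All sizes below are
# illustrative choices, not competition parameters.

HISTORY_LEN = 16   # bits of global branch history used as features
TABLE_SIZE = 1024  # number of perceptrons, indexed by branch address
THRESHOLD = 32     # keep training until |output| exceeds this margin

# weights[i][j]: weight of history bit j for table entry i (bias at j = 0)
weights = [[0] * (HISTORY_LEN + 1) for _ in range(TABLE_SIZE)]
history = [1] * HISTORY_LEN  # global history as +1 (taken) / -1 (not taken)

def predict(pc):
    """Return (taken?, raw output) for the branch at address pc."""
    w = weights[pc % TABLE_SIZE]
    output = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return output >= 0, output

def update(pc, taken):
    """Observe the true outcome and train online (perceptron rule)."""
    predicted, output = predict(pc)
    t = 1 if taken else -1
    if predicted != taken or abs(output) <= THRESHOLD:
        w = weights[pc % TABLE_SIZE]
        w[0] += t
        for j, hj in enumerate(history):
            w[j + 1] += t * hj
    history.pop(0)
    history.append(t)  # shift the observed outcome into the global history

A real hardware implementation does this with small saturating integer adders, which is exactly why the realistic track's time and space budgets bind and why the idealistic track, which lifts them, only upper bounds what is achievable.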


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Alan Fern points out the second branch prediction challenge (due September 29), which is a follow-up to the first branch prediction competition. [sent-1, score-2.172]

2 Branch prediction is one of the fundamental learning problems of the computer age: without it our computers might run an order of magnitude slower. [sent-2, score-1.139]

3 This is a tough problem since there are sharp constraints on time and space complexity in an online environment. [sent-3, score-1.015]

4 For machine learning, the “idealistic track” may fit well. [sent-4, score-0.2]

5 Essentially, they remove these constraints to gain a weak upper bound on what might be done. [sent-5, score-1.005]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('branch', 0.577), ('constraints', 0.248), ('age', 0.22), ('prediction', 0.204), ('fern', 0.204), ('september', 0.192), ('alan', 0.192), ('tough', 0.192), ('sharp', 0.176), ('remove', 0.156), ('gain', 0.152), ('magnitude', 0.145), ('computers', 0.139), ('follow', 0.137), ('upper', 0.134), ('competition', 0.13), ('track', 0.126), ('fit', 0.124), ('weak', 0.119), ('challenge', 0.115), ('bound', 0.1), ('computer', 0.097), ('space', 0.097), ('might', 0.096), ('fundamental', 0.094), ('run', 0.092), ('points', 0.089), ('order', 0.087), ('without', 0.086), ('second', 0.086), ('complexity', 0.085), ('essentially', 0.083), ('done', 0.075), ('due', 0.073), ('online', 0.07), ('since', 0.063), ('first', 0.053), ('problems', 0.048), ('time', 0.044), ('may', 0.044), ('problem', 0.04), ('machine', 0.032), ('learning', 0.026), ('one', 0.025)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 190 hunch net-2006-07-06-Branch Prediction Competition


2 0.1397101 129 hunch net-2005-11-07-Prediction Competitions

Introduction: There are two prediction competitions currently in the air. The Performance Prediction Challenge by Isabelle Guyon. Good entries minimize a weighted 0/1 loss + the difference between a prediction of this loss and the observed truth on 5 datasets. Isabelle tells me all of the problems are “real world” and the test datasets are large enough (17K minimum) that the winner should be well determined by ability rather than luck. This is due March 1. The Predictive Uncertainty Challenge by Gavin Cawley. Good entries minimize log loss on real valued output variables for one synthetic and 3 “real” datasets related to atmospheric prediction. The use of log loss (which can be infinite and hence is never convergent) and smaller test sets of size 1K to 7K examples make the winner of this contest more luck dependent. Nevertheless, the contest may be of some interest particularly to the branch of learning (typically Bayes learning) which prefers to optimize log loss. May the
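
To make the log loss point concrete, here is a tiny illustration (assuming binary labels and a predicted probability p of the positive class): one overconfident mistake contributes an almost unbounded amount, which is why a small test set makes a log-loss contest luck dependent.

import math

def log_loss(p, y):
    """Negative log likelihood of the true binary label y under predicted probability p."""
    return -math.log(p if y == 1 else 1.0 - p)

print(log_loss(0.9, 1))    # a good prediction costs about 0.105
print(log_loss(1e-12, 1))  # one overconfident mistake costs about 27.6
# As p -> 0 with y = 1 the loss grows without bound (math.log(0) itself
# raises a domain error), matching the "can be infinite" point above.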

3 0.13470274 443 hunch net-2011-09-03-Fall Machine Learning Events

Introduction: Many Machine Learning related events are coming up this fall. September 9, abstracts for the New York Machine Learning Symposium are due. Send a 2 page pdf, if interested, and note that we widened submissions to be from anybody rather than students and set aside a larger fraction of time for contributed submissions. September 15, there is a machine learning meetup, where I’ll be discussing terascale learning at AOL. September 16, there is a CS&Econ day at New York Academy of Sciences. This is not ML focused, but it’s easy to imagine interest. September 23 and later NIPS workshop submissions start coming due. As usual, there are too many good ones, so I won’t be able to attend all those that interest me. I do hope some workshop makers consider ICML this coming summer, as we are increasing to a 2 day format for you. Here are a few that interest me: Big Learning is about dealing with lots of data. Abstracts are due September 30. The Bayes

4 0.10014088 120 hunch net-2005-10-10-Predictive Search is Coming

Introduction: “Search” is the other branch of AI research which has been successful. Concrete examples include Deep Blue, which beat the world chess champion, and Chinook, the champion checkers program. A set of core search techniques exist, including A*, alpha-beta pruning, and others, that can be applied to any of many different search problems. Given this, it may be surprising to learn that there has been relatively little successful work on combining prediction and search. Given also that humans typically solve search problems using a number of predictive heuristics to narrow in on a solution, we might be surprised again. However, the big successful search-based systems have typically not used “smart” search algorithms. Instead, they have optimized for very fast search. This is not for lack of trying… many people have tried to synthesize search and prediction to various degrees of success. For example, Knightcap achieves good-but-not-stellar chess playing performance, and TD-gammon
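
For a sense of where prediction could plug into search, here is a minimal sketch of negamax alpha-beta pruning with a pluggable evaluation function; the evaluate argument is exactly where a learned position predictor (as in Knightcap or TD-gammon) would slot in. The Game interface here is hypothetical, not taken from any particular system.

def alphabeta(game, state, depth, alpha, beta, evaluate):
    """Negamax alpha-beta: value of `state` for the side to move."""
    if depth == 0 or game.is_terminal(state):
        # The "prediction" half: a learned evaluation is consulted at the leaves.
        return evaluate(state)
    value = float("-inf")
    for move in game.legal_moves(state):
        child = game.apply(state, move)
        value = max(value, -alphabeta(game, child, depth - 1, -beta, -alpha, evaluate))
        alpha = max(alpha, value)
        if alpha >= beta:
            break  # prune: the opponent will never allow this line
    return value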

5 0.098822914 170 hunch net-2006-04-06-Bounds greater than 1

Introduction: Nati Srebro and Shai Ben-David have a paper at COLT which, in the appendix, proves something very striking: several previous error bounds are always greater than 1. Background One branch of learning theory focuses on theorems which (1) assume samples are drawn IID from an unknown distribution D, (2) fix a set of classifiers, and (3) find a high probability bound on the maximum true error rate (with respect to D) as a function of the empirical error rate on the training set. Many of these bounds become extremely complex and hairy. Current Everyone working on this subject wants “tighter bounds”, however there are different definitions of “tighter”. Some groups focus on “functional tightness” (getting the right functional dependency between the size of the training set and a parameterization of the hypothesis space) while others focus on “practical tightness” (finding bounds which work well on practical problems). (I am definitely in the second camp.) One of the da
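
As a concrete instance of the theorem schema just described (a standard Occam/Hoeffding bound for a finite classifier set, not one of the specific bounds in the paper): with probability at least 1 - δ over an IID sample S of size m from D, every classifier h in the set H satisfies

\[
  \mathrm{err}_D(h) \;\le\; \widehat{\mathrm{err}}_S(h)
    + \sqrt{\frac{\ln |H| + \ln (1/\delta)}{2m}} .
\]

Note that the right-hand side can itself exceed 1 when m is small relative to \ln |H|, which is the kind of vacuousness the paper makes precise for several published bounds.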

6 0.095279895 12 hunch net-2005-02-03-Learning Theory, by assumption

7 0.09066847 267 hunch net-2007-10-17-Online as the new adjective

8 0.087605335 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition

9 0.080537446 418 hunch net-2010-12-02-Traffic Prediction Problem

10 0.073512979 352 hunch net-2009-05-06-Machine Learning to AI

11 0.069085598 355 hunch net-2009-05-19-CI Fellows

12 0.068238102 276 hunch net-2007-12-10-Learning Track of International Planning Competition

13 0.066935293 33 hunch net-2005-02-28-Regularization

14 0.062728599 466 hunch net-2012-06-05-ICML acceptance statistics

15 0.05964331 288 hunch net-2008-02-10-Complexity Illness

16 0.059186239 389 hunch net-2010-02-26-Yahoo! ML events

17 0.056585602 109 hunch net-2005-09-08-Online Learning as the Mathematics of Accountability

18 0.056414362 208 hunch net-2006-09-18-What is missing for online collaborative research?

19 0.056066092 112 hunch net-2005-09-14-The Predictionist Viewpoint

20 0.055861402 420 hunch net-2010-12-26-NIPS 2010


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.125), (1, 0.039), (2, -0.026), (3, -0.034), (4, -0.007), (5, -0.011), (6, -0.032), (7, 0.015), (8, -0.02), (9, -0.04), (10, 0.012), (11, 0.096), (12, 0.013), (13, -0.071), (14, -0.033), (15, -0.053), (16, 0.043), (17, -0.078), (18, 0.029), (19, 0.022), (20, -0.009), (21, -0.033), (22, -0.151), (23, 0.035), (24, 0.033), (25, 0.04), (26, 0.156), (27, -0.011), (28, -0.056), (29, 0.01), (30, 0.145), (31, 0.079), (32, 0.071), (33, -0.016), (34, -0.086), (35, -0.09), (36, 0.003), (37, 0.074), (38, -0.019), (39, 0.042), (40, -0.024), (41, -0.019), (42, 0.006), (43, -0.041), (44, 0.035), (45, -0.039), (46, -0.006), (47, 0.065), (48, 0.06), (49, 0.033)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95854795 190 hunch net-2006-07-06-Branch Prediction Competition


2 0.59286201 418 hunch net-2010-12-02-Traffic Prediction Problem

Introduction: Slashdot points out the Traffic Prediction Challenge which looks pretty fun. The temporal aspect seems to be very common in many real-world problems and somewhat understudied.

3 0.51796836 94 hunch net-2005-07-13-Text Entailment at AAAI

Introduction: Rajat Raina presented a paper on the technique they used for the PASCAL Recognizing Textual Entailment challenge. “Text entailment” is the problem of deciding if one sentence implies another. For example, the previous sentence entails: “Text entailment is a decision problem.” “One sentence can imply another.” The challenge was of the form: given an original sentence and another sentence, predict whether there was an entailment. All current techniques for predicting correctness of an entailment are at the “flail” stage—accuracies of around 58% where humans could achieve near 100% accuracy, so there is much room to improve. Apparently, there may be another PASCAL challenge on this problem in the near future.

4 0.46902019 446 hunch net-2011-10-03-Monday announcements

Introduction: Various people want to use hunch.net to announce things. I’ve generally resisted this because I feared hunch becoming a pure announcement zone, while I am much more interested in contentful posts and discussion personally. Nevertheless there is clearly some value and announcements are easy, so I’m planning to summarize announcements on Mondays. D. Sculley points out an interesting Semisupervised feature learning competition, with a deadline of October 17. Lihong Li points out the webscope user interaction dataset, which is the first high quality exploration dataset I’m aware of that is publicly available. Seth Rogers points out CrossValidated, which looks similar in conception to metaoptimize, but directly using the stackoverflow interface and with a bit more of a statistics twist.

5 0.46603647 459 hunch net-2012-03-13-The Submodularity workshop and Lucca Professorship

Introduction: Nina points out the Submodularity Workshop March 19-20 next week at Georgia Tech. Many people want to make Submodularity the new Convexity in machine learning, and it certainly seems worth exploring. Sara Olson also points out a tenured faculty position at IMT Lucca with a deadline of May 15th. Lucca happens to be the ancestral home of 1/4 of my heritage.

6 0.45309103 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition

7 0.45200869 56 hunch net-2005-04-14-Families of Learning Theory Statements

8 0.4435623 129 hunch net-2005-11-07-Prediction Competitions

9 0.43016171 120 hunch net-2005-10-10-Predictive Search is Coming

10 0.42806429 33 hunch net-2005-02-28-Regularization

11 0.42366642 155 hunch net-2006-02-07-Pittsburgh Mind Reading Competition

12 0.41531983 63 hunch net-2005-04-27-DARPA project: LAGR

13 0.41428298 251 hunch net-2007-06-24-Interesting Papers at ICML 2007

14 0.40415743 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling

15 0.40303764 134 hunch net-2005-12-01-The Webscience Future

16 0.39126119 399 hunch net-2010-05-20-Google Predict

17 0.3905195 276 hunch net-2007-12-10-Learning Track of International Planning Competition

18 0.38109365 427 hunch net-2011-03-20-KDD Cup 2011

19 0.3804239 112 hunch net-2005-09-14-The Predictionist Viewpoint

20 0.37934792 34 hunch net-2005-03-02-Prior, “Prior” and Bias


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(20, 0.337), (27, 0.319), (55, 0.034), (94, 0.159)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.90394211 190 hunch net-2006-07-06-Branch Prediction Competition


2 0.87008876 7 hunch net-2005-01-31-Watchword: Assumption

Introduction: “Assumption” is another word to be careful with in machine learning because it is used in several ways. Assumption = Bias There are several ways to see that some form of ‘bias’ (= preferring of one solution over another) is necessary. This is obvious in an adversarial setting. A good bit of work has been expended explaining this in other settings with “no free lunch” theorems. This is a usage specialized to learning which is particularly common when talking about priors for Bayesian Learning. Assumption = “if” of a theorem The assumptions are the ‘if’ part of the ‘if-then’ in a theorem. This is a fairly common usage. Assumption = Axiom The assumptions are the things that we assume are true, but which we cannot verify. Examples are “the IID assumption” or “my problem is a DNF on a small number of bits”. This is the usage which I prefer. One difficulty with any use of the word “assumption” is that you often encounter “if assumption then conclusion so if no

3 0.83562326 208 hunch net-2006-09-18-What is missing for online collaborative research?

Introduction: The internet has recently made the research process much smoother: papers are easy to obtain, citations are easy to follow, and unpublished “tutorials” are often available. Yet, new research fields can look very complicated to outsiders or newcomers. Every paper is like a small piece of an unfinished jigsaw puzzle: to understand just one publication, a researcher without experience in the field will typically have to follow several layers of citations, and many of the papers he encounters have a great deal of repeated information. Furthermore, from one publication to the next, notation and terminology may not be consistent which can further confuse the reader. But the internet is now proving to be an extremely useful medium for collaboration and knowledge aggregation. Online forums allow users to ask and answer questions and to share ideas. The recent phenomenon of Wikipedia provides a proof-of-concept for the “anyone can edit” system. Can such models be used to facilitate research a

4 0.76311147 351 hunch net-2009-05-02-Wielding a New Abstraction

Introduction: This post is partly meant as an advertisement for the reductions tutorial Alina, Bianca, and I are planning to do at ICML. Please come, if you are interested. Many research programs can be thought of as finding and building new useful abstractions. The running example I’ll use is learning reductions where I have experience. The basic abstraction here is that we can build a learning algorithm capable of solving classification problems up to a small expected regret. This is used repeatedly to solve more complex problems. In working on a new abstraction, I think you typically run into many substantial problems of understanding, which make publishing particularly difficult. It is difficult to seriously discuss the reason behind or mechanism for abstraction in a conference paper with small page limits. People rarely see such discussions and hence have little basis on which to think about new abstractions. Another difficulty is that when building an abstraction, yo

5 0.70442766 464 hunch net-2012-05-03-Microsoft Research, New York City

Introduction: Yahoo! laid off people. Unlike every previous time there have been layoffs, this is serious for Yahoo! Research. We had advanced warning from Prabhakar through the simple act of leaving. Yahoo! Research was a world class organization that Prabhakar recruited much of personally, so it is deeply implausible that he would spontaneously decide to leave. My first thought when I saw the news was “Uh-oh, Rob said that he knew it was serious when the head of AT&T Research left.” In this case it was even more significant, because Prabhakar recruited me on the premise that Y!R was an experiment in how research should be done: via a combination of high quality people and high engagement with the company. Prabhakar’s departure is a clear end to that experiment. The result is ambiguous from a business perspective. Y!R clearly was not capable of saving the company from its illnesses. I’m not privy to the internal accounting of impact and this is the kind of subject where there c

6 0.70242471 252 hunch net-2007-07-01-Watchword: Online Learning

7 0.69288707 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

8 0.69192994 258 hunch net-2007-08-12-Exponentiated Gradient

9 0.68997741 352 hunch net-2009-05-06-Machine Learning to AI

10 0.68809581 43 hunch net-2005-03-18-Binomial Weighting

11 0.68588859 45 hunch net-2005-03-22-Active learning

12 0.684533 345 hunch net-2009-03-08-Prediction Science

13 0.68310702 337 hunch net-2009-01-21-Nearly all natural problems require nonlinearity

14 0.68092018 293 hunch net-2008-03-23-Interactive Machine Learning

15 0.67909336 229 hunch net-2007-01-26-Parallel Machine Learning Problems

16 0.67897332 311 hunch net-2008-07-26-Compositional Machine Learning Algorithm Design

17 0.6776436 304 hunch net-2008-06-27-Reviewing Horror Stories

18 0.67750895 196 hunch net-2006-07-13-Regression vs. Classification as a Primitive

19 0.67625314 400 hunch net-2010-06-13-The Good News on Exploration and Learning

20 0.67606461 41 hunch net-2005-03-15-The State of Tight Bounds