hunch_net hunch_net-2006 hunch_net-2006-155 knowledge-graph by maker-knowledge-mining

155 hunch net-2006-02-07-Pittsburgh Mind Reading Competition


meta info for this blog

Source: html

Introduction: Francisco Pereira points out a fun Prediction Competition. Francisco says: DARPA is sponsoring a competition to analyze data from an unusual functional Magnetic Resonance Imaging experiment. Subjects watch videos inside the scanner while fMRI data are acquired. Unbeknownst to these subjects, the videos have been seen by a panel of other subjects who labeled each instant with labels in categories such as representation (are there tools, body parts, motion, sound), location, presence of actors, emotional content, etc. The challenge is to predict all of these different labels on an instant-by-instant basis from the fMRI data. A few reasons why this is particularly interesting: This is beyond the current state of the art, but not inconceivably hard. This is a new type of experiment design that current analysis methods cannot deal with. This is an opportunity to work with a heavily examined and preprocessed neuroimaging dataset. DARPA is offering prizes!
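The instant-by-instant labeling task described above can be framed as a set of independent binary classification problems, one per label category. The sketch below illustrates that framing with a nearest-centroid classifier; the feature vectors, label names, and data are all made up for illustration and are not from the actual competition.

```python
# Hypothetical sketch: one independent binary classifier per label stream,
# applied to each time instant. Feature vectors stand in for (preprocessed)
# fMRI volumes; all names and values here are illustrative.

def fit_centroids(X, y):
    """Nearest-centroid binary classifier: mean feature vector per class."""
    pos = [x for x, t in zip(X, y) if t]
    neg = [x for x, t in zip(X, y) if not t]
    mean = lambda rows: [sum(c) / len(rows) for c in zip(*rows)]
    return mean(pos), mean(neg)

def predict(x, centroids):
    """Predict 1 if x is closer to the positive centroid."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    pos_c, neg_c = centroids
    return dist(x, pos_c) < dist(x, neg_c)

# Toy "instants": 3-voxel activation patterns with two label streams.
X = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.1, 0.9, 0.8], [0.0, 0.8, 0.9]]
labels = {"tools": [1, 1, 0, 0], "faces": [0, 0, 1, 1]}

models = {name: fit_centroids(X, y) for name, y in labels.items()}
test_instant = [0.85, 0.15, 0.05]
predictions = {name: predict(test_instant, m) for name, m in models.items()}
print(predictions)  # each label predicted independently for this instant
```

Treating the labels independently ignores correlations between them (e.g. tools and motion may co-occur), which is one reason the task is beyond the state of the art rather than routine.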


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Francisco Pereira points out a fun Prediction Competition . [sent-1, score-0.163]

2 Francisco says: DARPA is sponsoring a competition to analyze data from an unusual functional Magnetic Resonance Imaging experiment. [sent-2, score-0.774]

3 Subjects watch videos inside the scanner while fMRI data are acquired. [sent-3, score-0.764]

4 Unbeknownst to these subjects, the videos have been seen by a panel of other subjects that labeled each instant with labels in categories such as representation (are there tools, body parts, motion, sound), location, presence of actors, emotional content, etc. [sent-4, score-1.566]

5 The challenge is to predict all of these different labels on an instant-by-instant basis from the fMRI data. [sent-5, score-0.427]

6 A few reasons why this is particularly interesting: This is beyond the current state of the art, but not inconceivably hard. [sent-6, score-0.379]

7 This is a new type of experiment design current analysis methods cannot deal with. [sent-7, score-0.588]

8 This is an opportunity to work with a heavily examined and preprocessed neuroimaging dataset. [sent-8, score-0.209]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('fmri', 0.317), ('subjects', 0.308), ('francisco', 0.294), ('videos', 0.264), ('darpa', 0.224), ('competition', 0.187), ('labels', 0.176), ('panel', 0.158), ('instant', 0.158), ('scanner', 0.158), ('unusual', 0.158), ('current', 0.148), ('actors', 0.147), ('offering', 0.147), ('pereira', 0.147), ('watch', 0.139), ('presence', 0.139), ('sponsoring', 0.139), ('prizes', 0.132), ('art', 0.127), ('inside', 0.122), ('categories', 0.119), ('heavily', 0.115), ('functional', 0.112), ('parts', 0.109), ('basis', 0.109), ('type', 0.099), ('fun', 0.099), ('analyze', 0.097), ('location', 0.097), ('tools', 0.095), ('sound', 0.095), ('opportunity', 0.094), ('labeled', 0.094), ('beyond', 0.094), ('says', 0.091), ('content', 0.09), ('experiment', 0.09), ('challenge', 0.083), ('representation', 0.082), ('data', 0.081), ('state', 0.074), ('seen', 0.068), ('deal', 0.064), ('points', 0.064), ('reasons', 0.063), ('design', 0.063), ('methods', 0.062), ('analysis', 0.062), ('predict', 0.059)]
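The word weights above come from a tfidf model; a minimal sketch of that computation (term frequency times inverse document frequency, then cosine similarity between post vectors) is below. The actual pipeline's tokenization and weighting conventions are unknown, so this is an illustrative reconstruction on toy documents, not the exact scoring used for this page.

```python
# Minimal tfidf + cosine-similarity sketch (illustrative, toy data).
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one {word: tf * idf} sparse vector per document."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency: number of docs containing each word.
    df = Counter(word for toks in tokenized for word in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse word-weight vectors."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    norm = lambda vec: math.sqrt(sum(x * x for x in vec.values()))
    denom = norm(u) * norm(v)
    return dot / denom if denom else 0.0

docs = [
    "fmri competition darpa prediction labels",
    "icml videos lost backup server",
    "reinforcement learning competition benchmark domains",
]
vecs = tfidf_vectors(docs)
sims = [cosine(vecs[0], v) for v in vecs]
print(sims)  # self-similarity is highest for the source post
```

This mirrors the structure of the lists on this page: the same-blog entry scores (near) 1.0, posts sharing weighted terms (here "competition") score low but nonzero, and unrelated posts score 0.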

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 155 hunch net-2006-02-07-Pittsburgh Mind Reading Competition


2 0.2018138 487 hunch net-2013-07-24-ICML 2012 videos lost

Introduction: A big ouch—all the videos for ICML 2012 were lost in a shuffle. Rajnish sends the below, but if anyone can help that would be greatly appreciated. —————————————————————————— Sincere apologies to ICML community for losing 2012 archived videos What happened: In order to publish 2013 videos, we decided to move 2012 videos to another server. We have a weekly backup service from the provider but after removing the videos from the current server, when we tried to retrieve the 2012 videos from backup service, the backup did not work because of provider-specific requirements that we had ignored while removing the data from previous server. What are we doing about this: At this point, we are still looking into raw footage to find if we can retrieve some of the videos, but following are the steps we are taking to make sure this does not happen again in future: (1) We are going to create a channel on Vimeo (and potentially on YouTube) and we will publish there the p-in-p- or slide-vers

3 0.10904682 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition

Introduction: The Second Annual Reinforcement Learning Competition is about to get started. The aim of the competition is to facilitate direct comparisons between various learning methods on important and realistic domains. This year’s event will feature well-known benchmark domains as well as more challenging problems of real-world complexity, such as helicopter control and robot soccer keepaway. The competition begins on November 1st, 2007 when training software is released. Results must be submitted by July 1st, 2008. The competition will culminate in an event at ICML-08 in Helsinki, Finland, at which the winners will be announced. For more information, visit the competition website.

4 0.097161345 277 hunch net-2007-12-12-Workshop Summary—Principles of Learning Problem Design

Introduction: This is a summary of the workshop on Learning Problem Design which Alina and I ran at NIPS this year. The first question many people have is “What is learning problem design?” This workshop is about admitting that solving learning problems does not start with labeled data, but rather somewhere before. When humans are hired to produce labels, this is usually not a serious problem because you can tell them precisely what semantics you want the labels to have, and we can fix some set of features in advance. However, when other methods are used this becomes more problematic. This focus is important for Machine Learning because there are very large quantities of data which are not labeled by a hired human. The title of the workshop was a bit ambitious, because a workshop is not long enough to synthesize a diversity of approaches into a coherent set of principles. For me, the posters at the end of the workshop were quite helpful in getting approaches to gel. Here are some an

5 0.093196504 325 hunch net-2008-11-10-ICML Reviewing Criteria

Introduction: Michael Littman and Leon Bottou have decided to use a franchise program chair approach to reviewing at ICML this year. I’ll be one of the area chairs, so I wanted to mention a few things if you are thinking about naming me. I take reviewing seriously. That means papers to be reviewed are read, the implications are considered, and decisions are only made after that. I do my best to be fair, and there are zero subjects that I consider categorical rejects. I don’t consider several arguments for rejection-not-on-the-merits reasonable . I am generally interested in papers that (a) analyze new models of machine learning, (b) provide new algorithms, and (c) show that they work empirically on plausibly real problems. If a paper has the trifecta, I’m particularly interested. With 2 out of 3, I might be interested. I often find papers with only one element harder to accept, including papers with just (a). I’m a bit tough. I rarely jump-up-and-down about a paper, because I b

6 0.091175228 45 hunch net-2005-03-22-Active learning

7 0.080574676 50 hunch net-2005-04-01-Basic computer science research takes a hit

8 0.078870237 427 hunch net-2011-03-20-KDD Cup 2011

9 0.077231415 489 hunch net-2013-09-20-No NY ML Symposium in 2013, and some good news

10 0.074098125 389 hunch net-2010-02-26-Yahoo! ML events

11 0.071180254 92 hunch net-2005-07-11-AAAI blog

12 0.06469091 276 hunch net-2007-12-10-Learning Track of International Planning Competition

13 0.062053431 169 hunch net-2006-04-05-What is state?

14 0.061645731 183 hunch net-2006-06-14-Explorations of Exploration

15 0.060676351 143 hunch net-2005-12-27-Automated Labeling

16 0.056966171 483 hunch net-2013-06-10-The Large Scale Learning class notes

17 0.05642917 29 hunch net-2005-02-25-Solution: Reinforcement Learning with Classification

18 0.055772882 63 hunch net-2005-04-27-DARPA project: LAGR

19 0.055063084 383 hunch net-2009-12-09-Inherent Uncertainty

20 0.055043969 83 hunch net-2005-06-18-Lower Bounds for Learning Reductions


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.092), (1, 0.012), (2, -0.046), (3, -0.01), (4, 0.022), (5, -0.017), (6, -0.017), (7, -0.029), (8, -0.002), (9, -0.073), (10, -0.045), (11, 0.063), (12, -0.031), (13, 0.012), (14, -0.073), (15, 0.015), (16, -0.008), (17, -0.013), (18, -0.037), (19, 0.013), (20, 0.051), (21, -0.041), (22, -0.029), (23, -0.033), (24, 0.004), (25, -0.02), (26, -0.05), (27, 0.038), (28, -0.037), (29, -0.016), (30, 0.128), (31, -0.004), (32, -0.018), (33, -0.041), (34, -0.132), (35, -0.108), (36, 0.034), (37, 0.116), (38, -0.036), (39, -0.084), (40, -0.035), (41, 0.071), (42, 0.147), (43, -0.052), (44, 0.048), (45, 0.118), (46, 0.002), (47, 0.003), (48, 0.085), (49, 0.093)]
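The lsi (and lda) similarity scores below are plausibly cosine similarities between per-post topic-weight vectors like the (topicId, topicWeight) pairs listed above. A hedged sketch of that comparison, using made-up weights in the same sparse format, is:

```python
# Hedged sketch: cosine similarity between {topicId: weight} vectors,
# the plausible scoring behind the lsi/lda "similar blogs" lists.
# The topic weights below are invented for illustration.
import math

def cosine_sparse(a, b):
    """Cosine similarity between two sparse {topicId: weight} vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = lambda v: math.sqrt(sum(w * w for w in v.values()))
    denom = norm(a) * norm(b)
    return dot / denom if denom else 0.0

post_a = {0: 0.092, 11: 0.063, 30: 0.128, 42: 0.147}  # hypothetical
post_b = {0: 0.080, 30: 0.110, 37: 0.050}             # hypothetical
same = cosine_sparse(post_a, post_a)
cross = cosine_sparse(post_a, post_b)
print(same, cross)  # identical vectors score ~1.0
```

Note the same-blog entry below scores 0.98 rather than exactly 1.0, consistent with the topic representation retaining only part of the post's variance.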

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98163563 155 hunch net-2006-02-07-Pittsburgh Mind Reading Competition


2 0.61062133 487 hunch net-2013-07-24-ICML 2012 videos lost


3 0.56230688 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition


4 0.50705928 63 hunch net-2005-04-27-DARPA project: LAGR

Introduction: Larry Jackal has set up the LAGR (“Learning Applied to Ground Robotics”) project (and competition) which seems to be quite well designed. Features include: Many participants (8 going on 12?) Standardized hardware. In the DARPA grand challenge contestants entering with motorcycles are at a severe disadvantage to those entering with a Hummer. Similarly, contestants using more powerful sensors can gain huge advantages. Monthly contests, with full feedback (but since the hardware is standardized, only code is shipped). One of the premises of the program is that robust systems are desired. Monthly evaluations at different locations can help measure this and provide data. Attacks a known hard problem. (cross country driving)

5 0.46427894 143 hunch net-2005-12-27-Automated Labeling

Introduction: One of the common trends in machine learning has been an emphasis on the use of unlabeled data. The argument goes something like “there aren’t many labeled web pages out there, but there are a huge number of web pages, so we must find a way to take advantage of them.” There are several standard approaches for doing this: Unsupervised Learning . You use only unlabeled data. In a typical application, you cluster the data and hope that the clusters somehow correspond to what you care about. Semisupervised Learning. You use both unlabeled and labeled data to build a predictor. The unlabeled data influences the learned predictor in some way. Active Learning . You have unlabeled data and access to a labeling oracle. You interactively choose which examples to label so as to optimize prediction accuracy. It seems there is a fourth approach worth serious investigation—automated labeling. The approach goes as follows: Identify some subset of observed values to predict

6 0.45904407 276 hunch net-2007-12-10-Learning Track of International Planning Competition

7 0.45381296 446 hunch net-2011-10-03-Monday announcements

8 0.43539184 45 hunch net-2005-03-22-Active learning

9 0.40404007 190 hunch net-2006-07-06-Branch Prediction Competition

10 0.39400011 418 hunch net-2010-12-02-Traffic Prediction Problem

11 0.37988237 224 hunch net-2006-12-12-Interesting Papers at NIPS 2006

12 0.36453304 390 hunch net-2010-03-12-Netflix Challenge 2 Canceled

13 0.3634941 161 hunch net-2006-03-05-“Structural” Learning

14 0.36214042 408 hunch net-2010-08-24-Alex Smola starts a blog

15 0.36141887 183 hunch net-2006-06-14-Explorations of Exploration

16 0.34196663 6 hunch net-2005-01-27-Learning Complete Problems

17 0.33116591 83 hunch net-2005-06-18-Lower Bounds for Learning Reductions

18 0.32694149 29 hunch net-2005-02-25-Solution: Reinforcement Learning with Classification

19 0.32243472 56 hunch net-2005-04-14-Families of Learning Theory Statements

20 0.31735173 61 hunch net-2005-04-25-Embeddings: what are they good for?


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.2), (38, 0.062), (53, 0.059), (55, 0.051), (64, 0.387), (95, 0.116)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95168078 155 hunch net-2006-02-07-Pittsburgh Mind Reading Competition


2 0.84351921 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

Introduction: Ron Bekkerman initiated an effort to create an edited book on parallel machine learning that Misha and I have been helping with. The breadth of efforts to parallelize machine learning surprised me: I was only aware of a small fraction initially. This put us in a unique position, with knowledge of a wide array of different efforts, so it is natural to put together a survey tutorial on the subject of parallel learning for KDD , tomorrow. This tutorial is not limited to the book itself however, as several interesting new algorithms have come out since we started inviting chapters. This tutorial should interest anyone trying to use machine learning on significant quantities of data, anyone interested in developing algorithms for such, and of course who has bragging rights to the fastest learning algorithm on planet earth (Also note the Modeling with Hadoop tutorial just before ours which deals with one way of trying to speed up learning algorithms. We have almost no

3 0.83828461 291 hunch net-2008-03-07-Spock Challenge Winners

Introduction: The spock challenge for named entity recognition was won by Berno Stein , Sven Eissen, Tino Rub, Hagen Tonnies, Christof Braeutigam, and Martin Potthast .

4 0.79442537 420 hunch net-2010-12-26-NIPS 2010

Introduction: I enjoyed attending NIPS this year, with several things interesting me. For the conference itself: Peter Welinder , Steve Branson , Serge Belongie , and Pietro Perona , The Multidimensional Wisdom of Crowds . This paper is about using mechanical turk to get label information, with results superior to a majority vote approach. David McAllester , Tamir Hazan , and Joseph Keshet Direct Loss Minimization for Structured Prediction . This is about another technique for directly optimizing the loss in structured prediction, with an application to speech recognition. Mohammad Saberian and Nuno Vasconcelos Boosting Classifier Cascades . This is about an algorithm for simultaneously optimizing loss and computation in a classifier cascade construction. There were several other papers on cascades which are worth looking at if interested. Alan Fern and Prasad Tadepalli , A Computational Decision Theory for Interactive Assistants . This paper carves out some

5 0.78832102 18 hunch net-2005-02-12-ROC vs. Accuracy vs. AROC

Introduction: Foster Provost and I discussed the merits of ROC curves vs. accuracy estimation. Here is a quick summary of our discussion. The “Receiver Operating Characteristic” (ROC) curve is an alternative to accuracy for the evaluation of learning algorithms on natural datasets. The ROC curve is a curve and not a single number statistic. In particular, this means that the comparison of two algorithms on a dataset does not always produce an obvious order. Accuracy (= 1 – error rate) is a standard method used to evaluate learning algorithms. It is a single-number summary of performance. AROC is the area under the ROC curve. It is a single number summary of performance. The comparison of these metrics is a subtle affair, because in machine learning, they are compared on different natural datasets. This makes some sense if we accept the hypothesis “Performance on past learning problems (roughly) predicts performance on future learning problems.” The ROC vs. accuracy discussion is o

6 0.78061724 277 hunch net-2007-12-12-Workshop Summary—Principles of Learning Problem Design

7 0.76454145 210 hunch net-2006-09-28-Programming Languages for Machine Learning Implementations

8 0.56511825 343 hunch net-2009-02-18-Decision by Vetocracy

9 0.52935666 360 hunch net-2009-06-15-In Active Learning, the question changes

10 0.52751207 194 hunch net-2006-07-11-New Models

11 0.52705199 26 hunch net-2005-02-21-Problem: Cross Validation

12 0.52547282 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

13 0.52319753 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

14 0.52237648 406 hunch net-2010-08-22-KDD 2010

15 0.5205127 373 hunch net-2009-10-03-Static vs. Dynamic multiclass prediction

16 0.51976365 131 hunch net-2005-11-16-The Everything Ensemble Edge

17 0.51781464 432 hunch net-2011-04-20-The End of the Beginning of Active Learning

18 0.51656944 351 hunch net-2009-05-02-Wielding a New Abstraction

19 0.51414245 344 hunch net-2009-02-22-Effective Research Funding

20 0.51278073 378 hunch net-2009-11-15-The Other Online Learning