hunch_net hunch_net-2005 hunch_net-2005-10 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: Machine learning makes the New Scientist. From the article: COMPUTERS can learn the meaning of words simply by plugging into Google. The finding could bring forward the day that true artificial intelligence is developed. But Paul Vitanyi and Rudi Cilibrasi of the National Institute for Mathematics and Computer Science in Amsterdam, the Netherlands, realised that a Google search can be used to measure how closely two words relate to each other. For instance, imagine a computer needs to understand what a hat is. You can read the paper at KC Google. Hat tip: Kolmogorov Mailing List. Any thoughts on the paper?
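The paper's similarity measure, later published as the Normalized Google Distance, can be sketched directly from page-hit counts: f(x) and f(y) are the hit counts for each term, f(x,y) the count for the conjunctive query, and N the total number of indexed pages. The counts below are invented purely for illustration; a real implementation would query a search engine for them.

```python
import math

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance between two terms, computed from
    page-hit counts: fx = hits for x, fy = hits for y, fxy = hits for
    the conjunctive query "x y", n = total number of indexed pages."""
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Made-up counts: terms that co-occur often (think "hat" and "cap")
# get a small distance; terms that rarely co-occur get a larger one.
print(ngd(fx=1e6, fy=8e5, fxy=3e5, n=1e10))  # frequent co-occurrence -> small NGD
print(ngd(fx=1e6, fy=5e5, fxy=1e3, n=1e10))  # rare co-occurrence -> larger NGD
```

Terms that always appear together get distance near 0; terms that never co-occur get a large distance, with no labeled training data required.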
sentIndex sentText sentNum sentScore
1 From the article: COMPUTERS can learn the meaning of words simply by plugging into Google. [sent-2, score-0.677]
2 The finding could bring forward the day that true artificial intelligence is developed. [sent-3, score-1.015]
3 But Paul Vitanyi and Rudi Cilibrasi of the National Institute for Mathematics and Computer Science in Amsterdam, the Netherlands, realised that a Google search can be used to measure how closely two words relate to each other. [sent-4, score-0.934]
4 For instance, imagine a computer needs to understand what a hat is. [sent-5, score-0.87]
5 Hat tip: Kolmogorov Mailing List Any thoughts on the paper? [sent-7, score-0.143]
wordName wordTfidf (topN-words)
[('hat', 0.4), ('words', 0.248), ('google', 0.244), ('kolmogorov', 0.2), ('national', 0.2), ('scientist', 0.2), ('computer', 0.191), ('relate', 0.189), ('artificial', 0.18), ('institute', 0.173), ('plugging', 0.173), ('mailing', 0.167), ('closely', 0.162), ('bring', 0.157), ('forward', 0.157), ('intelligence', 0.157), ('paul', 0.157), ('thoughts', 0.143), ('article', 0.143), ('instance', 0.143), ('mathematics', 0.143), ('computers', 0.137), ('developed', 0.13), ('meaning', 0.126), ('needs', 0.115), ('search', 0.115), ('measure', 0.112), ('day', 0.112), ('read', 0.107), ('paper', 0.103), ('finding', 0.099), ('list', 0.094), ('imagine', 0.092), ('science', 0.091), ('true', 0.091), ('makes', 0.077), ('understand', 0.072), ('learn', 0.071), ('could', 0.062), ('used', 0.059), ('simply', 0.059), ('two', 0.049), ('new', 0.038), ('machine', 0.031), ('learning', 0.013)]
simIndex simValue blogId blogTitle
same-blog 1 1.0 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling
Introduction: Machine learning makes the New Scientist. From the article: COMPUTERS can learn the meaning of words simply by plugging into Google. The finding could bring forward the day that true artificial intelligence is developed. But Paul Vitanyi and Rudi Cilibrasi of the National Institute for Mathematics and Computer Science in Amsterdam, the Netherlands, realised that a Google search can be used to measure how closely two words relate to each other. For instance, imagine a computer needs to understand what a hat is. You can read the paper at KC Google. Hat tip: Kolmogorov Mailing List. Any thoughts on the paper?
2 0.13079618 278 hunch net-2007-12-17-New Machine Learning mailing list
Introduction: IMLS (which is the nonprofit running ICML) has set up a new mailing list for Machine Learning News. The list address is ML-news@googlegroups.com, and signup requires a google account (which you can create). Only members can send messages.
3 0.12523831 342 hunch net-2009-02-16-KDNuggets
Introduction: Eric Zaetsch points out KDNuggets which is a well-developed mailing list/news site with a KDD flavor. This might particularly interest people looking for industrial jobs in machine learning, as the mailing list has many such.
4 0.10435653 423 hunch net-2011-02-02-User preferences for search engines
Introduction: I want to comment on the “Bing copies Google” discussion here, here, and here, because there are data-related issues which the general public may not understand, and some of the framing seems substantially misleading to me. As a not-distant-outsider, let me mention the sources of bias I may have. I work at Yahoo!, which has started using Bing. This might predispose me towards Bing, but on the other hand I’m still at Yahoo!, and have been using Linux exclusively as an OS for many years, including even a couple minor kernel patches. And, on the gripping hand, I’ve spent quite a bit of time thinking about the basic principles of incorporating user feedback in machine learning. Also note, this post is not related to official Yahoo! policy, it’s just my personal view. The issue: Google engineers inserted synthetic responses to synthetic queries on google.com, then executed the synthetic searches on google.com using Internet Explorer with the Bing toolbar and later
5 0.09451481 120 hunch net-2005-10-10-Predictive Search is Coming
Introduction: “Search” is the other branch of AI research which has been successful. Concrete examples include Deep Blue, which beat the world chess champion, and Chinook, the champion checkers program. A set of core search techniques exist, including A*, alpha-beta pruning, and others that can be applied to any of many different search problems. Given this, it may be surprising to learn that there has been relatively little successful work on combining prediction and search. Given also that humans typically solve search problems using a number of predictive heuristics to narrow in on a solution, we might be surprised again. However, the big successful search-based systems have typically not used “smart” search algorithms. Instead they have optimized for very fast search. This is not for lack of trying… many people have tried to synthesize search and prediction to various degrees of success. For example, Knightcap achieves good-but-not-stellar chess playing performance, and TD-gammon
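As a reminder of what the fast-but-not-smart core toolkit looks like, minimax with alpha-beta pruning can be sketched on a toy game tree; the tree below is a made-up example, not from any of the systems mentioned.

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning on a toy game tree.
    Leaves are numbers (static evaluations); internal nodes are lists
    of children. alpha/beta bound the value either player can force."""
    if isinstance(node, (int, float)):  # leaf: return static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff
                break
        return value

# Depth-2 tree: the maximizer chooses among three minimizer nodes.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 6
```

The pruning here is exact (it never changes the minimax value); the point of the post is that prediction-based heuristics, by contrast, trade exactness for guidance.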
6 0.094097644 228 hunch net-2007-01-15-The Machine Learning Department
7 0.083570473 353 hunch net-2009-05-08-Computability in Artificial Intelligence
8 0.083050951 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning
9 0.081220672 142 hunch net-2005-12-22-Yes, I am applying
10 0.079576746 494 hunch net-2014-03-11-The New York ML Symposium, take 2
11 0.072804093 178 hunch net-2006-05-08-Big machine learning
12 0.071965992 344 hunch net-2009-02-22-Effective Research Funding
13 0.070577875 134 hunch net-2005-12-01-The Webscience Future
14 0.070260853 412 hunch net-2010-09-28-Machined Learnings
15 0.068760626 330 hunch net-2008-12-07-A NIPS paper
16 0.068728805 440 hunch net-2011-08-06-Interesting thing at UAI 2011
17 0.067920491 64 hunch net-2005-04-28-Science Fiction and Research
18 0.067726396 322 hunch net-2008-10-20-New York’s ML Day
19 0.066483691 106 hunch net-2005-09-04-Science in the Government
20 0.064715788 55 hunch net-2005-04-10-Is the Goal Understanding or Prediction?
topicId topicWeight
[(0, 0.109), (1, -0.016), (2, -0.047), (3, 0.06), (4, -0.036), (5, 0.001), (6, -0.005), (7, 0.025), (8, -0.041), (9, -0.068), (10, 0.03), (11, -0.012), (12, -0.024), (13, -0.017), (14, -0.057), (15, -0.008), (16, -0.03), (17, 0.021), (18, 0.116), (19, -0.01), (20, 0.083), (21, 0.075), (22, -0.089), (23, 0.081), (24, -0.053), (25, -0.006), (26, 0.144), (27, 0.03), (28, -0.036), (29, 0.012), (30, -0.058), (31, 0.152), (32, 0.013), (33, -0.001), (34, 0.04), (35, 0.009), (36, -0.05), (37, 0.131), (38, 0.037), (39, -0.031), (40, -0.114), (41, -0.03), (42, -0.08), (43, -0.04), (44, 0.015), (45, -0.094), (46, -0.011), (47, 0.014), (48, 0.017), (49, -0.068)]
simIndex simValue blogId blogTitle
same-blog 1 0.9798454 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling
Introduction: Machine learning makes the New Scientist. From the article: COMPUTERS can learn the meaning of words simply by plugging into Google. The finding could bring forward the day that true artificial intelligence is developed. But Paul Vitanyi and Rudi Cilibrasi of the National Institute for Mathematics and Computer Science in Amsterdam, the Netherlands, realised that a Google search can be used to measure how closely two words relate to each other. For instance, imagine a computer needs to understand what a hat is. You can read the paper at KC Google. Hat tip: Kolmogorov Mailing List. Any thoughts on the paper?
2 0.61768544 342 hunch net-2009-02-16-KDNuggets
Introduction: Eric Zaetsch points out KDNuggets which is a well-developed mailing list/news site with a KDD flavor. This might particularly interest people looking for industrial jobs in machine learning, as the mailing list has many such.
3 0.58495653 278 hunch net-2007-12-17-New Machine Learning mailing list
Introduction: IMLS (which is the nonprofit running ICML) has set up a new mailing list for Machine Learning News. The list address is ML-news@googlegroups.com, and signup requires a google account (which you can create). Only members can send messages.
4 0.53498465 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning
Introduction: Reviewers and students are sometimes greatly concerned by the distinction between: An open set and a closed set. A Supremum and a Maximum. An event which happens with probability 1 and an event that always happens. I don’t appreciate this distinction in machine learning & learning theory. All machine learning takes place (by definition) on a machine where every parameter has finite precision. Consequently, every set is closed, a maximal element always exists, and probability 1 events always happen. The fundamental issue here is that substantial parts of mathematics don’t appear well-matched to computation in the physical world, because the mathematics has concerns which are unphysical. This mismatched mathematics makes irrelevant distinctions. We can ask “what mathematics is appropriate to computation?” Andrej has convinced me that a pretty good answer to this question is constructive mathematics. So, here’s a basic challenge: Can anyone name a situati
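A minimal illustration of the finite-precision point, using Python floats as an assumed stand-in for "machine parameters": in real analysis the open interval (0, 1) has no maximum, only a supremum, but the set of representable floats strictly inside (0, 1) does have a largest element.

```python
import math

# The float immediately below 1.0 is the maximum of the representable
# numbers strictly between 0 and 1: the "open" set is effectively closed.
largest = math.nextafter(1.0, 0.0)
print(largest < 1.0)                         # True: strictly inside (0, 1)
print(math.nextafter(largest, 1.0) == 1.0)   # True: no representable float in between
```

(`math.nextafter` requires Python 3.9+.) The same collapse happens for probability-1 events: over finitely many representable outcomes, measure-zero exceptions vanish.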
5 0.52632356 55 hunch net-2005-04-10-Is the Goal Understanding or Prediction?
Introduction: Steve Smale and I have a debate about goals of learning theory. Steve likes theorems with a dependence on unobservable quantities. For example, if D is a distribution over a space X x [0,1], you can state a theorem about the error rate dependent on the variance, E_{(x,y)~D}(y - E_{y'~D|x}[y'])^2. I dislike this, because I want to use the theorems to produce code solving learning problems. Since I don’t know (and can’t measure) the variance, a theorem depending on the variance does not help me—I would not know what variance to plug into the learning algorithm. Recast more broadly, this is a debate between “declarative” and “operative” mathematics. A strong example of “declarative” mathematics is “a new kind of science”. Roughly speaking, the goal of this kind of approach seems to be finding a way to explain the observations we make. Examples include “some things are unpredictable”, “a phase transition exists”, etc… “Operative” mathematics helps you make predictions a
6 0.51931763 64 hunch net-2005-04-28-Science Fiction and Research
7 0.50003809 24 hunch net-2005-02-19-Machine learning reading groups
8 0.48246813 178 hunch net-2006-05-08-Big machine learning
9 0.478111 228 hunch net-2007-01-15-The Machine Learning Department
10 0.45971605 173 hunch net-2006-04-17-Rexa is live
11 0.4547357 423 hunch net-2011-02-02-User preferences for search engines
12 0.43857431 172 hunch net-2006-04-14-JMLR is a success
13 0.4384779 120 hunch net-2005-10-10-Predictive Search is Coming
14 0.43587524 90 hunch net-2005-07-07-The Limits of Learning Theory
15 0.42019102 106 hunch net-2005-09-04-Science in the Government
16 0.418897 193 hunch net-2006-07-09-The Stock Prediction Machine Learning Problem
17 0.41565213 440 hunch net-2011-08-06-Interesting thing at UAI 2011
18 0.41421989 190 hunch net-2006-07-06-Branch Prediction Competition
19 0.40934289 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs
20 0.40480253 414 hunch net-2010-10-17-Partha Niyogi has died
topicId topicWeight
[(4, 0.305), (27, 0.177), (53, 0.015), (54, 0.069), (55, 0.068), (61, 0.022), (74, 0.087), (94, 0.076), (95, 0.05)]
simIndex simValue blogId blogTitle
same-blog 1 0.93019909 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling
Introduction: Machine learning makes the New Scientist. From the article: COMPUTERS can learn the meaning of words simply by plugging into Google. The finding could bring forward the day that true artificial intelligence is developed. But Paul Vitanyi and Rudi Cilibrasi of the National Institute for Mathematics and Computer Science in Amsterdam, the Netherlands, realised that a Google search can be used to measure how closely two words relate to each other. For instance, imagine a computer needs to understand what a hat is. You can read the paper at KC Google. Hat tip: Kolmogorov Mailing List. Any thoughts on the paper?
2 0.90779084 108 hunch net-2005-09-06-A link
Introduction: I read through some of the essays of Michael Nielsen today, and recommend them. Principles of Effective Research and Extreme Thinking are both relevant to several discussions here.
3 0.88331395 88 hunch net-2005-07-01-The Role of Impromptu Talks
Introduction: COLT had an impromptu session which seemed as interesting or more interesting than any other single technical session (despite being only an hour long). There are several roles that an impromptu session can play including: Announcing new work since the paper deadline. Letting this happen now rather than later helps aid the process of research. Discussing a paper that was rejected. Reviewers err sometimes and an impromptu session provides a means to remedy that. Entertainment. We all like to have a bit of fun. For design, the following seem important: Impromptu speakers should not have much time. At COLT, it was 8 minutes, but I have seen even 5 work well. The entire impromptu session should not last too long because the format is dense and promotes restlessness. A half hour or hour can work well. Impromptu talks are a mechanism to let a little bit of chaos into the schedule. They will be chaotic in content, presentation, and usefulness. The fundamental adv
4 0.81926394 29 hunch net-2005-02-25-Solution: Reinforcement Learning with Classification
Introduction: I realized that the tools needed to solve the problem just posted were just created. I tried to sketch out the solution here (also in .lyx and .tex ). It is still quite sketchy (and probably only the few people who understand reductions well can follow). One of the reasons why I started this weblog was to experiment with “research in the open”, and this is an opportunity to do so. Over the next few days, I’ll be filling in details and trying to get things to make sense. If you have additions or ideas, please propose them.
5 0.81437671 86 hunch net-2005-06-28-The cross validation problem: cash reward
Introduction: I just presented the cross validation problem at COLT. The problem now has a cash prize (up to $500) associated with it—see the presentation for details. The write-up for COLT.
6 0.72918177 75 hunch net-2005-05-28-Running A Machine Learning Summer School
7 0.72357655 129 hunch net-2005-11-07-Prediction Competitions
8 0.56493533 217 hunch net-2006-11-06-Data Linkage Problems
9 0.56292742 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions
10 0.55224079 336 hunch net-2009-01-19-Netflix prize within epsilon
11 0.54447454 370 hunch net-2009-09-18-Necessary and Sufficient Research
12 0.543109 335 hunch net-2009-01-08-Predictive Analytics World
13 0.54182863 33 hunch net-2005-02-28-Regularization
14 0.54126763 51 hunch net-2005-04-01-The Producer-Consumer Model of Research
15 0.5330776 67 hunch net-2005-05-06-Don’t mix the solution into the problem
16 0.53000319 360 hunch net-2009-06-15-In Active Learning, the question changes
17 0.52950394 143 hunch net-2005-12-27-Automated Labeling
18 0.52797961 132 hunch net-2005-11-26-The Design of an Optimal Research Environment
19 0.52694368 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds
20 0.52563262 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006