hunch_net hunch_net-2009 hunch_net-2009-353 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: Normally I do not blog, but John kindly invited me to do so. Since computability issues play a major role in Artificial Intelligence and Machine Learning, I would like to take the opportunity to comment on that and raise some questions. The general attitude is that AI is about finding efficient smart algorithms. For large parts of machine learning, the same attitude is not too dangerous. If you want to concentrate on conceptual problems, simply become a statistician. There is no analogous escape for modern research on AI (as opposed to GOFAI rooted in logic). Let me show by analogy why limiting research to computational questions is bad for any field. Except in computer science, computational aspects play little role in the development of fundamental theories: Consider e.g. set theory with axiom of choice, foundations of logic, exact/full minimax for zero-sum games, quantum (field) theory, string theory, … Indeed, at least in physics, every new fundamental theory seems to be less computable than previous ones.
sentIndex sentText sentNum sentScore
1 Since computability issues play a major role in Artificial Intelligence and Machine Learning, I would like to take the opportunity to comment on that and raise some questions. [sent-2, score-0.431]
2 The general attitude is that AI is about finding efficient smart algorithms. [sent-3, score-0.476]
3 For large parts of machine learning, the same attitude is not too dangerous. [sent-4, score-0.186]
4 If you want to concentrate on conceptual problems, simply become a statistician. [sent-5, score-0.166]
5 Except in computer science, computational aspects play little role in the development of fundamental theories: Consider e.g. [sent-8, score-0.585]
6 set theory with axiom of choice, foundations of logic, exact/full minimax for zero-sum games, quantum (field) theory, string theory, … Indeed, at least in physics, every new fundamental theory seems to be less computable than previous ones. [sent-10, score-0.438]
7 Of course, once a subject has been formalized, further research (a) analyzes the structure of the theory and (b) tries to compute efficient approximations. [sent-11, score-0.262]
8 Only in (b) do computational aspects play a role. [sent-12, score-0.395]
9 Here are some unconvincing arguments I’ve heard: A) Because AI is a subfield of computer science, and the task of computer scientists is to find (efficient) algorithms for well-defined problems? [sent-14, score-0.411]
10 I think it does not do any (real-world) problem any good to confine it to computer science. [sent-15, score-0.166]
11 B) Because formalizing AI and finding efficient smart programs goes hand-in-hand? [sent-17, score-0.432]
12 I am not aware of any convincing argument that separating the issues of “axiomatizing a field” and “finding efficient solutions” will (likely) fail for AI. [sent-19, score-0.499]
13 For instance, von Neumann’s minimax solution for games, albeit infeasible for most games, is the cornerstone of most practical approximations. [sent-22, score-0.166]
14 Sure, you could say that intelligence is by definition about computationally efficient decision making. [sent-24, score-0.363]
15 D) Because AI is trivial if computational issues are ignored? [sent-29, score-0.373]
16 All conceptual problems have already been solved? [sent-30, score-0.244]
17 For instance, optimal minimax play of a zero-sum game or solving NP-complete problems are conceptually trivial, i.e. solvable by exhaustive search. [sent-33, score-0.472]
18 The Turing test is informal (involves a human judge in the loop), maximizing expected reward (the true distribution is not known, so expectation w.r.t. what?) [sent-37, score-0.179]
19 Conceptual and computational problems in AI should be studied jointly as well as separately, but the latter is not (yet) fashionable. [sent-44, score-0.238]
20 When AI was more logic oriented, some good logicians helped develop the foundations of “deductive” AI. [sent-45, score-0.213]
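Sentence 17 above calls optimal minimax play of a zero-sum game conceptually trivial yet computationally infeasible. A minimal sketch of that idea, assuming a toy game given as a payoff matrix (the matrix and numbers here are illustrative, not from the post, and this is the pure-strategy maximin rather than von Neumann's full mixed-strategy solution):

```python
# Exhaustive "minimax" for a two-player zero-sum game given as a payoff
# matrix: the row player maximizes, the column player minimizes.
# Conceptually trivial; for real games the matrix is astronomically large.

def minimax_value(payoff):
    # Pure-strategy maximin: the best payoff the row player can guarantee,
    # assuming the column player then picks the worst column for that row.
    return max(min(row) for row in payoff)

# Hypothetical 2x3 game.
payoff = [
    [3, -1, 2],   # row 0 guarantees -1
    [0,  4, 1],   # row 1 guarantees 0
]
print(minimax_value(payoff))  # prints 0
```

The actual von Neumann theorem concerns mixed strategies (a linear program over row probabilities), but the computational point is the same: the definition of the solution is simple, while evaluating it exactly is infeasible for most games.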
wordName wordTfidf (topN-words)
[('ai', 0.466), ('efficient', 0.172), ('minimax', 0.166), ('conceptual', 0.166), ('play', 0.161), ('computational', 0.16), ('games', 0.143), ('mathematicians', 0.134), ('primes', 0.134), ('separating', 0.134), ('intelligence', 0.13), ('issues', 0.126), ('logic', 0.121), ('informal', 0.119), ('smart', 0.118), ('course', 0.11), ('computer', 0.106), ('unconvincing', 0.104), ('attitude', 0.104), ('agents', 0.099), ('scientists', 0.095), ('generic', 0.092), ('foundations', 0.092), ('field', 0.091), ('theory', 0.09), ('trivial', 0.087), ('role', 0.084), ('physics', 0.082), ('parts', 0.082), ('finding', 0.082), ('instance', 0.079), ('problems', 0.078), ('aspects', 0.074), ('modern', 0.074), ('argument', 0.067), ('complete', 0.067), ('blog', 0.065), ('solutions', 0.061), ('mathematical', 0.061), ('definition', 0.061), ('formalizing', 0.06), ('induction', 0.06), ('forecasting', 0.06), ('exhaustive', 0.06), ('maximizing', 0.06), ('sides', 0.06), ('kindly', 0.06), ('confine', 0.06), ('aixi', 0.06), ('computability', 0.06)]
simIndex simValue blogId blogTitle
same-blog 1 0.99999988 353 hunch net-2009-05-08-Computability in Artificial Intelligence
2 0.33126578 380 hunch net-2009-11-29-AI Safety
Introduction: Dan Reeves introduced me to Michael Vassar, who ran the Singularity Summit and educated me a bit on the subject of AI safety, for which the Singularity Institute has small grants. I still believe that interstellar space travel is necessary for long-term civilization survival, and that AI is necessary for interstellar space travel. On these grounds alone, we could judge that developing AI is much more safe than not. Nevertheless, there is a basic reasonable fear, as expressed by some commenters, that AI could go bad. A basic scenario starts with someone inventing an AI and telling it to make as much money as possible. The AI promptly starts trading in various markets to make money. To improve, it crafts a virus that takes over most of the world’s computers, using it as a surveillance network so that it can always make the right decision. The AI also branches out into any form of distance work, taking over the entire outsourcing process for all jobs that are entirely di
3 0.27320382 352 hunch net-2009-05-06-Machine Learning to AI
Introduction: I recently had fun discussions with both Vikash Mansinghka and Thomas Breuel about approaching AI with machine learning. The general interest in taking a crack at AI with machine learning seems to be rising on many fronts including DARPA. As a matter of history, there was a great deal of interest in AI which died down before I began research. There remain many projects and conferences spawned in this earlier AI wave, as well as a good bit of experience about what did not work, or at least did not work yet. Here are a few examples of failure modes that people seem to run into: Supply/Product confusion. Sometimes we think “Intelligences use X, so I’ll create X and have an Intelligence.” An example of this is the Cyc Project, which inspires some people as “intelligences use ontologies, so I’ll create an ontology and a system using it to have an Intelligence.” The flaw here is that Intelligences create ontologies, which they use, and without the ability to create ont
4 0.14935979 295 hunch net-2008-04-12-It Doesn’t Stop
Introduction: I’ve enjoyed the Terminator movies and show. Neglecting the wacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. The United States has been experimenting with trying to stop research on stem cells. It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged as a mu
5 0.12786889 287 hunch net-2008-01-28-Sufficient Computation
Introduction: Do we have computer hardware sufficient for AI? This question is difficult to answer, but here’s a try: One way to achieve AI is by simulating a human brain. A human brain has about 10^15 synapses which operate at about 10^2 per second, implying about 10^17 bit ops per second. A modern computer runs at 10^9 cycles/second and operates on 10^2 bits per cycle, implying 10^11 bits processed per second. The gap here is only 6 orders of magnitude, which can be plausibly surpassed via cluster machines. For example, the BlueGene/L operates 10^5 nodes (one order of magnitude short). Its peak recorded performance is about 0.5*10^15 FLOPS, which translates to about 10^16 bit ops per second, which is nearly 10^17. There are many criticisms (both positive and negative) of this argument. Simulation of a human brain might require substantially more detail. Perhaps an additional 10^2 is required per neuron. We may not need to simulate a human brain to achieve AI. Ther
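The order-of-magnitude argument in the "Sufficient Computation" introduction above can be checked mechanically. The figures below are that post's own estimates, not independent measurements:

```python
import math

# Back-of-envelope check of the post's brain-vs-computer estimates.
synapses = 10**15          # estimated human brain synapses
ops_per_synapse = 10**2    # estimated operations per synapse per second
brain_bit_ops = synapses * ops_per_synapse     # 10^17 bit ops per second

cycles_per_sec = 10**9     # a modern (2008-era) CPU
bits_per_cycle = 10**2
computer_bit_ops = cycles_per_sec * bits_per_cycle   # 10^11 bit ops per second

gap = math.log10(brain_bit_ops / computer_bit_ops)
print(gap)  # 6 orders of magnitude, as the post claims
```

A 10^5-node cluster like the cited BlueGene/L closes most of that six-order gap, which is the post's point; the contested inputs are the per-synapse estimates, not the arithmetic.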
6 0.12422533 95 hunch net-2005-07-14-What Learning Theory might do
7 0.11942103 286 hunch net-2008-01-25-Turing’s Club for Machine Learning
8 0.11482451 100 hunch net-2005-08-04-Why Reinforcement Learning is Important
9 0.1146489 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”
10 0.11029387 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms
11 0.10850855 293 hunch net-2008-03-23-Interactive Machine Learning
12 0.090463094 440 hunch net-2011-08-06-Interesting thing at UAI 2011
13 0.089869693 454 hunch net-2012-01-30-ICML Posters and Scope
14 0.088278525 270 hunch net-2007-11-02-The Machine Learning Award goes to …
15 0.088107646 93 hunch net-2005-07-13-“Sister Conference” presentations
16 0.088048168 439 hunch net-2011-08-01-Interesting papers at COLT 2011
17 0.087740675 332 hunch net-2008-12-23-Use of Learning Theory
18 0.087386742 235 hunch net-2007-03-03-All Models of Learning have Flaws
19 0.085274711 345 hunch net-2009-03-08-Prediction Science
20 0.084549159 106 hunch net-2005-09-04-Science in the Government
topicId topicWeight
[(0, 0.207), (1, 0.03), (2, -0.08), (3, 0.102), (4, -0.025), (5, -0.038), (6, 0.012), (7, 0.001), (8, 0.064), (9, -0.046), (10, -0.005), (11, -0.046), (12, -0.022), (13, 0.061), (14, -0.056), (15, 0.019), (16, 0.159), (17, -0.058), (18, 0.117), (19, 0.066), (20, -0.012), (21, 0.144), (22, -0.132), (23, 0.202), (24, 0.142), (25, -0.082), (26, 0.006), (27, -0.143), (28, 0.144), (29, -0.009), (30, 0.042), (31, -0.021), (32, -0.025), (33, -0.049), (34, 0.091), (35, 0.002), (36, -0.089), (37, 0.043), (38, 0.043), (39, 0.024), (40, -0.1), (41, -0.06), (42, -0.019), (43, -0.128), (44, -0.04), (45, 0.04), (46, 0.019), (47, -0.031), (48, -0.0), (49, -0.047)]
simIndex simValue blogId blogTitle
same-blog 1 0.96500093 353 hunch net-2009-05-08-Computability in Artificial Intelligence
2 0.9298963 380 hunch net-2009-11-29-AI Safety
3 0.8249985 352 hunch net-2009-05-06-Machine Learning to AI
4 0.75030971 287 hunch net-2008-01-28-Sufficient Computation
5 0.73152059 295 hunch net-2008-04-12-It Doesn’t Stop
6 0.59728563 153 hunch net-2006-02-02-Introspectionism as a Disease
7 0.57378072 424 hunch net-2011-02-17-What does Watson mean?
8 0.46248907 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning
9 0.45563698 168 hunch net-2006-04-02-Mad (Neuro)science
10 0.44817463 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge
11 0.43376946 440 hunch net-2011-08-06-Interesting thing at UAI 2011
12 0.42975235 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs
13 0.4259522 120 hunch net-2005-10-10-Predictive Search is Coming
14 0.40499878 112 hunch net-2005-09-14-The Predictionist Viewpoint
15 0.39986381 270 hunch net-2007-11-02-The Machine Learning Award goes to …
16 0.3954224 257 hunch net-2007-07-28-Asking questions
17 0.38828468 397 hunch net-2010-05-02-What’s the difference between gambling and rewarding good prediction?
18 0.38586202 366 hunch net-2009-08-03-Carbon in Computer Science Research
19 0.38287115 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?
20 0.38104245 370 hunch net-2009-09-18-Necessary and Sufficient Research
topicId topicWeight
[(10, 0.017), (24, 0.021), (27, 0.209), (38, 0.391), (51, 0.015), (53, 0.051), (55, 0.087), (77, 0.032), (93, 0.015), (94, 0.051), (95, 0.031)]
simIndex simValue blogId blogTitle
1 0.9817009 181 hunch net-2006-05-23-What is the best regret transform reduction from multiclass to binary?
Introduction: This post is about an open problem in learning reductions. Background A reduction might transform a multiclass prediction problem where there are k possible labels into a binary learning problem where there are only 2 possible labels. On this induced binary problem we might learn a binary classifier with some error rate e. After subtracting the minimum possible (Bayes) error rate b, we get a regret r = e − b. The PECOC (Probabilistic Error Correcting Output Code) reduction has the property that binary regret r implies multiclass regret at most 4r^0.5. The problem This is not the “rightest” answer. Consider the k=2 case, where we reduce binary to binary. There exists a reduction (the identity) with the property that regret r implies regret r. This is substantially superior to the transform given by the PECOC reduction, which suggests that a better reduction may exist for general k. For example, we can not rule out the possibility that a reduction
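The regret transform quoted in this entry (binary regret r implies multiclass regret at most 4r^0.5) is easy to tabulate. A sketch with illustrative regret values only, showing why the square-root dependence is considered loose:

```python
# The PECOC guarantee quoted above: binary regret r on the induced
# problem bounds multiclass regret by 4 * sqrt(r). Small binary regret
# still permits a much larger multiclass regret bound, which is why the
# identity reduction (regret r -> regret r) for k=2 is so much better.
def pecoc_bound(r):
    return 4 * r ** 0.5

for r in [0.0001, 0.01, 0.04]:
    print(r, pecoc_bound(r))
# roughly: 0.0001 -> 0.04, 0.01 -> 0.4, 0.04 -> 0.8
```

Note that for r ≥ 1/16 the bound exceeds 1/2 and is vacuous for binary-style losses, which is the sense in which a tighter transform might exist.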
2 0.98071319 83 hunch net-2005-06-18-Lower Bounds for Learning Reductions
Introduction: Learning reductions transform a solver of one type of learning problem into a solver of another type of learning problem. When we analyze these for robustness we can make statements of the form “Reduction R has the property that regret r (or loss) on subproblems of type A implies regret at most f(r) on the original problem of type B”. A lower bound for a learning reduction would have the form “for all reductions R, there exists a learning problem of type B and a learning algorithm for problems of type A where regret r on induced problems implies at least regret f(r) for B”. The pursuit of lower bounds is often questionable because, unlike upper bounds, they do not yield practical algorithms. Nevertheless, they may be helpful as a tool for thinking about what is learnable and how learnable it is. This has already come up here and here. At the moment, there is no coherent theory of lower bounds for learning reductions, and we have little understanding
3 0.98027283 339 hunch net-2009-01-27-Key Scientific Challenges
Introduction: Yahoo released the Key Scientific Challenges program. There is a Machine Learning list I worked on and a Statistics list which Deepak worked on. I’m hoping this is taken quite seriously by graduate students. The primary value, is that it gave us a chance to sit down and publicly specify directions of research which would be valuable to make progress on. A good strategy for a beginning graduate student is to pick one of these directions, pursue it, and make substantial advances for a PhD. The directions are sufficiently general that I’m sure any serious advance has applications well beyond Yahoo. A secondary point, (which I’m sure is primary for many ) is that there is money for graduate students here. It’s unrestricted, so you can use it for any reasonable travel, supplies, etc…
4 0.96850139 170 hunch net-2006-04-06-Bounds greater than 1
Introduction: Nati Srebro and Shai Ben-David have a paper at COLT which, in the appendix, proves something very striking: several previous error bounds are always greater than 1. Background One branch of learning theory focuses on theorems which Assume samples are drawn IID from an unknown distribution D . Fix a set of classifiers Find a high probability bound on the maximum true error rate (with respect to D ) as a function of the empirical error rate on the training set. Many of these bounds become extremely complex and hairy. Current Everyone working on this subject wants “tighter bounds”, however there are different definitions of “tighter”. Some groups focus on “functional tightness” (getting the right functional dependency between the size of the training set and a parameterization of the hypothesis space) while others focus on “practical tightness” (finding bounds which work well on practical problems). (I am definitely in the second camp.) One of the da
5 0.95959061 125 hunch net-2005-10-20-Machine Learning in the News
Introduction: The New York Times had a short interview about machine learning in datamining being used pervasively by the IRS and large corporations to predict who to audit and who to target for various marketing campaigns. This is a big application area of machine learning. It can be harmful (learning + databases = another way to invade privacy) or beneficial (as google demonstrates, better targeting of marketing campaigns is far less annoying). This is yet more evidence that we can not rely upon “I’m just another fish in the school” logic for our expectations about treatment by government and large corporations.
6 0.94997156 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS
same-blog 7 0.93442297 353 hunch net-2009-05-08-Computability in Artificial Intelligence
8 0.9307009 233 hunch net-2007-02-16-The Forgetting
9 0.89183635 236 hunch net-2007-03-15-Alternative Machine Learning Reductions Definitions
10 0.86873186 251 hunch net-2007-06-24-Interesting Papers at ICML 2007
11 0.79860401 72 hunch net-2005-05-16-Regret minimizing vs error limiting reductions
12 0.75770187 26 hunch net-2005-02-21-Problem: Cross Validation
13 0.73359317 19 hunch net-2005-02-14-Clever Methods of Overfitting
14 0.73263896 82 hunch net-2005-06-17-Reopening RL->Classification
15 0.72921985 131 hunch net-2005-11-16-The Everything Ensemble Edge
16 0.72061527 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?
17 0.70118433 239 hunch net-2007-04-18-$50K Spock Challenge
18 0.70049661 259 hunch net-2007-08-19-Choice of Metrics
19 0.69951022 14 hunch net-2005-02-07-The State of the Reduction
20 0.69345039 439 hunch net-2011-08-01-Interesting papers at COLT 2011