hunch_net-2006-153 knowledge-graph by maker-knowledge-mining

153 hunch net-2006-02-02-Introspectionism as a Disease


meta info for this blog

Source: html

Introduction: In the AI-related parts of machine learning, it is often tempting to examine how you do things in order to imagine how a machine should do things. This is introspection, and it can easily go awry. I will call introspection gone awry introspectionism. Introspectionism is almost unique to AI (and the AI-related parts of machine learning) and it can lead to huge wasted effort in research. It’s easiest to show how introspectionism arises by an example. Suppose we want to solve the problem of navigating a robot from point A to point B given a camera. Then, the following research action plan might seem natural when you examine your own capabilities: Build an edge detector for still images. Build an object recognition system given the edge detector. Build a system to predict distance and orientation to objects given the object recognition system. Build a system to plan a path through the scene you construct from {object identification, distance, orientation} predictions.
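None of the code below appears in the original post; as a purely illustrative sketch of the first step of this introspectionist plan, a still-image edge detector might be a thresholded Sobel gradient. The function name and threshold are invented for illustration (numpy assumed):

```python
import numpy as np

def sobel_edges(image, threshold=0.25):
    """Still-image edge map: thresholded Sobel gradient magnitude.

    `image` is a 2D float array; the relative threshold of 0.25 is an
    arbitrary illustrative choice, not a tuned value.
    """
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.T
    # All 3x3 patches, shape (H-2, W-2, 3, 3), correlated with each kernel.
    win = np.lib.stride_tricks.sliding_window_view(image, (3, 3))
    gx = np.einsum('ijkl,kl->ij', win, kx)
    gy = np.einsum('ijkl,kl->ij', win, ky)
    mag = np.hypot(gx, gy)
    return mag > threshold * (mag.max() + 1e-12)  # boolean edge map

# Usage on a random stand-in image:
# edges = sobel_edges(np.random.rand(64, 64))
```

Each later step of the plan (object recognition, distance and orientation prediction, path planning) would be a far larger system, which is exactly the cost the post goes on to question.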


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In the AI-related parts of machine learning, it is often tempting to examine how you do things in order to imagine how a machine should do things. [sent-1, score-0.203]

2 Introspectionism is almost unique to AI (and the AI-related parts of machine learning) and it can lead to huge wasted effort in research. [sent-4, score-0.157]

3 It’s easiest to show how introspectionism arises by an example. [sent-5, score-0.392]

4 Then, the following research action plan might seem natural when you examine your own capabilities: Build an edge detector for still images. [sent-7, score-0.8]

5 Build an object recognition system given the edge detector. [sent-8, score-0.59]

6 Build a system to predict distance and orientation to objects given the object recognition system. [sent-9, score-0.696]

7 Build a system to plan a path through the scene you construct from {object identification, distance, orientation} predictions. [sent-10, score-0.506]

8 It is particularly difficult to detect here because it is easy to confuse capability with reuse. [sent-16, score-0.241]

9 Humans, via experimental tests, can be shown capable of executing each step above, but this does not imply they reuse these computations in the next step. [sent-17, score-0.186]

10 There are reasonable evolution-based reasons to believe that brains minimize the amount of computation required to accomplish goals. [sent-18, score-0.356]

11 Computation costs energy, and since a human brain might consume 20% of the energy budget, we can be fairly sure that the evolutionary impetus to minimize computation is significant. [sent-19, score-0.761]

12 An energy conservative version of the above example might look similar, but with very loose approximations. [sent-21, score-0.314]

13 The brain might (by default) use a pathetically weak edge detector that is lazily refined into something more effective using time-sequenced images (since edges in moving scenes tend to stand out more). [sent-22, score-1.023]

14 The puny edge detector might be used to fill a rough “obstacle-or-not” fill map that coarsens dramatically with distance. [sent-23, score-0.823]

15 Given this, a decision about which direction to go next (rather than a full path) might be made. [sent-24, score-0.268]

16 This strategy avoids the need to build a good edge detector for still scenes, avoids the need to recognize objects, avoids the need to place them with high precision in a scene, and avoids the need to make a full path plan. [sent-25, score-2.208]

17 All of these avoidances might result in more tractable computation or learning problems. [sent-26, score-0.243]

18 Note that we can’t (and shouldn’t) say that the energy conservative path “must” be right because that would also be introspectionism. [sent-27, score-0.423]

19 However, it does exhibit an alternative, showing the failure of imagination in the introspectionist first approach. [sent-28, score-0.517]

20 It is reasonable to take introspection-derived ideas as suggestions for how to go about building a (learning) system. [sent-29, score-0.465]
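For contrast, here is a minimal sketch of the energy-conservative alternative described in sentences 13-15: frame differencing as the "pathetically weak" edge cue, a coarse obstacle-or-not map (the distance-dependent coarsening is omitted for brevity), and a single direction decision instead of a full path plan. Every function name, grid size, and threshold below is an assumption invented for illustration:

```python
import numpy as np

def choose_direction(prev_frame, frame, n_bins=8, threshold=0.1):
    """Pick a steering direction from a weak temporal edge cue.

    Frame differencing stands in for the lazily refined edge detector
    (edges in moving scenes stand out); pooling into a few columns
    stands in for the coarse obstacle-or-not map. All parameters are
    illustrative assumptions, not values from the post.
    """
    motion = np.abs(frame - prev_frame)       # weak, cheap edge cue
    obstacle = motion > threshold             # rough obstacle-or-not map
    h, w = obstacle.shape
    trimmed = obstacle[:, : w - w % n_bins]   # trim width to a multiple
    coarse = trimmed.reshape(h, n_bins, -1).mean(axis=(0, 2))
    return int(np.argmin(coarse))             # heading with least clutter

# Usage on two random stand-in grayscale frames:
# d = choose_direction(np.random.rand(48, 64), np.random.rand(48, 64))
```

The point is not that this particular sketch is right (claiming so would be introspectionism again), only that something this cheap is a coherent alternative to the four-step pipeline.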


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('introspectionism', 0.392), ('detector', 0.278), ('introspection', 0.235), ('edge', 0.225), ('path', 0.215), ('energy', 0.208), ('avoids', 0.202), ('build', 0.188), ('orientation', 0.139), ('scene', 0.139), ('computation', 0.137), ('object', 0.132), ('scenes', 0.122), ('reuse', 0.116), ('examine', 0.107), ('fill', 0.107), ('might', 0.106), ('need', 0.103), ('distance', 0.096), ('objects', 0.096), ('parts', 0.096), ('brain', 0.092), ('recognition', 0.088), ('minimize', 0.087), ('plan', 0.084), ('full', 0.082), ('suggestions', 0.082), ('go', 0.08), ('argument', 0.079), ('given', 0.077), ('weak', 0.075), ('executing', 0.07), ('impetus', 0.07), ('reasonable', 0.068), ('system', 0.068), ('must', 0.065), ('detect', 0.064), ('brains', 0.064), ('begins', 0.064), ('edges', 0.064), ('imagination', 0.064), ('shouldn', 0.061), ('repeat', 0.061), ('evolutionary', 0.061), ('exhibit', 0.061), ('telling', 0.061), ('confuse', 0.061), ('constantly', 0.061), ('stand', 0.061), ('wasted', 0.061)]
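The page does not say exactly how these weights were computed; a minimal sketch of the general recipe, assuming scikit-learn's TfidfVectorizer over the blog corpus (the stand-in posts below are invented), would be:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-ins for the real hunch.net corpus; purely illustrative.
posts = [
    "introspection gone awry is introspectionism a disease of ai research",
    "manifold based dimension reduction constructs a graph of nearest neighbors",
    "unlabeled data and automated labeling for machine learning",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(posts)              # shape: (n_posts, n_terms)

# Top-weighted words for post 0, like the (wordName, wordTfidf) list above.
row = tfidf[0].toarray().ravel()
terms = vec.get_feature_names_out()
top = sorted(zip(terms, row), key=lambda t: -t[1])[:10]
print([(w, round(s, 3)) for w, s in top])
```

The sentence summary above can be produced the same way, e.g. by scoring each sentence by the summed tf-idf weights of its words.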

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999964 153 hunch net-2006-02-02-Introspectionism as a Disease


2 0.11105656 102 hunch net-2005-08-11-Why Manifold-Based Dimension Reduction Techniques?

Introduction: Manifold based dimension-reduction algorithms share the following general outline. Given: a metric d() and a set of points S. Construct a graph with a point in every node and every edge connecting to the node of one of the k-nearest neighbors. Associate with the edge a weight which is the distance between the points in the connected nodes. Digest the graph. This might include computing the shortest path between all points or figuring out how to linearly interpolate the point from its neighbors. Find a set of points in a low dimensional space which preserve the digested properties. Examples include LLE, Isomap (which I worked on), Hessian-LLE, SDE, and many others. The hope with these algorithms is that they can recover the low dimensional structure of point sets in high dimensional spaces. Many of them can be shown to work in interesting ways producing various compelling pictures. Despite doing some early work in this direction, I suffer from a motivational

3 0.086647898 143 hunch net-2005-12-27-Automated Labeling

Introduction: One of the common trends in machine learning has been an emphasis on the use of unlabeled data. The argument goes something like “there aren’t many labeled web pages out there, but there are a huge number of web pages, so we must find a way to take advantage of them.” There are several standard approaches for doing this: Unsupervised Learning. You use only unlabeled data. In a typical application, you cluster the data and hope that the clusters somehow correspond to what you care about. Semisupervised Learning. You use both unlabeled and labeled data to build a predictor. The unlabeled data influences the learned predictor in some way. Active Learning. You have unlabeled data and access to a labeling oracle. You interactively choose which examples to label so as to optimize prediction accuracy. It seems there is a fourth approach worth serious investigation—automated labeling. The approach goes as follows: Identify some subset of observed values to predict

4 0.080451228 352 hunch net-2009-05-06-Machine Learning to AI

Introduction: I recently had fun discussions with both Vikash Mansinghka and Thomas Breuel about approaching AI with machine learning. The general interest in taking a crack at AI with machine learning seems to be rising on many fronts including DARPA. As a matter of history, there was a great deal of interest in AI which died down before I began research. There remain many projects and conferences spawned in this earlier AI wave, as well as a good bit of experience about what did not work, or at least did not work yet. Here are a few examples of failure modes that people seem to run into: Supply/Product confusion. Sometimes we think “Intelligences use X, so I’ll create X and have an Intelligence.” An example of this is the Cyc Project which inspires some people as “intelligences use ontologies, so I’ll create an ontology and a system using it to have an Intelligence.” The flaw here is that Intelligences create ontologies, which they use, and without the ability to create ont

5 0.080419503 235 hunch net-2007-03-03-All Models of Learning have Flaws

Introduction: Attempts to abstract and study machine learning are within some given framework or mathematical model. It turns out that all of these models are significantly flawed for the purpose of studying machine learning. I’ve created a table (below) outlining the major flaws in some common models of machine learning. The point here is not simply “woe unto us”. There are several implications which seem important. The multitude of models is a point of continuing confusion. It is common for people to learn about machine learning within one framework which often becomes their “home framework” through which they attempt to filter all machine learning. (Have you met people who can only think in terms of kernels? Only via Bayes Law? Only via PAC Learning?) Explicitly understanding the existence of these other frameworks can help resolve the confusion. This is particularly important when reviewing and particularly important for students. Algorithms which conform to multiple approaches c

6 0.07428512 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

7 0.07169731 282 hunch net-2008-01-06-Research Political Issues

8 0.070207939 58 hunch net-2005-04-21-Dynamic Programming Generalizations and Their Use

9 0.068633527 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control

10 0.068564296 366 hunch net-2009-08-03-Carbon in Computer Science Research

11 0.068389066 351 hunch net-2009-05-02-Wielding a New Abstraction

12 0.066823214 380 hunch net-2009-11-29-AI Safety

13 0.065305002 424 hunch net-2011-02-17-What does Watson mean?

14 0.064789236 378 hunch net-2009-11-15-The Other Online Learning

15 0.064235494 437 hunch net-2011-07-10-ICML 2011 and the future

16 0.063378744 183 hunch net-2006-06-14-Explorations of Exploration

17 0.063173018 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

18 0.061182488 388 hunch net-2010-01-24-Specializations of the Master Problem

19 0.061105501 201 hunch net-2006-08-07-The Call of the Deep

20 0.060472578 353 hunch net-2009-05-08-Computability in Artificial Intelligence


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.159), (1, 0.021), (2, -0.035), (3, 0.069), (4, -0.006), (5, -0.023), (6, -0.01), (7, 0.054), (8, 0.035), (9, 0.008), (10, -0.068), (11, -0.05), (12, -0.019), (13, 0.001), (14, -0.006), (15, 0.011), (16, 0.042), (17, -0.01), (18, 0.039), (19, 0.044), (20, 0.005), (21, 0.034), (22, -0.034), (23, 0.034), (24, 0.014), (25, 0.002), (26, -0.025), (27, -0.048), (28, 0.029), (29, -0.04), (30, 0.019), (31, -0.022), (32, 0.03), (33, -0.011), (34, -0.004), (35, -0.024), (36, -0.033), (37, 0.031), (38, -0.005), (39, -0.011), (40, 0.004), (41, -0.012), (42, 0.077), (43, 0.014), (44, 0.002), (45, 0.036), (46, -0.01), (47, 0.01), (48, -0.024), (49, -0.067)]
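"lsi" here is presumably latent semantic indexing: a truncated SVD of the tf-idf matrix, whose per-document coordinates are the 50 topic weights listed above. A minimal sketch under that assumption, with invented stand-in posts and only 2 components so it runs on a tiny corpus:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [  # stand-ins for the real corpus
    "introspectionism edge detector path plan robot navigation",
    "machine learning to ai failure modes ontologies",
    "hardware sufficient for ai simulating a human brain",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)
# The page lists 50 weights, so n_components=50 on the real corpus.
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Cosine similarity in LSI space, like the simValue column below.
print(cosine_similarity(lsi[:1], lsi).ravel())
```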

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95546693 153 hunch net-2006-02-02-Introspectionism as a Disease


2 0.76348978 352 hunch net-2009-05-06-Machine Learning to AI


3 0.69016993 287 hunch net-2008-01-28-Sufficient Computation

Introduction: Do we have computer hardware sufficient for AI? This question is difficult to answer, but here’s a try: One way to achieve AI is by simulating a human brain. A human brain has about 10^15 synapses which operate at about 10^2 per second implying about 10^17 bit ops per second. A modern computer runs at 10^9 cycles/second and operates on 10^2 bits per cycle implying 10^11 bits processed per second. The gap here is only 6 orders of magnitude, which can be plausibly surpassed via cluster machines. For example, the BlueGene/L operates 10^5 nodes (one order of magnitude short). Its peak recorded performance is about 0.5*10^15 FLOPS which translates to about 10^16 bit ops per second, which is nearly 10^17. There are many criticisms (both positive and negative) for this argument. Simulation of a human brain might require substantially more detail. Perhaps an additional 10^2 is required per neuron. We may not need to simulate a human brain to achieve AI. Ther

4 0.68704528 295 hunch net-2008-04-12-It Doesn’t Stop

Introduction: I’ve enjoyed the Terminator movies and show. Neglecting the whacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. The United States has been experimenting with trying to stop research on stem cells. It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged as a mu

5 0.6822812 380 hunch net-2009-11-29-AI Safety

Introduction: Dan Reeves introduced me to Michael Vassar who ran the Singularity Summit and educated me a bit on the subject of AI safety which the Singularity Institute has small grants for. I still believe that interstellar space travel is necessary for long term civilization survival, and the AI is necessary for interstellar space travel. On these grounds alone, we could judge that developing AI is much more safe than not. Nevertheless, there is a basic reasonable fear, as expressed by some commenters, that AI could go bad. A basic scenario starts with someone inventing an AI and telling it to make as much money as possible. The AI promptly starts trading in various markets to make money. To improve, it crafts a virus that takes over most of the world’s computers using it as a surveillance network so that it can always make the right decision. The AI also branches out into any form of distance work, taking over the entire outsourcing process for all jobs that are entirely di

6 0.67229414 353 hunch net-2009-05-08-Computability in Artificial Intelligence

7 0.65932018 102 hunch net-2005-08-11-Why Manifold-Based Dimension Reduction Techniques?

8 0.62379646 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

9 0.61848611 424 hunch net-2011-02-17-What does Watson mean?

10 0.61724347 370 hunch net-2009-09-18-Necessary and Sufficient Research

11 0.59925133 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

12 0.58473873 262 hunch net-2007-09-16-Optimizing Machine Learning Programs

13 0.56598288 314 hunch net-2008-08-24-Mass Customized Medicine in the Future?

14 0.55799395 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning

15 0.55034637 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

16 0.54841018 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

17 0.5472244 241 hunch net-2007-04-28-The Coming Patent Apocalypse

18 0.53934205 91 hunch net-2005-07-10-Thinking the Unthought

19 0.53804755 358 hunch net-2009-06-01-Multitask Poisoning

20 0.53697217 237 hunch net-2007-04-02-Contextual Scaling


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.013), (10, 0.024), (27, 0.175), (34, 0.36), (38, 0.043), (53, 0.079), (55, 0.104), (64, 0.024), (94, 0.053), (95, 0.019)]
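Similarly, a minimal latent Dirichlet allocation sketch, assuming scikit-learn and invented stand-in posts, shows where sparse (topicId, topicWeight) pairs like the ones above come from:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [  # stand-ins for the real corpus
    "introspectionism edge detector obstacle map direction decision",
    "ny ml symposium posters clustering online lda talks",
    "icml papers pac bayes similarity functions path planning",
]

counts = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)     # rows: per-post topic weights

# Report only non-negligible topics, like the sparse list above.
print([(i, round(w, 3)) for i, w in enumerate(theta[0]) if w > 0.01])
```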

similar blogs list:

simIndex simValue blogId blogTitle

1 0.86877191 415 hunch net-2010-10-28-NY ML Symposium 2010

Introduction: About 200 people attended the 2010 NYAS ML Symposium this year. (It was about 170 last year.) I particularly enjoyed several talks. Yann has a new live demo of (limited) real-time object recognition learning. Sanjoy gave a fairly convincing and comprehensible explanation of why a modified form of single-linkage clustering is consistent in higher dimensions, and why consistency is a critical feature for clustering algorithms. I’m curious how well this algorithm works in practice. Matt Hoffman’s poster covering online LDA seemed pretty convincing to me as an algorithmic improvement. This year, we allocated more time towards posters & poster spotlights. For next year, we are considering some further changes. The format has traditionally been 4 invited Professor speakers, with posters and poster spotlight for students. Demand from other parties to participate is growing, for example from postdocs and startups in the area. Another growing concern is the fa

same-blog 2 0.86523527 153 hunch net-2006-02-02-Introspectionism as a Disease


3 0.84409177 188 hunch net-2006-06-30-ICML papers

Introduction: Here are some ICML papers which interested me. Arindam Banerjee had a paper which notes that PAC-Bayes bounds, a core theorem in online learning, and the optimality of Bayesian learning statements share a core inequality in their proof. Pieter Abbeel, Morgan Quigley and Andrew Y. Ng have a paper discussing RL techniques for learning given a bad (but not too bad) model of the world. Nina Balcan and Avrim Blum have a paper which discusses how to learn given a similarity function rather than a kernel. A similarity function requires less structure than a kernel, implying that a learning algorithm using a similarity function might be applied in situations where no effective kernel is evident. Nathan Ratliff, Drew Bagnell, and Marty Zinkevich have a paper describing an algorithm which attempts to fuse A* path planning with learning of transition costs based on human demonstration. Papers (2), (3), and (4), all seem like an initial pass at solving in

4 0.84001905 15 hunch net-2005-02-08-Some Links

Introduction: Yaroslav Bulatov collects some links to other technical blogs.

5 0.7845 82 hunch net-2005-06-17-Reopening RL->Classification

Introduction: In research, it’s often the case that solving a problem helps you realize that it wasn’t the right problem to solve. This is the case for the “reduce RL to classification” problem with the solution hinted at here and turned into a paper here. The essential difficulty is that the method of stating and analyzing reductions ends up being nonalgorithmic (unlike previous reductions) unless you work with learning from teleoperated robots as Greg Grudic does. The difficulty here is due to the reduction being dependent on the optimal policy (which a human teleoperator might simulate, but which is otherwise unavailable). So, this problem is “open” again with the caveat that this time we want a more algorithmic solution. Whether or not this is feasible at all is still unclear and evidence in either direction would greatly interest me. A positive answer might have many practical implications in the long run.

6 0.7402972 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium

7 0.55708694 437 hunch net-2011-07-10-ICML 2011 and the future

8 0.54145616 424 hunch net-2011-02-17-What does Watson mean?

9 0.53986639 144 hunch net-2005-12-28-Yet more nips thoughts

10 0.53967142 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control

11 0.53637129 333 hunch net-2008-12-27-Adversarial Academia

12 0.52664512 343 hunch net-2009-02-18-Decision by Vetocracy

13 0.52134323 454 hunch net-2012-01-30-ICML Posters and Scope

14 0.52130705 225 hunch net-2007-01-02-Retrospective

15 0.51984215 297 hunch net-2008-04-22-Taking the next step

16 0.51976949 207 hunch net-2006-09-12-Incentive Compatible Reviewing

17 0.51969826 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

18 0.51798666 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

19 0.51746613 44 hunch net-2005-03-21-Research Styles in Machine Learning

20 0.51728123 194 hunch net-2006-07-11-New Models