hunch_net hunch_net-2008 hunch_net-2008-287 knowledge-graph by maker-knowledge-mining

287 hunch net-2008-01-28-Sufficient Computation


meta info for this blog

Source: html

Introduction: Do we have computer hardware sufficient for AI? This question is difficult to answer, but here’s a try: One way to achieve AI is by simulating a human brain. A human brain has about 10^15 synapses which operate at about 10^2 per second, implying about 10^17 bit ops per second. A modern computer runs at 10^9 cycles/second and operates on 10^2 bits per cycle, implying 10^11 bits processed per second. The gap here is only 6 orders of magnitude, which can be plausibly surpassed via cluster machines. For example, the BlueGene/L operates 10^5 nodes (one order of magnitude short). Its peak recorded performance is about 0.5*10^15 FLOPS, which translates to about 10^16 bit ops per second, which is nearly 10^17. There are many criticisms (both positive and negative) for this argument. Simulation of a human brain might require substantially more detail. Perhaps an additional 10^2 is required per neuron. We may not need to simulate a human brain to achieve AI. There…
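The arithmetic above can be checked with a short back-of-envelope script. The figures are the post's own order-of-magnitude estimates, and the FLOPS-to-bit-ops conversion factor is an assumption implied by (not stated in) the post:

```python
import math

# Figures from the post (rough order-of-magnitude estimates).
SYNAPSES = 1e15            # synapses in a human brain
SYNAPSE_RATE = 1e2         # operations per synapse per second
brain_ops = SYNAPSES * SYNAPSE_RATE    # ~10^17 bit ops/second

CPU_HZ = 1e9               # cycles/second for a 2008-era CPU
BITS_PER_CYCLE = 1e2
cpu_ops = CPU_HZ * BITS_PER_CYCLE      # ~10^11 bit ops/second

gap = math.log10(brain_ops / cpu_ops)
print(f"single-machine gap: {gap:.0f} orders of magnitude")

# BlueGene/L: ~0.5*10^15 peak FLOPS. The post's ~10^16 bit ops/second
# implies roughly 20 bit ops per FLOP -- an assumed conversion factor.
BITOPS_PER_FLOP = 20
bluegene_ops = 0.5e15 * BITOPS_PER_FLOP    # ~10^16 bit ops/second
shortfall = math.log10(brain_ops / bluegene_ops)
print(f"BlueGene/L shortfall: {shortfall:.0f} order(s) of magnitude")
```

This reproduces the post's two claims: a 6-order-of-magnitude gap for a single machine, and a remaining shortfall of about one order of magnitude for the BlueGene/L cluster.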


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 This question is difficult to answer, but here’s a try: One way to achieve AI is by simulating a human brain. [sent-2, score-0.352]

2 A human brain has about 10^15 synapses which operate at about 10^2 per second implying about 10^17 bit ops per second. [sent-3, score-1.84]

3 A modern computer runs at 10 9 cycles/second and operates on 10 2 bits per cycle implying 10 11 bits processed per second. [sent-4, score-1.856]

4 The gap here is only 6 orders of magnitude, which can be plausibly surpassed via cluster machines. [sent-5, score-0.27]

5 For example, the BlueGene/L operates 10 5 nodes (one order of magnitude short). [sent-6, score-0.478]

6 Its peak recorded performance is about 0.5*10^15 FLOPS which translates to about 10^16 bit ops per second, which is nearly 10^17. [sent-8, score-0.811]

7 There are many criticisms (both positive and negative) for this argument. [sent-9, score-0.108]

8 Simulation of a human brain might require substantially more detail. [sent-10, score-0.481]

9 Perhaps an additional 10^2 is required per neuron. [sent-11, score-0.345]

10 We may not need to simulate a human brain to achieve AI. [sent-12, score-0.71]

11 There are certainly many examples where we have been able to design systems that work much better than evolved systems. [sent-13, score-0.202]

12 The internet can be viewed as a supercluster with 10^9 or so CPUs, easily satisfying the computational requirements. [sent-14, score-0.269]

13 Satisfying the computational requirement is not enough—bandwidth and latency requirements must also be satisfied. [sent-15, score-0.387]

14 These sorts of order-of-magnitude calculations appear sloppy, but they work out a remarkable number of times when tested elsewhere. [sent-16, score-0.552]

15 I wouldn’t be surprised to see it work out here. [sent-17, score-0.16]

16 Even with sufficient hardware, we are missing a vital ingredient: knowing how to do things. [sent-18, score-0.303]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('per', 0.345), ('ops', 0.278), ('brain', 0.244), ('human', 0.237), ('operates', 0.216), ('satisfying', 0.175), ('magnitude', 0.163), ('bits', 0.148), ('ai', 0.148), ('implying', 0.13), ('cpus', 0.123), ('cycle', 0.123), ('flops', 0.123), ('calculations', 0.123), ('evolved', 0.123), ('processed', 0.123), ('sufficient', 0.121), ('achieve', 0.115), ('ingredient', 0.114), ('simulate', 0.114), ('simulation', 0.114), ('peak', 0.114), ('translates', 0.114), ('computer', 0.109), ('criticisms', 0.108), ('orders', 0.103), ('hardware', 0.103), ('requirement', 0.103), ('bandwidth', 0.099), ('remarkable', 0.099), ('nodes', 0.099), ('vital', 0.099), ('recorded', 0.099), ('second', 0.097), ('wouldn', 0.095), ('latency', 0.095), ('requirements', 0.095), ('computational', 0.094), ('runs', 0.092), ('operate', 0.09), ('gap', 0.087), ('tested', 0.087), ('elsewhere', 0.087), ('knowing', 0.083), ('surprised', 0.081), ('cluster', 0.08), ('work', 0.079), ('sorts', 0.077), ('modern', 0.077), ('bit', 0.074)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 287 hunch net-2008-01-28-Sufficient Computation


2 0.14729436 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning

Introduction: All branches of machine learning seem to be united in the idea of using data to make predictions. However, people disagree to some extent about what this means. One way to categorize these different goals is on an axis, where one extreme is “tools to aid a human in using data to do prediction” and the other extreme is “tools to do prediction with no human intervention”. Here is my estimate of where various elements of machine learning fall on this spectrum. Human Necessary Human partially necessary Human unnecessary Clustering, data visualization Bayesian Learning, Probabilistic Models, Graphical Models Kernel Learning (SVM’s, etc..) Decision Trees? Reinforcement Learning The exact position of each element is of course debatable. My reasoning is that clustering and data visualization are nearly useless for prediction without a human in the loop. Bayesian/probabilistic models/graphical models generally require a human to sit and think about what

3 0.12786889 353 hunch net-2009-05-08-Computability in Artificial Intelligence

Introduction: Normally I do not blog, but John kindly invited me to do so. Since computability issues play a major role in Artificial Intelligence and Machine Learning, I would like to take the opportunity to comment on that and raise some questions. The general attitude is that AI is about finding efficient smart algorithms. For large parts of machine learning, the same attitude is not too dangerous. If you want to concentrate on conceptual problems, simply become a statistician. There is no analogous escape for modern research on AI (as opposed to GOFAI rooted in logic). Let me show by analogy why limiting research to computational questions is bad for any field. Except in computer science, computational aspects play little role in the development of fundamental theories: Consider e.g. set theory with axiom of choice, foundations of logic, exact/full minimax for zero-sum games, quantum (field) theory, string theory, … Indeed, at least in physics, every new fundamental theory seems to

4 0.12107984 380 hunch net-2009-11-29-AI Safety

Introduction: Dan Reeves introduced me to Michael Vassar who ran the Singularity Summit and educated me a bit on the subject of AI safety which the Singularity Institute has small grants for . I still believe that interstellar space travel is necessary for long term civilization survival, and the AI is necessary for interstellar space travel . On these grounds alone, we could judge that developing AI is much more safe than not. Nevertheless, there is a basic reasonable fear, as expressed by some commenters, that AI could go bad. A basic scenario starts with someone inventing an AI and telling it to make as much money as possible. The AI promptly starts trading in various markets to make money. To improve, it crafts a virus that takes over most of the world’s computers using it as a surveillance network so that it can always make the right decision. The AI also branches out into any form of distance work, taking over the entire outsourcing process for all jobs that are entirely di

5 0.11178423 352 hunch net-2009-05-06-Machine Learning to AI

Introduction: I recently had fun discussions with both Vikash Mansinghka and Thomas Breuel about approaching AI with machine learning. The general interest in taking a crack at AI with machine learning seems to be rising on many fronts including DARPA . As a matter of history, there was a great deal of interest in AI which died down before I began research. There remain many projects and conferences spawned in this earlier AI wave, as well as a good bit of experience about what did not work, or at least did not work yet. Here are a few examples of failure modes that people seem to run into: Supply/Product confusion . Sometimes we think “Intelligences use X, so I’ll create X and have an Intelligence.” An example of this is the Cyc Project which inspires some people as “intelligences use ontologies, so I’ll create an ontology and a system using it to have an Intelligence.” The flaw here is that Intelligences create ontologies, which they use, and without the ability to create ont

6 0.10123573 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

7 0.10009576 229 hunch net-2007-01-26-Parallel Machine Learning Problems

8 0.092253327 424 hunch net-2011-02-17-What does Watson mean?

9 0.091908306 370 hunch net-2009-09-18-Necessary and Sufficient Research

10 0.091498137 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

11 0.087417237 120 hunch net-2005-10-10-Predictive Search is Coming

12 0.083025374 112 hunch net-2005-09-14-The Predictionist Viewpoint

13 0.080597468 168 hunch net-2006-04-02-Mad (Neuro)science

14 0.079399735 295 hunch net-2008-04-12-It Doesn’t Stop

15 0.078907199 159 hunch net-2006-02-27-The Peekaboom Dataset

16 0.078044556 440 hunch net-2011-08-06-Interesting thing at UAI 2011

17 0.077547923 227 hunch net-2007-01-10-A Deep Belief Net Learning Problem

18 0.07560268 237 hunch net-2007-04-02-Contextual Scaling

19 0.074847788 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

20 0.073263668 366 hunch net-2009-08-03-Carbon in Computer Science Research


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.143), (1, 0.035), (2, -0.055), (3, 0.07), (4, 0.02), (5, -0.02), (6, -0.03), (7, 0.02), (8, 0.018), (9, 0.032), (10, -0.112), (11, -0.024), (12, -0.018), (13, 0.008), (14, -0.062), (15, -0.017), (16, 0.065), (17, -0.081), (18, 0.062), (19, 0.056), (20, -0.015), (21, 0.034), (22, -0.052), (23, 0.152), (24, 0.124), (25, 0.048), (26, 0.031), (27, -0.064), (28, 0.138), (29, -0.041), (30, 0.052), (31, -0.058), (32, 0.006), (33, -0.067), (34, 0.101), (35, 0.019), (36, -0.04), (37, 0.024), (38, -0.015), (39, -0.012), (40, -0.01), (41, -0.069), (42, 0.003), (43, -0.006), (44, 0.033), (45, -0.022), (46, 0.038), (47, 0.039), (48, 0.035), (49, 0.026)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98453873 287 hunch net-2008-01-28-Sufficient Computation


2 0.81817144 380 hunch net-2009-11-29-AI Safety

Introduction: Dan Reeves introduced me to Michael Vassar who ran the Singularity Summit and educated me a bit on the subject of AI safety which the Singularity Institute has small grants for . I still believe that interstellar space travel is necessary for long term civilization survival, and the AI is necessary for interstellar space travel . On these grounds alone, we could judge that developing AI is much more safe than not. Nevertheless, there is a basic reasonable fear, as expressed by some commenters, that AI could go bad. A basic scenario starts with someone inventing an AI and telling it to make as much money as possible. The AI promptly starts trading in various markets to make money. To improve, it crafts a virus that takes over most of the world’s computers using it as a surveillance network so that it can always make the right decision. The AI also branches out into any form of distance work, taking over the entire outsourcing process for all jobs that are entirely di

3 0.70791066 353 hunch net-2009-05-08-Computability in Artificial Intelligence

Introduction: Normally I do not blog, but John kindly invited me to do so. Since computability issues play a major role in Artificial Intelligence and Machine Learning, I would like to take the opportunity to comment on that and raise some questions. The general attitude is that AI is about finding efficient smart algorithms. For large parts of machine learning, the same attitude is not too dangerous. If you want to concentrate on conceptual problems, simply become a statistician. There is no analogous escape for modern research on AI (as opposed to GOFAI rooted in logic). Let me show by analogy why limiting research to computational questions is bad for any field. Except in computer science, computational aspects play little role in the development of fundamental theories: Consider e.g. set theory with axiom of choice, foundations of logic, exact/full minimax for zero-sum games, quantum (field) theory, string theory, … Indeed, at least in physics, every new fundamental theory seems to

4 0.68944263 352 hunch net-2009-05-06-Machine Learning to AI

Introduction: I recently had fun discussions with both Vikash Mansinghka and Thomas Breuel about approaching AI with machine learning. The general interest in taking a crack at AI with machine learning seems to be rising on many fronts including DARPA . As a matter of history, there was a great deal of interest in AI which died down before I began research. There remain many projects and conferences spawned in this earlier AI wave, as well as a good bit of experience about what did not work, or at least did not work yet. Here are a few examples of failure modes that people seem to run into: Supply/Product confusion . Sometimes we think “Intelligences use X, so I’ll create X and have an Intelligence.” An example of this is the Cyc Project which inspires some people as “intelligences use ontologies, so I’ll create an ontology and a system using it to have an Intelligence.” The flaw here is that Intelligences create ontologies, which they use, and without the ability to create ont

5 0.63805735 295 hunch net-2008-04-12-It Doesn’t Stop

Introduction: I’ve enjoyed the Terminator movies and show. Neglecting the whacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for writer’s to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. The United States, has been experimenting with trying to stop research on stem cells . It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged as a mu

6 0.56747311 153 hunch net-2006-02-02-Introspectionism as a Disease

7 0.55391276 120 hunch net-2005-10-10-Predictive Search is Coming

8 0.55095333 424 hunch net-2011-02-17-What does Watson mean?

9 0.52364957 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning

10 0.51458842 128 hunch net-2005-11-05-The design of a computing cluster

11 0.44335169 102 hunch net-2005-08-11-Why Manifold-Based Dimension Reduction Techniques?

12 0.44206923 366 hunch net-2009-08-03-Carbon in Computer Science Research

13 0.43688381 300 hunch net-2008-04-30-Concerns about the Large Scale Learning Challenge

14 0.42472419 209 hunch net-2006-09-19-Luis von Ahn is awarded a MacArthur fellowship.

15 0.39705253 229 hunch net-2007-01-26-Parallel Machine Learning Problems

16 0.39556259 237 hunch net-2007-04-02-Contextual Scaling

17 0.39182413 39 hunch net-2005-03-10-Breaking Abstractions

18 0.39133972 32 hunch net-2005-02-27-Antilearning: When proximity goes bad

19 0.38854221 440 hunch net-2011-08-06-Interesting thing at UAI 2011

20 0.38688824 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(11, 0.414), (27, 0.223), (38, 0.046), (53, 0.019), (55, 0.072), (64, 0.016), (94, 0.067), (95, 0.044)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.9615767 487 hunch net-2013-07-24-ICML 2012 videos lost

Introduction: A big ouch—all the videos for ICML 2012 were lost in a shuffle. Rajnish sends the below, but if anyone can help that would be greatly appreciated. —————————————————————————— Sincere apologies to ICML community for loosing 2012 archived videos What happened: In order to publish 2013 videos, we decided to move 2012 videos to another server. We have a weekly backup service from the provider but after removing the videos from the current server, when we tried to retrieve the 2012 videos from backup service, the backup did not work because of provider-specific requirements that we had ignored while removing the data from previous server. What are we doing about this: At this point, we are still looking into raw footage to find if we can retrieve some of the videos, but following are the steps we are taking to make sure this does not happen again in future: (1) We are going to create a channel on Vimeo (and potentially on YouTube) and we will publish there the p-in-p- or slide-vers

same-blog 2 0.91717148 287 hunch net-2008-01-28-Sufficient Computation


3 0.81807482 265 hunch net-2007-10-14-NIPS workshp: Learning Problem Design

Introduction: Alina and I are organizing a workshop on Learning Problem Design at NIPS . What is learning problem design? It’s about being clever in creating learning problems from otherwise unlabeled data. Read the webpage above for examples. I want to participate! Email us before Nov. 1 with a description of what you want to talk about.

4 0.79647106 402 hunch net-2010-07-02-MetaOptimize

Introduction: Joseph Turian creates MetaOptimize for discussion of NLP and ML on big datasets. This includes a blog , but perhaps more importantly a question and answer section . I’m hopeful it will take off.

5 0.71523458 264 hunch net-2007-09-30-NIPS workshops are out.

Introduction: Here . I’m particularly interested in the Web Search , Efficient ML , and (of course) Learning Problem Design workshops but there are many others to check out as well. Workshops are a great chance to make progress on or learn about a topic. Relevance and interaction amongst diverse people can sometimes be magical.

6 0.5266118 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

7 0.51514 360 hunch net-2009-06-15-In Active Learning, the question changes

8 0.51154339 343 hunch net-2009-02-18-Decision by Vetocracy

9 0.50788122 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”

10 0.50762069 351 hunch net-2009-05-02-Wielding a New Abstraction

11 0.50723994 196 hunch net-2006-07-13-Regression vs. Classification as a Primitive

12 0.50718927 194 hunch net-2006-07-11-New Models

13 0.5066216 235 hunch net-2007-03-03-All Models of Learning have Flaws

14 0.50647271 220 hunch net-2006-11-27-Continuizing Solutions

15 0.50542545 378 hunch net-2009-11-15-The Other Online Learning

16 0.50534439 325 hunch net-2008-11-10-ICML Reviewing Criteria

17 0.50503081 304 hunch net-2008-06-27-Reviewing Horror Stories

18 0.50501025 95 hunch net-2005-07-14-What Learning Theory might do

19 0.50350147 371 hunch net-2009-09-21-Netflix finishes (and starts)

20 0.50318557 132 hunch net-2005-11-26-The Design of an Optimal Research Environment