hunch_net hunch_net-2009 hunch_net-2009-380 knowledge-graph by maker-knowledge-mining

380 hunch net-2009-11-29-AI Safety


meta info for this blog

Source: html

Introduction: Dan Reeves introduced me to Michael Vassar, who ran the Singularity Summit and educated me a bit on the subject of AI safety, which the Singularity Institute has small grants for. I still believe that interstellar space travel is necessary for long term civilization survival, and AI is necessary for interstellar space travel. On these grounds alone, we could judge that developing AI is much safer than not. Nevertheless, there is a basic reasonable fear, as expressed by some commenters, that AI could go bad. A basic scenario starts with someone inventing an AI and telling it to make as much money as possible. The AI promptly starts trading in various markets to make money. To improve, it crafts a virus that takes over most of the world’s computers, using them as a surveillance network so that it can always make the right decision. The AI also branches out into any form of distance work, taking over the entire outsourcing process for all jobs that are entirely digital…


Summary: the most important sentences generated by the tfidf model (a scoring sketch follows the list)

sentIndex sentText sentNum sentScore

1 I still believe that interstellar space travel is necessary for long term civilization survival, and AI is necessary for interstellar space travel. [sent-2, score-0.844]

2 A basic scenario starts with someone inventing an AI and telling it to make as much money as possible. [sent-5, score-0.295]

3 The AI promptly starts trading in various markets to make money. [sent-6, score-0.339]

4 To improve, it crafts a virus that takes over most of the world’s computers, using them as a surveillance network so that it can always make the right decision. [sent-7, score-0.196]

5 The AI also branches out into any form of distance work, taking over the entire outsourcing process for all jobs that are entirely digital. [sent-8, score-0.126]

6 Robot cars and construction teams complete the process, so that any human with money can order anything cheaply and quickly, but no jobs remain for humans. [sent-10, score-0.244]

7 At this point, the AI is stuck—it can eventually extract all the money from the economic system, and that’s all there is. [sent-11, score-0.174]

8 It simply funds appropriate political campaigns so that in some country a measure passes granting the AI the right to make money, which it promptly does, mushrooming its wealth from trillions to the maximum number representable in all computers simultaneously. [sent-13, score-0.448]

9 To remove this obstacle, the AI promptly starts making more computers on a worldwide scale until all available power sources are used up. [sent-14, score-0.428]

10 To add more power, the AI starts a space program with beamed power. [sent-15, score-0.281]

11 Unfortunately, it finds the pesky atmosphere an obstacle to space travel, so it chemically binds the atmosphere in the crust of the earth, allowing many Gauss Guns to efficiently project material into space, where solar sails are used for orbital positioning. [sent-16, score-1.033]

12 This process continues, slowed perhaps by the need to cool the Earth’s core, until the earth and other viable rocky bodies in the solar system are discorporated into a Dyson sphere . [sent-17, score-0.392]

13 Then, the AI goes interstellar with the same program. [sent-18, score-0.158]

14 Somewhere in this process, certainly by the time the atmosphere is chemically bound, all life on earth (except the AI if you count it) is extinct. [sent-19, score-0.436]

15 One element of understanding AI safety seems to be understanding what an AI could do. [sent-21, score-0.15]

16 The general problem is related to the wish problem: How do you specify a wish in a manner so that it can’t be misinterpreted? [sent-25, score-0.255]

17 Applied to AI, this approach also has limits because any limit imposed by a person can and eventually will be removed by a person given sufficient opportunity. [sent-27, score-0.277]

18 Applied to AI, the idea would be that we make many AIs programmed to behave well either via laws or wish tricks, with an additional element of aggressively enforcing this behavior in other AIs. [sent-31, score-0.571]

19 Furthermore, the default must be that AIs are programmed to not harm or cause harm to humans, enforcing that behavior in other AIs. [sent-35, score-0.439]

20 Getting the programming right is the hard part, and I’m not clear on how viable this is, or how difficult it is compared to simply creating an AI, which of course I haven’t managed. [sent-36, score-0.178]
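The scores above come from a tfidf sentence model. A minimal sketch of that kind of scoring, assuming scikit-learn and a placeholder sentence list (the actual extraction pipeline behind this page is not published):

    # Sketch: rank sentences by the total tfidf weight of their terms.
    # Sentences are placeholders; a real pipeline would fit the idf
    # statistics on the whole blog corpus, not just this post.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    sentences = [
        "I still believe that interstellar space travel is necessary.",
        "A basic scenario starts with someone inventing an AI.",
        "The AI promptly starts trading in various markets to make money.",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(sentences)  # one tfidf row per sentence

    scores = np.asarray(matrix.sum(axis=1)).ravel()  # term-weight sum per sentence
    for score, sent in sorted(zip(scores, sentences), reverse=True):
        print(f"{score:.3f}  {sent}")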


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('ai', 0.646), ('ais', 0.21), ('earth', 0.173), ('atmosphere', 0.158), ('interstellar', 0.158), ('promptly', 0.158), ('starts', 0.129), ('money', 0.114), ('chemically', 0.105), ('enforcing', 0.105), ('programmed', 0.105), ('space', 0.103), ('travel', 0.099), ('wish', 0.099), ('obstacle', 0.093), ('solar', 0.093), ('computers', 0.089), ('safety', 0.087), ('harm', 0.082), ('singularity', 0.082), ('laws', 0.082), ('safe', 0.082), ('resources', 0.075), ('jobs', 0.072), ('viable', 0.072), ('robotics', 0.07), ('attack', 0.066), ('behavior', 0.065), ('element', 0.063), ('imposed', 0.063), ('necessary', 0.062), ('furthermore', 0.06), ('eventually', 0.06), ('anything', 0.058), ('manner', 0.057), ('approach', 0.056), ('right', 0.055), ('process', 0.054), ('constraints', 0.053), ('power', 0.052), ('make', 0.052), ('compared', 0.051), ('improve', 0.049), ('add', 0.049), ('person', 0.049), ('complementary', 0.047), ('corrupted', 0.047), ('funds', 0.047), ('granting', 0.047), ('sails', 0.047)]
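Both the (word, weight) list above and the similarity ranking below have a natural reading: each post gets a tfidf vector, the list above shows this post’s heaviest coordinates, and similar posts are ranked by cosine similarity between vectors. That reading is an assumption, as is the toy corpus; a minimal sketch:

    # Sketch: tfidf vectors per post, top terms, and a similarity ranking.
    # Post texts, titles, and the cosine metric are assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    titles = ["380 AI Safety", "353 Computability in AI",
              "287 Sufficient Computation"]
    posts = ["ai safety interstellar space travel singularity",
             "computability artificial intelligence machine learning",
             "computer hardware sufficient for ai human brain"]

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(posts)
    terms = vectorizer.get_feature_names_out()

    # Heaviest terms of this post's row, as in the list above.
    row = matrix[0].toarray().ravel()
    top = row.argsort()[::-1][:5]
    print(list(zip(terms[top], row[top].round(3))))

    # Similarity ranking, as in the list below (self-similarity ~1.0).
    sims = cosine_similarity(matrix[0], matrix).ravel()
    for value, title in sorted(zip(sims, titles), reverse=True):
        print(f"{value:.8f}  {title}")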

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 380 hunch net-2009-11-29-AI Safety


2 0.33126578 353 hunch net-2009-05-08-Computability in Artificial Intelligence

Introduction: Normally I do not blog, but John kindly invited me to do so. Since computability issues play a major role in Artificial Intelligence and Machine Learning, I would like to take the opportunity to comment on that and raise some questions. The general attitude is that AI is about finding efficient smart algorithms. For large parts of machine learning, the same attitude is not too dangerous. If you want to concentrate on conceptual problems, simply become a statistician. There is no analogous escape for modern research on AI (as opposed to GOFAI rooted in logic). Let me show by analogy why limiting research to computational questions is bad for any field. Except in computer science, computational aspects play little role in the development of fundamental theories: Consider e.g. set theory with axiom of choice, foundations of logic, exact/full minimax for zero-sum games, quantum (field) theory, string theory, … Indeed, at least in physics, every new fundamental theory seems to…

3 0.28121743 352 hunch net-2009-05-06-Machine Learning to AI

Introduction: I recently had fun discussions with both Vikash Mansinghka and Thomas Breuel about approaching AI with machine learning. The general interest in taking a crack at AI with machine learning seems to be rising on many fronts, including DARPA. As a matter of history, there was a great deal of interest in AI which died down before I began research. There remain many projects and conferences spawned in this earlier AI wave, as well as a good bit of experience about what did not work, or at least did not work yet. Here are a few examples of failure modes that people seem to run into: Supply/Product confusion. Sometimes we think “Intelligences use X, so I’ll create X and have an Intelligence.” An example of this is the Cyc Project, which inspires some people as “intelligences use ontologies, so I’ll create an ontology and a system using it to have an Intelligence.” The flaw here is that Intelligences create ontologies, which they use, and without the ability to create ontologies…

4 0.21200822 295 hunch net-2008-04-12-It Doesn’t Stop

Introduction: I’ve enjoyed the Terminator movies and show. Neglecting the whacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. The United States has been experimenting with trying to stop research on stem cells. It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged as a munition…

5 0.12107984 287 hunch net-2008-01-28-Sufficient Computation

Introduction: Do we have computer hardware sufficient for AI? This question is difficult to answer, but here’s a try: One way to achieve AI is by simulating a human brain. A human brain has about 10^15 synapses which operate at about 10^2 per second, implying about 10^17 bit ops per second. A modern computer runs at 10^9 cycles/second and operates on 10^2 bits per cycle, implying 10^11 bits processed per second. The gap here is only 6 orders of magnitude, which can be plausibly surpassed via cluster machines. For example, the BlueGene/L operates 10^5 nodes (one order of magnitude short). Its peak recorded performance is about 0.5*10^15 FLOPS, which translates to about 10^16 bit ops per second, which is nearly 10^17. There are many criticisms (both positive and negative) for this argument. Simulation of a human brain might require substantially more detail. Perhaps an additional 10^2 is required per neuron. We may not need to simulate a human brain to achieve AI. There…

6 0.11759627 100 hunch net-2005-08-04-Why Reinforcement Learning is Important

7 0.11049274 316 hunch net-2008-09-04-Fall ML Conferences

8 0.088993371 93 hunch net-2005-07-13-“Sister Conference” presentations

9 0.087107718 424 hunch net-2011-02-17-What does Watson mean?

10 0.074242331 120 hunch net-2005-10-10-Predictive Search is Coming

11 0.0737551 293 hunch net-2008-03-23-Interactive Machine Learning

12 0.073253706 270 hunch net-2007-11-02-The Machine Learning Award goes to …

13 0.071255401 370 hunch net-2009-09-18-Necessary and Sufficient Research

14 0.070627503 282 hunch net-2008-01-06-Research Political Issues

15 0.070472099 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

16 0.07035248 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

17 0.069196612 397 hunch net-2010-05-02-What’s the difference between gambling and rewarding good prediction?

18 0.068399943 486 hunch net-2013-07-10-Thoughts on Artificial Intelligence

19 0.066823214 153 hunch net-2006-02-02-Introspectionism as a Disease

20 0.066668212 237 hunch net-2007-04-02-Contextual Scaling
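Entry 5 above (Sufficient Computation) quotes a brain-vs-machine hardware estimate; as a quick arithmetic check (the estimates are the original post’s, only the calculation is mine):

    # Sanity check of the brain-vs-machine estimate quoted in entry 5 above.
    import math

    brain_ops = 1e15 * 1e2    # ~10^15 synapses * ~10^2 ops/sec = 10^17 bit ops/sec
    machine_ops = 1e9 * 1e2   # ~10^9 cycles/sec * ~10^2 bits/cycle = 10^11 bit ops/sec

    print(math.log10(brain_ops / machine_ops))  # gap: 6 orders of magnitude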


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.155), (1, 0.015), (2, -0.064), (3, 0.094), (4, -0.032), (5, -0.025), (6, -0.007), (7, 0.036), (8, 0.026), (9, -0.032), (10, -0.05), (11, -0.024), (12, 0.009), (13, 0.075), (14, -0.043), (15, -0.013), (16, 0.087), (17, -0.059), (18, 0.126), (19, 0.072), (20, -0.007), (21, 0.078), (22, -0.158), (23, 0.217), (24, 0.147), (25, -0.065), (26, 0.013), (27, -0.136), (28, 0.159), (29, -0.023), (30, 0.108), (31, -0.096), (32, -0.004), (33, -0.104), (34, 0.134), (35, 0.007), (36, -0.139), (37, 0.062), (38, -0.005), (39, 0.014), (40, -0.065), (41, -0.059), (42, 0.02), (43, -0.167), (44, -0.047), (45, 0.028), (46, 0.003), (47, -0.012), (48, 0.035), (49, -0.065)]
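These 50 signed weights look like one row of a truncated SVD of the corpus tfidf matrix, which is how LSI is usually implemented (e.g. gensim’s LsiModel or scikit-learn’s TruncatedSVD); that reading is an assumption. A toy sketch:

    # Sketch: LSI as truncated SVD of the tfidf matrix. The page appears to
    # use 50 topics; this placeholder corpus only supports 2.
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer

    posts = ["ai safety interstellar space travel",
             "machine learning conferences workshops",
             "reinforcement learning reward prediction"]

    tfidf = TfidfVectorizer().fit_transform(posts)
    lsi = TruncatedSVD(n_components=2, random_state=0)
    topic_vectors = lsi.fit_transform(tfidf)  # shape: (n_posts, n_topics)

    print(topic_vectors[0].round(3))  # signed topic weights, as in the list above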

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98625475 380 hunch net-2009-11-29-AI Safety


2 0.84646261 353 hunch net-2009-05-08-Computability in Artificial Intelligence


3 0.78259969 287 hunch net-2008-01-28-Sufficient Computation


4 0.76430851 352 hunch net-2009-05-06-Machine Learning to AI


5 0.7237708 295 hunch net-2008-04-12-It Doesn’t Stop


6 0.5355143 153 hunch net-2006-02-02-Introspectionism as a Disease

7 0.49857086 424 hunch net-2011-02-17-What does Watson mean?

8 0.42003831 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning

9 0.38697407 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs

10 0.38680223 120 hunch net-2005-10-10-Predictive Search is Coming

11 0.35196307 316 hunch net-2008-09-04-Fall ML Conferences

12 0.34454641 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

13 0.34296477 366 hunch net-2009-08-03-Carbon in Computer Science Research

14 0.3286134 323 hunch net-2008-11-04-Rise of the Machines

15 0.32663187 112 hunch net-2005-09-14-The Predictionist Viewpoint

16 0.31838796 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?

17 0.31252217 282 hunch net-2008-01-06-Research Political Issues

18 0.30872607 270 hunch net-2007-11-02-The Machine Learning Award goes to …

19 0.29251766 93 hunch net-2005-07-13-“Sister Conference” presentations

20 0.29184008 102 hunch net-2005-08-11-Why Manifold-Based Dimension Reduction Techniques?


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(3, 0.039), (14, 0.27), (16, 0.031), (27, 0.142), (37, 0.016), (38, 0.06), (48, 0.012), (53, 0.072), (55, 0.069), (56, 0.023), (63, 0.017), (64, 0.016), (68, 0.021), (94, 0.067), (95, 0.068)]
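These sparse (topicId, topicWeight) pairs look like an LDA document-topic distribution with near-zero topics dropped; assuming a standard LDA implementation, a toy sketch:

    # Sketch: per-document topic weights from LDA. Corpus, topic count, and
    # the display threshold are placeholders.
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    posts = ["ai safety interstellar space travel",
             "machine learning conferences workshops",
             "reinforcement learning reward prediction"]

    counts = CountVectorizer().fit_transform(posts)  # LDA wants raw counts
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(counts)           # each row sums to 1

    print([(i, round(w, 3)) for i, w in enumerate(doc_topics[0]) if w > 0.01])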

similar blogs list:

simIndex simValue blogId blogTitle

1 0.91668648 94 hunch net-2005-07-13-Text Entailment at AAAI

Introduction: Rajat Raina presented a paper on the technique they used for the PASCAL Recognizing Textual Entailment challenge. “Text entailment” is the problem of deciding if one sentence implies another. For example, the previous sentence entails: Text entailment is a decision problem. One sentence can imply another. The challenge was of the form: given an original sentence and another sentence, predict whether there was an entailment. All current techniques for predicting correctness of an entailment are at the “flail” stage—accuracies of around 58%, where humans could achieve near 100% accuracy, so there is much room to improve. Apparently, there may be another PASCAL challenge on this problem in the near future.

2 0.86080718 471 hunch net-2012-08-24-Patterns for research in machine learning

Introduction: There are a handful of basic code patterns that I wish I was more aware of when I started research in machine learning. Each on its own may seem pointless, but collectively they go a long way towards making the typical research workflow more efficient. Here they are: Separate code from data. Separate input data, working data and output data. Save everything to disk frequently. Separate options from parameters. Do not use global variables. Record the options used to generate each run of the algorithm. Make it easy to sweep options. Make it easy to execute only portions of the code. Use checkpointing. Write demos and tests. Click here for discussion and examples for each item. Also see Charles Sutton’s and HackerNews’ thoughts on the same topic. My guess is that these patterns will not only be useful for machine learning, but also any other computational work that involves either a) processing large amounts of data, or b) algorithms that take a significant…
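As a concrete illustration of one pattern from that excerpt (“record the options used to generate each run”), a minimal sketch with placeholder paths and option names:

    # Sketch: save the exact options alongside each run's outputs.
    import json, pathlib, time

    options = {"learning_rate": 0.1, "passes": 10, "seed": 42}  # placeholders

    run_dir = pathlib.Path("runs") / time.strftime("%Y%m%d-%H%M%S")
    run_dir.mkdir(parents=True, exist_ok=True)
    (run_dir / "options.json").write_text(json.dumps(options, indent=2))
    # ...train with `options`, writing all outputs under run_dir...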

same-blog 3 0.84663838 380 hunch net-2009-11-29-AI Safety


4 0.78538477 407 hunch net-2010-08-23-Boosted Decision Trees for Deep Learning

Introduction: About 4 years ago, I speculated that decision trees qualify as a deep learning algorithm because they can make decisions which are substantially nonlinear in the input representation. Ping Li has proved this correct, empirically at UAI, by showing that boosted decision trees can beat deep belief networks on versions of Mnist which are artificially hardened so as to make them solvable only by deep learning algorithms. This is an important point, because the ability to solve these sorts of problems is probably the best objective definition of a deep learning algorithm we have. I’m not that surprised. In my experience, if you can accept the computational drawbacks of a boosted decision tree, they can achieve pretty good performance. Geoff Hinton once told me that the great thing about deep belief networks is that they work. I understand that Ping had very substantial difficulty in getting this published, so I hope some reviewers step up to the standard of valuing what…

5 0.764754 430 hunch net-2011-04-11-The Heritage Health Prize

Introduction: The Heritage Health Prize is potentially the largest prediction prize yet at $3M, which is sure to get many people interested. Several elements of the competition may be worth discussing. The most straightforward way for HPN to deploy this predictor is in determining who to cover with insurance. This might easily cover the costs of running the contest itself, but the value to the health system as a whole is minimal, as people not covered still exist. While HPN itself is a provider network, they have active relationships with a number of insurance companies, and the right to resell any entrant. It’s worth keeping in mind that the research and development may nevertheless end up being useful in the longer term, especially as entrants also keep the right to their code. The judging metric is something I haven’t seen previously. If a patient has probability 0.5 of being in the hospital 0 days and probability 0.5 of being in the hospital ~53.6 days, the optimal prediction in e…

6 0.59046054 131 hunch net-2005-11-16-The Everything Ensemble Edge

7 0.58824515 19 hunch net-2005-02-14-Clever Methods of Overfitting

8 0.57350475 194 hunch net-2006-07-11-New Models

9 0.57344604 202 hunch net-2006-08-10-Precision is not accuracy

10 0.57254988 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

11 0.57107574 414 hunch net-2010-10-17-Partha Niyogi has died

12 0.57019204 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

13 0.56850576 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

14 0.56835335 141 hunch net-2005-12-17-Workshops as Franchise Conferences

15 0.56834209 343 hunch net-2009-02-18-Decision by Vetocracy

16 0.5679847 370 hunch net-2009-09-18-Necessary and Sufficient Research

17 0.56734174 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

18 0.56625509 36 hunch net-2005-03-05-Funding Research

19 0.56624091 134 hunch net-2005-12-01-The Webscience Future

20 0.56617469 12 hunch net-2005-02-03-Learning Theory, by assumption