hunch_net hunch_net-2008 hunch_net-2008-295 knowledge-graph by maker-knowledge-mining

295 hunch net-2008-04-12-It Doesn’t Stop


meta info for this blog

Source: html

Introduction: I’ve enjoyed the Terminator movies and show. Neglecting the whacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. The United States has been experimenting with trying to stop research on stem cells. It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged it as a munition


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Neglecting the whacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? [sent-2, score-0.408]

2 In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. [sent-3, score-0.216]

3 This is partly a ploy for writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. [sent-5, score-0.332]

4 The United States has been experimenting with trying to stop research on stem cells. [sent-6, score-0.245]

5 It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. [sent-7, score-0.306]

6 Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged it as a munition. [sent-8, score-0.255]

7 This slowed the development of encryption tools, but I now routinely use tools such as ssh and GPG. [sent-9, score-0.217]

8 Although the strategy of preventing research may be doomed, it does bring up a Bill Joy type of question: should we actively choose to do research in a field where knowledge can be used to great harm? [sent-10, score-0.516]

9 Many researchers avoid this question by not thinking about it, but this is a substantial question of concern to society at large, and whether or not society supports a direction of research. [sent-12, score-0.372]

10 These radical changes in the abilities of a civilization are strong evidence against stability. [sent-19, score-0.382]

11 The fundamental driver here is light speed latency: if it takes years for two groups to communicate, then it is unlikely they’ll manage to coordinate (with malevolence or accident) a simultaneous doomsday. [sent-29, score-0.401]

12 Getting from one star system to another with known physics turns out to be very hard. [sent-31, score-0.229]

13 The best approaches I know involve giant lasers and multiple solar sails or fusion powered rockets, taking many years. [sent-32, score-0.204]

14 Merely getting there, of course, is not enough—we need to get there with a kernel of civilization, capable of growing anew in the new system. [sent-33, score-0.139]

15 Any adjacent star system may not have an earth-like planet, implying the need to support a space-based civilization. [sent-34, score-0.391]

16 Since travel between star systems is so prohibitively difficult, a basic question is: how small can we make a kernel of civilization? [sent-35, score-0.58]

17 Many science fiction writers and readers think of generation ships, which would necessarily be enormous to support the air, food, and water requirements of a self-sustaining human population. [sent-36, score-0.358]

18 Eventually, hollowed out asteroids could support human life if the requisite materials (Oxygen, Carbon, Hydrogen, etc. [sent-39, score-0.227]

19 The fundamental observation here is that intelligence and knowledge require very little mass. [sent-42, score-0.154]

20 I hope we manage to crack AI, opening the door to real space travel, so that civilization doesn’t stop. [sent-43, score-0.539]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('civilization', 0.314), ('star', 0.229), ('travel', 0.192), ('ai', 0.163), ('accident', 0.153), ('doomsday', 0.153), ('instability', 0.153), ('malevolence', 0.153), ('terminator', 0.153), ('intelligent', 0.137), ('scenarios', 0.136), ('encryption', 0.136), ('solar', 0.136), ('stop', 0.126), ('preventing', 0.126), ('research', 0.119), ('united', 0.105), ('society', 0.101), ('support', 0.094), ('manage', 0.089), ('fundamental', 0.087), ('question', 0.085), ('tools', 0.081), ('states', 0.081), ('machines', 0.079), ('kernel', 0.074), ('takes', 0.072), ('opening', 0.068), ('black', 0.068), ('smell', 0.068), ('alpha', 0.068), ('abilities', 0.068), ('door', 0.068), ('generation', 0.068), ('requisite', 0.068), ('accumulation', 0.068), ('carbon', 0.068), ('enormous', 0.068), ('hydrogen', 0.068), ('neglecting', 0.068), ('planet', 0.068), ('retarding', 0.068), ('sails', 0.068), ('triggered', 0.068), ('knowledge', 0.067), ('human', 0.065), ('getting', 0.065), ('writer', 0.063), ('writers', 0.063), ('joy', 0.063)]
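The (wordName, wordTfidf) pairs above rank words by tf-idf weight. The exact weighting used by the mining pipeline is not specified, so the following is a minimal sketch of one standard formulation (term frequency times smoothed inverse document frequency); the toy `docs` corpus is purely hypothetical.

```python
import math
from collections import Counter

def tfidf_top_words(docs, doc_index, top_n=10):
    """Rank words in one document by tf-idf against a small corpus.

    Sketch only: uses tf = count/len and a smoothed idf; the real
    pipeline's weighting and tokenization may differ.
    """
    n_docs = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: in how many documents does each word appear?
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    tf = Counter(tokenized[doc_index])
    total = len(tokenized[doc_index])
    scores = {
        w: (c / total) * math.log((1 + n_docs) / (1 + df[w]))
        for w, c in tf.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

# Hypothetical three-document corpus for illustration.
docs = [
    "intelligent machines and space travel for civilization",
    "machine learning research funding in the united states",
    "encryption research and intelligent machines",
]
print(tfidf_top_words(docs, 0, top_n=3))
```

Words unique to the first document (such as "space" and "travel") outrank words shared with other documents, which is why post-specific terms like 'civilization' and 'star' dominate the list above.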

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999952 295 hunch net-2008-04-12-It Doesn’t Stop


2 0.21200822 380 hunch net-2009-11-29-AI Safety

Introduction: Dan Reeves introduced me to Michael Vassar who ran the Singularity Summit and educated me a bit on the subject of AI safety which the Singularity Institute has small grants for. I still believe that interstellar space travel is necessary for long term civilization survival, and the AI is necessary for interstellar space travel. On these grounds alone, we could judge that developing AI is much more safe than not. Nevertheless, there is a basic reasonable fear, as expressed by some commenters, that AI could go bad. A basic scenario starts with someone inventing an AI and telling it to make as much money as possible. The AI promptly starts trading in various markets to make money. To improve, it crafts a virus that takes over most of the world’s computers using it as a surveillance network so that it can always make the right decision. The AI also branches out into any form of distance work, taking over the entire outsourcing process for all jobs that are entirely di

3 0.15218934 352 hunch net-2009-05-06-Machine Learning to AI

Introduction: I recently had fun discussions with both Vikash Mansinghka and Thomas Breuel about approaching AI with machine learning. The general interest in taking a crack at AI with machine learning seems to be rising on many fronts including DARPA . As a matter of history, there was a great deal of interest in AI which died down before I began research. There remain many projects and conferences spawned in this earlier AI wave, as well as a good bit of experience about what did not work, or at least did not work yet. Here are a few examples of failure modes that people seem to run into: Supply/Product confusion . Sometimes we think “Intelligences use X, so I’ll create X and have an Intelligence.” An example of this is the Cyc Project which inspires some people as “intelligences use ontologies, so I’ll create an ontology and a system using it to have an Intelligence.” The flaw here is that Intelligences create ontologies, which they use, and without the ability to create ont

4 0.14935979 353 hunch net-2009-05-08-Computability in Artificial Intelligence

Introduction: Normally I do not blog, but John kindly invited me to do so. Since computability issues play a major role in Artificial Intelligence and Machine Learning, I would like to take the opportunity to comment on that and raise some questions. The general attitude is that AI is about finding efficient smart algorithms. For large parts of machine learning, the same attitude is not too dangerous. If you want to concentrate on conceptual problems, simply become a statistician. There is no analogous escape for modern research on AI (as opposed to GOFAI rooted in logic). Let me show by analogy why limiting research to computational questions is bad for any field. Except in computer science, computational aspects play little role in the development of fundamental theories: Consider e.g. set theory with axiom of choice, foundations of logic, exact/full minimax for zero-sum games, quantum (field) theory, string theory, … Indeed, at least in physics, every new fundamental theory seems to

5 0.1304425 344 hunch net-2009-02-22-Effective Research Funding

Introduction: With a worldwide recession on, my impression is that the carnage in research has not been as severe as might be feared, at least in the United States. I know of two notable negative impacts: It’s quite difficult to get a job this year, as many companies and universities simply aren’t hiring. This is particularly tough on graduating students. Perhaps 10% of IBM research was fired. In contrast, around the time of the dot com bust, AT&T Research and Lucent had one or several 50% size firings wiping out much of the remainder of Bell Labs, triggering a notable diaspora for the respected machine learning group there. As the recession progresses, we may easily see more firings as companies in particular reach a point where they can no longer support research. There are a couple positives to the recession as well. Both the implosion of Wall Street (which siphoned off smart people) and the general difficulty of getting a job coming out of an undergraduate education s

6 0.11797839 366 hunch net-2009-08-03-Carbon in Computer Science Research

7 0.1137107 323 hunch net-2008-11-04-Rise of the Machines

8 0.11210536 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

9 0.10390373 424 hunch net-2011-02-17-What does Watson mean?

10 0.10223598 282 hunch net-2008-01-06-Research Political Issues

11 0.095221967 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning

12 0.089466989 358 hunch net-2009-06-01-Multitask Poisoning

13 0.088379249 110 hunch net-2005-09-10-“Failure” is an option

14 0.086758845 120 hunch net-2005-10-10-Predictive Search is Coming

15 0.085622713 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

16 0.085596591 449 hunch net-2011-11-26-Giving Thanks

17 0.084971935 454 hunch net-2012-01-30-ICML Posters and Scope

18 0.084746778 36 hunch net-2005-03-05-Funding Research

19 0.084441081 134 hunch net-2005-12-01-The Webscience Future

20 0.082632162 351 hunch net-2009-05-02-Wielding a New Abstraction


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.209), (1, -0.022), (2, -0.094), (3, 0.137), (4, -0.07), (5, -0.045), (6, 0.014), (7, 0.059), (8, 0.017), (9, 0.035), (10, -0.03), (11, -0.041), (12, -0.015), (13, 0.027), (14, -0.005), (15, 0.0), (16, 0.058), (17, -0.002), (18, 0.076), (19, 0.003), (20, -0.016), (21, 0.049), (22, -0.126), (23, 0.145), (24, 0.097), (25, -0.096), (26, 0.011), (27, -0.087), (28, 0.087), (29, -0.031), (30, 0.078), (31, -0.086), (32, 0.022), (33, -0.077), (34, 0.084), (35, -0.001), (36, -0.102), (37, 0.013), (38, -0.054), (39, 0.031), (40, -0.016), (41, 0.025), (42, -0.046), (43, -0.067), (44, -0.032), (45, 0.03), (46, 0.026), (47, -0.008), (48, -0.005), (49, -0.008)]
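The (topicId, topicWeight) pairs above place this post in a 50-dimensional LSI topic space, and the simValue scores in the list below are plausibly cosine similarities between such vectors (the pipeline does not document its similarity measure, so this is an assumption). A minimal sketch with hypothetical 5-dimensional vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical 5-dimensional LSI vectors for two posts; the real
# listing above uses 50 topics. post_a reuses the first five weights
# shown for this post, post_b is invented for illustration.
post_a = [0.209, -0.022, -0.094, 0.137, -0.07]
post_b = [0.190, -0.030, -0.080, 0.120, -0.05]

print(cosine(post_a, post_a))  # a vector is maximally similar to itself
print(cosine(post_a, post_b))
```

This also explains the same-blog entry scoring 0.95–1.0 in each list: a post's vector compared against itself yields cosine 1, up to floating-point and model noise.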

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95053113 295 hunch net-2008-04-12-It Doesn’t Stop


2 0.89827746 380 hunch net-2009-11-29-AI Safety


3 0.83029366 353 hunch net-2009-05-08-Computability in Artificial Intelligence


4 0.76448935 352 hunch net-2009-05-06-Machine Learning to AI


5 0.74172127 287 hunch net-2008-01-28-Sufficient Computation

Introduction: Do we have computer hardware sufficient for AI? This question is difficult to answer, but here’s a try: One way to achieve AI is by simulating a human brain. A human brain has about 10^15 synapses which operate at about 10^2 per second implying about 10^17 bit ops per second. A modern computer runs at 10^9 cycles/second and operates on 10^2 bits per cycle implying 10^11 bits processed per second. The gap here is only 6 orders of magnitude, which can be plausibly surpassed via cluster machines. For example, the BlueGene/L operates 10^5 nodes (one order of magnitude short). Its peak recorded performance is about 0.5*10^15 FLOPS which translates to about 10^16 bit ops per second, which is nearly 10^17. There are many criticisms (both positive and negative) for this argument. Simulation of a human brain might require substantially more detail. Perhaps an additional 10^2 is required per neuron. We may not need to simulate a human brain to achieve AI. Ther

6 0.65473235 153 hunch net-2006-02-02-Introspectionism as a Disease

7 0.60713577 424 hunch net-2011-02-17-What does Watson mean?

8 0.57619315 366 hunch net-2009-08-03-Carbon in Computer Science Research

9 0.56497955 282 hunch net-2008-01-06-Research Political Issues

10 0.54535216 241 hunch net-2007-04-28-The Coming Patent Apocalypse

11 0.53679913 76 hunch net-2005-05-29-Bad ideas

12 0.53483015 257 hunch net-2007-07-28-Asking questions

13 0.52155983 344 hunch net-2009-02-22-Effective Research Funding

14 0.51077014 323 hunch net-2008-11-04-Rise of the Machines

15 0.50955224 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

16 0.50771403 358 hunch net-2009-06-01-Multitask Poisoning

17 0.50520939 208 hunch net-2006-09-18-What is missing for online collaborative research?

18 0.49981558 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study

19 0.49972495 370 hunch net-2009-09-18-Necessary and Sufficient Research

20 0.49899331 397 hunch net-2010-05-02-What’s the difference between gambling and rewarding good prediction?


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(0, 0.012), (3, 0.021), (10, 0.03), (27, 0.185), (37, 0.011), (38, 0.065), (42, 0.012), (53, 0.046), (55, 0.087), (68, 0.021), (88, 0.316), (94, 0.077), (95, 0.021)]
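The LDA listing above shows only the nonzero (topicId, topicWeight) entries of a sparse topic distribution. A small sketch of expanding it to a dense vector and reading off the dominant topic (the total topic count of 100 is an assumption; the listing only reveals ids up to 95):

```python
def dense_from_sparse(pairs, n_topics):
    """Expand sparse (topic_id, weight) pairs into a dense vector.

    Entries absent from the listing are taken to have weight 0.
    """
    vec = [0.0] * n_topics
    for topic_id, weight in pairs:
        vec[topic_id] = weight
    return vec

# Nonzero LDA weights for this post, copied from the listing above.
pairs = [(0, 0.012), (3, 0.021), (10, 0.03), (27, 0.185), (37, 0.011),
         (38, 0.065), (42, 0.012), (53, 0.046), (55, 0.087), (68, 0.021),
         (88, 0.316), (94, 0.077), (95, 0.021)]

vec = dense_from_sparse(pairs, n_topics=100)  # n_topics assumed
dominant = max(range(len(vec)), key=vec.__getitem__)
print(dominant, vec[dominant])  # topic 88 carries the most weight
```

Topic 88 at weight 0.316 dominates this post's mixture, so the lda similarity list below is driven largely by other posts that load on that same topic.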

similar blogs list:

simIndex simValue blogId blogTitle

1 0.89327401 469 hunch net-2012-07-09-Videolectures

Introduction: Yaser points out some nicely videotaped machine learning lectures at Caltech. Yaser taught me machine learning, and I always found the lectures clear and interesting, so I expect many people can benefit from watching. Relative to Andrew Ng’s ML class there are somewhat different areas of emphasis but the topic is the same, so picking and choosing the union may be helpful.

2 0.88943213 168 hunch net-2006-04-02-Mad (Neuro)science

Introduction: One of the questions facing machine learning as a field is “Can we produce a generalized learning system that can solve a wide array of standard learning problems?” The answer is trivial: “yes, just have children”. Of course, that wasn’t really the question. The refined question is “Are there simple-to-implement generalized learning systems that can solve a wide array of standard learning problems?” The answer to this is less clear. The ability of animals (and people) to learn might be due to megabytes encoded in the DNA. If this algorithmic complexity is necessary to solve machine learning, the field faces a daunting task in replicating it on a computer. This observation suggests a possibility: if you can show that few bits of DNA are needed for learning in animals, then this provides evidence that machine learning (as a field) has a hope of big success with relatively little effort. It is well known that specific portions of the brain have specific functionality across

3 0.88914162 93 hunch net-2005-07-13-“Sister Conference” presentations

Introduction: Some of the “sister conference” presentations at AAAI have been great. Roughly speaking, the conference organizers asked other conference organizers to come give a summary of their conference. Many different AI-related conferences accepted. The presenters typically discuss some of the background and goals of the conference then mention the results from a few papers they liked. This is great because it provides a mechanism to get a digested overview of the work of several thousand researchers—something which is simply available nowhere else. Based on these presentations, it looks like there is a significant component of (and opportunity for) applied machine learning in AIIDE, IUI, and ACL. There was also some discussion of having a super-colocation event similar to FCRC, but centered on AI & Learning. This seems like a fine idea. The field is fractured across so many different conferences that the mixing of a super-colocation seems likely helpful for research.

same-blog 4 0.85926604 295 hunch net-2008-04-12-It Doesn’t Stop


5 0.84356385 13 hunch net-2005-02-04-JMLG

Introduction: The Journal of Machine Learning Gossip has some fine satire about learning research. In particular, the guides are amusing and remarkably true. As in all things, it’s easy to criticize the way things are and harder to make them better.

6 0.71505237 371 hunch net-2009-09-21-Netflix finishes (and starts)

7 0.58395362 95 hunch net-2005-07-14-What Learning Theory might do

8 0.58206326 343 hunch net-2009-02-18-Decision by Vetocracy

9 0.57665169 194 hunch net-2006-07-11-New Models

10 0.575427 437 hunch net-2011-07-10-ICML 2011 and the future

11 0.57540518 98 hunch net-2005-07-27-Not goal metrics

12 0.57524687 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

13 0.57522386 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

14 0.57334119 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

15 0.5722369 235 hunch net-2007-03-03-All Models of Learning have Flaws

16 0.57191116 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

17 0.57166123 351 hunch net-2009-05-02-Wielding a New Abstraction

18 0.57147092 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

19 0.57124573 44 hunch net-2005-03-21-Research Styles in Machine Learning

20 0.5704183 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”