hunch_net hunch_net-2006 hunch_net-2006-168 knowledge-graph by maker-knowledge-mining

168 hunch net-2006-04-02-Mad (Neuro)science


meta info for this blog

Source: html

Introduction: One of the questions facing machine learning as a field is “Can we produce a generalized learning system that can solve a wide array of standard learning problems?” The answer is trivial: “yes, just have children”. Of course, that wasn’t really the question. The refined question is “Are there simple-to-implement generalized learning systems that can solve a wide array of standard learning problems?” The answer to this is less clear. The ability of animals (and people) to learn might be due to megabytes encoded in the DNA. If this algorithmic complexity is necessary to solve machine learning, the field faces a daunting task in replicating it on a computer. This observation suggests a possibility: if you can show that few bits of DNA are needed for learning in animals, then this provides evidence that machine learning (as a field) has a hope of big success with relatively little effort. It is well known that specific portions of the brain have specific functionality across individuals.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 One of the questions facing machine learning as a field is “Can we produce a generalized learning system that can solve a wide array of standard learning problems? [sent-1, score-1.1]

2 ” The answer is trivial: “yes, just have children”. [sent-2, score-0.102]

3 The refined question is “Are there simple-to-implement generalized learning systems that can solve a wide array of standard learning problems? [sent-4, score-0.953]

4 The ability of animals (and people) to learn might be due to megabytes encoded in the DNA. [sent-6, score-0.417]

5 If this algorithmic complexity is necessary to solve machine learning, the field faces a daunting task in replicating it on a computer. [sent-7, score-0.333]

6 This observation suggests a possibility: if you can show that few bits of DNA are needed for learning in animals, then this provides evidence that machine learning (as a field) has a hope of big success with relatively little effort. [sent-8, score-0.321]

7 It is well known that specific portions of the brain have specific functionality across individuals. [sent-9, score-1.144]

8 There are two ways this observation can be explained: Maybe the specific functionality areas are encoded in the DNA. [sent-10, score-0.923]

9 Maybe the specific functionality areas arise from the learning process of the brain. [sent-11, score-0.808]

10 This is the answer that machine learning would like to hear because it agrees with the hypothesis that a simple general learning system exists. [sent-12, score-0.336]

11 It’s important to realize that these choices actually specify a spectrum rather than a dichotomy. [sent-13, score-0.078]

12 There are surely some problem-specific learning hacks in the brain and there is surely some generalized learning ability. [sent-14, score-0.93]

13 The question is: to what degree is learning encoded by genetic heritage vs personal experience? [sent-15, score-0.663]

14 It is anecdotally well known that people (especially children) can recover from fairly severe brain damage, but of course we would prefer to avoid anecdotal evidence. [sent-16, score-0.636]

15 There are also neuroscience experiments addressing this question. [sent-17, score-0.443]

16 This paper by Jitendra Sharma, Alessandra Angelucci, and Mriganka Sur provides some evidence. [sent-18, score-0.08]

17 In a nutshell, they rewire the optic nerve of ferrets into the auditory region of the brain. [sent-19, score-0.525]

18 They observe that structures similar to the visual specific region of the brain arise in the auditory region after rewiring (although the new regions may be less capable). [sent-20, score-1.645]

19 There are doubtless many other experiments addressing this question, but my knowledge of neuroscience is lacking. [sent-21, score-0.311]
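The sentence scores above come from a tfidf model: each sentence is scored by the tfidf weight of its words, and the top-scoring sentences form the summary. A minimal stdlib sketch of this style of extractive summarization (the tokenizer, normalization, and toy sentences here are illustrative assumptions, not the actual maker-knowledge-mining pipeline):

```python
from collections import Counter
import math
import re

def tfidf_sentence_scores(sentences):
    """Score each sentence by the summed tfidf of its words, length-normalized."""
    tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n = len(sentences)
    # document frequency: in how many sentences each word appears
    df = Counter(w for toks in tokenized for w in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        total = sum(tf[w] * math.log(n / df[w]) for w in tf)
        scores.append(total / (len(toks) or 1))
    return scores

sentences = [
    "Machine learning is a field facing generalized learning problems.",
    "The answer is trivial.",
    "Specific portions of the brain have specific functionality.",
]
scores = tfidf_sentence_scores(sentences)
top = max(range(len(sentences)), key=lambda i: scores[i])
```

Content-light sentences ("The answer is trivial.") share most of their words with other sentences, so their idf terms vanish and they score low, which matches the low score of sentence 2 in the list above.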


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('region', 0.315), ('specific', 0.256), ('brain', 0.246), ('encoded', 0.244), ('functionality', 0.224), ('auditory', 0.21), ('neuroscience', 0.21), ('generalized', 0.203), ('animals', 0.173), ('children', 0.173), ('field', 0.142), ('arise', 0.136), ('array', 0.136), ('addressing', 0.132), ('wide', 0.121), ('surely', 0.116), ('areas', 0.114), ('maybe', 0.104), ('answer', 0.102), ('experiments', 0.101), ('solve', 0.098), ('hacks', 0.093), ('facing', 0.093), ('daunting', 0.093), ('maneesh', 0.093), ('nutshell', 0.093), ('question', 0.088), ('visual', 0.086), ('genetic', 0.086), ('heritage', 0.086), ('portions', 0.086), ('course', 0.086), ('observation', 0.085), ('anecdotally', 0.081), ('structures', 0.081), ('vs', 0.081), ('provides', 0.08), ('spectrum', 0.078), ('doubtless', 0.078), ('agrees', 0.078), ('refined', 0.078), ('learning', 0.078), ('known', 0.076), ('recover', 0.075), ('standard', 0.073), ('anecdotal', 0.072), ('explained', 0.072), ('trivial', 0.068), ('thanks', 0.068), ('pointing', 0.068)]
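The wordName/wordTfidf pairs above are a sparse vector representation of the post; "similar blogs computed by tfidf model" then amounts to ranking other posts by cosine similarity between such vectors. A sketch under that assumption, using toy weights rather than the real ones from this page:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse {word: weight} vectors."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

this_post = {"brain": 0.246, "region": 0.315, "learning": 0.078}
others = {
    "Necessary and Sufficient Research": {"problem": 0.3, "learning": 0.1},
    "The Limits of Learning Theory": {"bias": 0.4, "learning": 0.2},
}
ranked = sorted(others, key=lambda t: cosine(this_post, others[t]), reverse=True)
```

A post compared with itself scores essentially 1, which is why the same-blog entry in the lists below reports a simValue of 0.99999994.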

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 168 hunch net-2006-04-02-Mad (Neuro)science


2 0.10087798 370 hunch net-2009-09-18-Necessary and Sufficient Research

Introduction: Researchers are typically confronted with big problems that they have no idea how to solve. In trying to come up with a solution, a natural approach is to decompose the big problem into a set of subproblems whose solution yields a solution to the larger problem. This approach can go wrong in several ways. Decomposition failure . The solution to the decomposition does not in fact yield a solution to the overall problem. Artificial hardness . The subproblems created are sufficient if solved to solve the overall problem, but they are harder than necessary. As you can see, computational complexity forms a relatively new (in research-history) razor by which to judge an approach sufficient but not necessary. In my experience, the artificial hardness problem is very common. Many researchers abdicate the responsibility of choosing a problem to work on to other people. This process starts very naturally as a graduate student, when an incoming student might have relatively l

3 0.09444993 68 hunch net-2005-05-10-Learning Reductions are Reductionist

Introduction: This is about a fundamental motivation for the investigation of reductions in learning. It applies to many pieces of work other than my own. The reductionist approach to problem solving is characterized by taking a problem, decomposing it into as-small-as-possible subproblems, discovering how to solve the subproblems, and then discovering how to use the solutions to the subproblems to solve larger problems. The reductionist approach to solving problems has often payed off very well. Computer science related examples of the reductionist approach include: Reducing computation to the transistor. All of our CPUs are built from transistors. Reducing rendering of images to rendering a triangle (or other simple polygons). Computers can now render near-realistic scenes in real time. The big breakthrough came from learning how to render many triangles quickly. This approach to problem solving extends well beyond computer science. Many fields of science focus on theories mak

4 0.091104634 90 hunch net-2005-07-07-The Limits of Learning Theory

Introduction: Suppose we had an infinitely powerful mathematician sitting in a room and proving theorems about learning. Could he solve machine learning? The answer is “no”. This answer is both obvious and sometimes underappreciated. There are several ways to conclude that some bias is necessary in order to succesfully learn. For example, suppose we are trying to solve classification. At prediction time, we observe some features X and want to make a prediction of either 0 or 1 . Bias is what makes us prefer one answer over the other based on past experience. In order to learn we must: Have a bias. Always predicting 0 is as likely as 1 is useless. Have the “right” bias. Predicting 1 when the answer is 0 is also not helpful. The implication of “have a bias” is that we can not design effective learning algorithms with “a uniform prior over all possibilities”. The implication of “have the ‘right’ bias” is that our mathematician fails since “right” is defined wi

5 0.085413918 355 hunch net-2009-05-19-CI Fellows

Introduction: Lev Reyzin points out the CI Fellows Project . Essentially, NSF is funding 60 postdocs in computer science for graduates from a wide array of US places to a wide array of US places. This is particularly welcome given a tough year for new hires. I expect some fraction of these postdocs will be in ML. The time frame is quite short, so those interested should look it over immediately.

6 0.081595413 228 hunch net-2007-01-15-The Machine Learning Department

7 0.080597468 287 hunch net-2008-01-28-Sufficient Computation

8 0.076687433 159 hunch net-2006-02-27-The Peekaboom Dataset

9 0.073296145 347 hunch net-2009-03-26-Machine Learning is too easy

10 0.073079333 235 hunch net-2007-03-03-All Models of Learning have Flaws

11 0.070744492 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

12 0.069399923 440 hunch net-2011-08-06-Interesting thing at UAI 2011

13 0.066699877 353 hunch net-2009-05-08-Computability in Artificial Intelligence

14 0.066561319 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

15 0.066504866 424 hunch net-2011-02-17-What does Watson mean?

16 0.065785304 454 hunch net-2012-01-30-ICML Posters and Scope

17 0.065308437 262 hunch net-2007-09-16-Optimizing Machine Learning Programs

18 0.064981036 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning

19 0.064561337 95 hunch net-2005-07-14-What Learning Theory might do

20 0.064228334 149 hunch net-2006-01-18-Is Multitask Learning Black-Boxable?


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.149), (1, 0.015), (2, -0.035), (3, 0.042), (4, 0.016), (5, -0.019), (6, 0.009), (7, 0.036), (8, 0.032), (9, -0.034), (10, -0.028), (11, -0.054), (12, -0.06), (13, 0.027), (14, -0.052), (15, -0.021), (16, 0.053), (17, -0.052), (18, 0.013), (19, 0.021), (20, 0.002), (21, 0.042), (22, -0.008), (23, -0.01), (24, 0.057), (25, -0.045), (26, 0.041), (27, -0.027), (28, 0.012), (29, 0.036), (30, -0.023), (31, 0.007), (32, 0.043), (33, -0.024), (34, -0.017), (35, 0.033), (36, 0.01), (37, 0.014), (38, -0.049), (39, -0.005), (40, -0.044), (41, 0.022), (42, -0.016), (43, 0.039), (44, 0.018), (45, -0.038), (46, 0.065), (47, 0.008), (48, -0.031), (49, -0.02)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.8733623 168 hunch net-2006-04-02-Mad (Neuro)science


2 0.68638718 90 hunch net-2005-07-07-The Limits of Learning Theory

Introduction: Suppose we had an infinitely powerful mathematician sitting in a room and proving theorems about learning. Could he solve machine learning? The answer is “no”. This answer is both obvious and sometimes underappreciated. There are several ways to conclude that some bias is necessary in order to succesfully learn. For example, suppose we are trying to solve classification. At prediction time, we observe some features X and want to make a prediction of either 0 or 1 . Bias is what makes us prefer one answer over the other based on past experience. In order to learn we must: Have a bias. Always predicting 0 is as likely as 1 is useless. Have the “right” bias. Predicting 1 when the answer is 0 is also not helpful. The implication of “have a bias” is that we can not design effective learning algorithms with “a uniform prior over all possibilities”. The implication of “have the ‘right’ bias” is that our mathematician fails since “right” is defined wi

3 0.6721105 149 hunch net-2006-01-18-Is Multitask Learning Black-Boxable?

Introduction: Multitask learning is the learning to predict multiple outputs given the same input. Mathematically, we might think of this as trying to learn a function f:X -> {0,1} n . Structured learning is similar at this level of abstraction. Many people have worked on solving multitask learning (for example Rich Caruana ) using methods which share an internal representation. On other words, the the computation and learning of the i th prediction is shared with the computation and learning of the j th prediction. Another way to ask this question is: can we avoid sharing the internal representation? For example, it might be feasible to solve multitask learning by some process feeding the i th prediction f(x) i into the j th predictor f(x,f(x) i ) j , If the answer is “no”, then it implies we can not take binary classification as a basic primitive in the process of solving prediction problems. If the answer is “yes”, then we can reuse binary classification algorithms to

4 0.62507957 153 hunch net-2006-02-02-Introspectionism as a Disease

Introduction: In the AI-related parts of machine learning, it is often tempting to examine how you do things in order to imagine how a machine should do things. This is introspection, and it can easily go awry. I will call introspection gone awry introspectionism. Introspectionism is almost unique to AI (and the AI-related parts of machine learning) and it can lead to huge wasted effort in research. It’s easiest to show how introspectionism arises by an example. Suppose we want to solve the problem of navigating a robot from point A to point B given a camera. Then, the following research action plan might seem natural when you examine your own capabilities: Build an edge detector for still images. Build an object recognition system given the edge detector. Build a system to predict distance and orientation to objects given the object recognition system. Build a system to plan a path through the scene you construct from {object identification, distance, orientation} predictions.

5 0.60659826 352 hunch net-2009-05-06-Machine Learning to AI

Introduction: I recently had fun discussions with both Vikash Mansinghka and Thomas Breuel about approaching AI with machine learning. The general interest in taking a crack at AI with machine learning seems to be rising on many fronts including DARPA . As a matter of history, there was a great deal of interest in AI which died down before I began research. There remain many projects and conferences spawned in this earlier AI wave, as well as a good bit of experience about what did not work, or at least did not work yet. Here are a few examples of failure modes that people seem to run into: Supply/Product confusion . Sometimes we think “Intelligences use X, so I’ll create X and have an Intelligence.” An example of this is the Cyc Project which inspires some people as “intelligences use ontologies, so I’ll create an ontology and a system using it to have an Intelligence.” The flaw here is that Intelligences create ontologies, which they use, and without the ability to create ont

6 0.59465551 68 hunch net-2005-05-10-Learning Reductions are Reductionist

7 0.59413987 353 hunch net-2009-05-08-Computability in Artificial Intelligence

8 0.58913493 95 hunch net-2005-07-14-What Learning Theory might do

9 0.57158148 164 hunch net-2006-03-17-Multitask learning is Black-Boxable

10 0.56291211 257 hunch net-2007-07-28-Asking questions

11 0.56028247 228 hunch net-2007-01-15-The Machine Learning Department

12 0.55944169 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

13 0.55894011 440 hunch net-2011-08-06-Interesting thing at UAI 2011

14 0.55708891 102 hunch net-2005-08-11-Why Manifold-Based Dimension Reduction Techniques?

15 0.55522782 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

16 0.54886663 424 hunch net-2011-02-17-What does Watson mean?

17 0.54701465 152 hunch net-2006-01-30-Should the Input Representation be a Vector?

18 0.54657394 295 hunch net-2008-04-12-It Doesn’t Stop

19 0.53935504 262 hunch net-2007-09-16-Optimizing Machine Learning Programs

20 0.53893882 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.015), (27, 0.206), (38, 0.046), (53, 0.039), (55, 0.084), (88, 0.438), (94, 0.066)]
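The lda block gives (topicId, topicWeight) pairs: LDA represents each post as a probability distribution over topics. One standard way to compare such distributions is Hellinger distance (0 for identical mixtures, 1 for disjoint ones); whether this particular pipeline uses Hellinger or cosine is not stated, so treat this as an illustrative sketch with made-up weights:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two sparse {topic_id: probability} mixtures."""
    topics = set(p) | set(q)
    total = sum((math.sqrt(p.get(t, 0.0)) - math.sqrt(q.get(t, 0.0))) ** 2
                for t in topics)
    return math.sqrt(total / 2.0)

post_a = {27: 0.2, 88: 0.44, 55: 0.36}
post_b = {27: 0.2, 88: 0.44, 55: 0.36}
post_c = {3: 0.5, 94: 0.5}
```

Identical mixtures give distance 0; mixtures with disjoint topic support give distance 1, so smaller values mean more similar posts.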

similar blogs list:

simIndex simValue blogId blogTitle

1 0.87718487 469 hunch net-2012-07-09-Videolectures

Introduction: Yaser points out some nicely videotaped machine learning lectures at Caltech . Yaser taught me machine learning, and I always found the lectures clear and interesting, so I expect many people can benefit from watching. Relative to Andrew Ng ‘s ML class there are somewhat different areas of emphasis but the topic is the same, so picking and choosing the union may be helpful.

same-blog 2 0.84114718 168 hunch net-2006-04-02-Mad (Neuro)science


3 0.83965957 93 hunch net-2005-07-13-“Sister Conference” presentations

Introduction: Some of the “sister conference” presentations at AAAI have been great. Roughly speaking, the conference organizers asked other conference organizers to come give a summary of their conference. Many different AI-related conferences accepted. The presenters typically discuss some of the background and goals of the conference then mention the results from a few papers they liked. This is great because it provides a mechanism to get a digested overview of the work of several thousand researchers—something which is simply available nowhere else. Based on these presentations, it looks like there is a significant component of (and opportunity for) applied machine learning in AIIDE , IUI , and ACL . There was also some discussion of having a super-colocation event similar to FCRC , but centered on AI & Learning. This seems like a fine idea. The field is fractured across so many different conferences that the mixing of a supercolocation seems likely helpful for research.

4 0.80917597 13 hunch net-2005-02-04-JMLG

Introduction: The Journal of Machine Learning Gossip has some fine satire about learning research. In particular, the guides are amusing and remarkably true. As in all things, it’s easy to criticize the way things are and harder to make them better.

5 0.78297323 295 hunch net-2008-04-12-It Doesn’t Stop

Introduction: I’ve enjoyed the Terminator movies and show. Neglecting the whacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for writer’s to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. The United States, has been experimenting with trying to stop research on stem cells . It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged as a mu

6 0.61999083 371 hunch net-2009-09-21-Netflix finishes (and starts)

7 0.46780992 95 hunch net-2005-07-14-What Learning Theory might do

8 0.46744391 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”

9 0.46717617 44 hunch net-2005-03-21-Research Styles in Machine Learning

10 0.46684733 351 hunch net-2009-05-02-Wielding a New Abstraction

11 0.46646997 343 hunch net-2009-02-18-Decision by Vetocracy

12 0.4660407 325 hunch net-2008-11-10-ICML Reviewing Criteria

13 0.46575353 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

14 0.46476683 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

15 0.46394667 194 hunch net-2006-07-11-New Models

16 0.46367562 259 hunch net-2007-08-19-Choice of Metrics

17 0.46294087 337 hunch net-2009-01-21-Nearly all natural problems require nonlinearity

18 0.46201241 235 hunch net-2007-03-03-All Models of Learning have Flaws

19 0.46194661 98 hunch net-2005-07-27-Not goal metrics

20 0.46173811 304 hunch net-2008-06-27-Reviewing Horror Stories