hunch_net-2012-469 knowledge-graph by maker-knowledge-mining

469 hunch net-2012-07-09-Videolectures


meta info for this blog

Source: html

Introduction: Yaser points out some nicely videotaped machine learning lectures at Caltech. Yaser taught me machine learning, and I always found the lectures clear and interesting, so I expect many people can benefit from watching. Relative to Andrew Ng’s ML class there are somewhat different areas of emphasis but the topic is the same, so picking and choosing the union may be helpful.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Yaser points out some nicely videotaped machine learning lectures at Caltech. [sent-1, score-1.042]

2 Yaser taught me machine learning, and I always found the lectures clear and interesting, so I expect many people can benefit from watching. [sent-2, score-1.211]

3 Relative to Andrew Ng’s ML class there are somewhat different areas of emphasis but the topic is the same, so picking and choosing the union may be helpful. [sent-3, score-1.33]
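The ranking above weights sentences with a tf-idf model. A minimal sketch of how such a ranking could be produced, assuming a standard scikit-learn bag-of-words pipeline (the vectorizer settings and the score-by-summed-weights rule are assumptions; the real pipeline presumably computes idf over the full blog corpus, so the numbers will not match the scores shown):

from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Yaser points out some nicely videotaped machine learning lectures at Caltech.",
    "Yaser taught me machine learning, and I always found the lectures clear and "
    "interesting, so I expect many people can benefit from watching.",
    "Relative to Andrew Ng's ML class there are somewhat different areas of emphasis "
    "but the topic is the same, so picking and choosing the union may be helpful.",
]

# Weight each term by tf-idf, then score a sentence by the sum of its term weights.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(sentences)   # one row per sentence
scores = tfidf.sum(axis=1).A1                 # flatten the per-row sums

# Print sentences from highest to lowest score, mirroring the sentIndex/sentScore list.
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[idx]), 3), sentences[idx])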


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('yaser', 0.547), ('lectures', 0.376), ('union', 0.243), ('videotaped', 0.243), ('nicely', 0.225), ('caltech', 0.195), ('ng', 0.177), ('emphasis', 0.172), ('picking', 0.172), ('taught', 0.172), ('andrew', 0.154), ('benefit', 0.154), ('areas', 0.149), ('relative', 0.135), ('choosing', 0.133), ('topic', 0.124), ('somewhat', 0.119), ('ml', 0.11), ('class', 0.108), ('clear', 0.102), ('helpful', 0.101), ('points', 0.098), ('expect', 0.093), ('found', 0.089), ('always', 0.086), ('interesting', 0.071), ('machine', 0.071), ('different', 0.062), ('may', 0.048), ('people', 0.038), ('many', 0.03), ('learning', 0.029)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 469 hunch net-2012-07-09-Videolectures

Introduction: Yaser points out some nicely videotaped machine learning lectures at Caltech. Yaser taught me machine learning, and I always found the lectures clear and interesting, so I expect many people can benefit from watching. Relative to Andrew Ng’s ML class there are somewhat different areas of emphasis but the topic is the same, so picking and choosing the union may be helpful.

2 0.13356975 483 hunch net-2013-06-10-The Large Scale Learning class notes

Introduction: The large scale machine learning class I taught with Yann LeCun has finished. As I expected, it took quite a bit of time. We had about 25 people attending in person on average and 400 regularly watching the recorded lectures, which is substantially more sustained interest than I expected for an advanced ML class. We also had some fun with class projects—I’m hopeful that several will eventually turn into papers. I expect there are a number of professors interested in lecturing on this and related topics. Everyone will have their personal taste in subjects of course, but hopefully there will be some convergence to common course materials as well. To help with this, I am making the sources to my presentations available. Feel free to use/improve/embellish/ridicule/etc… in the pursuit of the perfect course.

3 0.11769047 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

Introduction: Yann LeCun and I are coteaching a class on Large Scale Machine Learning starting late January at NYU. This class will cover many tricks to get machine learning working well on datasets with many features, examples, and classes, along with several elements of deep learning and support systems enabling the previous. This is not a beginning class—you really need to have taken a basic machine learning class previously to follow along. Students will be able to run and experiment with large scale learning algorithms since Yahoo! has donated servers which are being configured into a small scale Hadoop cluster. We are planning to cover the frontier of research in scalable learning algorithms, so good class projects could easily lead to papers. For me, this is a chance to teach on many topics of past research. In general, it seems like researchers should engage in at least occasional teaching of research, both as a proof of teachability and to see their own research through th

4 0.10057291 240 hunch net-2007-04-21-Videolectures.net

Introduction: Davor has been working to set up videolectures.net which is the new site for the many lectures mentioned here. (Tragically, they seem to only be available in Windows Media format.) I went through my own projects and added a few links to the videos. The day when every result is a set of {paper, slides, video} isn’t quite here yet, but it’s within sight. (For many papers, of course, code is a 4th component.)

5 0.09373609 438 hunch net-2011-07-11-Interesting Neural Network Papers at ICML 2011

Introduction: Maybe it’s too early to call, but with four separate Neural Network sessions at this year’s ICML, it looks like Neural Networks are making a comeback. Here are my highlights of these sessions. In general, my feeling is that these papers both demystify deep learning and show its broader applicability. The first observation I made is that the once disreputable “Neural” nomenclature is being used again in lieu of “deep learning”. Maybe it’s because Adam Coates et al. showed that single layer networks can work surprisingly well. An Analysis of Single-Layer Networks in Unsupervised Feature Learning, Adam Coates, Honglak Lee, Andrew Y. Ng (AISTATS 2011) The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization, Adam Coates, Andrew Y. Ng (ICML 2011) Another surprising result out of Andrew Ng’s group comes from Andrew Saxe et al. who show that certain convolutional pooling architectures can obtain close to state-of-the-art pe

6 0.076802664 376 hunch net-2009-11-06-Yisong Yue on Self-improving Systems

7 0.073437467 154 hunch net-2006-02-04-Research Budget Changes

8 0.065522827 378 hunch net-2009-11-15-The Other Online Learning

9 0.064552449 454 hunch net-2012-01-30-ICML Posters and Scope

10 0.058494005 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control

11 0.056269906 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning

12 0.056173902 37 hunch net-2005-03-08-Fast Physics for Learning

13 0.05552689 106 hunch net-2005-09-04-Science in the Government

14 0.054168031 193 hunch net-2006-07-09-The Stock Prediction Machine Learning Problem

15 0.050213732 216 hunch net-2006-11-02-2006 NIPS workshops

16 0.050074078 449 hunch net-2011-11-26-Giving Thanks

17 0.049599599 234 hunch net-2007-02-22-Create Your Own ICML Workshop

18 0.049290311 8 hunch net-2005-02-01-NIPS: Online Bayes

19 0.048584692 418 hunch net-2010-12-02-Traffic Prediction Problem

20 0.046516772 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops
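The (wordName, wordTfidf) pairs and the simValue column above are the kind of output one gets from a single tf-idf model fit over the whole blog corpus, with posts compared by cosine similarity. A hedged sketch under those assumptions follows; the corpus snippets, vectorizer parameters, and choice of cosine similarity are illustrative, not the documented maker-knowledge-mining pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mapping of blogId -> post text (short stand-in snippets only).
corpus = {
    "469": "Yaser points out some nicely videotaped machine learning lectures at Caltech.",
    "483": "The large scale machine learning class I taught with Yann LeCun has finished.",
    "478": "Yann LeCun and I are coteaching a class on Large Scale Machine Learning at NYU.",
    "240": "Davor has been working to set up videolectures.net, the new site for many lectures.",
}

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus.values())
ids = list(corpus.keys())
terms = vectorizer.get_feature_names_out()

# Top-weighted terms for post 469, analogous to the (wordName, wordTfidf) list.
row = X[ids.index("469")].toarray().ravel()
print(sorted(zip(terms, row), key=lambda t: -t[1])[:10])

# Cosine similarity of post 469 against every post, analogous to simValue;
# the post itself always ranks first with similarity 1.0.
sims = cosine_similarity(X[ids.index("469")], X).ravel()
print(sorted(zip(ids, sims), key=lambda t: -t[1]))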


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.08), (1, -0.024), (2, -0.058), (3, 0.011), (4, 0.003), (5, 0.016), (6, -0.015), (7, -0.023), (8, -0.004), (9, -0.057), (10, 0.024), (11, -0.019), (12, 0.011), (13, -0.03), (14, 0.016), (15, 0.012), (16, 0.002), (17, 0.008), (18, 0.038), (19, 0.047), (20, 0.064), (21, -0.034), (22, -0.052), (23, -0.01), (24, 0.075), (25, -0.061), (26, 0.028), (27, 0.08), (28, -0.131), (29, 0.119), (30, -0.019), (31, -0.129), (32, 0.013), (33, -0.06), (34, 0.023), (35, -0.065), (36, 0.002), (37, 0.049), (38, 0.006), (39, -0.079), (40, 0.029), (41, -0.043), (42, 0.038), (43, 0.049), (44, 0.03), (45, -0.028), (46, -0.055), (47, -0.081), (48, 0.0), (49, 0.084)]
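One plausible source for this 50-dimensional topic-weight vector is latent semantic indexing: a truncated SVD of the corpus tf-idf matrix. A minimal sketch under that assumption (the topic count, preprocessing, and tiny stand-in corpus are all illustrative; a real run would use the full blog corpus and about 50 components):

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical tiny corpus standing in for the full set of hunch.net posts.
corpus = [
    "Yaser points out some nicely videotaped machine learning lectures at Caltech.",
    "The large scale machine learning class I taught with Yann LeCun has finished.",
    "Yann LeCun and I are coteaching a class on Large Scale Machine Learning at NYU.",
]

X = TfidfVectorizer(stop_words="english").fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0)  # ~50 components for the real corpus
topic_weights = lsi.fit_transform(X)                # one row of topic weights per post

# The first row is analogous to the (topicId, topicWeight) vector shown above.
print(topic_weights[0])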

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93654484 469 hunch net-2012-07-09-Videolectures

Introduction: Yaser points out some nicely videotaped machine learning lectures at Caltech. Yaser taught me machine learning, and I always found the lectures clear and interesting, so I expect many people can benefit from watching. Relative to Andrew Ng’s ML class there are somewhat different areas of emphasis but the topic is the same, so picking and choosing the union may be helpful.

2 0.78146553 483 hunch net-2013-06-10-The Large Scale Learning class notes

Introduction: The large scale machine learning class I taught with Yann LeCun has finished. As I expected, it took quite a bit of time. We had about 25 people attending in person on average and 400 regularly watching the recorded lectures, which is substantially more sustained interest than I expected for an advanced ML class. We also had some fun with class projects—I’m hopeful that several will eventually turn into papers. I expect there are a number of professors interested in lecturing on this and related topics. Everyone will have their personal taste in subjects of course, but hopefully there will be some convergence to common course materials as well. To help with this, I am making the sources to my presentations available. Feel free to use/improve/embellish/ridicule/etc… in the pursuit of the perfect course.

3 0.66396505 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

Introduction: Yann LeCun and I are coteaching a class on Large Scale Machine Learning starting late January at NYU. This class will cover many tricks to get machine learning working well on datasets with many features, examples, and classes, along with several elements of deep learning and support systems enabling the previous. This is not a beginning class—you really need to have taken a basic machine learning class previously to follow along. Students will be able to run and experiment with large scale learning algorithms since Yahoo! has donated servers which are being configured into a small scale Hadoop cluster. We are planning to cover the frontier of research in scalable learning algorithms, so good class projects could easily lead to papers. For me, this is a chance to teach on many topics of past research. In general, it seems like researchers should engage in at least occasional teaching of research, both as a proof of teachability and to see their own research through th

4 0.64421368 479 hunch net-2013-01-31-Remote large scale learning class participation

Introduction: Yann and I have arranged so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. Videos will be posted with about a 1 day delay on techtalks. This is a side-by-side capture of video+slides from Weyond. We are experimenting with Piazza as a discussion forum. Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. update2: Sign up here. The first lecture is up now, including the revised version of the slides which fixes a few typos and rounds out references.

5 0.49473524 240 hunch net-2007-04-21-Videolectures.net

Introduction: Davor has been working to set up videolectures.net which is the new site for the many lectures mentioned here. (Tragically, they seem to only be available in Windows Media format.) I went through my own projects and added a few links to the videos. The day when every result is a set of {paper, slides, video} isn’t quite here yet, but it’s within sight. (For many papers, of course, code is a 4th component.)

6 0.4485437 322 hunch net-2008-10-20-New York’s ML Day

7 0.44226155 13 hunch net-2005-02-04-JMLG

8 0.43307763 37 hunch net-2005-03-08-Fast Physics for Learning

9 0.42010522 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

10 0.41739857 192 hunch net-2006-07-08-Some recent papers

11 0.40698919 261 hunch net-2007-08-28-Live ML Class

12 0.3952764 250 hunch net-2007-06-23-Machine Learning Jobs are Growing on Trees

13 0.39524409 6 hunch net-2005-01-27-Learning Complete Problems

14 0.39051011 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

15 0.38516778 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

16 0.38105381 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control

17 0.36747065 448 hunch net-2011-10-24-2011 ML symposium and the bears

18 0.36609352 373 hunch net-2009-10-03-Static vs. Dynamic multiclass prediction

19 0.35917374 410 hunch net-2010-09-17-New York Area Machine Learning Events

20 0.35343546 386 hunch net-2010-01-13-Sam Roweis died


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(53, 0.099), (55, 0.161), (88, 0.517), (95, 0.053)]
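The sparse (topicId, topicWeight) pairs above look like a document-topic distribution from a latent Dirichlet allocation model fit on bag-of-words counts over the blog corpus. A hedged sketch under that assumption (topic count, priors, and the stand-in corpus are illustrative):

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical tiny corpus standing in for the full set of hunch.net posts.
corpus = [
    "Yaser points out some nicely videotaped machine learning lectures at Caltech.",
    "The large scale machine learning class I taught with Yann LeCun has finished.",
    "Yann LeCun and I are coteaching a class on Large Scale Machine Learning at NYU.",
]

counts = CountVectorizer(stop_words="english").fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=4, random_state=0)  # many more topics for the real corpus
doc_topics = lda.fit_transform(counts)  # each row sums to 1: per-post topic weights

# The larger entries of the first row correspond to pairs like (88, 0.517) above.
print(doc_topics[0])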

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.84368432 469 hunch net-2012-07-09-Videolectures

Introduction: Yaser points out some nicely videotaped machine learning lectures at Caltech. Yaser taught me machine learning, and I always found the lectures clear and interesting, so I expect many people can benefit from watching. Relative to Andrew Ng’s ML class there are somewhat different areas of emphasis but the topic is the same, so picking and choosing the union may be helpful.

2 0.6167869 13 hunch net-2005-02-04-JMLG

Introduction: The Journal of Machine Learning Gossip has some fine satire about learning research. In particular, the guides are amusing and remarkably true. As in all things, it’s easy to criticize the way things are and harder to make them better.

3 0.61057943 93 hunch net-2005-07-13-“Sister Conference” presentations

Introduction: Some of the “sister conference” presentations at AAAI have been great. Roughly speaking, the conference organizers asked other conference organizers to come give a summary of their conference. Many different AI-related conferences accepted. The presenters typically discuss some of the background and goals of the conference then mention the results from a few papers they liked. This is great because it provides a mechanism to get a digested overview of the work of several thousand researchers—something which is simply available nowhere else. Based on these presentations, it looks like there is a significant component of (and opportunity for) applied machine learning in AIIDE, IUI, and ACL. There was also some discussion of having a super-colocation event similar to FCRC, but centered on AI & Learning. This seems like a fine idea. The field is fractured across so many different conferences that the mixing of a super-colocation seems likely to be helpful for research.

4 0.55143255 168 hunch net-2006-04-02-Mad (Neuro)science

Introduction: One of the questions facing machine learning as a field is “Can we produce a generalized learning system that can solve a wide array of standard learning problems?” The answer is trivial: “yes, just have children”. Of course, that wasn’t really the question. The refined question is “Are there simple-to-implement generalized learning systems that can solve a wide array of standard learning problems?” The answer to this is less clear. The ability of animals (and people) to learn might be due to megabytes encoded in the DNA. If this algorithmic complexity is necessary to solve machine learning, the field faces a daunting task in replicating it on a computer. This observation suggests a possibility: if you can show that few bits of DNA are needed for learning in animals, then this provides evidence that machine learning (as a field) has a hope of big success with relatively little effort. It is well known that specific portions of the brain have specific functionality across

5 0.49523267 295 hunch net-2008-04-12-It Doesn’t Stop

Introduction: I’ve enjoyed the Terminator movies and show. Neglecting the whacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. The United States has been experimenting with trying to stop research on stem cells. It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged as a mu

6 0.31601262 331 hunch net-2008-12-12-Summer Conferences

7 0.29988012 371 hunch net-2009-09-21-Netflix finishes (and starts)

8 0.29607552 387 hunch net-2010-01-19-Deadline Season, 2010

9 0.29188189 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

10 0.28993824 326 hunch net-2008-11-11-COLT CFP

11 0.28993824 465 hunch net-2012-05-12-ICML accepted papers and early registration

12 0.28677675 159 hunch net-2006-02-27-The Peekaboom Dataset

13 0.28671896 90 hunch net-2005-07-07-The Limits of Learning Theory

14 0.28455934 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

15 0.2827971 20 hunch net-2005-02-15-ESPgame and image labeling

16 0.28140786 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

17 0.28064004 448 hunch net-2011-10-24-2011 ML symposium and the bears

18 0.27844989 356 hunch net-2009-05-24-2009 ICML discussion site

19 0.2780323 446 hunch net-2011-10-03-Monday announcements

20 0.27786645 145 hunch net-2005-12-29-Deadline Season