hunch_net hunch_net-2013 hunch_net-2013-478 knowledge-graph by maker-knowledge-mining

478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class


meta info for this blog

Source: html

Introduction: Yann LeCun and I are coteaching a class on Large Scale Machine Learning starting late January at NYU. This class will cover many tricks to get machine learning working well on datasets with many features, examples, and classes, along with several elements of deep learning and support systems enabling the previous. This is not a beginning class—you really need to have taken a basic machine learning class previously to follow along. Students will be able to run and experiment with large scale learning algorithms since Yahoo! has donated servers which are being configured into a small scale Hadoop cluster. We are planning to cover the frontier of research in scalable learning algorithms, so good class projects could easily lead to papers. For me, this is a chance to teach on many topics of past research. In general, it seems like researchers should engage in at least occasional teaching of research, both as a proof of teachability and to see their own research through that lens.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Yann LeCun and I are coteaching a class on Large Scale Machine Learning starting late January at NYU. [sent-1, score-0.54]

2 This class will cover many tricks to get machine learning working well on datasets with many features, examples, and classes, along with several elements of deep learning and support systems enabling the previous. [sent-2, score-0.964]

3 This is not a beginning class—you really need to have taken a basic machine learning class previously to follow along. [sent-3, score-0.612]

4 Students will be able to run and experiment with large scale learning algorithms since Yahoo! [sent-4, score-0.159]

5 has donated servers which are being configured into a small scale Hadoop cluster. [sent-5, score-0.159]

6 We are planning to cover the frontier of research in scalable learning algorithms, so good class projects could easily lead to papers. [sent-6, score-1.058]

7 For me, this is a chance to teach on many topics of past research. [sent-7, score-0.102]

8 In general, it seems like researchers should engage in at least occasional teaching of research, both as a proof of teachability and to see their own research through that lens. [sent-8, score-0.368]

9 More generally, I expect there is quite a bit of interest: figuring out how to use data to make predictions well is a topic of growing interest to many fields. [sent-9, score-0.251]

10 In 2007, this was true, and demand is much stronger now. [sent-10, score-0.08]

11 Yann and I also come from quite different viewpoints, so I’m looking forward to learning from him as well. [sent-11, score-0.084]

12 We plan to videotape lectures and put them (as well as slides) online, but this is not a MOOC in the sense of online grading and class certificates. [sent-12, score-0.743]

13 I’d prefer that it was, but there are two obstacles: NYU is still figuring out what to do as a University here, and this is not a class that has ever been taught before. [sent-13, score-0.796]

14 Turning previous tutorials and class fragments into coherent subject matter for the 50 students we can support at NYU will be pretty challenging as is. [sent-14, score-0.944]

15 My preference, however, is to enable external participation where it’s easily possible. [sent-15, score-0.363]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('class', 0.463), ('nyu', 0.392), ('figuring', 0.164), ('scale', 0.159), ('yann', 0.147), ('cover', 0.135), ('students', 0.12), ('frontier', 0.116), ('videotape', 0.116), ('mooc', 0.116), ('engage', 0.108), ('obstacles', 0.108), ('occasional', 0.108), ('support', 0.107), ('teach', 0.102), ('turning', 0.102), ('enabling', 0.097), ('enable', 0.093), ('scalable', 0.093), ('viewpoints', 0.093), ('easily', 0.09), ('lectures', 0.09), ('coherent', 0.09), ('external', 0.09), ('participation', 0.09), ('tricks', 0.09), ('ever', 0.087), ('hadoop', 0.087), ('interest', 0.087), ('forward', 0.084), ('projects', 0.084), ('challenging', 0.084), ('welcome', 0.084), ('slides', 0.082), ('taught', 0.082), ('lecun', 0.08), ('demand', 0.08), ('tutorials', 0.08), ('january', 0.078), ('late', 0.077), ('thoughts', 0.077), ('beginning', 0.077), ('university', 0.077), ('research', 0.077), ('teaching', 0.075), ('preference', 0.075), ('online', 0.074), ('classes', 0.074), ('follow', 0.072), ('along', 0.072)]
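The term weights above come from a tf-idf model. As a rough illustration of the idea (not the actual maker-knowledge-mining pipeline — the corpus, tokenization, and normalization here are made up), a minimal pure-Python sketch:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute tf-idf weights per document for a toy corpus.

    docs: list of token lists. Returns one {term: weight} dict per document,
    where weight = term frequency * log(N / document frequency).
    """
    n = len(docs)
    # document frequency: how many documents contain each term
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

# hypothetical toy corpus echoing this post's vocabulary
docs = [
    "large scale machine learning class".split(),
    "machine learning research papers".split(),
    "hadoop cluster for large scale data".split(),
]
w = tfidf(docs)
# a term unique to one document gets the largest idf boost,
# which is why distinctive words like 'class' dominate such lists
top = max(w[0], key=w[0].get)
```

Under this scheme, words common to the whole corpus score near zero, while words concentrated in one post (here, 'class') rise to the top — mirroring how 'class' and 'nyu' dominate the list above.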

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class


2 0.20349143 483 hunch net-2013-06-10-The Large Scale Learning class notes

Introduction: The large scale machine learning class I taught with Yann LeCun has finished. As I expected, it took quite a bit of time. We had about 25 people attending in person on average and 400 regularly watching the recorded lectures, which is substantially more sustained interest than I expected for an advanced ML class. We also had some fun with class projects—I’m hopeful that several will eventually turn into papers. I expect there are a number of professors interested in lecturing on this and related topics. Everyone will have their personal taste in subjects of course, but hopefully there will be some convergence to common course materials as well. To help with this, I am making the sources to my presentations available. Feel free to use/improve/embellish/ridicule/etc… in the pursuit of the perfect course.

3 0.17956743 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

Introduction: Since we last discussed the other online learning, Stanford has very visibly started pushing mass teaching in AI, Machine Learning, and Databases. In retrospect, it’s not too surprising that the next step up in serious online teaching experiments is occurring at the computer science department of a university embedded in the land of startups. Numbers on the order of 100000 are quite significant—similar in scale to the number of computer science undergraduate students/year in the US. Although these populations surely differ, the fact that they could overlap is worth considering for the future. It’s too soon to say how successful these classes will be and there are many easy criticisms to make: Registration != Learning … but if only 1/10th complete these classes, the scale of teaching still surpasses the scale of any traditional process. 1st year excitement != nth year routine … but if only 1/10th take future classes, the scale of teaching still surpass

4 0.14755423 6 hunch net-2005-01-27-Learning Complete Problems

Introduction: Let’s define a learning problem as making predictions given past data. There are several ways to attack the learning problem which seem to be equivalent to solving the learning problem. Find the Invariant This viewpoint says that learning is all about learning (or incorporating) transformations of objects that do not change the correct prediction. The best possible invariant is the one which says “all things of the same class are the same”. Finding this is equivalent to learning. This viewpoint is particularly common when working with image features. Feature Selection This viewpoint says that the way to learn is by finding the right features to input to a learning algorithm. The best feature is the one which is the class to predict. Finding this is equivalent to learning for all reasonable learning algorithms. This viewpoint is common in several applications of machine learning. See Gilad’s and Bianca’s comments . Find the Representation This is almost the same a

5 0.14160389 479 hunch net-2013-01-31-Remote large scale learning class participation

Introduction: Yann and I have arranged so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. Videos will be posted with about a 1 day delay on techtalks. This is a side-by-side capture of video+slides from Weyond. We are experimenting with Piazza as a discussion forum. Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. update2: Sign up here. The first lecture is up now, including the revised version of the slides which fixes a few typos and rounds out references.

6 0.12715858 378 hunch net-2009-11-15-The Other Online Learning

7 0.12451397 31 hunch net-2005-02-26-Problem: Reductions and Relative Ranking Metrics

8 0.11769047 469 hunch net-2012-07-09-Videolectures

9 0.10768878 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

10 0.10644984 454 hunch net-2012-01-30-ICML Posters and Scope

11 0.1015065 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

12 0.097874947 464 hunch net-2012-05-03-Microsoft Research, New York City

13 0.090737902 134 hunch net-2005-12-01-The Webscience Future

14 0.090307243 267 hunch net-2007-10-17-Online as the new adjective

15 0.088468641 344 hunch net-2009-02-22-Effective Research Funding

16 0.087564349 8 hunch net-2005-02-01-NIPS: Online Bayes

17 0.086759135 36 hunch net-2005-03-05-Funding Research

18 0.083515748 300 hunch net-2008-04-30-Concerns about the Large Scale Learning Challenge

19 0.083348744 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

20 0.083167136 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.198), (1, -0.019), (2, -0.137), (3, 0.035), (4, -0.005), (5, 0.006), (6, -0.054), (7, -0.006), (8, -0.053), (9, 0.046), (10, -0.007), (11, -0.022), (12, 0.05), (13, -0.065), (14, 0.078), (15, 0.073), (16, -0.013), (17, -0.015), (18, -0.002), (19, 0.004), (20, 0.136), (21, -0.012), (22, -0.038), (23, -0.107), (24, 0.126), (25, -0.007), (26, 0.024), (27, -0.021), (28, -0.131), (29, 0.107), (30, 0.022), (31, -0.19), (32, -0.056), (33, -0.038), (34, 0.011), (35, -0.101), (36, 0.101), (37, 0.104), (38, 0.021), (39, -0.014), (40, 0.036), (41, 0.039), (42, -0.001), (43, -0.046), (44, 0.02), (45, -0.077), (46, -0.06), (47, -0.151), (48, 0.034), (49, 0.063)]
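Topic-weight vectors like the one above are typically compared with cosine similarity to produce ranked similar-document lists. A toy sketch with made-up 3-dimensional vectors and hypothetical blog ids (the real lsi vectors here are 50-dimensional, and the exact ranking procedure used by this pipeline is an assumption):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# hypothetical 3-d topic vectors for this post and two candidates
query = [0.198, -0.019, -0.137]
others = {483: [0.15, -0.02, -0.10], 445: [0.01, 0.20, 0.05]}

# rank candidate posts by similarity to the query post
ranked = sorted(others, key=lambda b: cosine(query, others[b]), reverse=True)
```

A post whose topic vector points in nearly the same direction as the query's (483 in this toy setup) ranks first, regardless of vector magnitude — which is why cosine similarity rather than raw dot product is the usual choice.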

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95503318 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class


2 0.85373497 483 hunch net-2013-06-10-The Large Scale Learning class notes


3 0.72541046 479 hunch net-2013-01-31-Remote large scale learning class participation


4 0.71406782 469 hunch net-2012-07-09-Videolectures

Introduction: Yaser points out some nicely videotaped machine learning lectures at Caltech. Yaser taught me machine learning, and I always found the lectures clear and interesting, so I expect many people can benefit from watching. Relative to Andrew Ng’s ML class there are somewhat different areas of emphasis but the topic is the same, so picking and choosing the union may be helpful.

5 0.6636495 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch


6 0.61806953 6 hunch net-2005-01-27-Learning Complete Problems

7 0.59312361 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

8 0.58810437 378 hunch net-2009-11-15-The Other Online Learning

9 0.50086021 240 hunch net-2007-04-21-Videolectures.net

10 0.47444403 31 hunch net-2005-02-26-Problem: Reductions and Relative Ranking Metrics

11 0.45544812 13 hunch net-2005-02-04-JMLG

12 0.44958398 464 hunch net-2012-05-03-Microsoft Research, New York City

13 0.44593528 250 hunch net-2007-06-23-Machine Learning Jobs are Growing on Trees

14 0.43933025 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

15 0.43615657 142 hunch net-2005-12-22-Yes, I am applying

16 0.43136796 448 hunch net-2011-10-24-2011 ML symposium and the bears

17 0.42577177 172 hunch net-2006-04-14-JMLR is a success

18 0.42209572 185 hunch net-2006-06-16-Regularization = Robustness

19 0.41922218 37 hunch net-2005-03-08-Fast Physics for Learning

20 0.41508573 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.02), (27, 0.206), (33, 0.065), (38, 0.053), (48, 0.016), (53, 0.155), (55, 0.096), (67, 0.022), (94, 0.047), (95, 0.124), (96, 0.078)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93561447 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class


2 0.88656467 151 hunch net-2006-01-25-1 year

Introduction: At the one year (+5 days) anniversary, the natural question is: “Was it helpful for research?” Answer: Yes, and so it shall continue. Some evidence is provided by noticing that I am about a factor of 2 more overloaded with paper ideas than I’ve ever previously been. It is always hard to estimate counterfactual worlds, but I expect that this is also a factor of 2 more than “What if I had not started the blog?” As for “Why?”, there seem to be two primary effects. A blog is a mechanism for connecting with people who either think like you or are interested in the same problems. This allows for concentration of thinking which is very helpful in solving problems. The process of stating things you don’t understand publicly is very helpful in understanding them. Sometimes you are simply forced to express them in a way which aids understanding. Sometimes someone else says something which helps. And sometimes you discover that someone else has already solved the problem. The

3 0.88536257 141 hunch net-2005-12-17-Workshops as Franchise Conferences

Introduction: Founding a successful new conference is extraordinarily difficult. As a conference founder, you must manage to attract a significant number of good papers—enough to entice the participants into participating next year and to (generally) to grow the conference. For someone choosing to participate in a new conference, there is a very significant decision to make: do you send a paper to some new conference with no guarantee that the conference will work out? Or do you send it to another (possibly less related) conference that you are sure will work? The conference founding problem is a joint agreement problem with a very significant barrier. Workshops are a way around this problem, and workshops attached to conferences are a particularly effective means for this. A workshop at a conference is sure to have people available to speak and attend and is sure to have a large audience available. Presenting work at a workshop is not generally exclusive: it can also be presented at a confe

4 0.88393432 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

Introduction: Martin Pool and I recently discussed the similarities and differences between academia and open source programming. Similarities: Cost profile Research and programming share approximately the same cost profile: A large upfront effort is required to produce something useful, and then “anyone” can use it. (The “anyone” is not quite right for either group because only sufficiently technical people could use it.) Wealth profile A “wealthy” academic or open source programmer is someone who has contributed a lot to other people in research or programs. Much of academia is a “gift culture”: whoever gives the most is most respected. Problems Both academia and open source programming suffer from similar problems. Whether or not (and which) open source program is used are perhaps too-often personality driven rather than driven by capability or usefulness. Similar phenomena can happen in academia with respect to directions of research. Funding is often a problem for

5 0.88337231 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

Introduction: In recent years, there’s been an explosion of free educational resources that make high-level knowledge and skills accessible to an ever-wider group of people. In your own field, you probably have a good idea of where to look for the answer to any particular question. But outside your areas of expertise, sifting through textbooks, Wikipedia articles, research papers, and online lectures can be bewildering (unless you’re fortunate enough to have a knowledgeable colleague to consult). What are the key concepts in the field, how do they relate to each other, which ones should you learn, and where should you learn them? Courses are a major vehicle for packaging educational materials for a broad audience. The trouble is that they’re typically meant to be consumed linearly, regardless of your specific background or goals. Also, unless thousands of other people have had the same background and learning goals, there may not even be a course that fits your needs. Recently, we ( Roger Grosse

6 0.87399864 466 hunch net-2012-06-05-ICML acceptance statistics

7 0.87340504 370 hunch net-2009-09-18-Necessary and Sufficient Research

8 0.87062645 134 hunch net-2005-12-01-The Webscience Future

9 0.86723942 12 hunch net-2005-02-03-Learning Theory, by assumption

10 0.86494797 104 hunch net-2005-08-22-Do you believe in induction?

11 0.85582399 344 hunch net-2009-02-22-Effective Research Funding

12 0.85395503 19 hunch net-2005-02-14-Clever Methods of Overfitting

13 0.85388553 225 hunch net-2007-01-02-Retrospective

14 0.85370886 201 hunch net-2006-08-07-The Call of the Deep

15 0.85303235 194 hunch net-2006-07-11-New Models

16 0.85038567 191 hunch net-2006-07-08-MaxEnt contradicts Bayes Rule?

17 0.85010952 382 hunch net-2009-12-09-Future Publication Models @ NIPS

18 0.84995723 25 hunch net-2005-02-20-At One Month

19 0.8497597 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

20 0.84665066 403 hunch net-2010-07-18-ICML & COLT 2010