hunch_net hunch_net-2013 hunch_net-2013-479 knowledge-graph by maker-knowledge-mining

479 hunch net-2013-01-31-Remote large scale learning class participation


meta info for this blog

Source: html

Introduction: Yann and I have arranged so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. Videos will be posted with about a 1 day delay on techtalks. This is a side-by-side capture of video+slides from Weyond. We are experimenting with Piazza as a discussion forum. Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. update2: Sign up here. The first lecture is up now, including the revised version of the slides which fixes a few typos and rounds out references.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Yann and I have arranged so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. [sent-1, score-1.282]

2 Videos will be posted with about a 1 day delay on techtalks . [sent-2, score-0.635]

3 This is a side-by-side capture of video+slides from Weyond . [sent-3, score-0.131]

4 We are experimenting with Piazza as a discussion forum. [sent-4, score-0.221]

5 Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. [sent-5, score-0.517]

6 The first lecture is up now, including the revised version of the slides which fixes a few typos and rounds out references. [sent-7, score-1.407]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('piazza', 0.457), ('slides', 0.288), ('revised', 0.203), ('techtalks', 0.203), ('references', 0.188), ('fixes', 0.188), ('typos', 0.188), ('arranged', 0.188), ('weyond', 0.188), ('subscribe', 0.178), ('delay', 0.169), ('videos', 0.169), ('rounds', 0.169), ('posted', 0.157), ('lecture', 0.152), ('video', 0.148), ('welcome', 0.148), ('experimenting', 0.14), ('sign', 0.14), ('attend', 0.137), ('capture', 0.131), ('yann', 0.129), ('follow', 0.126), ('along', 0.126), ('day', 0.106), ('person', 0.106), ('ask', 0.103), ('able', 0.101), ('scale', 0.093), ('class', 0.09), ('questions', 0.088), ('version', 0.088), ('anyone', 0.084), ('including', 0.082), ('discussion', 0.081), ('via', 0.077), ('interested', 0.071), ('first', 0.049), ('large', 0.048), ('two', 0.046), ('people', 0.032), ('machine', 0.029), ('learning', 0.012)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 479 hunch net-2013-01-31-Remote large scale learning class participation

Introduction: Yann and I have arranged so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. Videos will be posted with about a 1 day delay on techtalks. This is a side-by-side capture of video+slides from Weyond. We are experimenting with Piazza as a discussion forum. Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. update2: Sign up here. The first lecture is up now, including the revised version of the slides which fixes a few typos and rounds out references.

2 0.16184956 487 hunch net-2013-07-24-ICML 2012 videos lost

Introduction: A big ouch—all the videos for ICML 2012 were lost in a shuffle. Rajnish sends the below, but if anyone can help that would be greatly appreciated. —————————————————————————— Sincere apologies to ICML community for losing 2012 archived videos What happened: In order to publish 2013 videos, we decided to move 2012 videos to another server. We have a weekly backup service from the provider but after removing the videos from the current server, when we tried to retrieve the 2012 videos from backup service, the backup did not work because of provider-specific requirements that we had ignored while removing the data from previous server. What are we doing about this: At this point, we are still looking into raw footage to find if we can retrieve some of the videos, but following are the steps we are taking to make sure this does not happen again in future: (1) We are going to create a channel on Vimeo (and potentially on YouTube) and we will publish there the p-in-p- or slide-vers

3 0.14160389 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

Introduction: Yann LeCun and I are coteaching a class on Large Scale Machine Learning starting late January at NYU . This class will cover many tricks to get machine learning working well on datasets with many features, examples, and classes, along with several elements of deep learning and support systems enabling the previous. This is not a beginning class—you really need to have taken a basic machine learning class previously to follow along. Students will be able to run and experiment with large scale learning algorithms since Yahoo! has donated servers which are being configured into a small scale Hadoop cluster. We are planning to cover the frontier of research in scalable learning algorithms, so good class projects could easily lead to papers. For me, this is a chance to teach on many topics of past research. In general, it seems like researchers should engage in at least occasional teaching of research, both as a proof of teachability and to see their own research through th

4 0.12381206 240 hunch net-2007-04-21-Videolectures.net

Introduction: Davor has been working to setup videolectures.net which is the new site for the many lectures mentioned here . (Tragically, they seem to only be available in windows media format.) I went through my own projects and added a few links to the videos. The day when every result is a set of {paper, slides, video} isn’t quite here yet, but it’s within sight. (For many papers, of course, code is a 4th component.)

5 0.0776655 382 hunch net-2009-12-09-Future Publication Models @ NIPS

Introduction: Yesterday, there was a discussion about future publication models at NIPS . Yann and Zoubin have specific detailed proposals which I’ll add links to when I get them ( Yann’s proposal and Zoubin’s proposal ). What struck me about the discussion is that there are many simultaneous concerns as well as many simultaneous proposals, which makes it difficult to keep all the distinctions straight in a verbal conversation. It also seemed like people were serious enough about this that we may see some real movement. Certainly, my personal experience motivates that as I’ve posted many times about the substantial flaws in our review process, including some very poor personal experiences. Concerns include the following: (Several) Reviewers are overloaded, boosting the noise in decision making. ( Yann ) A new system should run with as little built-in delay and friction to the process of research as possible. ( Hanna Wallach (updated)) Double-blind review is particularly impor

6 0.069486305 281 hunch net-2007-12-21-Vowpal Wabbit Code Release

7 0.066739544 483 hunch net-2013-06-10-The Large Scale Learning class notes

8 0.064075641 322 hunch net-2008-10-20-New York’s ML Day

9 0.061586536 75 hunch net-2005-05-28-Running A Machine Learning Summer School

10 0.061355442 29 hunch net-2005-02-25-Solution: Reinforcement Learning with Classification

11 0.058569457 261 hunch net-2007-08-28-Live ML Class

12 0.057920013 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

13 0.056038596 401 hunch net-2010-06-20-2010 ICML discussion site

14 0.055360489 199 hunch net-2006-07-26-Two more UAI papers of interest

15 0.055268064 447 hunch net-2011-10-10-ML Symposium and ICML details

16 0.054358661 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

17 0.051686838 416 hunch net-2010-10-29-To Vidoelecture or not

18 0.048590526 378 hunch net-2009-11-15-The Other Online Learning

19 0.04838302 273 hunch net-2007-11-16-MLSS 2008

20 0.047553655 448 hunch net-2011-10-24-2011 ML symposium and the bears


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.071), (1, -0.035), (2, -0.055), (3, 0.012), (4, 0.006), (5, 0.026), (6, -0.034), (7, -0.015), (8, -0.072), (9, -0.017), (10, -0.029), (11, -0.048), (12, 0.003), (13, 0.007), (14, 0.049), (15, 0.008), (16, -0.041), (17, 0.006), (18, 0.067), (19, 0.082), (20, 0.046), (21, 0.024), (22, -0.037), (23, -0.08), (24, 0.043), (25, -0.021), (26, -0.054), (27, 0.036), (28, -0.05), (29, 0.038), (30, 0.002), (31, -0.127), (32, 0.024), (33, -0.055), (34, -0.023), (35, -0.074), (36, 0.002), (37, 0.071), (38, 0.036), (39, -0.037), (40, 0.027), (41, 0.001), (42, -0.09), (43, -0.022), (44, 0.118), (45, 0.0), (46, 0.021), (47, -0.084), (48, 0.125), (49, 0.072)]
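The topic weights above come from an LSI (latent semantic indexing) model; conceptually, that is a tf-idf matrix factored by a truncated SVD, after which documents are compared in the low-dimensional latent space. A minimal sketch on an assumed toy corpus (the posts and dimension here are illustrative):

```python
# Hedged sketch of LSI: tf-idf followed by truncated SVD, then
# cosine similarity in the latent space. Toy data, not the real corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "large scale machine learning class videos and slides",
    "piazza discussion forum for class questions",
    "videolectures site hosting lecture videos",
    "future publication models and the review process",
]

tfidf = TfidfVectorizer().fit_transform(corpus)
# Project each document onto 2 latent topic dimensions.
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
# Similarity of document 0 to every document, measured in latent space.
sims = cosine_similarity(lsi[:1], lsi).ravel()
print(sims)
```

Unlike raw tf-idf overlap, LSI can score two posts as similar even when they share few exact words, because related words load onto the same latent dimension; the negative topic weights above are normal, since SVD components are not constrained to be positive.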

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97200143 479 hunch net-2013-01-31-Remote large scale learning class participation

Introduction: Yann and I have arranged so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. Videos will be posted with about a 1 day delay on techtalks. This is a side-by-side capture of video+slides from Weyond. We are experimenting with Piazza as a discussion forum. Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. update2: Sign up here. The first lecture is up now, including the revised version of the slides which fixes a few typos and rounds out references.

2 0.69918245 483 hunch net-2013-06-10-The Large Scale Learning class notes

Introduction: The large scale machine learning class I taught with Yann LeCun has finished. As I expected, it took quite a bit of time . We had about 25 people attending in person on average and 400 regularly watching the recorded lectures which is substantially more sustained interest than I expected for an advanced ML class. We also had some fun with class projects—I’m hopeful that several will eventually turn into papers. I expect there are a number of professors interested in lecturing on this and related topics. Everyone will have their personal taste in subjects of course, but hopefully there will be some convergence to common course materials as well. To help with this, I am making the sources to my presentations available . Feel free to use/improve/embelish/ridicule/etc… in the pursuit of the perfect course.

3 0.68771493 240 hunch net-2007-04-21-Videolectures.net

Introduction: Davor has been working to setup videolectures.net which is the new site for the many lectures mentioned here . (Tragically, they seem to only be available in windows media format.) I went through my own projects and added a few links to the videos. The day when every result is a set of {paper, slides, video} isn’t quite here yet, but it’s within sight. (For many papers, of course, code is a 4th component.)

4 0.63927239 487 hunch net-2013-07-24-ICML 2012 videos lost

Introduction: A big ouch—all the videos for ICML 2012 were lost in a shuffle. Rajnish sends the below, but if anyone can help that would be greatly appreciated. —————————————————————————— Sincere apologies to ICML community for losing 2012 archived videos What happened: In order to publish 2013 videos, we decided to move 2012 videos to another server. We have a weekly backup service from the provider but after removing the videos from the current server, when we tried to retrieve the 2012 videos from backup service, the backup did not work because of provider-specific requirements that we had ignored while removing the data from previous server. What are we doing about this: At this point, we are still looking into raw footage to find if we can retrieve some of the videos, but following are the steps we are taking to make sure this does not happen again in future: (1) We are going to create a channel on Vimeo (and potentially on YouTube) and we will publish there the p-in-p- or slide-vers

5 0.61645371 261 hunch net-2007-08-28-Live ML Class

Introduction: Davor and Chunnan point out that MLSS 2007 in Tuebingen has live video for the majority of the world that is not there (heh).

6 0.54050064 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

7 0.53213805 469 hunch net-2012-07-09-Videolectures

8 0.41986862 382 hunch net-2009-12-09-Future Publication Models @ NIPS

9 0.40860701 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

10 0.40333658 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

11 0.39513272 322 hunch net-2008-10-20-New York’s ML Day

12 0.39020687 447 hunch net-2011-10-10-ML Symposium and ICML details

13 0.38417459 448 hunch net-2011-10-24-2011 ML symposium and the bears

14 0.36366794 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

15 0.36356196 13 hunch net-2005-02-04-JMLG

16 0.35077211 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

17 0.34964305 15 hunch net-2005-02-08-Some Links

18 0.34240335 75 hunch net-2005-05-28-Running A Machine Learning Summer School

19 0.33198369 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

20 0.3211011 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(55, 0.105), (95, 0.749)]
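The sparse weight vector above is a document-topic distribution from an LDA (latent Dirichlet allocation) model: most topics get negligible mass, and the post concentrates on one or two. As a hedged illustration (toy corpus, assumed hyperparameters; the real model clearly used many more topics), such distributions can be fit from raw word counts:

```python
# Hedged sketch of LDA: word counts -> per-document topic distribution.
# Toy corpus and 2 topics for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "machine learning class lecture videos and slides",
    "piazza forum for class discussion and questions",
    "conference review process and publication models",
    "workshop proposals and symposium announcements",
]

counts = CountVectorizer().fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
# Each row of doc_topics is a probability distribution over the topics.
doc_topics = lda.fit_transform(counts)
print(doc_topics)
```

Because each row sums to 1, two posts can then be compared by the similarity of their topic distributions, which is presumably how the lda-based similar-blogs list below is ranked.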

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98582846 479 hunch net-2013-01-31-Remote large scale learning class participation

Introduction: Yann and I have arranged so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. Videos will be posted with about a 1 day delay on techtalks. This is a side-by-side capture of video+slides from Weyond. We are experimenting with Piazza as a discussion forum. Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. update2: Sign up here. The first lecture is up now, including the revised version of the slides which fixes a few typos and rounds out references.

2 0.92746866 390 hunch net-2010-03-12-Netflix Challenge 2 Canceled

Introduction: The second Netflix prize is canceled due to privacy problems . I continue to believe my original assessment of this paper, that the privacy break was somewhat overstated. I still haven’t seen any serious privacy failures on the scale of the AOL search log release . I expect privacy concerns to continue to be a big issue when dealing with data releases by companies or governments. The theory of maintaining privacy while using data is improving, but it is not yet in a state where the limits of what’s possible are clear let alone how to achieve these limits in a manner friendly to a prediction competition.

3 0.91790146 319 hunch net-2008-10-01-NIPS 2008 workshop on ‘Learning over Empirical Hypothesis Spaces’

Introduction: This workshop asks for insights how far we may/can push the theoretical boundary of using data in the design of learning machines. Can we express our classification rule in terms of the sample, or do we have to stick to a core assumption of classical statistical learning theory, namely that the hypothesis space is to be defined independent from the sample? This workshop is particularly interested in – but not restricted to – the ‘luckiness framework’ and the recently introduced notion of ‘compatibility functions’ in a semi-supervised learning context (more information can be found at http://www.kuleuven.be/wehys ).

4 0.86190802 30 hunch net-2005-02-25-Why Papers?

Introduction: Makc asked a good question in comments—”Why bother to make a paper, at all?” There are several reasons for writing papers which may not be immediately obvious to people not in academia. The basic idea is that papers have considerably more utility than the obvious “present an idea”. Papers are a formalized units of work. Academics (especially young ones) are often judged on the number of papers they produce. Papers have a formalized method of citing and crediting other—the bibliography. Academics (especially older ones) are often judged on the number of citations they receive. Papers enable a “more fair” anonymous review. Conferences receive many papers, from which a subset are selected. Discussion forums are inherently not anonymous for anyone who wants to build a reputation for good work. Papers are an excuse to meet your friends. Papers are the content of conferences, but much of what you do is talk to friends about interesting problems while there. Sometimes yo

5 0.82705462 389 hunch net-2010-02-26-Yahoo! ML events

Introduction: Yahoo! is sponsoring two machine learning events that might interest people. The Key Scientific Challenges program (due March 5) for Machine Learning and Statistics offers $5K (plus bonuses) for graduate students working on a core problem of interest to Y! If you are already working on one of these problems, there is no reason not to submit, and if you aren’t you might want to think about it for next year, as I am confident they all press the boundary of the possible in Machine Learning. There are 7 days left. The Learning to Rank challenge (due May 31) offers an $8K first prize for the best ranking algorithm on a real (and really used) dataset for search ranking, with presentations at an ICML workshop. Unlike the Netflix competition, there are prizes for 2nd, 3rd, and 4th place, perhaps avoiding the heartbreak the ensemble encountered. If you think you know how to rank, you should give it a try, and we might all learn something. There are 3 months left.

6 0.75900966 456 hunch net-2012-02-24-ICML+50%

7 0.72823954 127 hunch net-2005-11-02-Progress in Active Learning

8 0.7009728 462 hunch net-2012-04-20-Both new: STOC workshops and NEML

9 0.65674949 344 hunch net-2009-02-22-Effective Research Funding

10 0.62480104 234 hunch net-2007-02-22-Create Your Own ICML Workshop

11 0.61777663 373 hunch net-2009-10-03-Static vs. Dynamic multiclass prediction

12 0.54600805 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

13 0.4479655 216 hunch net-2006-11-02-2006 NIPS workshops

14 0.42533994 433 hunch net-2011-04-23-ICML workshops due

15 0.39959714 7 hunch net-2005-01-31-Watchword: Assumption

16 0.39616615 476 hunch net-2012-12-29-Simons Institute Big Data Program

17 0.39446828 377 hunch net-2009-11-09-NYAS ML Symposium this year.

18 0.37558091 46 hunch net-2005-03-24-The Role of Workshops

19 0.37511498 290 hunch net-2008-02-27-The Stats Handicap

20 0.37303102 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop