hunch_net hunch_net-2007 hunch_net-2007-261 knowledge-graph by maker-knowledge-mining

261 hunch net-2007-08-28-Live ML Class


meta info for this blog

Source: html

Introduction: Davor and Chunnan point out that MLSS 2007 in Tuebingen has live video for the majority of the world that is not there (heh).


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Davor and Chunnan point out that MLSS 2007 in Tuebingen has live video for the majority of the world that is not there (heh). [sent-1, score-1.546]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('davor', 0.546), ('live', 0.437), ('mlss', 0.408), ('video', 0.397), ('majority', 0.346), ('world', 0.204), ('point', 0.162)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 261 hunch net-2007-08-28-Live ML Class

Introduction: Davor and Chunnan point out that MLSS 2007 in Tuebingen has live video for the majority of the world that is not there (heh).

2 0.28203782 240 hunch net-2007-04-21-Videolectures.net

Introduction: Davor has been working to set up videolectures.net, which is the new site for the many lectures mentioned here. (Tragically, they seem to only be available in Windows Media format.) I went through my own projects and added a few links to the videos. The day when every result is a set of {paper, slides, video} isn’t quite here yet, but it’s within sight. (For many papers, of course, code is a 4th component.)

3 0.16868258 130 hunch net-2005-11-16-MLSS 2006

Introduction: There will be two machine learning summer schools in 2006. One is in Canberra, Australia from February 6 to February 17 (Aussie summer). The webpage is fully ‘live’ so you should actively consider it now. The other is in Taipei, Taiwan from July 24 to August 4. This one is still in the planning phase, but that should be settled soon. Attending an MLSS is probably the quickest and easiest way to bootstrap yourself into a reasonable initial understanding of the field of machine learning.

4 0.12460285 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It was a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard for other similar events to match (see our WIKI for more information). With this lineup, it was difficult for us as organizers to screw it up too badly. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by the previous MLSS series, it was relatively easy for us to attract registrations and simply enjoy this two-week long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

5 0.0796149 448 hunch net-2011-10-24-2011 ML symposium and the bears

Introduction: The New York ML symposium was last Friday. Attendance was 268, significantly larger than last year. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up. The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. By coincidence, all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single-pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious

6 0.058659844 416 hunch net-2010-10-29-To Vidoelecture or not

7 0.058569457 479 hunch net-2013-01-31-Remote large scale learning class participation

8 0.052209046 110 hunch net-2005-09-10-“Failure” is an option

9 0.050728545 357 hunch net-2009-05-30-Many ways to Learn this summer

10 0.045777906 219 hunch net-2006-11-22-Explicit Randomization in Learning algorithms

11 0.041092601 56 hunch net-2005-04-14-Families of Learning Theory Statements

12 0.041042879 96 hunch net-2005-07-21-Six Months

13 0.040368743 388 hunch net-2010-01-24-Specializations of the Master Problem

14 0.039337866 43 hunch net-2005-03-18-Binomial Weighting

15 0.0381479 28 hunch net-2005-02-25-Problem: Online Learning

16 0.03658855 358 hunch net-2009-06-01-Multitask Poisoning

17 0.035894461 415 hunch net-2010-10-28-NY ML Symposium 2010

18 0.032511819 235 hunch net-2007-03-03-All Models of Learning have Flaws

19 0.032381922 60 hunch net-2005-04-23-Advantages and Disadvantages of Bayesian Learning

20 0.031913735 406 hunch net-2010-08-22-KDD 2010


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.032), (1, -0.005), (2, -0.032), (3, -0.003), (4, -0.016), (5, -0.019), (6, -0.041), (7, -0.009), (8, -0.03), (9, -0.01), (10, 0.02), (11, -0.017), (12, 0.052), (13, -0.002), (14, 0.042), (15, -0.018), (16, -0.043), (17, 0.046), (18, 0.083), (19, 0.09), (20, 0.023), (21, 0.021), (22, 0.038), (23, -0.104), (24, -0.01), (25, -0.0), (26, -0.024), (27, 0.04), (28, -0.016), (29, 0.009), (30, -0.048), (31, -0.111), (32, 0.099), (33, -0.073), (34, 0.076), (35, -0.046), (36, -0.032), (37, -0.038), (38, 0.05), (39, -0.061), (40, -0.025), (41, -0.012), (42, -0.092), (43, -0.047), (44, 0.013), (45, 0.054), (46, -0.076), (47, -0.085), (48, 0.054), (49, 0.037)]
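The topicId/topicWeight profile above is what a latent semantic indexing (LSI) projection produces: each post's tf-idf vector is projected onto a truncated SVD basis. A hedged sketch, with an illustrative placeholder corpus rather than the real archive:

```python
# Sketch: LSI topic weights via truncated SVD over tf-idf vectors.
# Corpus contents are placeholders, not the actual hunch.net posts.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "live video of the machine learning summer school",
    "videolectures site hosts lectures and slides",
    "report of the machine learning summer school in Taipei",
    "reviewing season and grounds for rejection",
]

tfidf = TfidfVectorizer().fit_transform(corpus)
svd = TruncatedSVD(n_components=2, random_state=0)
topic_weights = svd.fit_transform(tfidf)

# Each row is a (topicId, topicWeight) profile like the list above;
# cosine similarity between rows gives the simValue-style scores below.
print(topic_weights[0])
```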

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98727298 261 hunch net-2007-08-28-Live ML Class

Introduction: Davor and Chunnan point out that MLSS 2007 in Tuebingen has live video for the majority of the world that is not there (heh).

2 0.74706191 240 hunch net-2007-04-21-Videolectures.net

Introduction: Davor has been working to set up videolectures.net, which is the new site for the many lectures mentioned here. (Tragically, they seem to only be available in Windows Media format.) I went through my own projects and added a few links to the videos. The day when every result is a set of {paper, slides, video} isn’t quite here yet, but it’s within sight. (For many papers, of course, code is a 4th component.)

3 0.61452657 479 hunch net-2013-01-31-Remote large scale learning class participation

Introduction: Yann and I have arranged so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. Videos will be posted with about a 1-day delay on techtalks. This is a side-by-side capture of video+slides from Weyond. We are experimenting with Piazza as a discussion forum. Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. update2: Sign up here. The first lecture is up now, including the revised version of the slides, which fixes a few typos and rounds out references.

4 0.48328352 483 hunch net-2013-06-10-The Large Scale Learning class notes

Introduction: The large scale machine learning class I taught with Yann LeCun has finished. As I expected, it took quite a bit of time. We had about 25 people attending in person on average and 400 regularly watching the recorded lectures, which is substantially more sustained interest than I expected for an advanced ML class. We also had some fun with class projects—I’m hopeful that several will eventually turn into papers. I expect there are a number of professors interested in lecturing on this and related topics. Everyone will have their personal taste in subjects of course, but hopefully there will be some convergence to common course materials as well. To help with this, I am making the sources to my presentations available. Feel free to use/improve/embellish/ridicule/etc… in the pursuit of the perfect course.

5 0.46435419 130 hunch net-2005-11-16-MLSS 2006

Introduction: There will be two machine learning summer schools in 2006. One is in Canberra, Australia from February 6 to February 17 (Aussie summer). The webpage is fully ‘live’ so you should actively consider it now. The other is in Taipei, Taiwan from July 24 to August 4. This one is still in the planning phase, but that should be settled soon. Attending an MLSS is probably the quickest and easiest way to bootstrap yourself into a reasonable initial understanding of the field of machine learning.

6 0.46253216 322 hunch net-2008-10-20-New York’s ML Day

7 0.46197632 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

8 0.43206194 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

9 0.42399877 75 hunch net-2005-05-28-Running A Machine Learning Summer School

10 0.41425917 15 hunch net-2005-02-08-Some Links

11 0.37501544 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

12 0.34857476 469 hunch net-2012-07-09-Videolectures

13 0.34523782 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

14 0.29082653 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

15 0.28814867 273 hunch net-2007-11-16-MLSS 2008

16 0.26912597 357 hunch net-2009-05-30-Many ways to Learn this summer

17 0.25821581 487 hunch net-2013-07-24-ICML 2012 videos lost

18 0.25213975 249 hunch net-2007-06-21-Presentation Preparation

19 0.25100285 173 hunch net-2006-04-17-Rexa is live

20 0.2345648 4 hunch net-2005-01-26-Summer Schools


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(83, 0.717)]
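The single dominant topic weight above is the kind of sparse profile latent Dirichlet allocation (LDA) tends to assign to a very short post. A minimal sketch, again over an illustrative placeholder corpus:

```python
# Sketch: LDA document-topic weights over word counts.
# Corpus contents are placeholders, not the actual hunch.net posts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "live video of the machine learning class",
    "kernel learning workshop with kernel methods",
    "grounds for rejection in paper reviewing",
    "the machine learning department at Carnegie Mellon",
]

counts = CountVectorizer().fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Each row is a distribution over topics (rows sum to 1); a short post
# usually concentrates its mass on one topic, as in the profile above.
print(doc_topics[0])
```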

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93745053 261 hunch net-2007-08-28-Live ML Class

Introduction: Davor and Chunnan point out that MLSS 2007 in Tuebingen has live video for the majority of the world that is not there (heh).

2 0.67387629 321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning

Introduction: We’d like to invite hunch.net readers to participate in the NIPS 2008 workshop on kernel learning. While the main focus is on automatically learning kernels from data, we are also looking at the broader questions of feature selection, multi-task learning, and multi-view learning. There are no restrictions on the learning problem being addressed (regression, classification, etc.), and both theoretical and applied work will be considered. The deadline for submissions is October 24. More detail can be found here. Corinna Cortes, Arthur Gretton, Gert Lanckriet, Mehryar Mohri, Afshin Rostamizadeh

3 0.47057384 52 hunch net-2005-04-04-Grounds for Rejection

Introduction: It’s reviewing season right now, so I thought I would list (at a high level) the sorts of problems which I see in papers. Hopefully, this will help us all write better papers. The following flaws are fatal to any paper: Incorrect theorem or lemma statements A typo might be “ok”, if it can be understood. Any theorem or lemma which indicates an incorrect understanding of reality must be rejected. Not doing so would severely harm the integrity of the conference. A paper rejected for this reason must be fixed. Lack of Understanding If a paper is understood by none of the (typically 3) reviewers then it must be rejected for the same reason. This is more controversial than it sounds because there are some people who maximize paper complexity in the hope of impressing the reviewer. The tactic sometimes succeeds with some reviewers (but not with me). As a reviewer, I sometimes get lost for stupid reasons. This is why an anonymized communication channel with the author can

4 0.46781883 135 hunch net-2005-12-04-Watchword: model

Introduction: In everyday use a model is a system which explains the behavior of some system, hopefully at the level where some alteration of the model predicts some alteration of the real-world system. In machine learning “model” has several variant definitions. Everyday . The common definition is sometimes used. Parameterized . Sometimes model is a short-hand for “parameterized model”. Here, it refers to a model with unspecified free parameters. In the Bayesian learning approach, you typically have a prior over (everyday) models. Predictive . Even further from everyday use is the predictive model. Examples of this are “my model is a decision tree” or “my model is a support vector machine”. Here, there is no real sense in which an SVM explains the underlying process. For example, an SVM tells us nothing in particular about how alterations to the real-world system would create a change. Which definition is being used at any particular time is important information. For examp

5 0.33139193 228 hunch net-2007-01-15-The Machine Learning Department

Introduction: Carnegie Mellon School of Computer Science has the first academic Machine Learning department. This department already existed as the Center for Automated Learning and Discovery, but recently changed its name. The reason for changing the name is obvious: very few people think of themselves as “Automated Learners and Discoverers”, but there are a number of people who think of themselves as “Machine Learners”. Machine learning is both more succinct and recognizable—good properties for a name. A more interesting question is “Should there be a Machine Learning Department?”. Tom Mitchell has a relevant whitepaper claiming that machine learning is answering a different question than other fields or departments. The fundamental debate here is “Is machine learning different from statistics?” At a cultural level, there is no real debate: they are different. Machine learning is characterized by several very active large peer reviewed conferences, operating in a computer

6 0.22953749 98 hunch net-2005-07-27-Not goal metrics

7 0.043787528 290 hunch net-2008-02-27-The Stats Handicap

8 0.042883411 50 hunch net-2005-04-01-Basic computer science research takes a hit

9 0.041593511 147 hunch net-2006-01-08-Debugging Your Brain

10 0.03841912 345 hunch net-2009-03-08-Prediction Science

11 0.037832268 32 hunch net-2005-02-27-Antilearning: When proximity goes bad

12 0.03208714 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

13 0.030182971 112 hunch net-2005-09-14-The Predictionist Viewpoint

14 0.029050937 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning

15 0.02873062 315 hunch net-2008-09-03-Bidding Problems

16 0.026465831 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning

17 0.026439551 225 hunch net-2007-01-02-Retrospective

18 0.024566697 129 hunch net-2005-11-07-Prediction Competitions

19 0.024490563 324 hunch net-2008-11-09-A Healthy COLT

20 0.023449516 357 hunch net-2009-05-30-Many ways to Learn this summer