hunch_net-2008-322 knowledge-graph by maker-knowledge-mining

322 hunch net-2008-10-20-New York’s ML Day


meta info for this blog

Source: html

Introduction: I’m not as naturally exuberant as Muthu 2 or David about CS/Econ day, but I believe it and ML day were certainly successful. At the CS/Econ day, I particularly enjoyed Tuomas Sandholm’s talk, which showed a commanding depth of understanding and application in automated auctions. For the machine learning day, I enjoyed several talks and posters (I’d better, since I helped pick them). What stood out to me was the number of people attending: 158 registered, a level qualifying as “scramble to find seats”. My rule of thumb for workshops/conferences is that the number of attendees is often something like the number of submissions. That isn’t the case here, where there were just 4 invited speakers and 30-or-so posters. Presumably, the difference is due to a critical mass of Machine Learning-interested people in the area and the ease of their attendance. Are there other areas where a local Machine Learning day would fly? It’s easy to imagine something working out in the San Francisco bay area and possibly Germany or England. The basic formula for the ML day is that a committee picks a few people to give talks, and posters are invited, with some of them providing short presentations. The CS/Econ day was similar, except they managed to let every submitter do a presentation. Are there tweaks to the format which would improve things?


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I’m not as naturally exuberant as Muthu 2 or David about CS/Econ day, but I believe it and ML day were certainly successful. [sent-1, score-0.604]

2 At the CS/Econ day, I particularly enjoyed Tuomas Sandholm’s talk, which showed a commanding depth of understanding and application in automated auctions. [sent-2, score-0.626]

3 For the machine learning day, I enjoyed several talks and posters (I’d better, since I helped pick them). [sent-3, score-0.704]

4 What stood out to me was the number of people attending: 158 registered, a level qualifying as “scramble to find seats”. [sent-5, score-0.376]

5 My rule of thumb for workshops/conferences is that the number of attendees is often something like the number of submissions. [sent-6, score-0.627]

6 That isn’t the case here, where there were just 4 invited speakers and 30-or-so posters. [sent-7, score-0.254]

7 Presumably, the difference is due to a critical mass of Machine Learning-interested people in the area and the ease of their attendance. [sent-8, score-0.475]

8 Are there other areas where a local Machine Learning day would fly? [sent-9, score-0.776]

9 It’s easy to imagine something working out in the San Francisco bay area and possibly Germany or England. [sent-10, score-0.433]

10 The basic formula for the ML day is that a committee picks a few people to give talks, and posters are invited, with some of them providing short presentations. [sent-11, score-1.159]

11 The CS/Econ day was similar, except they managed to let every submitter do a presentation. [sent-12, score-0.699]

12 Are there tweaks to the format which would improve things? [sent-13, score-0.364]
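
The scoring rule behind the sentScore values above is not documented on this page. Below is a minimal sketch of one plausible rule, assuming scikit-learn: score each sentence by the summed tfidf weight of its terms. The example sentences, and the rule itself, are assumptions rather than the actual maker-knowledge-mining code.

```python
# Hypothetical sketch of tfidf sentence scoring; the actual rule behind
# sentScore is not documented here.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "I enjoyed several talks and posters at the machine learning day.",
    "158 registered, a level qualifying as scramble to find seats.",
    "Are there tweaks to the format which would improve things?",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)  # rows are L2-normalized tfidf vectors
scores = X.sum(axis=1).A1                # one summed-weight score per sentence

# Rank sentences by score, highest first, as in the summary above.
for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent}")
```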


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('day', 0.531), ('posters', 0.212), ('enjoyed', 0.162), ('invited', 0.158), ('talks', 0.155), ('commanding', 0.145), ('qualifying', 0.145), ('scramble', 0.145), ('depth', 0.135), ('francisco', 0.135), ('thumb', 0.135), ('ml', 0.131), ('germany', 0.127), ('picks', 0.127), ('tweaks', 0.127), ('muthu', 0.127), ('bay', 0.127), ('stood', 0.127), ('registered', 0.121), ('area', 0.119), ('formula', 0.116), ('fly', 0.116), ('attendees', 0.112), ('san', 0.112), ('presumably', 0.106), ('number', 0.104), ('mass', 0.103), ('ease', 0.103), ('possibly', 0.098), ('showed', 0.098), ('speakers', 0.096), ('attending', 0.094), ('managed', 0.092), ('committee', 0.091), ('something', 0.089), ('helped', 0.089), ('areas', 0.089), ('format', 0.087), ('automated', 0.086), ('pick', 0.086), ('rule', 0.083), ('local', 0.083), ('providing', 0.082), ('david', 0.078), ('improve', 0.077), ('except', 0.076), ('critical', 0.076), ('difference', 0.074), ('certainly', 0.073), ('would', 0.073)]
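
A minimal sketch, assuming scikit-learn, of how per-post (word, tfidf) weights like those above and the simValue cosine similarities in the list below could be produced. The post texts here are stand-in snippets, and the exact pipeline (tokenizer, stop list, weighting variant) is an assumption.

```python
# Hypothetical reconstruction of the tfidf weighting and similarity step;
# the real pipeline behind this page is not specified.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "machine learning day talks posters invited speakers committee",  # 322
    "machine learning summer school speakers participants survey",    # 203
    "gradient descent step size neural networks",                     # 111
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

# Top-weighted terms for the first post, analogous to the (word, tfidf)
# pairs listed above.
terms = vectorizer.get_feature_names_out()
weights = X[0].toarray().ravel()
for word, w in sorted(zip(terms, weights), key=lambda t: -t[1])[:10]:
    print(f"('{word}', {w:.3f})")

# Pairwise cosine similarity matrix, analogous to the simValue column below.
print(cosine_similarity(X).round(3))
```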

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 322 hunch net-2008-10-20-New York’s ML Day

2 0.15681313 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It was a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard to match at other similar events (see our WIKI for more information). With this lineup, it was difficult for us as organizers to screw it up too badly. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoy this two-week-long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

3 0.1332525 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

4 0.11987488 447 hunch net-2011-10-10-ML Symposium and ICML details

Introduction: Everyone should have received notice for NY ML Symposium abstracts. Check carefully, as one was lost by our system. The event itself is October 21, next week. Leon Bottou, Stephen Boyd, and Yoav Freund are giving the invited talks this year, and there are many spotlights on local work spread throughout the day. Chris Wiggins has set up 6(!) ML-interested startups to follow the symposium, which should be of substantial interest to the employment interested. I also wanted to give an update on ICML 2012. Unlike last year, our deadline is coordinated with AIStat (which is due this Friday). The paper deadline for ICML has been pushed back to February 24, which should allow significant time for finishing up papers after the winter break. Other details may interest people as well: We settled on using CMT after checking out the possibilities. I wasn’t looking for this, because I’ve often found CMT clunky in terms of easy access to the right information. Nevert

5 0.11956178 151 hunch net-2006-01-25-1 year

Introduction: At the one year (+5 days) anniversary, the natural question is: “Was it helpful for research?” Answer: Yes, and so it shall continue. Some evidence is provided by noticing that I am about a factor of 2 more overloaded with paper ideas than I’ve ever previously been. It is always hard to estimate counterfactual worlds, but I expect that this is also a factor of 2 more than “What if I had not started the blog?” As for “Why?”, there seem to be two primary effects. A blog is a mechanism for connecting with people who either think like you or are interested in the same problems. This allows for concentration of thinking which is very helpful in solving problems. The process of stating things you don’t understand publicly is very helpful in understanding them. Sometimes you are simply forced to express them in a way which aids understanding. Sometimes someone else says something which helps. And sometimes you discover that someone else has already solved the problem. The

6 0.11252384 240 hunch net-2007-04-21-Videolectures.net

7 0.11108496 316 hunch net-2008-09-04-Fall ML Conferences

8 0.10877547 481 hunch net-2013-04-15-NEML II

9 0.1062953 415 hunch net-2010-10-28-NY ML Symposium 2010

10 0.098366901 75 hunch net-2005-05-28-Running A Machine Learning Summer School

11 0.097794347 462 hunch net-2012-04-20-Both new: STOC workshops and NEML

12 0.082703508 377 hunch net-2009-11-09-NYAS ML Symposium this year.

13 0.077281557 443 hunch net-2011-09-03-Fall Machine Learning Events

14 0.075166076 369 hunch net-2009-08-27-New York Area Machine Learning Events

15 0.073246628 378 hunch net-2009-11-15-The Other Online Learning

16 0.072859146 474 hunch net-2012-10-18-7th Annual Machine Learning Symposium

17 0.072647981 448 hunch net-2011-10-24-2011 ML symposium and the bears

18 0.072134688 470 hunch net-2012-07-17-MUCMD and BayLearn

19 0.072080731 406 hunch net-2010-08-22-KDD 2010

20 0.071018562 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.147), (1, -0.074), (2, -0.092), (3, -0.005), (4, -0.003), (5, 0.064), (6, -0.008), (7, -0.039), (8, -0.065), (9, -0.18), (10, 0.031), (11, -0.026), (12, 0.101), (13, 0.016), (14, 0.028), (15, -0.015), (16, 0.044), (17, 0.069), (18, 0.066), (19, 0.088), (20, -0.01), (21, -0.006), (22, 0.026), (23, -0.045), (24, 0.004), (25, -0.019), (26, -0.006), (27, 0.003), (28, 0.012), (29, -0.032), (30, -0.104), (31, -0.102), (32, 0.057), (33, 0.007), (34, 0.018), (35, -0.046), (36, -0.059), (37, 0.019), (38, 0.029), (39, -0.106), (40, 0.029), (41, -0.097), (42, 0.027), (43, -0.082), (44, 0.068), (45, 0.067), (46, -0.003), (47, 0.025), (48, -0.047), (49, 0.007)]
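
A minimal sketch of the LSI step, assuming scikit-learn: tfidf features reduced by a truncated SVD, then cosine similarity in the latent space. The topicIds 0–49 above suggest a 50-dimensional model, but that number, like the rest of the pipeline, is an assumption.

```python
# Hypothetical LSI-style similarity: tfidf -> truncated SVD -> cosine.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "machine learning day talks posters invited speakers committee",
    "machine learning summer school speakers participants survey",
    "gradient descent step size neural networks",
]

X = TfidfVectorizer(stop_words="english").fit_transform(posts)

# The listing above has topicIds 0..49, i.e. a 50-dimensional latent space;
# cap the rank so this toy corpus still runs.
k = min(50, X.shape[1] - 1)
Z = TruncatedSVD(n_components=k, random_state=0).fit_transform(X)

# Each row of Z is a (topicId, topicWeight) vector for one post; the
# simValue entries below correspond to cosine similarity between rows.
print(cosine_similarity(Z).round(3))
```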

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96291 322 hunch net-2008-10-20-New York’s ML Day

2 0.70624977 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It has been a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard to match up for other similar events (see our WIKI for more information). With this lineup, it is difficult for us as organizers to screw it up too bad. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoyed this two-week long party of machine learning. In the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

3 0.62582123 415 hunch net-2010-10-28-NY ML Symposium 2010

Introduction: About 200 people attended the 2010 NYAS ML Symposium this year. (It was about 170 last year.) I particularly enjoyed several talks. Yann has a new live demo of (limited) real-time object recognition learning. Sanjoy gave a fairly convincing and comprehensible explanation of why a modified form of single-linkage clustering is consistent in higher dimensions, and why consistency is a critical feature for clustering algorithms. I’m curious how well this algorithm works in practice. Matt Hoffman’s poster covering online LDA seemed pretty convincing to me as an algorithmic improvement. This year, we allocated more time towards posters & poster spotlights. For next year, we are considering some further changes. The format has traditionally been 4 invited Professor speakers, with posters and poster spotlight for students. Demand from other parties to participate is growing, for example from postdocs and startups in the area. Another growing concern is the fa

4 0.57120955 75 hunch net-2005-05-28-Running A Machine Learning Summer School

Introduction: We just finished the Chicago 2005 Machine Learning Summer School. The school was 2 weeks long with about 130 (or 140 counting the speakers) participants. For perspective, this is perhaps the largest graduate level machine learning class I am aware of anywhere and anytime (previous MLSSs have been close). Overall, it seemed to go well, although the students are the real authority on this. For those who missed it, DVDs will be available from our Slovenian friends. Email Mrs Spela Sitar of the Jozsef Stefan Institute for details. The following are some notes for future planning and those interested. Good Decisions Acquiring the larger-than-necessary “Assembly Hall” at International House. Our attendance came in well above our expectations, so this was a critical early decision that made a huge difference. The invited speakers were key. They made a huge difference in the quality of the content. Delegating early and often was important. One key difficulty here

5 0.5408892 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

Introduction: I’ve been wanting to attend the NYC ML Meetup for some time and hope to make it next week on the 25th. Rob Schapire is talking about “Playing Repeated Games”, which in my experience is far more relevant to machine learning than the title might indicate.

6 0.51948792 474 hunch net-2012-10-18-7th Annual Machine Learning Symposium

7 0.51572502 410 hunch net-2010-09-17-New York Area Machine Learning Events

8 0.51398373 377 hunch net-2009-11-09-NYAS ML Symposium this year.

9 0.50814122 261 hunch net-2007-08-28-Live ML Class

10 0.50716889 448 hunch net-2011-10-24-2011 ML symposium and the bears

11 0.50491917 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

12 0.49775288 316 hunch net-2008-09-04-Fall ML Conferences

13 0.48133403 447 hunch net-2011-10-10-ML Symposium and ICML details

14 0.46894041 88 hunch net-2005-07-01-The Role of Impromptu Talks

15 0.46109512 369 hunch net-2009-08-27-New York Area Machine Learning Events

16 0.44478112 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

17 0.440936 479 hunch net-2013-01-31-Remote large scale learning class participation

18 0.43880066 443 hunch net-2011-09-03-Fall Machine Learning Events

19 0.43293381 481 hunch net-2013-04-15-NEML II

20 0.42963052 469 hunch net-2012-07-09-Videolectures


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(27, 0.15), (38, 0.075), (53, 0.083), (55, 0.098), (94, 0.064), (95, 0.022), (98, 0.399)]
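
A minimal sketch of the LDA step, assuming scikit-learn: raw term counts, a latent Dirichlet allocation fit, then similarity between per-post topic mixtures. The topicIds above run up to 98, hinting at a model with roughly 100 topics; that count, like the training corpus, is a guess.

```python
# Hypothetical LDA-based similarity: counts -> LDA topic mixtures -> cosine.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "machine learning day talks posters invited speakers committee",
    "machine learning summer school speakers participants survey",
    "gradient descent step size neural networks",
]

counts = CountVectorizer(stop_words="english").fit_transform(posts)

# topicIds up to 98 in the listing above suggest ~100 topics; an assumption.
lda = LatentDirichletAllocation(n_components=100, random_state=0)
theta = lda.fit_transform(counts)  # per-post topic mixtures, rows sum to 1

# The sparse (topicId, topicWeight) pairs above correspond to the larger
# entries of one row of theta; simValue below is similarity between rows.
print(cosine_similarity(theta).round(3))
```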

similar blogs list:

simIndex simValue blogId blogTitle

1 0.91577488 167 hunch net-2006-03-27-Gradients everywhere

Introduction: One of the basic observations from the atomic learning workshop is that gradient-based optimization is pervasive. For example, at least 7 (of 12) speakers used the word ‘gradient’ in their talk and several others may be approximating a gradient. The essential useful quality of a gradient is that it decouples local updates from global optimization. Restated: Given a gradient, we can determine how to change individual parameters of the system so as to improve overall performance. It’s easy to feel depressed about this and think “nothing has happened”, but that appears untrue. Many of the talks were about clever techniques for computing gradients where your calculus textbook breaks down. Sometimes there are clever approximations of the gradient. (Simon Osindero) Sometimes we can compute constrained gradients via iterated gradient/project steps. (Ben Taskar) Sometimes we can compute gradients anyways over mildly nondifferentiable functions. (Drew Bagnell) Even give

same-blog 2 0.90928543 322 hunch net-2008-10-20-New York’s ML Day

3 0.89050949 211 hunch net-2006-10-02-$1M Netflix prediction contest

Introduction: Netflix is running a contest to improve recommender prediction systems. A 10% improvement over their current system yields a $1M prize. Failing that, the best smaller improvement yields a smaller $50K prize. This contest looks quite real, and the $50K prize money is almost certainly achievable with a bit of thought. The contest also comes with a dataset which is apparently 2 orders of magnitude larger than any other public recommendation system datasets.

4 0.82865953 231 hunch net-2007-02-10-Best Practices for Collaboration

Introduction: Many people, especially students, haven’t had an opportunity to collaborate with other researchers. Collaboration, especially with remote people, can be tricky. Here are some observations of what has worked for me on collaborations involving a few people. Travel and Discuss Almost all collaborations start with in-person discussion. This implies that travel is often necessary. We can hope that in the future we’ll have better systems for starting collaborations remotely (such as blogs), but we aren’t quite there yet. Enable your collaborator. A collaboration can fall apart because one collaborator disables another. This sounds stupid (and it is), but it’s far easier than you might think. Avoid Duplication. Discovering that you and a collaborator have been editing the same thing and now need to waste time reconciling changes is annoying. The best way to avoid this is to be explicit about who has write permission to what. Most of the time, a write lock is held for the e

5 0.79865646 111 hunch net-2005-09-12-Fast Gradient Descent

Introduction: Nic Schraudolph has been developing a fast gradient descent algorithm called Stochastic Meta-Descent (SMD). Gradient descent is currently untrendy in the machine learning community, but there remains a large number of people using gradient descent on neural networks or other architectures from when it was trendy in the early 1990s. There are three problems with gradient descent. Gradient descent does not necessarily produce easily reproduced results. Typical algorithms start with “set the initial parameters to small random values”. The design of the representation that gradient descent is applied to is often nontrivial. In particular, knowing exactly how to build a large neural network so that it will perform well requires knowledge which has not been made easily applicable. Gradient descent can be slow. Obviously, taking infinitesimal steps in the direction of the gradient would take forever, so some finite step size must be used. What exactly this step size should be

6 0.54399157 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

7 0.46355233 437 hunch net-2011-07-10-ICML 2011 and the future

8 0.46009257 297 hunch net-2008-04-22-Taking the next step

9 0.4599337 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

10 0.45785654 131 hunch net-2005-11-16-The Everything Ensemble Edge

11 0.45746753 19 hunch net-2005-02-14-Clever Methods of Overfitting

12 0.45539501 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

13 0.4544909 403 hunch net-2010-07-18-ICML & COLT 2010

14 0.45361707 44 hunch net-2005-03-21-Research Styles in Machine Learning

15 0.4533051 95 hunch net-2005-07-14-What Learning Theory might do

16 0.4510445 233 hunch net-2007-02-16-The Forgetting

17 0.45017844 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

18 0.44946781 423 hunch net-2011-02-02-User preferences for search engines

19 0.44935638 134 hunch net-2005-12-01-The Webscience Future

20 0.44933462 382 hunch net-2009-12-09-Future Publication Models @ NIPS