hunch_net hunch_net-2010 hunch_net-2010-405 knowledge-graph by maker-knowledge-mining

405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup


meta info for this blog

Source: html

Introduction: I’ve been wanting to attend the NYC ML Meetup for some time and hope to make it next week on the 25th. Rob Schapire is talking about “Playing Repeated Games”, which in my experience is far more relevant to machine learning than the title might indicate.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I’ve been wanting to attend the NYC ML Meetup for some time and hope to make it next week on the 25th. [sent-1, score-1.118]

2 Rob Schapire is talking about “Playing Repeated Games”, which in my experience is far more relevant to machine learning than the title might indicate. [sent-2, score-0.974]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('nyc', 0.33), ('rob', 0.288), ('indicate', 0.275), ('repeated', 0.275), ('meetup', 0.275), ('wanting', 0.264), ('games', 0.264), ('playing', 0.24), ('week', 0.233), ('schapire', 0.223), ('attend', 0.223), ('talking', 0.218), ('title', 0.213), ('ml', 0.149), ('relevant', 0.143), ('next', 0.14), ('far', 0.13), ('experience', 0.13), ('hope', 0.119), ('ve', 0.099), ('make', 0.073), ('might', 0.072), ('time', 0.066), ('machine', 0.048), ('learning', 0.02)]
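The (word, weight) pairs above come from an unspecified tf-idf pipeline. As a rough illustration of how such weights arise, here is a minimal pure-Python sketch; the toy corpus, tokenization, and normalization are invented for the example, so the numbers will not match the table.

```python
import math
from collections import Counter

def tfidf_scores(docs, doc_index):
    """L2-normalized tf-idf weights for one tokenized document.

    A minimal sketch: term frequency times log inverse document
    frequency, then L2-normalized. The original pipeline's exact
    weighting scheme is unknown.
    """
    tf = Counter(docs[doc_index])          # term counts in the target doc
    df = Counter()                         # document frequency per word
    for doc in docs:
        df.update(set(doc))
    n = len(docs)
    raw = {w: (tf[w] / len(docs[doc_index])) * math.log(n / df[w])
           for w in tf}
    norm = math.sqrt(sum(v * v for v in raw.values())) or 1.0
    return {w: v / norm for w, v in raw.items()}

# invented three-document corpus
corpus = [
    "rob schapire nyc ml meetup playing repeated games".split(),
    "ny ml symposium machine learning talks posters".split(),
    "machine learning theory bounds".split(),
]
scores = tfidf_scores(corpus, 0)
```

Words unique to the post (like "rob") get a higher idf than words shared across the corpus (like "ml"), which matches the general shape of the topN-words list above.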

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

Introduction: I’ve been wanting to attend the NYC ML Meetup for some time and hope to make it next week on the 25th. Rob Schapire is talking about “Playing Repeated Games”, which in my experience is far more relevant to machine learning than the title might indicate.

2 0.13200822 415 hunch net-2010-10-28-NY ML Symposium 2010

Introduction: About 200 people attended the 2010 NYAS ML Symposium this year. (It was about 170 last year.) I particularly enjoyed several talks. Yann has a new live demo of (limited) real-time object recognition learning. Sanjoy gave a fairly convincing and comprehensible explanation of why a modified form of single-linkage clustering is consistent in higher dimensions, and why consistency is a critical feature for clustering algorithms. I’m curious how well this algorithm works in practice. Matt Hoffman’s poster covering online LDA seemed pretty convincing to me as an algorithmic improvement. This year, we allocated more time towards posters & poster spotlights. For next year, we are considering some further changes. The format has traditionally been 4 invited Professor speakers, with posters and poster spotlight for students. Demand from other parties to participate is growing, for example from postdocs and startups in the area. Another growing concern is the fa

3 0.087785065 448 hunch net-2011-10-24-2011 ML symposium and the bears

Introduction: The New York ML symposium was last Friday. Attendance was 268, significantly larger than last year. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up. The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. By coincidence all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious

4 0.076847211 353 hunch net-2009-05-08-Computability in Artificial Intelligence

Introduction: Normally I do not blog, but John kindly invited me to do so. Since computability issues play a major role in Artificial Intelligence and Machine Learning, I would like to take the opportunity to comment on that and raise some questions. The general attitude is that AI is about finding efficient smart algorithms. For large parts of machine learning, the same attitude is not too dangerous. If you want to concentrate on conceptual problems, simply become a statistician. There is no analogous escape for modern research on AI (as opposed to GOFAI rooted in logic). Let me show by analogy why limiting research to computational questions is bad for any field. Except in computer science, computational aspects play little role in the development of fundamental theories: Consider e.g. set theory with axiom of choice, foundations of logic, exact/full minimax for zero-sum games, quantum (field) theory, string theory, … Indeed, at least in physics, every new fundamental theory seems to

5 0.076047815 439 hunch net-2011-08-01-Interesting papers at COLT 2011

Introduction: Since John did not attend COLT this year, I have been volunteered to report back on the hot stuff at this year’s meeting. The conference seemed to have pretty high quality stuff this year, and I found plenty of interesting papers on all the three days. I’m gonna pick some of my favorites going through the program in a chronological order. The first session on matrices seemed interesting for two reasons. First, the papers were quite nice. But more interestingly, this is a topic that has had a lot of presence in Statistics and Compressed sensing literature recently. So it was good to see high-dimensional matrices finally make their entry at COLT. The paper of Ohad and Shai on Collaborative Filtering with the Trace Norm: Learning, Bounding, and Transducing provides non-trivial guarantees on trace norm regularization in an agnostic setup, while Rina and Nati show how Rademacher averages can be used to get sharper results for matrix completion problems in their paper Concentr

6 0.075240076 4 hunch net-2005-01-26-Summer Schools

7 0.066952638 474 hunch net-2012-10-18-7th Annual Machine Learning Symposium

8 0.061461136 410 hunch net-2010-09-17-New York Area Machine Learning Events

9 0.057895392 315 hunch net-2008-09-03-Bidding Problems

10 0.057836734 459 hunch net-2012-03-13-The Submodularity workshop and Lucca Professorship

11 0.054754417 185 hunch net-2006-06-16-Regularization = Robustness

12 0.054126829 114 hunch net-2005-09-20-Workshop Proposal: Atomic Learning

13 0.052686125 313 hunch net-2008-08-18-Radford Neal starts a blog

14 0.051448613 91 hunch net-2005-07-10-Thinking the Unthought

15 0.050379582 443 hunch net-2011-09-03-Fall Machine Learning Events

16 0.049718399 112 hunch net-2005-09-14-The Predictionist Viewpoint

17 0.047624774 270 hunch net-2007-11-02-The Machine Learning Award goes to …

18 0.043875888 79 hunch net-2005-06-08-Question: “When is the right time to insert the loss function?”

19 0.043808848 356 hunch net-2009-05-24-2009 ICML discussion site

20 0.043164738 184 hunch net-2006-06-15-IJCAI is out of season


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.071), (1, -0.031), (2, -0.055), (3, 0.008), (4, -0.012), (5, 0.01), (6, -0.018), (7, -0.017), (8, -0.035), (9, -0.086), (10, 0.016), (11, -0.053), (12, 0.024), (13, 0.002), (14, -0.005), (15, 0.017), (16, 0.004), (17, 0.022), (18, 0.036), (19, 0.035), (20, -0.001), (21, -0.011), (22, -0.007), (23, 0.028), (24, 0.057), (25, -0.042), (26, -0.007), (27, -0.001), (28, 0.057), (29, 0.014), (30, -0.009), (31, -0.083), (32, -0.048), (33, 0.014), (34, -0.041), (35, 0.007), (36, -0.07), (37, 0.012), (38, -0.065), (39, -0.039), (40, 0.004), (41, -0.033), (42, -0.026), (43, 0.062), (44, 0.055), (45, -0.024), (46, 0.016), (47, 0.084), (48, 0.006), (49, 0.009)]
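The topicId/topicWeight vector above is a projection of the post onto latent dimensions found by LSI, i.e. a truncated SVD of the term-document matrix. As a hedged sketch of where one such dimension comes from, the following pure-Python code finds the dominant latent direction of a tiny invented count matrix by power iteration on A^T A (a stand-in for a real SVD routine; the actual model's matrix and dimensionality are not documented here).

```python
import math

def top_lsi_topic(td_matrix, iters=100):
    """Dominant latent-topic direction over documents via power
    iteration on A^T A -- a pure-Python stand-in for the truncated
    SVD that LSI performs. td_matrix is terms x documents."""
    n_terms, n_docs = len(td_matrix), len(td_matrix[0])
    v = [1.0 / math.sqrt(n_docs)] * n_docs       # uniform start vector
    for _ in range(iters):
        # w = A v  (project document weights onto term space)
        w = [sum(td_matrix[i][j] * v[j] for j in range(n_docs))
             for i in range(n_terms)]
        # v = A^T w  (back to document space), then renormalize
        v = [sum(td_matrix[i][j] * w[i] for i in range(n_terms))
             for j in range(n_docs)]
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    return v

# invented 3-terms x 3-documents count matrix
A = [
    [2.0, 1.0, 0.0],  # "ml"
    [1.0, 2.0, 0.0],  # "symposium"
    [0.0, 0.0, 1.0],  # "bandits"
]
topic = top_lsi_topic(A)  # leading topic loads equally on docs 0 and 1
```

Documents 0 and 1 share vocabulary, so the leading latent dimension weights them equally and ignores document 2, which is the kind of structure the per-topic weights above encode for this post.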

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9355967 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

Introduction: I’ve been wanting to attend the NYC ML Meetup for some time and hope to make it next week on the 25th. Rob Schapire is talking about “Playing Repeated Games”, which in my experience is far more relevant to machine learning than the title might indicate.

2 0.60109586 448 hunch net-2011-10-24-2011 ML symposium and the bears

Introduction: The New York ML symposium was last Friday. Attendance was 268, significantly larger than last year. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up. The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. By coincidence all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious

3 0.51267827 322 hunch net-2008-10-20-New York’s ML Day

Introduction: I’m not as naturally exuberant as Muthu 2 or David about CS/Econ day, but I believe it and ML day were certainly successful. At the CS/Econ day, I particularly enjoyed Tuomas Sandholm’s talk which showed a commanding depth of understanding and application in automated auctions. For the machine learning day, I enjoyed several talks and posters (I better, I helped pick them.). What stood out to me was the number of people attending: 158 registered, a level qualifying as “scramble to find seats”. My rule of thumb for workshops/conferences is that the number of attendees is often something like the number of submissions. That isn’t the case here, where there were just 4 invited speakers and 30-or-so posters. Presumably, the difference is due to a critical mass of Machine Learning interested people in the area and the ease of their attendance. Are there other areas where a local Machine Learning day would fly? It’s easy to imagine something working out in the San Franci

4 0.50350749 410 hunch net-2010-09-17-New York Area Machine Learning Events

Introduction: On Sept 21, there is another machine learning meetup where I’ll be speaking. Although the topic is contextual bandits, I think of it as “the future of machine learning”. In particular, it’s all about how to learn in an interactive environment, such as for ad display, trading, news recommendation, etc… On Sept 24, abstracts for the New York Machine Learning Symposium are due. This is the largest Machine Learning event in the area, so it’s a great way to have a conversation with other people. On Oct 22, the NY ML Symposium actually happens. This year, we are expanding the spotlights, and trying to have more time for posters. In addition, we have a strong set of invited speakers: David Blei, Sanjoy Dasgupta, Tommi Jaakkola, and Yann LeCun. After the meeting, a late hackNY related event is planned where students and startups can meet. I’d also like to point out the related CS/Econ symposium as I have interests there as well.

5 0.50070465 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It has been a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard to match up for other similar events (see our WIKI for more information). With this lineup, it is difficult for us as organizers to screw it up too bad. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoyed this two-week long party of machine learning. In the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

6 0.488778 313 hunch net-2008-08-18-Radford Neal starts a blog

7 0.48454109 415 hunch net-2010-10-28-NY ML Symposium 2010

8 0.48085433 164 hunch net-2006-03-17-Multitask learning is Black-Boxable

9 0.44488189 316 hunch net-2008-09-04-Fall ML Conferences

10 0.43403628 377 hunch net-2009-11-09-NYAS ML Symposium this year.

11 0.43112883 489 hunch net-2013-09-20-No NY ML Symposium in 2013, and some good news

12 0.42633122 402 hunch net-2010-07-02-MetaOptimize

13 0.4204351 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

14 0.39088908 357 hunch net-2009-05-30-Many ways to Learn this summer

15 0.38687477 491 hunch net-2013-11-21-Ben Taskar is gone

16 0.38646662 447 hunch net-2011-10-10-ML Symposium and ICML details

17 0.38621172 102 hunch net-2005-08-11-Why Manifold-Based Dimension Reduction Techniques?

18 0.38554376 474 hunch net-2012-10-18-7th Annual Machine Learning Symposium

19 0.38070261 77 hunch net-2005-05-29-Maximum Margin Mismatch?

20 0.36841857 412 hunch net-2010-09-28-Machined Learnings


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.076), (55, 0.093), (77, 0.654)]
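The LDA representation above is a sparse topic mixture for the post (the listed weights sum to less than one, presumably because low-weight topics are truncated). The simValue column in the list below plausibly comes from comparing two such sparse mixtures; here is a hypothetical sketch using cosine similarity — the pipeline's actual metric is not documented, and the second post's mixture is invented.

```python
import math

def topic_similarity(a, b):
    """Cosine similarity between two sparse topicId -> topicWeight maps.

    A hypothetical stand-in for the simValue computation; the original
    pipeline's actual similarity metric is unknown.
    """
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

this_post = {27: 0.076, 55: 0.093, 77: 0.654}   # weights from the table above
other_post = {27: 0.050, 77: 0.600, 12: 0.200}  # invented comparison post
sim = topic_similarity(this_post, other_post)
```

Posts dominated by the same topic (here, topic 77) score close to 1, while posts with disjoint topic support score 0, which matches the spread of simValues in the lists.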

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9305895 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

Introduction: I’ve been wanting to attend the NYC ML Meetup for some time and hope to make it next week on the 25th. Rob Schapire is talking about “Playing Repeated Games”, which in my experience is far more relevant to machine learning than the title might indicate.

2 0.87322021 436 hunch net-2011-06-22-Ultra LDA

Introduction: Shravan and Alex’s LDA code is released. On a single machine, I’m not sure how it currently compares to the online LDA in VW, but the ability to effectively scale across very many machines is surely interesting.

3 0.69114625 206 hunch net-2006-09-09-How to solve an NP hard problem in quadratic time

Introduction: This title is a lie, but it is a special lie which has a bit of truth. If n players each play each other, you have a tournament. How do you order the players from weakest to strongest? The standard first attempt is “find the ordering which agrees with the tournament on as many player pairs as possible”. This is called the “minimum feedback arcset” problem in the CS theory literature and it is a well known NP-hard problem. A basic guarantee holds for the solution to this problem: if there is some “true” intrinsic ordering, and the outcome of the tournament disagrees k times (due to noise for instance), then the output ordering will disagree with the original ordering on at most 2k edges (and no solution can be better). One standard approach to tractably solving an NP-hard problem is to find another algorithm with an approximation guarantee. For example, Don Coppersmith, Lisa Fleischer and Atri Rudra proved that ordering players according to the number of wins is

4 0.5706467 165 hunch net-2006-03-23-The Approximation Argument

Introduction: An argument is sometimes made that the Bayesian way is the “right” way to do machine learning. This is a serious argument which deserves a serious reply. The approximation argument is a serious reply for which I have not yet seen a reply 2. The idea for the Bayesian approach is quite simple, elegant, and general. Essentially, you first specify a prior P(D) over possible processes D producing the data, observe the data, then condition on the data according to Bayes law to construct a posterior: P(D|x) = P(x|D)P(D)/P(x) After this, hard decisions are made (such as “turn left” or “turn right”) by choosing the one which minimizes the expected (with respect to the posterior) loss. This basic idea is reused thousands of times with various choices of P(D) and loss functions which is unsurprising given the many nice properties: There is an extremely strong associated guarantee: If the actual distribution generating the data is drawn from P(D) there is no better method.

5 0.51949859 269 hunch net-2007-10-24-Contextual Bandits

Introduction: One of the fundamental underpinnings of the internet is advertising based content. This has become much more effective due to targeted advertising where ads are specifically matched to interests. Everyone is familiar with this, because everyone uses search engines and all search engines try to make money this way. The problem of matching ads to interests is a natural machine learning problem in some ways since there is much information in who clicks on what. A fundamental problem with this information is that it is not supervised—in particular a click-or-not on one ad doesn’t generally tell you if a different ad would have been clicked on. This implies we have a fundamental exploration problem. A standard mathematical setting for this situation is “k-Armed Bandits”, often with various relevant embellishments. The k-Armed Bandit setting works on a round-by-round basis. On each round: A policy chooses arm a from 1 of k arms (i.e. 1 of k ads). The world reveals t

6 0.48289084 375 hunch net-2009-10-26-NIPS workshops

7 0.44043744 388 hunch net-2010-01-24-Specializations of the Master Problem

8 0.42338872 317 hunch net-2008-09-12-How do we get weak action dependence for learning with partial observations?

9 0.35675865 392 hunch net-2010-03-26-A Variance only Deviation Bound

10 0.30222106 60 hunch net-2005-04-23-Advantages and Disadvantages of Bayesian Learning

11 0.29147372 118 hunch net-2005-10-07-On-line learning of regular decision rules

12 0.28949678 100 hunch net-2005-08-04-Why Reinforcement Learning is Important

13 0.2548086 410 hunch net-2010-09-17-New York Area Machine Learning Events

14 0.24778429 169 hunch net-2006-04-05-What is state?

15 0.23997223 259 hunch net-2007-08-19-Choice of Metrics

16 0.23588982 185 hunch net-2006-06-16-Regularization = Robustness

17 0.22533551 99 hunch net-2005-08-01-Peekaboom

18 0.22527787 293 hunch net-2008-03-23-Interactive Machine Learning

19 0.22025803 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control

20 0.21790949 220 hunch net-2006-11-27-Continuizing Solutions