hunch_net-2005-144 knowledge-graph by maker-knowledge-mining

144 hunch net-2005-12-28-Yet more nips thoughts


meta information for this blog

Source: html

Introduction: I only managed to make it out to the NIPS workshops this year so I’ll give my comments on what I saw there. The Learning and Robotics workshop lives again. I hope it continues and gets more high quality papers in the future. The most interesting talk for me was Larry Jackel’s on the LAGR program (see John’s previous post on said program). I got some ideas as to what progress has been made. Larry really explained the types of benchmarks and the tradeoffs that had to be made to make the goals achievable but challenging. Hal Daume gave a very interesting talk about structured prediction using RL techniques, something near and dear to my own heart. He achieved rather impressive results using only a very greedy search. The non-parametric Bayes workshop was great. I enjoyed the entire morning session I spent there, and particularly (the usually desultory) discussion periods. One interesting topic was the Gibbs/Variational inference divide. I won’t try to summarize, especially as no conclusion was reached.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I only managed to make it out to the NIPS workshops this year so I’ll give my comments on what I saw there. [sent-1, score-0.125]

2 The Learning and Robotics workshop lives again. [sent-2, score-0.068]

3 The most interesting talk for me was Larry Jackel’s on the LAGR program (see John’s previous post on said program). [sent-4, score-0.197]

4 Larry really explained the types of benchmarks and the tradeoffs that had to be made to make the goals achievable but challenging. [sent-6, score-0.182]

5 Hal Daume gave a very interesting talk about structured prediction using RL techniques, something near and dear to my own heart. [sent-7, score-0.207]

6 I enjoyed the entire morning session I spent there, and particularly (the usually desultory) discussion periods. [sent-10, score-0.063]

7 One interesting topic was the Gibbs/Variational inference divide. [sent-11, score-0.08]

8 I won’t try to summarize, especially as no conclusion was reached. [sent-12, score-0.057]

9 It was interesting to note that samplers are competitive with the variational approaches for many Dirichlet process problems. [sent-13, score-0.143]

10 One open question I left with was whether the fast variants of Gibbs sampling could be made multi-processor as the naive variants can. [sent-14, score-0.184]

11 I also have a reading list of sorts from the main conference. [sent-15, score-0.059]

12 Most of the papers mentioned in previous posts on NIPS are on that list as well as these: (in no particular order) The Information-Form Data Association Filter, Sebastian Thrun, Brad Schumitsch, Gary Bradski, Kunle Olukotun [ps.gz][pdf][bibtex] [sent-16, score-0.117]

13 Divergences, surrogate loss functions and experimental design, XuanLong Nguyen, Martin Wainwright, Michael Jordan [ps.gz][pdf][bibtex] [sent-17, score-0.063]
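Scores like the sentScore column above can come from a simple extractive summarizer: weight each word by tfidf, score each sentence by its average weight, and keep the top sentences. Below is a minimal Python sketch of that idea; the scoring rule and function names are illustrative assumptions, since the exact pipeline behind these numbers is not documented here.

```python
# A minimal sketch of tfidf-based extractive summarization, assuming the
# scores above come from averaging per-word tfidf weights over each sentence.
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_sentences(sentences, top_k=13):
    """Return the top_k sentences by mean tfidf weight, in document order."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(sentences)                  # sparse (n_sentences, n_terms)
    # Mean nonzero tfidf weight per sentence; the epsilon guards empty rows.
    scores = X.sum(axis=1).A1 / (X.getnnz(axis=1) + 1e-9)
    top = scores.argsort()[::-1][:top_k]
    return [(i, round(float(scores[i]), 3), sentences[i]) for i in sorted(top)]
```

One design choice worth noting: fitting tfidf over the sentences of a single post down-weights words shared by many of its sentences, while fitting over the whole blog corpus (as the word weights in the next section suggest) would rank the sentences differently.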


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('bibtex', 0.615), ('pdf', 0.506), ('larry', 0.119), ('dirichlet', 0.109), ('variants', 0.092), ('interesting', 0.08), ('thrun', 0.068), ('benchmarks', 0.068), ('dear', 0.068), ('dynamical', 0.068), ('hertzmann', 0.068), ('sudderth', 0.068), ('unseen', 0.068), ('wang', 0.068), ('workshops', 0.068), ('lagr', 0.063), ('abbeel', 0.063), ('pieter', 0.063), ('association', 0.063), ('freeman', 0.063), ('gibbs', 0.063), ('jordan', 0.063), ('le', 0.063), ('morning', 0.063), ('nicolas', 0.063), ('niyogi', 0.063), ('partha', 0.063), ('subspace', 0.063), ('surrogate', 0.063), ('variational', 0.063), ('visual', 0.063), ('wainwright', 0.063), ('transformed', 0.06), ('bengio', 0.06), ('yoshua', 0.06), ('scenes', 0.06), ('sebastian', 0.06), ('olivier', 0.06), ('aaron', 0.06), ('alan', 0.06), ('william', 0.06), ('list', 0.059), ('talk', 0.059), ('previous', 0.058), ('summarize', 0.057), ('tradeoffs', 0.057), ('martin', 0.057), ('achievable', 0.057), ('saw', 0.057), ('nips', 0.055)]
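The (word, weight) pairs above look like the top terms of this post's tfidf vector over the blog corpus. A hedged gensim sketch of producing such a list follows; the three-post placeholder corpus, the whitespace tokenizer, and the post index are assumptions, not the original mining code.

```python
# A sketch of per-post (word, weight) lists with gensim's TfidfModel.
# The tiny corpus below is a placeholder; the real run would load
# every hunch.net post.
from gensim import corpora, models

all_posts = [
    "i only managed to make it out to the nips workshops this year",
    "maybe it is too early to call but neural networks are making a comeback",
    "i thought this was a very good nips with many excellent papers",
]
texts = [post.split() for post in all_posts]          # naive whitespace tokenizer
dictionary = corpora.Dictionary(texts)
bow_corpus = [dictionary.doc2bow(t) for t in texts]
tfidf = models.TfidfModel(bow_corpus)

post_vec = tfidf[bow_corpus[0]]                       # this post's tfidf vector
top_words = sorted(post_vec, key=lambda wt: -wt[1])[:50]
print([(dictionary[wid], round(w, 3)) for wid, w in top_words])
```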

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000004 144 hunch net-2005-12-28-Yet more nips thoughts


2 0.091279462 438 hunch net-2011-07-11-Interesting Neural Network Papers at ICML 2011

Introduction: Maybe it’s too early to call, but with four separate Neural Network sessions at this year’s ICML, it looks like Neural Networks are making a comeback. Here are my highlights of these sessions. In general, my feeling is that these papers both demystify deep learning and show its broader applicability. The first observation I made is that the once disreputable “Neural” nomenclature is being used again in lieu of “deep learning”. Maybe it’s because Adam Coates et al. showed that single layer networks can work surprisingly well. An Analysis of Single-Layer Networks in Unsupervised Feature Learning, Adam Coates, Honglak Lee, Andrew Y. Ng (AISTATS 2011) The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization, Adam Coates, Andrew Y. Ng (ICML 2011) Another surprising result out of Andrew Ng’s group comes from Andrew Saxe et al. who show that certain convolutional pooling architectures can obtain close to state-of-the-art pe

3 0.089738801 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

Introduction: Following up on Hal Daume’s post and John’s post on cool and interesting things seen at NIPS, I’ll post my own little list of neat papers here as well. Of course it’s going to be biased towards what I think is interesting. Also, I have to say that I hadn’t been able to see many papers this year at nips due to being too busy, so please feel free to contribute the papers that you liked. 1. P. Mudigonda, V. Kolmogorov, P. Torr. An Analysis of Convex Relaxations for MAP Estimation. A surprising paper which shows that many of the more sophisticated convex relaxations that had been proposed recently turn out to be subsumed by the simplest LP relaxation. Be careful next time you try a cool new convex relaxation! 2. D. Sontag, T. Jaakkola. New Outer Bounds on the Marginal Polytope. The title says it all. The marginal polytope is the set of local marginal distributions over subsets of variables that are globally consistent in the sense that there is at least one distributio

4 0.068852365 140 hunch net-2005-12-14-More NIPS Papers II

Introduction: I thought this was a very good NIPS with many excellent papers. The following are a few NIPS papers which I liked and I hope to study more carefully when I get the chance. The list is not exhaustive and in no particular order… Preconditioner Approximations for Probabilistic Graphical Models. Pradeep Ravikumar and John Lafferty. I thought the use of preconditioner methods from solving linear systems in the context of approximate inference was novel and interesting. The results look good and I’d like to understand the limitations. Rodeo: Sparse nonparametric regression in high dimensions. John Lafferty and Larry Wasserman. A very interesting approach to feature selection in nonparametric regression from a frequentist framework. The use of lengthscale variables in each dimension reminds me a lot of ‘Automatic Relevance Determination’ in Gaussian process regression — it would be interesting to compare Rodeo to ARD in GPs. Interpolating between types and tokens by estimating

5 0.062213469 188 hunch net-2006-06-30-ICML papers

Introduction: Here are some ICML papers which interested me. Arindam Banerjee had a paper which notes that PAC-Bayes bounds, a core theorem in online learning, and the optimality of Bayesian learning statements share a core inequality in their proof. Pieter Abbeel, Morgan Quigley and Andrew Y. Ng have a paper discussing RL techniques for learning given a bad (but not too bad) model of the world. Nina Balcan and Avrim Blum have a paper which discusses how to learn given a similarity function rather than a kernel. A similarity function requires less structure than a kernel, implying that a learning algorithm using a similarity function might be applied in situations where no effective kernel is evident. Nathan Ratliff, Drew Bagnell, and Marty Zinkevich have a paper describing an algorithm which attempts to fuse A* path planning with learning of transition costs based on human demonstration. Papers (2), (3), and (4) all seem like an initial pass at solving in

6 0.06031302 224 hunch net-2006-12-12-Interesting Papers at NIPS 2006

7 0.060201518 71 hunch net-2005-05-14-NIPS

8 0.059613228 46 hunch net-2005-03-24-The Role of Workshops

9 0.05914928 437 hunch net-2011-07-10-ICML 2011 and the future

10 0.059042044 212 hunch net-2006-10-04-Health of Conferences Wiki

11 0.058851726 474 hunch net-2012-10-18-7th Annual Machine Learning Symposium

12 0.056983661 141 hunch net-2005-12-17-Workshops as Franchise Conferences

13 0.056341611 414 hunch net-2010-10-17-Partha Niyogi has died

14 0.056194145 420 hunch net-2010-12-26-NIPS 2010

15 0.056189664 216 hunch net-2006-11-02-2006 NIPS workshops

16 0.055829838 443 hunch net-2011-09-03-Fall Machine Learning Events

17 0.054247621 385 hunch net-2009-12-27-Interesting things at NIPS 2009

18 0.052617289 285 hunch net-2008-01-23-Why Workshop?

19 0.052430365 444 hunch net-2011-09-07-KDD and MUCMD 2011

20 0.052275248 189 hunch net-2006-07-05-more icml papers
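The simValue numbers are consistent with cosine similarity between tfidf vectors: the same-blog entry scores essentially 1, and the 1.0000004 shown above is ordinary floating-point rounding past exact 1. Continuing the corpus objects from the tfidf sketch earlier, a minimal reconstruction of the ranking step might look like this (an assumption, not the actual mining code):

```python
# Cosine similarity of this post's tfidf vector against every post,
# reusing tfidf, bow_corpus, and dictionary from the sketch above.
from gensim import similarities

index = similarities.MatrixSimilarity(tfidf[bow_corpus],
                                      num_features=len(dictionary))
sims = index[tfidf[bow_corpus[0]]]                    # one score per post
for blog_idx, sim_value in sorted(enumerate(sims), key=lambda s: -s[1])[:20]:
    print(blog_idx, float(sim_value))                 # self-match prints ~1.0
```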


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.115), (1, -0.025), (2, -0.028), (3, -0.047), (4, 0.03), (5, 0.085), (6, 0.013), (7, -0.032), (8, 0.072), (9, -0.036), (10, 0.019), (11, 0.002), (12, -0.08), (13, -0.049), (14, 0.013), (15, -0.021), (16, -0.043), (17, 0.052), (18, 0.018), (19, -0.031), (20, -0.011), (21, -0.017), (22, -0.057), (23, -0.078), (24, -0.004), (25, -0.013), (26, -0.001), (27, 0.034), (28, 0.005), (29, 0.05), (30, 0.009), (31, 0.024), (32, 0.034), (33, -0.033), (34, 0.034), (35, 0.048), (36, -0.017), (37, 0.065), (38, -0.009), (39, -0.018), (40, 0.055), (41, 0.008), (42, 0.036), (43, -0.044), (44, 0.019), (45, -0.018), (46, -0.04), (47, 0.037), (48, -0.004), (49, 0.049)]
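The topicIds above run 0 through 49, so a 50-topic LSI decomposition fits the data; the negative weights are also a signature of LSI, whose SVD basis is unconstrained in sign. A short gensim sketch, reusing `tfidf`, `bow_corpus`, and `dictionary` from the tfidf sketch (num_topics=50 is inferred from the ids, not documented):

```python
# LSI over the tfidf corpus; objects continue from the tfidf sketch.
from gensim import models

lsi = models.LsiModel(tfidf[bow_corpus], id2word=dictionary, num_topics=50)
# Projecting the post gives (topicId, topicWeight) pairs; weights may be
# negative. With the tiny placeholder corpus the effective rank is below 50,
# so fewer topics actually come back.
print(lsi[tfidf[bow_corpus[0]]])
```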

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96827477 144 hunch net-2005-12-28-Yet more nips thoughts


2 0.65551364 139 hunch net-2005-12-11-More NIPS Papers

Introduction: Let me add to John’s post with a few of my own favourites from this year’s conference. First, let me say that Sanjoy’s talk, Coarse Sample Complexity Bounds for Active Learning, was also one of my favourites, as was the Forgettron paper. I also really enjoyed the last third of Christos’ talk on the complexity of finding Nash equilibria. And, speaking of tagging, I think the U.Mass Citeseer replacement system Rexa from the demo track is very cool. Finally, let me add my recommendations for specific papers: Z. Ghahramani, K. Heller: Bayesian Sets [no preprint] (A very elegant probabilistic information retrieval style model of which objects are “most like” a given subset of objects.) T. Griffiths, Z. Ghahramani: Infinite Latent Feature Models and the Indian Buffet Process [preprint] (A Dirichlet style prior over infinite binary matrices with beautiful exchangeability properties.) K. Weinberger, J. Blitzer, L. Saul: Distance Metric Lea

3 0.63069159 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three


4 0.59920931 140 hunch net-2005-12-14-More NIPS Papers II


5 0.57957155 189 hunch net-2006-07-05-more icml papers

Introduction: Here are a few other papers I enjoyed from ICML06. Topic Models: Dynamic Topic Models, David Blei, John Lafferty. A nice model for how topics in LDA type models can evolve over time, using a linear dynamical system on the natural parameters and a very clever structured variational approximation (in which the mean field parameters are pseudo-observations of a virtual LDS). Like all Blei papers, he makes it look easy, but it is extremely impressive. Pachinko Allocation, Wei Li, Andrew McCallum. A very elegant (but computationally challenging) model which induces correlation amongst topics using a multi-level DAG whose interior nodes are “super-topics” and “sub-topics” and whose leaves are the vocabulary words. Makes the slumbering monster of structure learning stir. Sequence Analysis (I missed these talks since I was chairing another session): Online Decoding of Markov Models with Latency Constraints, Mukund Narasimhan, Paul Viola, Michael Shilman. An “a

6 0.57272416 77 hunch net-2005-05-29-Maximum Margin Mismatch?

7 0.51902723 438 hunch net-2011-07-11-Interesting Neural Network Papers at ICML 2011

8 0.4887566 444 hunch net-2011-09-07-KDD and MUCMD 2011

9 0.48304757 185 hunch net-2006-06-16-Regularization = Robustness

10 0.46541658 192 hunch net-2006-07-08-Some recent papers

11 0.46172702 188 hunch net-2006-06-30-ICML papers

12 0.45734638 310 hunch net-2008-07-15-Interesting papers at COLT (and a bit of UAI & workshops)

13 0.45715833 216 hunch net-2006-11-02-2006 NIPS workshops

14 0.45261851 71 hunch net-2005-05-14-NIPS

15 0.44195661 113 hunch net-2005-09-19-NIPS Workshops

16 0.43967676 403 hunch net-2010-07-18-ICML & COLT 2010

17 0.42583042 340 hunch net-2009-01-28-Nielsen’s talk

18 0.41879028 177 hunch net-2006-05-05-An ICML reject

19 0.41827282 251 hunch net-2007-06-24-Interesting Papers at ICML 2007

20 0.41489181 277 hunch net-2007-12-12-Workshop Summary—Principles of Learning Problem Design


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.011), (10, 0.038), (13, 0.022), (27, 0.115), (34, 0.039), (37, 0.057), (38, 0.029), (49, 0.036), (53, 0.049), (55, 0.093), (64, 0.018), (90, 0.332), (94, 0.043), (95, 0.017)]
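In contrast with the LSI vector, these LDA weights are sparse and nonnegative, as a topic distribution must be. A hedged gensim sketch follows, again reusing `bow_corpus` and `dictionary` from the tfidf sketch; LDA is fed raw counts because it models word counts rather than tfidf weights, and the topic count is a guess from the ids visible above (up to 95, so at least roughly 100 topics).

```python
# LDA over raw counts; objects continue from the tfidf sketch. LDA consumes
# the bag-of-words corpus directly rather than tfidf weights.
from gensim import models

lda = models.LdaModel(bow_corpus, id2word=dictionary,
                      num_topics=100, passes=10, random_state=0)
# Sparse (topicId, topicWeight) pairs, thresholded like the list above.
print(lda.get_document_topics(bow_corpus[0], minimum_probability=0.01))
```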

similar blogs list:

simIndex simValue blogId blogTitle

1 0.94325215 340 hunch net-2009-01-28-Nielsen’s talk

Introduction: I wanted to point to Michael Nielsen’s talk about blogging science, which I found interesting.

2 0.92166358 323 hunch net-2008-11-04-Rise of the Machines

Introduction: On the enduring topic of how people deal with intelligent machines, we have this important election bulletin.

3 0.9039759 32 hunch net-2005-02-27-Antilearning: When proximity goes bad

Introduction: Joel Predd mentioned “Antilearning” by Adam Kowalczyk, which is interesting from a foundational intuitions viewpoint. There is a pervasive intuition that “nearby things tend to have the same label”. This intuition is instantiated in SVMs, nearest neighbor classifiers, decision trees, and neural networks. It turns out there are natural problems where this intuition is the opposite of the truth. One natural situation where this occurs is in competition. For example, when Intel fails to meet its earnings estimate, is this evidence that AMD is doing badly also? Or evidence that AMD is doing well? This violation of the proximity intuition means that when the number of examples is few, negating a classifier which attempts to exploit proximity can provide predictive power (thus, the term “antilearning”).

same-blog 4 0.83793032 144 hunch net-2005-12-28-Yet more nips thoughts


5 0.78705984 239 hunch net-2007-04-18-$50K Spock Challenge

Introduction: Apparently, the company Spock is setting up a $50k entity resolution challenge. $50k is much less than the Netflix challenge, but it’s effectively the same as Netflix until someone reaches 10%. It’s also nice that the Spock challenge has a short duration. The (visible) test set is of size 25k and the training set has size 75k.

6 0.76672137 139 hunch net-2005-12-11-More NIPS Papers

7 0.7296387 284 hunch net-2008-01-18-Datasets

8 0.66128874 333 hunch net-2008-12-27-Adversarial Academia

9 0.44155431 454 hunch net-2012-01-30-ICML Posters and Scope

10 0.43767878 437 hunch net-2011-07-10-ICML 2011 and the future

11 0.43600464 134 hunch net-2005-12-01-The Webscience Future

12 0.43164313 1 hunch net-2005-01-19-Why I decided to run a weblog.

13 0.43149886 95 hunch net-2005-07-14-What Learning Theory might do

14 0.4304049 194 hunch net-2006-07-11-New Models

15 0.42696124 452 hunch net-2012-01-04-Why ICML? and the summer conferences

16 0.42597795 40 hunch net-2005-03-13-Avoiding Bad Reviewing

17 0.42575511 343 hunch net-2009-02-18-Decision by Vetocracy

18 0.42571372 153 hunch net-2006-02-02-Introspectionism as a Disease

19 0.42440104 51 hunch net-2005-04-01-The Producer-Consumer Model of Research

20 0.42213696 116 hunch net-2005-09-30-Research in conferences