hunch_net hunch_net-2005 hunch_net-2005-139 knowledge-graph by maker-knowledge-mining

139 hunch net-2005-12-11-More NIPS Papers


meta info for this blog

Source: html

Introduction: Let me add to John’s post with a few of my own favourites from this year’s conference. First, let me say that Sanjoy’s talk, Coarse Sample Complexity Bounds for Active Learning, was also one of my favourites, as was the Forgettron paper. I also really enjoyed the last third of Christos’ talk on the complexity of finding Nash equilibria. And, speaking of tagging, I think the U.Mass Citeseer replacement system Rexa from the demo track is very cool. Finally, let me add my recommendations for specific papers: Z. Ghahramani, K. Heller: Bayesian Sets [no preprint] (A very elegant probabilistic information retrieval style model of which objects are “most like” a given subset of objects.) T. Griffiths, Z. Ghahramani: Infinite Latent Feature Models and the Indian Buffet Process [preprint] (A Dirichlet style prior over infinite binary matrices with beautiful exchangeability properties.) K. Weinberger, J. Blitzer, L. Saul: Distance Metric Lea


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Let me add to John’s post with a few of my own favourites from this year’s conference. [sent-1, score-0.434]

2 First, let me say that Sanjoy’s talk, Coarse Sample Complexity Bounds for Active Learning, was also one of my favourites, as was the Forgettron paper. [sent-2, score-0.149]

3 I also really enjoyed the last third of Christos’ talk on the complexity of finding Nash equilibria. [sent-3, score-0.223]

4 U.Mass Citeseer replacement system Rexa from the demo track is very cool. [sent-5, score-0.107]

5 Finally, let me add my recommendations for specific papers: [sent-6, score-0.361]

6 Z. Ghahramani, K. Heller: Bayesian Sets [no preprint] (A very elegant probabilistic information retrieval style model of which objects are “most like” a given subset of objects.) [sent-8, score-0.349]

7 T. Griffiths, Z. Ghahramani: Infinite Latent Feature Models and the Indian Buffet Process [preprint] (A Dirichlet style prior over infinite binary matrices with beautiful exchangeability properties.) [sent-11, score-1.042]

8 D. Blei, J. Lafferty: Correlated Topic Models [preprint] (Nice trick using the lognormal to induce correlations on the simplex, applied to topic models for text.) [sent-20, score-0.832]

9 I’ll also post in the comments a list of other papers that caught my eye but which I haven’t looked at closely enough to be able to out-and-out recommend. [sent-21, score-0.44]
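The sentence scores above come from a tfidf model. As a hedged illustration, here is a minimal sketch of one way such scores could be produced, assuming a sentence's score is simply the sum of its words' tfidf weights; the actual pipeline behind these numbers is not documented in this dump, and `score_sentences` and its inputs are hypothetical.

```python
import re

def score_sentences(sentences, tfidf_weights):
    """Rank sentences by the total tfidf weight of their words (hypothetical scoring)."""
    scored = []
    for idx, sent in enumerate(sentences, start=1):
        words = re.findall(r"[a-z']+", sent.lower())
        score = sum(tfidf_weights.get(w, 0.0) for w in words)
        scored.append((idx, score, sent))
    # The highest-scoring sentences form the extractive summary shown above.
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Tiny example reusing a few weights from the tfidf table below:
weights = {"favourites": 0.242, "let": 0.149, "add": 0.112, "post": 0.08}
sents = ["Let me add to John's post with a few of my own favourites."]
print(score_sentences(sents, weights))
```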


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('preprint', 0.484), ('favourites', 0.242), ('ghahramani', 0.215), ('nice', 0.185), ('nearby', 0.179), ('infinite', 0.172), ('let', 0.149), ('talk', 0.14), ('models', 0.132), ('add', 0.112), ('topic', 0.109), ('exchangeability', 0.107), ('blitzer', 0.107), ('demo', 0.107), ('griffiths', 0.107), ('induce', 0.107), ('style', 0.103), ('nash', 0.1), ('eye', 0.1), ('citeseer', 0.1), ('transformation', 0.1), ('indian', 0.1), ('lafferty', 0.1), ('latent', 0.1), ('recommendations', 0.1), ('rexa', 0.1), ('sends', 0.1), ('differing', 0.094), ('blei', 0.094), ('correlated', 0.094), ('looked', 0.094), ('retrieval', 0.094), ('feature', 0.092), ('coarse', 0.09), ('tagging', 0.09), ('matrices', 0.09), ('closer', 0.09), ('points', 0.087), ('beautiful', 0.086), ('caught', 0.086), ('dirichlet', 0.086), ('brings', 0.083), ('complexity', 0.083), ('closely', 0.08), ('margin', 0.08), ('post', 0.08), ('elegant', 0.078), ('distance', 0.074), ('objects', 0.074), ('recommend', 0.074)]
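As a hedged sketch, per-word weights like those above could be computed with scikit-learn's TfidfVectorizer; the corpus, tokenization, and normalization used by the original pipeline are assumptions, and the toy documents below are stand-ins for the real blog archive.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy two-document corpus standing in for the blog archive (assumption).
corpus = [
    "let me add to johns post with a few of my own favourites",
    "here are a few other papers i enjoyed from icml06",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(corpus)

# Top-weighted terms for the first document, analogous to the table above.
row = X[0].toarray().ravel()
terms = vec.get_feature_names_out()
print(sorted(zip(terms, row.round(3)), key=lambda t: -t[1])[:10])
```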

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999982 139 hunch net-2005-12-11-More NIPS Papers


2 0.11447069 189 hunch net-2006-07-05-more icml papers

Introduction: Here are a few other papers I enjoyed from ICML06. Topic Models: Dynamic Topic Models David Blei, John Lafferty A nice model for how topics in LDA type models can evolve over time, using a linear dynamical system on the natural parameters and a very clever structured variational approximation (in which the mean field parameters are pseudo-observations of a virtual LDS). Like all Blei papers, he makes it look easy, but it is extremely impressive. Pachinko Allocation Wei Li, Andrew McCallum A very elegant (but computationally challenging) model which induces correlation amongst topics using a multi-level DAG whose interior nodes are “super-topics” and “sub-topics” and whose leaves are the vocabulary words. Makes the slumbering monster of structure learning stir. Sequence Analysis (I missed these talks since I was chairing another session) Online Decoding of Markov Models with Latency Constraints Mukund Narasimhan, Paul Viola, Michael Shilman An “a

3 0.11011077 8 hunch net-2005-02-01-NIPS: Online Bayes

Introduction: One nice use for this blog is to consider and discuss papers that have appeared at recent conferences. I really enjoyed Andrew Ng and Sham Kakade’s paper Online Bounds for Bayesian Algorithms. From the paper: The philosophy taken in the Bayesian methodology is often at odds with that in the online learning community…. the online learning setting makes rather minimal assumptions on the conditions under which the data are being presented to the learner — usually, Nature could provide examples in an adversarial manner. We study the performance of Bayesian algorithms in a more adversarial setting… We provide competitive bounds when the cost function is the log loss, and we compare our performance to the best model in our model class (as in the experts setting). It’s a very nice analysis of some of my favorite algorithms that all hinges around a beautiful theorem: Let Q be any distribution over parameters theta. Then for all sequences S: L_{Bayes}(S) ≤ L_Q(S)
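The inequality quoted above is cut off by the excerpt. For reference, and hedged as my recollection of the paper rather than something stated in this excerpt, the competitive bound has the form L_{Bayes}(S) ≤ L_Q(S) + KL(Q‖P), where P is the prior over parameters, Q is any comparator distribution over theta, and L_Q(S) is the expected log loss under Q; the exact statement should be checked against the paper itself.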

4 0.098164521 6 hunch net-2005-01-27-Learning Complete Problems

Introduction: Let’s define a learning problem as making predictions given past data. There are several ways to attack the learning problem which seem to be equivalent to solving it. Find the Invariant This viewpoint says that learning is all about learning (or incorporating) transformations of objects that do not change the correct prediction. The best possible invariant is the one which says “all things of the same class are the same”. Finding this is equivalent to learning. This viewpoint is particularly common when working with image features. Feature Selection This viewpoint says that the way to learn is by finding the right features to input to a learning algorithm. The best feature is the one which is the class to predict. Finding this is equivalent to learning for all reasonable learning algorithms. This viewpoint is common in several applications of machine learning. See Gilad’s and Bianca’s comments. Find the Representation This is almost the same a

5 0.096778527 173 hunch net-2006-04-17-Rexa is live

Introduction: Rexa is now publicly available. Anyone can create an account and login. Rexa is similar to Citeseer and Google Scholar in functionality with more emphasis on the use of machine learning for intelligent information extraction. For example, Rexa can automatically display a picture on an author’s homepage when the author is searched for.

6 0.096268699 251 hunch net-2007-06-24-Interesting Papers at ICML 2007

7 0.087795854 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

8 0.087107353 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize

9 0.085419275 102 hunch net-2005-08-11-Why Manifold-Based Dimension Reduction Techniques?

10 0.083239689 77 hunch net-2005-05-29-Maximum Margin Mismatch?

11 0.081474259 199 hunch net-2006-07-26-Two more UAI papers of interest

12 0.080015622 310 hunch net-2008-07-15-Interesting papers at COLT (and a bit of UAI & workshops)

13 0.073064782 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

14 0.069744803 60 hunch net-2005-04-23-Advantages and Disadvantages of Bayesian Learning

15 0.068953454 385 hunch net-2009-12-27-Interesting things at NIPS 2009

16 0.068316989 456 hunch net-2012-02-24-ICML+50%

17 0.067558855 134 hunch net-2005-12-01-The Webscience Future

18 0.064052813 91 hunch net-2005-07-10-Thinking the Unthought

19 0.063960537 97 hunch net-2005-07-23-Interesting papers at ACL

20 0.063523293 440 hunch net-2011-08-06-Interesting thing at UAI 2011
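The simValue column in this list is plausibly the cosine similarity between tfidf vectors, which would explain the same-blog entry scoring essentially 1.0; the source does not state the exact metric, so the sketch below is an assumption, with toy texts standing in for the real posts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for the post texts (assumption; the real archive is not included here).
posts = {
    "139 More NIPS Papers": "let me add to johns post a few favourites from nips",
    "189 more icml papers": "here are a few other papers i enjoyed from icml06",
    "173 Rexa is live": "rexa is now publicly available like citeseer",
}

ids = list(posts)
X = TfidfVectorizer().fit_transform(posts.values())
sims = cosine_similarity(X[0], X).ravel()  # post 139 against every post

for blog_id, sim in sorted(zip(ids, sims), key=lambda t: -t[1]):
    print(f"{sim:.8f}  {blog_id}")  # the same blog scores ~1.0 against itself
```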


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.141), (1, 0.023), (2, 0.009), (3, -0.019), (4, 0.086), (5, 0.007), (6, -0.012), (7, -0.041), (8, 0.051), (9, -0.105), (10, 0.049), (11, -0.011), (12, -0.135), (13, -0.043), (14, 0.04), (15, -0.075), (16, -0.068), (17, 0.084), (18, 0.104), (19, -0.032), (20, -0.011), (21, 0.016), (22, -0.037), (23, -0.063), (24, 0.078), (25, -0.033), (26, 0.017), (27, 0.057), (28, -0.079), (29, -0.026), (30, 0.007), (31, -0.022), (32, 0.077), (33, -0.063), (34, -0.024), (35, 0.081), (36, 0.021), (37, 0.029), (38, -0.103), (39, 0.01), (40, 0.017), (41, -0.032), (42, 0.051), (43, -0.021), (44, 0.002), (45, 0.015), (46, -0.025), (47, 0.035), (48, -0.018), (49, -0.017)]
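The 50-entry (topicId, topicWeight) vector above is an LSI embedding of this post. Below is a minimal sketch of how such a representation is typically obtained, via truncated SVD of the tfidf matrix; the component count, preprocessing, and toy documents are all assumptions.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "let me add to johns post a few favourites from nips",
    "here are a few other papers i enjoyed from icml06",
    "rexa is now publicly available like citeseer",
]

X = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0)  # the vector above suggests ~50 topics
Z = lsi.fit_transform(X)

# (topicId, topicWeight) pairs for the first post, as in the vector above.
print(list(enumerate(Z[0].round(3))))
```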

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97922039 139 hunch net-2005-12-11-More NIPS Papers


2 0.75324452 189 hunch net-2006-07-05-more icml papers

Introduction: Here are a few other papers I enjoyed from ICML06. Topic Models: Dynamic Topic Models David Blei, John Lafferty A nice model for how topics in LDA type models can evolve over time, using a linear dynamical system on the natural parameters and a very clever structured variational approximation (in which the mean field parameters are pseudo-observations of a virtual LDS). Like all Blei papers, he makes it look easy, but it is extremely impressive. Pachinko Allocation Wei Li, Andrew McCallum A very elegant (but computationally challenging) model which induces correlation amongst topics using a multi-level DAG whose interior nodes are “super-topics” and “sub-topics” and whose leaves are the vocabulary words. Makes the slumbering monster of structure learning stir. Sequence Analysis (I missed these talks since I was chairing another session) Online Decoding of Markov Models with Latency Constraints Mukund Narasimhan, Paul Viola, Michael Shilman An “a

3 0.72241867 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

Introduction: Following up on Hal Daume’s post and John’s post on cool and interesting things seen at NIPS, I’ll post my own little list of neat papers here as well. Of course it’s going to be biased towards what I think is interesting. Also, I have to say that I hadn’t been able to see many papers this year at NIPS due to being too busy, so please feel free to contribute the papers that you liked. 1. P. Mudigonda, V. Kolmogorov, P. Torr. An Analysis of Convex Relaxations for MAP Estimation. A surprising paper which shows that many of the more sophisticated convex relaxations proposed recently turn out to be subsumed by the simplest LP relaxation. Be careful next time you try a cool new convex relaxation! 2. D. Sontag, T. Jaakkola. New Outer Bounds on the Marginal Polytope. The title says it all. The marginal polytope is the set of local marginal distributions over subsets of variables that are globally consistent in the sense that there is at least one distributio

4 0.66026312 140 hunch net-2005-12-14-More NIPS Papers II

Introduction: I thought this was a very good NIPS with many excellent papers. The following are a few NIPS papers which I liked and I hope to study more carefully when I get the chance. The list is not exhaustive and in no particular order… Preconditioner Approximations for Probabilistic Graphical Models. Pradeep Ravikumar and John Lafferty. I thought the use of preconditioner methods from solving linear systems in the context of approximate inference was novel and interesting. The results look good and I’d like to understand the limitations. Rodeo: Sparse nonparametric regression in high dimensions. John Lafferty and Larry Wasserman. A very interesting approach to feature selection in nonparametric regression from a frequentist framework. The use of lengthscale variables in each dimension reminds me a lot of ‘Automatic Relevance Determination’ in Gaussian process regression — it would be interesting to compare Rodeo to ARD in GPs. Interpolating between types and tokens by estimating

5 0.65029109 144 hunch net-2005-12-28-Yet more nips thoughts

Introduction: I only managed to make it out to the NIPS workshops this year so I’ll give my comments on what I saw there. The Learning and Robotics workshop lives again. I hope it continues and gets more high quality papers in the future. The most interesting talk for me was Larry Jackel’s on the LAGR program (see John’s previous post on said program). I got some ideas as to what progress has been made. Larry really explained the types of benchmarks and the tradeoffs that had to be made to make the goals achievable but challenging. Hal Daume gave a very interesting talk about structured prediction using RL techniques, something near and dear to my own heart. He achieved rather impressive results using only a very greedy search. The non-parametric Bayes workshop was great. I enjoyed the entire morning session I spent there, and particularly (the usually desultory) discussion periods. One interesting topic was the Gibbs/Variational inference divide. I won’t try to summarize espe

6 0.63253385 77 hunch net-2005-05-29-Maximum Margin Mismatch?

7 0.61549234 97 hunch net-2005-07-23-Interesting papers at ACL

8 0.50846422 251 hunch net-2007-06-24-Interesting Papers at ICML 2007

9 0.49850729 185 hunch net-2006-06-16-Regularization = Robustness

10 0.49684486 361 hunch net-2009-06-24-Interesting papers at UAICMOLT 2009

11 0.47527003 440 hunch net-2011-08-06-Interesting thing at UAI 2011

12 0.46310902 45 hunch net-2005-03-22-Active learning

13 0.45774838 188 hunch net-2006-06-30-ICML papers

14 0.45548058 310 hunch net-2008-07-15-Interesting papers at COLT (and a bit of UAI & workshops)

15 0.44909027 30 hunch net-2005-02-25-Why Papers?

16 0.44867808 330 hunch net-2008-12-07-A NIPS paper

17 0.44189167 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize

18 0.43267459 439 hunch net-2011-08-01-Interesting papers at COLT 2011

19 0.43193734 192 hunch net-2006-07-08-Some recent papers

20 0.43047991 263 hunch net-2007-09-18-It’s MDL Jim, but not as we know it…(on Bayes, MDL and consistency)


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(10, 0.036), (27, 0.184), (38, 0.057), (53, 0.127), (67, 0.015), (90, 0.422), (95, 0.062)]
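The sparse (topicId, topicWeight) pairs above are a per-document LDA topic distribution: most topics carry negligible weight and are omitted. Below is a hedged sketch of producing such a distribution with scikit-learn; the topic count, vectorization, and toy documents are assumptions.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "let me add to johns post a few favourites from nips",
    "here are a few other papers i enjoyed from icml06",
    "rexa is now publicly available like citeseer",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)  # each row is one post's topic distribution

# Keep only noticeably weighted topics, mirroring the sparse list above.
print([(t, round(float(w), 3)) for t, w in enumerate(theta[0]) if w > 0.05])
```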

similar blogs list:

simIndex simValue blogId blogTitle

1 0.96442801 340 hunch net-2009-01-28-Nielsen’s talk

Introduction: I wanted to point to Michael Nielsen’s talk about blogging science, which I found interesting.

2 0.9199931 323 hunch net-2008-11-04-Rise of the Machines

Introduction: On the enduring topic of how people deal with intelligent machines , we have this important election bulletin .

3 0.89471865 32 hunch net-2005-02-27-Antilearning: When proximity goes bad

Introduction: Joel Predd mentioned “Antilearning” by Adam Kowalczyk, which is interesting from a foundational intuitions viewpoint. There is a pervasive intuition that “nearby things tend to have the same label”. This intuition is instantiated in SVMs, nearest neighbor classifiers, decision trees, and neural networks. It turns out there are natural problems where this intuition is the opposite of the truth. One natural situation where this occurs is in competition. For example, when Intel fails to meet its earnings estimate, is this evidence that AMD is doing badly also? Or evidence that AMD is doing well? This violation of the proximity intuition means that when the number of examples is few, negating a classifier which attempts to exploit proximity can provide predictive power (thus, the term “antilearning”).

same-blog 4 0.86980987 139 hunch net-2005-12-11-More NIPS Papers


5 0.84233069 239 hunch net-2007-04-18-$50K Spock Challenge

Introduction: Apparently, the company Spock is setting up a $50k entity resolution challenge. $50k is much less than the Netflix challenge, but it’s effectively the same as Netflix until someone reaches 10%. It’s also nice that the Spock challenge has a short duration. The (visible) test set is of size 25k and the training set has size 75k.

6 0.75368452 284 hunch net-2008-01-18-Datasets

7 0.75210798 144 hunch net-2005-12-28-Yet more nips thoughts

8 0.63842463 333 hunch net-2008-12-27-Adversarial Academia

9 0.46073467 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

10 0.4531602 12 hunch net-2005-02-03-Learning Theory, by assumption

11 0.45079142 201 hunch net-2006-08-07-The Call of the Deep

12 0.44979563 227 hunch net-2007-01-10-A Deep Belief Net Learning Problem

13 0.44771877 134 hunch net-2005-12-01-The Webscience Future

14 0.44629133 131 hunch net-2005-11-16-The Everything Ensemble Edge

15 0.44378555 483 hunch net-2013-06-10-The Large Scale Learning class notes

16 0.44286305 60 hunch net-2005-04-23-Advantages and Disadvantages of Bayesian Learning

17 0.44077712 6 hunch net-2005-01-27-Learning Complete Problems

18 0.43950605 27 hunch net-2005-02-23-Problem: Reinforcement Learning with Classification

19 0.43887877 19 hunch net-2005-02-14-Clever Methods of Overfitting

20 0.43758401 14 hunch net-2005-02-07-The State of the Reduction