hunch_net-2006-189: knowledge graph by maker-knowledge-mining

189 hunch net-2006-07-05-more icml papers


meta info for this blog

Source: html

Introduction: Here are a few other papers I enjoyed from ICML06. Topic Models: Dynamic Topic Models David Blei, John Lafferty A nice model for how topics in LDA type models can evolve over time, using a linear dynamical system on the natural parameters and a very clever structured variational approximation (in which the mean field parameters are pseudo-observations of a virtual LDS). Like all Blei papers, he makes it look easy, but it is extremely impressive. Pachinko Allocation Wei Li, Andrew McCallum A very elegant (but computationally challenging) model which induces correlation amongst topics using a multi-level DAG whose interior nodes are “super-topics” and “sub-topics” and whose leaves are the vocabulary words. Makes the slumbering monster of structure learning stir. Sequence Analysis (I missed these talks since I was chairing another session) Online Decoding of Markov Models with Latency Constraints Mukund Narasimhan, Paul Viola, Michael Shilman An “a
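Since the introduction compresses the dynamic topic model into one clause, a minimal sketch of the idea in formulas may help. The notation here is assumed (it follows the standard presentation, not this page): each topic's natural parameters follow a Gaussian random walk across time slices, and each word is drawn from the softmax of the current parameters.

    \beta_{t,k} \mid \beta_{t-1,k} \sim \mathcal{N}(\beta_{t-1,k}, \sigma^2 I)
    p(w = v \mid \beta_{t,k}) = \frac{\exp(\beta_{t,k,v})}{\sum_{v'} \exp(\beta_{t,k,v'})}

The "virtual LDS" in the variational approximation treats the variational parameters as noisy observations of this random walk, so Kalman smoothing can be applied.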


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Pachinko Allocation Wei Li, Andrew McCallum A very elegant (but computationally challenging) model which induces correlation amongst topics using a multi-level DAG whose interior nodes are “super-topics” and “sub-topics” and whose leaves are the vocabulary words. [sent-4, score-0.579]

2 ” paper showing how to trade off latency and decoding accuracy when doing MAP labelling (Viterbi decoding) in sequential Markovian models. [sent-7, score-0.502]

3 Efficient inference on sequence segmentation models Sunita Sarawagi A smart way to re-represent potentials in segmentation models to reduce the complexity of inference from cubic in the input sequence length to linear. [sent-9, score-1.612]

4 Moral of the story: segmentation is NOT just sequence labelling. [sent-11, score-0.488]

5 Surprisingly, they show that the QP relaxation is both computationally more attractive and more accurate than the “natural” LP relaxation or than loopy BP approximations. [sent-15, score-0.446]
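Sentence 2 refers to MAP labelling via Viterbi decoding. As a reference point, here is a minimal sketch of the standard batch Viterbi recursion for a chain-structured Markov model; it is a generic textbook version under assumed array conventions, not the latency-constrained algorithm from the Narasimhan, Viola, and Shilman paper:

    import numpy as np

    def viterbi(log_init, log_trans, log_emit):
        # log_init: (S,) log p(y_1); log_trans: (S, S) log p(y_t | y_{t-1});
        # log_emit: (T, S) log p(x_t | y_t) for the observed sequence x.
        T, S = log_emit.shape
        score = log_init + log_emit[0]          # best log-prob of paths ending in each state
        back = np.zeros((T, S), dtype=int)      # backpointers for path recovery
        for t in range(1, T):
            cand = score[:, None] + log_trans   # cand[i, j]: best path ending i, then i -> j
            back[t] = cand.argmax(axis=0)
            score = cand.max(axis=0) + log_emit[t]
        path = [int(score.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]                       # MAP state sequence y_1..y_T

The batch version waits for the whole sequence before emitting any label; per the summary above, the paper's contribution is trading off latency against decoding accuracy when the decoder must commit to each label within a bounded delay.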


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('segmentation', 0.312), ('decoding', 0.205), ('sequence', 0.176), ('labelling', 0.176), ('marina', 0.176), ('qp', 0.176), ('clustering', 0.162), ('models', 0.159), ('inference', 0.155), ('lafferty', 0.145), ('relaxation', 0.145), ('relaxations', 0.136), ('blei', 0.136), ('john', 0.124), ('latency', 0.121), ('whose', 0.121), ('map', 0.105), ('metric', 0.101), ('topics', 0.092), ('markov', 0.092), ('michael', 0.089), ('model', 0.089), ('parameters', 0.088), ('topic', 0.079), ('field', 0.079), ('dag', 0.078), ('bp', 0.078), ('potentials', 0.078), ('narasimhan', 0.078), ('ravikumar', 0.078), ('dynamical', 0.078), ('allocation', 0.078), ('boost', 0.078), ('cohen', 0.078), ('evolve', 0.078), ('interior', 0.078), ('loopy', 0.078), ('moral', 0.078), ('namely', 0.078), ('observable', 0.078), ('optimum', 0.078), ('pradeep', 0.078), ('unknowable', 0.078), ('viola', 0.078), ('wei', 0.078), ('weston', 0.078), ('computationally', 0.078), ('optimal', 0.075), ('winners', 0.072), ('distinguished', 0.072)]
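For readers curious how a (wordName, wordTfidf) list and the simValue scores below might be produced, here is a minimal sketch using scikit-learn. The toy corpus, vectorizer defaults, and variable names are assumptions; the actual pipeline behind this page is not documented:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # toy stand-in for the full collection of hunch.net posts (assumed)
    docs = [
        "dynamic topic models evolve over time via a linear dynamical system",
        "segmentation is not just sequence labelling",
        "qp relaxations can beat lp relaxations for map decoding",
    ]
    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)          # (n_posts, n_terms) tfidf matrix
    terms = vec.get_feature_names_out()

    # top-weighted terms for post 0, analogous to the (wordName, wordTfidf) list above
    row = X[0].toarray().ravel()
    print(sorted(zip(terms, row), key=lambda p: -p[1])[:5])

    # cosine similarity of post 0 against every post, analogous to simValue below
    print(cosine_similarity(X[0], X).ravel())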

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 189 hunch net-2006-07-05-more icml papers

Introduction: (same post; the full introduction is reproduced at the top of this page)

2 0.17821932 77 hunch net-2005-05-29-Maximum Margin Mismatch?

Introduction: John makes a fascinating point about structured classification (and slightly scooped my post!). Maximum Margin Markov Networks (M3N) are an interesting example of the second class of structured classifiers (where the classification of one label depends on the others), and one of my favorite papers. I’m not alone: the paper won the best student paper award at NIPS in 2003. There are some things I find odd about the paper. For instance, it says of probabilistic models “cannot handle high dimensional feature spaces and lack strong theoretical guarantees.” I’m aware of no such limitations. Also: “Unfortunately, even probabilistic graphical models that are trained discriminatively do not achieve the same level of performance as SVMs, especially when kernel features are used.” This is quite interesting and contradicts my own experience as well as that of a number of people I greatly respect. I wonder what the root cause is: perhaps there is something different abo

3 0.123962 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

Introduction: Following up on Hal Daume’s post and John’s post on cool and interesting things seen at NIPS, I’ll post my own little list of neat papers here as well. Of course it’s going to be biased towards what I think is interesting. Also, I have to say that I hadn’t been able to see many papers this year at NIPS due to being too busy, so please feel free to contribute the papers that you liked. 1. P. Mudigonda, V. Kolmogorov, P. Torr. An Analysis of Convex Relaxations for MAP Estimation. A surprising paper which shows that many of the more sophisticated convex relaxations that had been proposed recently turn out to be subsumed by the simplest LP relaxation. Be careful next time you try a cool new convex relaxation! 2. D. Sontag, T. Jaakkola. New Outer Bounds on the Marginal Polytope. The title says it all. The marginal polytope is the set of local marginal distributions over subsets of variables that are globally consistent in the sense that there is at least one distributio

4 0.11447069 139 hunch net-2005-12-11-More NIPS Papers

Introduction: Let me add to John’s post with a few of my own favourites from this year’s conference. First, let me say that Sanjoy’s talk, Coarse Sample Complexity Bounds for Active Learning, was also one of my favourites, as was the Forgettron paper. I also really enjoyed the last third of Christos’ talk on the complexity of finding Nash equilibria. And, speaking of tagging, I think the U.Mass Citeseer replacement system Rexa from the demo track is very cool. Finally, let me add my recommendations for specific papers: Z. Ghahramani, K. Heller: Bayesian Sets [no preprint] (A very elegant probabilistic information retrieval style model of which objects are “most like” a given subset of objects.) T. Griffiths, Z. Ghahramani: Infinite Latent Feature Models and the Indian Buffet Process [preprint] (A Dirichlet style prior over infinite binary matrices with beautiful exchangeability properties.) K. Weinberger, J. Blitzer, L. Saul: Distance Metric Lea

5 0.11292071 97 hunch net-2005-07-23-Interesting papers at ACL

Introduction: A recent discussion indicated that one goal of this blog might be to allow people to post comments about recent papers that they liked. I think this could potentially be very useful, especially for those with diverse interests but only finite time to read through conference proceedings. ACL 2005 recently completed, and here are four papers from that conference that I thought were either good or perhaps of interest to a machine learning audience. David Chiang, A Hierarchical Phrase-Based Model for Statistical Machine Translation. (Best paper award.) This paper takes the standard phrase-based MT model that is popular in our field (basically, translate a sentence by individually translating phrases and reordering them according to a complicated statistical model) and extends it to take into account hierarchy in phrases, so that you can learn things like “X ‘s Y” -> “Y de X” in Chinese, where X and Y are arbitrary phrases. This takes a step toward linguistic syntax for MT, whic

6 0.10495069 438 hunch net-2011-07-11-Interesting Neural Network Papers at ICML 2011

7 0.097733766 456 hunch net-2012-02-24-ICML+50%

8 0.095034726 58 hunch net-2005-04-21-Dynamic Programming Generalizations and Their Use

9 0.091793448 45 hunch net-2005-03-22-Active learning

10 0.086845726 194 hunch net-2006-07-11-New Models

11 0.078990638 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize

12 0.077524468 334 hunch net-2009-01-07-Interesting Papers at SODA 2009

13 0.077205971 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control

14 0.076217405 140 hunch net-2005-12-14-More NIPS Papers II

15 0.074530594 259 hunch net-2007-08-19-Choice of Metrics

16 0.074081019 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

17 0.072381742 235 hunch net-2007-03-03-All Models of Learning have Flaws

18 0.071426168 361 hunch net-2009-06-24-Interesting papers at UAICMOLT 2009

19 0.068569198 385 hunch net-2009-12-27-Interesting things at NIPS 2009

20 0.06761393 388 hunch net-2010-01-24-Specializations of the Master Problem


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.153), (1, 0.042), (2, 0.009), (3, -0.029), (4, 0.089), (5, 0.003), (6, -0.025), (7, -0.05), (8, 0.026), (9, -0.091), (10, 0.053), (11, -0.017), (12, -0.157), (13, -0.076), (14, 0.043), (15, -0.05), (16, 0.055), (17, 0.111), (18, 0.112), (19, -0.104), (20, -0.022), (21, -0.051), (22, -0.046), (23, -0.029), (24, 0.05), (25, 0.077), (26, -0.055), (27, 0.012), (28, 0.018), (29, -0.065), (30, -0.042), (31, -0.029), (32, 0.102), (33, -0.054), (34, -0.001), (35, 0.052), (36, 0.024), (37, 0.04), (38, -0.001), (39, -0.013), (40, 0.015), (41, -0.078), (42, 0.046), (43, 0.008), (44, 0.014), (45, -0.079), (46, -0.016), (47, 0.083), (48, -0.045), (49, 0.018)]
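The coordinate list above is a projection of this post onto latent semantic components. Here is a minimal sketch of how such (topicId, topicWeight) coordinates and LSI similarities might be computed with scikit-learn; the toy corpus, component count, and variable names are assumptions:

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # toy stand-in corpus (assumed); the list above uses 50 components
    docs = [
        "dynamic topic models for evolving document collections",
        "map decoding of markov models under latency constraints",
        "qp and lp relaxations for structured prediction",
        "metric learning, clustering, and segmentation",
    ]
    X = TfidfVectorizer().fit_transform(docs)
    svd = TruncatedSVD(n_components=3, random_state=0)
    Z = svd.fit_transform(X)                 # each row: LSI coordinates of one post
    print(list(enumerate(Z[0].round(3))))    # (topicId, topicWeight) pairs, as above
    print(cosine_similarity(Z[:1], Z))       # simValue-style scores in LSI space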

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97812432 189 hunch net-2006-07-05-more icml papers

Introduction: (same post; the full introduction is reproduced at the top of this page)

2 0.74237394 139 hunch net-2005-12-11-More NIPS Papers

Introduction: Let me add to John’s post with a few of my own favourites from this year’s conference. First, let me say that Sanjoy’s talk, Coarse Sample Complexity Bounds for Active Learning, was also one of my favourites, as was the Forgettron paper. I also really enjoyed the last third of Christos’ talk on the complexity of finding Nash equilibria. And, speaking of tagging, I think the U.Mass Citeseer replacement system Rexa from the demo track is very cool. Finally, let me add my recommendations for specific papers: Z. Ghahramani, K. Heller: Bayesian Sets [no preprint] (A very elegant probabilistic information retrieval style model of which objects are “most like” a given subset of objects.) T. Griffiths, Z. Ghahramani: Infinite Latent Feature Models and the Indian Buffet Process [preprint] (A Dirichlet style prior over infinite binary matrices with beautiful exchangeability properties.) K. Weinberger, J. Blitzer, L. Saul: Distance Metric Lea

3 0.69779521 97 hunch net-2005-07-23-Interesting papers at ACL

Introduction: A recent discussion indicated that one goal of this blog might be to allow people to post comments about recent papers that they liked. I think this could potentially be very useful, especially for those with diverse interests but only finite time to read through conference proceedings. ACL 2005 recently completed, and here are four papers from that conference that I thought were either good or perhaps of interest to a machine learning audience. David Chiang, A Hierarchical Phrase-Based Model for Statistical Machine Translation. (Best paper award.) This paper takes the standard phrase-based MT model that is popular in our field (basically, translate a sentence by individually translating phrases and reordering them according to a complicated statistical model) and extends it to take into account hierarchy in phrases, so that you can learn things like “X ‘s Y” -> “Y de X” in Chinese, where X and Y are arbitrary phrases. This takes a step toward linguistic syntax for MT, whic

4 0.68988699 77 hunch net-2005-05-29-Maximum Margin Mismatch?

Introduction: John makes a fascinating point about structured classification (and slightly scooped my post!). Maximum Margin Markov Networks (M3N) are an interesting example of the second class of structured classifiers (where the classification of one label depends on the others), and one of my favorite papers. I’m not alone: the paper won the best student paper award at NIPS in 2003. There are some things I find odd about the paper. For instance, it says of probabilistic models “cannot handle high dimensional feature spaces and lack strong theoretical guarantees.” I’m aware of no such limitations. Also: “Unfortunately, even probabilistic graphical models that are trained discriminatively do not achieve the same level of performance as SVMs, especially when kernel features are used.” This is quite interesting and contradicts my own experience as well as that of a number of people I greatly respect. I wonder what the root cause is: perhaps there is something different abo

5 0.6835348 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

Introduction: Following up on Hal Daume’s post and John’s post on cool and interesting things seen at NIPS, I’ll post my own little list of neat papers here as well. Of course it’s going to be biased towards what I think is interesting. Also, I have to say that I hadn’t been able to see many papers this year at NIPS due to being too busy, so please feel free to contribute the papers that you liked. 1. P. Mudigonda, V. Kolmogorov, P. Torr. An Analysis of Convex Relaxations for MAP Estimation. A surprising paper which shows that many of the more sophisticated convex relaxations that had been proposed recently turn out to be subsumed by the simplest LP relaxation. Be careful next time you try a cool new convex relaxation! 2. D. Sontag, T. Jaakkola. New Outer Bounds on the Marginal Polytope. The title says it all. The marginal polytope is the set of local marginal distributions over subsets of variables that are globally consistent in the sense that there is at least one distributio

6 0.60345984 140 hunch net-2005-12-14-More NIPS Papers II

7 0.57549208 144 hunch net-2005-12-28-Yet more nips thoughts

8 0.5600844 87 hunch net-2005-06-29-Not EM for clustering at COLT

9 0.55485487 188 hunch net-2006-06-30-ICML papers

10 0.53245836 248 hunch net-2007-06-19-How is Compressed Sensing going to change Machine Learning ?

11 0.52482522 398 hunch net-2010-05-10-Aggregation of estimators, sparsity in high dimension and computational feasibility

12 0.50509858 185 hunch net-2006-06-16-Regularization = Robustness

13 0.50328404 440 hunch net-2011-08-06-Interesting thing at UAI 2011

14 0.49639297 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize

15 0.49204826 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control

16 0.48385948 58 hunch net-2005-04-21-Dynamic Programming Generalizations and Their Use

17 0.48358554 192 hunch net-2006-07-08-Some recent papers

18 0.48281389 361 hunch net-2009-06-24-Interesting papers at UAICMOLT 2009

19 0.48199347 439 hunch net-2011-08-01-Interesting papers at COLT 2011

20 0.47508058 135 hunch net-2005-12-04-Watchword: model


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(0, 0.015), (10, 0.034), (21, 0.016), (26, 0.012), (27, 0.154), (30, 0.459), (38, 0.055), (53, 0.036), (55, 0.051), (77, 0.019), (94, 0.038), (95, 0.031)]
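The sparse (topicId, topicWeight) list above has the usual shape of an LDA document-topic posterior with near-zero topics dropped. A minimal sketch under an assumed toy corpus and hyperparameters:

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    # toy stand-in corpus (assumed); LDA works on raw counts, not tfidf
    docs = [
        "dynamic topic models evolve over time",
        "viterbi decoding of markov models with latency constraints",
        "research funding budgets and grant policy",
    ]
    X = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    theta = lda.fit_transform(X)    # per-post topic weights; each row sums to 1
    # keep only the non-negligible topics, like the sparse list above
    print([(k, round(w, 3)) for k, w in enumerate(theta[0]) if w > 0.05])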

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.90136123 189 hunch net-2006-07-05-more icml papers

Introduction: (same post; the full introduction is reproduced at the top of this page)

2 0.84454292 154 hunch net-2006-02-04-Research Budget Changes

Introduction: The announcement of an increase in funding for basic research in the US is encouraging. There is some discussion of this at the Computing Research Policy blog. One part of this discussion has a graph of NSF funding over time, presumably in dollar budgets. I don’t believe that dollar budgets are the right way to judge the impact of funding changes on researchers. A better way to judge seems to be in terms of dollar budget divided by GDP, which provides a measure of the relative emphasis on research. This graph was assembled by dividing the NSF budget by the US GDP. For 2005 GDP, I used the current estimate and for 2006 and 2007 assumed an increase by a factor of 1.04 per year. The 2007 number also uses the requested 2007 budget, which is certain to change. This graph makes it clear why researchers were upset: research funding emphasis has fallen for 3 years in a row. The reality has been significantly more severe due to DARPA decreasing funding and industrial

3 0.83640528 364 hunch net-2009-07-11-Interesting papers at KDD

Introduction: I attended KDD this year. The conference has always had a strong grounding in what works based on the KDDcup, but it has developed a halo of workshops on various subjects. It seems that KDD has become a place where the economy meets machine learning in a stronger sense than at many other conferences. There were several papers that other people might like to take a look at. Yehuda Koren, Collaborative Filtering with Temporal Dynamics. This paper describes how to incorporate temporal dynamics into a couple of collaborative filtering approaches. This was also a best paper award. D. Sculley, Robert Malkin, Sugato Basu, Roberto J. Bayardo, Predicting Bounce Rates in Sponsored Search Advertisements. The basic claim of this paper is that the probability people immediately leave (“bounce”) after clicking on an advertisement is predictable. Frank McSherry and Ilya Mironov, Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contende

4 0.78408307 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop

Introduction: The From Data to Knowledge workshop May 7-11 at Berkeley should be of interest to the many people encountering streaming data in different disciplines. It’s run by a group of astronomers who encounter streaming data all the time. I met Josh Bloom recently and he is broadly interested in a workshop covering all aspects of Machine Learning on streaming data. The hope here is that techniques developed in one area turn out useful in another, which seems quite plausible. Particularly if you are in the Bay Area, consider checking it out.

5 0.70283169 444 hunch net-2011-09-07-KDD and MUCMD 2011

Introduction: At KDD I enjoyed Stephen Boyd’s invited talk about optimization quite a bit. However, the most interesting talk for me was David Haussler’s. His talk started out with a formidable load of biological complexity. About half-way through you start wondering, “can this be used to help with cancer?” And at the end he connects it directly to use with a call to arms for the audience: cure cancer. The core thesis here is that cancer is a complex set of diseases which can be disentangled via genetic assays, allowing attacks on the specific signature of individual cancers. However, the data quantity and complex dependencies within the data require systematic and relatively automatic prediction and analysis algorithms of the kind that we are best familiar with. Some of the papers which interested me are: Kai-Wei Chang and Dan Roth, Selective Block Minimization for Faster Convergence of Limited Memory Large-Scale Linear Models, which is about effectively using a hard-example

6 0.68945223 85 hunch net-2005-06-28-A COLT paper

7 0.62828934 292 hunch net-2008-03-15-COLT Open Problems

8 0.48218876 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

9 0.3974486 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

10 0.39637884 41 hunch net-2005-03-15-The State of Tight Bounds

11 0.39338383 140 hunch net-2005-12-14-More NIPS Papers II

12 0.37606591 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize

13 0.37370506 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

14 0.37291712 77 hunch net-2005-05-29-Maximum Margin Mismatch?

15 0.37160569 282 hunch net-2008-01-06-Research Political Issues

16 0.37114006 466 hunch net-2012-06-05-ICML acceptance statistics

17 0.37037322 297 hunch net-2008-04-22-Taking the next step

18 0.36814019 26 hunch net-2005-02-21-Problem: Cross Validation

19 0.36734322 439 hunch net-2011-08-01-Interesting papers at COLT 2011

20 0.36677125 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three