
931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories


meta info for this blog

Source: html

Introduction: Tomas Iesmantas had asked me for advice on a regression problem with 50 parameters, and I’d recommended Hamiltonian Monte Carlo. A few weeks later he reported back: After trying several modifications (HMC for all parameters at once, HMC just for the first-level parameters, and the Riemann manifold Hamiltonian Monte Carlo method), I finally got it running with HMC just for the first-level parameters and direct sampling for the others, since their conditional distributions turned out to have closed form. However, even in this case it is quite tricky, since I had to employ a mass matrix, and not just a diagonal one: at the beginning of the algorithm I generated it randomly (ensuring it is positive definite). Such random generation of the mass matrix is quite a blind step, but it proved to be quite helpful. Riemann manifold HMC is quite vagarious, or to be more specific, the metric of the manifold is very sensitive. In my model’s log-likelihood I had exponentials, so the values of the metric matrix elements were very large, and when inverting this matrix the algorithm often produced singular matrices. …
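
As a rough illustration of the mass-matrix trick described above, here is a minimal NumPy sketch of one standard way to generate a random positive-definite mass matrix and use it to refresh HMC momenta. This is not Iesmantas’s actual code: the A A^T + jitter*I construction, the jitter value, and the dimension are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def random_mass_matrix(dim, jitter=1e-3):
    # A A^T is positive semidefinite; adding jitter*I makes it positive definite.
    A = rng.normal(size=(dim, dim))
    return A @ A.T + jitter * np.eye(dim)

dim = 50                        # the regression problem had about 50 parameters
M = random_mass_matrix(dim)     # dense, randomly generated, positive definite
M_chol = np.linalg.cholesky(M)  # used to draw momenta
M_inv = np.linalg.inv(M)        # used in the leapfrog position updates

# In HMC the momentum is refreshed as p ~ N(0, M) at the start of each trajectory.
p = M_chol @ rng.normal(size=dim)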


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 However, even in this case it is quite tricky, since I had to employ a mass matrix, and not just a diagonal one: at the beginning of the algorithm I generated it randomly (ensuring it is positive definite). [sent-3, score-0.784]

2 Such random generation of the mass matrix is quite a blind step, but it proved to be quite helpful. [sent-4, score-0.603]

3 Riemann manifold HMC is quite vagarious, or to be more specific, the metric of the manifold is very sensitive. [sent-5, score-0.717]

4 In my model’s log-likelihood I had exponentials, so the values of the metric matrix elements were very large, and when inverting this matrix the algorithm often produced singular matrices (see the regularization snippet after this list). [sent-6, score-0.684]

5 I even tried adaptive HMC (Martin Burda’s “Bayesian Adaptive Hamiltonian Monte Carlo with an Application to High-Dimensional BEKK GARCH Models”), but it did not work. [sent-9, score-0.149]

6 The adaptation seemed strange, since there was no vanishing adaptation, just half the sum of the previous and new metrics (see the adaptation snippet after this list). [sent-10, score-0.056]

7 Bob asked: How did HMC for all the parameters work compared to using HMC for low-level ones and direct sampling for others? [sent-12, score-0.553]

8 Radford Neal discusses this approach in his MCMC handbook chapter on HMC, but we were hoping (backed up by some back of the envelope calculations) that we could just do all the parameters at once. [sent-13, score-0.455]

9 Sometimes the first-level parameters showed a “wish to mix”, but the second-level parameters almost didn’t move at all, and usually all parameters were stuck at some values and didn’t move. [sent-15, score-1.682]

10 If I reduced the leapfrog integration step further, I just obtained very correlated chains, and no matter how I varied the integration step, the number of steps, or the mass matrix, the second-level parameters showed (almost) no will to mix (see the leapfrog sketch after this list). [sent-16, score-1.265]

11 As for HMC just for the first-level parameters, everything worked better than HMC for all parameters at once (see the HMC-within-Gibbs skeleton after this list). [sent-17, score-0.545]

12 It seems that the second-level parameters gave some very odd topology. [sent-18, score-0.556]

13 And for Riemann manifold HMC, the Fisher information metric wasn’t the right one in my case. [sent-19, score-0.368]

14 So there are no negative elements in the mass matrix? [sent-21, score-0.356]

15 I think this should help if there are more positive correlations in your posterior than negative ones, but hurt if there are many negative correlations. [sent-22, score-0.376]

16 I wonder if there’s something about your model that makes positive correlations more common than negative ones. [sent-23, score-0.262]
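
To make the tuning knobs in sentences 1 and 10 concrete, here is a hedged sketch of a single HMC transition with a dense mass matrix, written in plain NumPy. The step size, the number of leapfrog steps, and the functions log_post and grad_log_post are model-specific stand-ins, not values or code from the discussion above.

import numpy as np

def hmc_step(theta, log_post, grad_log_post, M, M_inv, eps=0.01, n_steps=20,
             rng=np.random.default_rng()):
    # Refresh momentum: p ~ N(0, M), where M is the (possibly dense) mass matrix.
    p = np.linalg.cholesky(M) @ rng.normal(size=theta.shape)
    theta_new, p_new = theta.copy(), p.copy()

    # Leapfrog integration: half momentum step, alternating full steps, half momentum step.
    p_new = p_new + 0.5 * eps * grad_log_post(theta_new)
    for _ in range(n_steps - 1):
        theta_new = theta_new + eps * (M_inv @ p_new)
        p_new = p_new + eps * grad_log_post(theta_new)
    theta_new = theta_new + eps * (M_inv @ p_new)
    p_new = p_new + 0.5 * eps * grad_log_post(theta_new)

    # Metropolis correction on the Hamiltonian H = -log_post(theta) + p' M^{-1} p / 2.
    def hamiltonian(th, mom):
        return -log_post(th) + 0.5 * mom @ (M_inv @ mom)

    if np.log(rng.uniform()) < hamiltonian(theta, p) - hamiltonian(theta_new, p_new):
        return theta_new      # accept
    return theta              # reject: chain stays at the current state

Shrinking eps too far, as sentence 10 describes, makes every proposal tiny, so the chain can accept almost everything and still be highly autocorrelated.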
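The scheme that finally worked, HMC on the first-level parameters with direct draws of the rest from their closed-form conditionals, has the general shape of HMC-within-Gibbs. Below is a minimal skeleton of that alternation; hmc_step_level1 and draw_level2_given are hypothetical callables standing in for the model-specific pieces.

def hmc_within_gibbs(theta1, theta2, n_iter, hmc_step_level1, draw_level2_given):
    # Alternate: HMC update of the first-level block, exact draw of the second-level block.
    draws = []
    for _ in range(n_iter):
        theta1 = hmc_step_level1(theta1, theta2)   # HMC move targeting p(theta1 | theta2, y)
        theta2 = draw_level2_given(theta1)         # direct draw from p(theta2 | theta1, y)
        draws.append((theta1, theta2))
    return draws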
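Two smaller points above also reduce to one-liners. The singular metric matrices in sentence 4 are commonly handled by adding a small jitter to the diagonal before inversion, and sentence 6 contrasts a non-vanishing averaging update with a vanishing (Robbins-Monro-style) schedule. Both snippets are generic illustrations of those ideas, not the adaptation rule from Burda’s paper.

import numpy as np

def regularized_inverse(G, jitter=1e-6):
    # Jitter the diagonal so a nearly singular metric can still be inverted.
    return np.linalg.inv(G + jitter * np.eye(G.shape[0]))

def averaging_update(M_old, M_hat):
    # Non-vanishing adaptation as described in sentence 6: the kernel never stops changing.
    return 0.5 * (M_old + M_hat)

def vanishing_update(M_old, M_hat, t):
    # Vanishing adaptation: the weight on new information decays toward zero, the usual
    # sufficient condition for an adaptive chain to remain ergodic.
    gamma = 1.0 / (t + 1)
    return (1.0 - gamma) * M_old + gamma * M_hat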


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('hmc', 0.558), ('parameters', 0.344), ('manifold', 0.264), ('matrix', 0.234), ('riemman', 0.217), ('level', 0.15), ('mass', 0.148), ('intergration', 0.145), ('hamiltonian', 0.129), ('tomas', 0.124), ('monte', 0.117), ('negative', 0.114), ('adaptation', 0.111), ('metric', 0.104), ('ones', 0.096), ('elements', 0.094), ('adaptive', 0.092), ('quite', 0.085), ('carlo', 0.082), ('step', 0.08), ('positive', 0.078), ('bob', 0.072), ('showed', 0.072), ('correlations', 0.07), ('algorithm', 0.069), ('burda', 0.066), ('definite', 0.066), ('exponents', 0.066), ('iesmantas', 0.062), ('diagonal', 0.062), ('garch', 0.062), ('leapfrog', 0.062), ('second', 0.062), ('ensuring', 0.059), ('modifications', 0.059), ('envelope', 0.059), ('singular', 0.057), ('sampling', 0.057), ('tried', 0.057), ('since', 0.056), ('direct', 0.056), ('inverting', 0.056), ('move', 0.055), ('values', 0.055), ('metrics', 0.053), ('employ', 0.052), ('handbook', 0.052), ('radford', 0.052), ('first', 0.051), ('blind', 0.051)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories


2 0.32958895 1339 andrew gelman stats-2012-05-23-Learning Differential Geometry for Hamiltonian Monte Carlo

Introduction: You can get a taste of Hamiltonian Monte Carlo (HMC) by reading the very gentle introduction in David MacKay’s general text on information theory: MacKay, D. 2003. Information Theory, Inference, and Learning Algorithms. Cambridge University Press. [see Chapter 31, which is relatively standalone and can be downloaded separately.] Follow this up with Radford Neal’s much more thorough introduction to HMC: Neal, R. 2011. MCMC Using Hamiltonian Dynamics. In Brooks, Gelman, Jones and Meng, eds., Handbook of Markov Chain Monte Carlo. Chapman and Hall/CRC Press. To understand why HMC works and set yourself on the path to understanding generalizations like Riemann manifold HMC, you’ll need to know a bit about differential geometry. I really liked the combination of these two books: Magnus, J. R. and H. Neudecker. 2007. Matrix Differential Calculus with Application in Statistics and Econometrics. 3rd Edition. Wiley? and Leimkuhler, B. and S.

3 0.28546807 1772 andrew gelman stats-2013-03-20-Stan at Google this Thurs and at Berkeley this Fri noon

Introduction: Michael Betancourt will be speaking at Google and at the University of California, Berkeley. The Google talk is closed to outsiders (but if you work at Google, you should go!); the Berkeley talk is open to all: Friday March 22, 12:10 pm, Evans Hall 1011. Title of talk: Stan : Practical Bayesian Inference with Hamiltonian Monte Carlo Abstract: Practical implementations of Bayesian inference are often limited to approximation methods that only slowly explore the posterior distribution. By taking advantage of the curvature of the posterior, however, Hamiltonian Monte Carlo (HMC) efficiently explores even the most highly contorted distributions. In this talk I will review the foundations of and recent developments within HMC, concluding with a discussion of Stan, a powerful inference engine that utilizes HMC, automatic differentiation, and adaptive methods to minimize user input. This is cool stuff. And he’ll be showing the whirlpool movie!

4 0.24793798 1749 andrew gelman stats-2013-03-04-Stan in L.A. this Wed 3:30pm

Introduction: Michael Betancourt will be speaking at UCLA: The location for refreshment is in room 51-254 CHS at 3:00 PM. The place for the seminar is at CHS 33-105A at 3:30pm – 4:30pm, Wed 6 Mar. ["CHS" stands for Center for Health Sciences, the building of the UCLA schools of medicine and public health. Here's a map with directions .] Title of talk: Stan : Practical Bayesian Inference with Hamiltonian Monte Carlo Abstract: Practical implementations of Bayesian inference are often limited to approximation methods that only slowly explore the posterior distribution. By taking advantage of the curvature of the posterior, however, Hamiltonian Monte Carlo (HMC) efficiently explores even the most highly contorted distributions. In this talk I will review the foundations of and recent developments within HMC, concluding with a discussion of Stan, a powerful inference engine that utilizes HMC, automatic differentiation, and adaptive methods to minimize user input. This is cool stuff.

5 0.23439893 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!

Introduction: We interrupt our usual program of Ed Wegman Gregg Easterbrook Niall Ferguson mockery to deliver a serious update on our statistical computing project. Stan (“Sampling Through Adaptive Neighborhoods”) is our new C++ program (written mostly by Bob Carpenter) that draws samples from Bayesian models. Stan can take different sorts of inputs: you can write the model in a Bugs-like syntax and it goes from there, or you can write the log-posterior directly as a C++ function. Most of the computation is done using Hamiltonian Monte Carlo. HMC requires some tuning, so Matt Hoffman up and wrote a new algorithm, Nuts (the “No-U-Turn Sampler”) which optimizes HMC adaptively. In many settings, Nuts is actually more computationally efficient than the optimal static HMC! When the Nuts paper appeared on Arxiv, Christian Robert noticed it and had some reactions. In response to Xian’s comments, Matt writes: Christian writes: I wonder about the computing time (and the “una

6 0.1893568 2231 andrew gelman stats-2014-03-03-Running into a Stan Reference by Accident

7 0.16861972 2003 andrew gelman stats-2013-08-30-Stan Project: Continuous Relaxations for Discrete MRFs

8 0.15932122 858 andrew gelman stats-2011-08-17-Jumping off the edge of the world

9 0.14304547 2296 andrew gelman stats-2014-04-19-Index or indicator variables

10 0.14093617 861 andrew gelman stats-2011-08-19-Will Stan work well with 40×40 matrices?

11 0.1296154 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients

12 0.12509732 1287 andrew gelman stats-2012-04-28-Understanding simulations in terms of predictive inference?

13 0.12203208 1144 andrew gelman stats-2012-01-29-How many parameters are in a multilevel model?

14 0.12194574 1753 andrew gelman stats-2013-03-06-Stan 1.2.0 and RStan 1.2.0

15 0.11973172 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

16 0.11932446 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

17 0.11553009 2258 andrew gelman stats-2014-03-21-Random matrices in the news

18 0.11527436 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

19 0.11292474 1466 andrew gelman stats-2012-08-22-The scaled inverse Wishart prior distribution for a covariance matrix in a hierarchical model

20 0.09789449 779 andrew gelman stats-2011-06-25-Avoiding boundary estimates using a prior distribution as regularization


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.116), (1, 0.095), (2, 0.011), (3, 0.045), (4, 0.05), (5, 0.044), (6, 0.062), (7, -0.082), (8, -0.062), (9, -0.045), (10, -0.021), (11, -0.021), (12, -0.056), (13, -0.007), (14, 0.023), (15, -0.063), (16, -0.005), (17, 0.026), (18, 0.012), (19, -0.026), (20, -0.009), (21, -0.009), (22, 0.008), (23, 0.012), (24, 0.05), (25, 0.035), (26, -0.047), (27, 0.073), (28, 0.062), (29, 0.01), (30, -0.021), (31, 0.004), (32, 0.001), (33, 0.003), (34, -0.01), (35, -0.06), (36, -0.033), (37, -0.025), (38, 0.0), (39, 0.002), (40, -0.051), (41, 0.019), (42, -0.017), (43, 0.01), (44, 0.037), (45, -0.097), (46, -0.038), (47, 0.016), (48, 0.096), (49, -0.082)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96735907 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories


2 0.82258052 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients

Introduction: This post is an (unpaid) advertisement for the following extremely useful resource: Petersen, K. B. and M. S. Pedersen. 2008. The Matrix Cookbook. Technical Report, Technical University of Denmark. It contains 70+ pages of useful relations and derivations involving matrices. What grabbed my eye was the computation of gradients for matrix operations ranging from eigenvalues and determinants to multivariate normal density functions. I had no idea the multivariate normal had such a clean gradient (see section 8). We’ve been playing around with Hamiltonian (aka Hybrid) Monte Carlo for sampling from the posterior of hierarchical generalized linear models with lots of interactions. HMC speeds up Metropolis sampling by using the gradient of the log probability to drive samples in the direction of higher probability density, which is particularly useful for correlated parameters that mix slowly with standard Gibbs sampling. Matt “III” Hoffman’s already got it workin

3 0.76936311 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

Introduction: We need help picking out an automatic differentiation package for Hamiltonian Monte Carlo sampling from the posterior of a generalized linear model with deep interactions. Specifically, we need to compute gradients for log probability functions with thousands of parameters that involve matrix (determinants, eigenvalues, inverses), stats (distributions), and math (log gamma) functions. Any suggestions? The Application: Hybrid Monte Carlo for Posteriors. We’re getting serious about implementing posterior sampling using Hamiltonian Monte Carlo. HMC speeds up mixing by including gradient information to help guide the Metropolis proposals toward areas of high probability. In practice, the algorithm requires a handful or so of gradient calculations per sample, but there are many dimensions and the functions are hairy enough that we don’t want to compute derivatives by hand. Auto Diff: Perhaps not What you Think. It may not have been clear to readers of this blog that automatic diffe

4 0.75429344 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo

Introduction: Galin Jones, Steve Brooks, Xiao-Li Meng and I edited a handbook of Markov Chain Monte Carlo that has just been published . My chapter (with Kenny Shirley) is here , and it begins like this: Convergence of Markov chain simulations can be monitored by measuring the diffusion and mixing of multiple independently-simulated chains, but different levels of convergence are appropriate for different goals. When considering inference from stochastic simulation, we need to separate two tasks: (1) inference about parameters and functions of parameters based on broad characteristics of their distribution, and (2) more precise computation of expectations and other functions of probability distributions. For the first task, there is a natural limit to precision beyond which additional simulations add essentially nothing; for the second task, the appropriate precision must be decided from external considerations. We illustrate with an example from our current research, a hierarchical model of t

5 0.7512241 2231 andrew gelman stats-2014-03-03-Running into a Stan Reference by Accident

Introduction: We were talking about parallelizing MCMC and I came up with what I thought was a neat idea for parallelizing MCMC (sample with fractional prior, average samples on a per-draw basis). But then I realized this approach could get the right posterior mean or right posterior variance, but not both, depending on how the prior was divided (for a beta-binomial example). Then Aki told me it had already been done in a more general form in a paper of Scott et al., Bayes and Big Data , which was then used as the baseline in: Willie Neiswanger, Chong Wang, and Eric Xing. 2013. Asymptotically Exact, Embarrassingly Parallel MCMC . arXiv 1311.4780. It’s a neat paper, which Xi’an already blogged about months ago. But what really struck me was the following quote: We use Stan, an automated Hamiltonian Monte Carlo (HMC) software package, to perform sampling for both the true posterior (for groundtruth and comparison methods) and for the subposteriors on each machine. One advantage o

6 0.72405696 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

7 0.70647448 1339 andrew gelman stats-2012-05-23-Learning Differential Geometry for Hamiltonian Monte Carlo

8 0.68392676 2003 andrew gelman stats-2013-08-30-Stan Project: Continuous Relaxations for Discrete MRFs

9 0.68322724 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!

10 0.68267465 1477 andrew gelman stats-2012-08-30-Visualizing Distributions of Covariance Matrices

11 0.67371362 1753 andrew gelman stats-2013-03-06-Stan 1.2.0 and RStan 1.2.0

12 0.66070312 650 andrew gelman stats-2011-04-05-Monitor the efficiency of your Markov chain sampler using expected squared jumped distance!

13 0.64058703 2332 andrew gelman stats-2014-05-12-“The results (not shown) . . .”

14 0.63823867 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!

15 0.63153231 1710 andrew gelman stats-2013-02-06-The new Stan 1.1.1, featuring Gaussian processes!

16 0.63095212 1682 andrew gelman stats-2013-01-19-R package for Bayes factors

17 0.62479752 419 andrew gelman stats-2010-11-18-Derivative-based MCMC as a breakthrough technique for implementing Bayesian statistics

18 0.61428809 1799 andrew gelman stats-2013-04-12-Stan 1.3.0 and RStan 1.3.0 Ready for Action

19 0.5868839 2340 andrew gelman stats-2014-05-20-Thermodynamic Monte Carlo: Michael Betancourt’s new method for simulating from difficult distributions and evaluating normalizing constants

20 0.5653668 1749 andrew gelman stats-2013-03-04-Stan in L.A. this Wed 3:30pm


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.018), (16, 0.041), (21, 0.023), (24, 0.161), (32, 0.013), (57, 0.023), (73, 0.151), (76, 0.012), (82, 0.108), (86, 0.011), (95, 0.058), (99, 0.227)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.92758346 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories


2 0.91899455 917 andrew gelman stats-2011-09-20-Last post on Hipmunk

Introduction: There was some confusion on my last try, so let me explain one more time . . . The flights where Hipmunk failed (see here for background) were not obscure itineraries. One of them was a nonstop from New York to Cincinnati; another was from NY to Durham, North Carolina; and yet another was a trip to Midway in Chicago. In that last case, Hipmunk showed no nonstops at all—which will come as a surprise to the passengers on the Southwest Airlines flight I was on a couple days ago! In these cases, Hipmunk didn’t even do the courtesy of flashing a message telling me to try elsewhere. I don’t understand. How hard would it be for the program to automatically do a Kayak search and find all the flights? Hipmunk’s graphics are great, though. Lee Wilkinson reports: Check out the figure below from The Grammar of Graphics. Dan Rope invented this graphic and programmed it in Java in the late 1990′s. We shopped this graph around to Orbitz and Expedia but they weren’t interested. So I

3 0.9169088 655 andrew gelman stats-2011-04-10-“Versatile, affordable chicken has grown in popularity”

Introduction: Awhile ago I was cleaning out the closet and found some old unread magazines. Good stuff. As we’ve discussed before , lots of things are better read a few years late. Today I was reading the 18 Nov 2004 issue of the London Review of Books, which contained (among other things) the following: - A review by Jenny Diski of a biography of Stanley Milgram. Diski appears to want to debunk: Milgram was a whiz at devising sexy experiments, but barely interested in any theoretical basis for them. They all have the same instant attractiveness of style, and then an underlying emptiness. Huh? Michael Jordan couldn’t hit the curveball and he was reportedly an easy mark for golf hustlers but that doesn’t diminish his greatness on the basketball court. She also criticizes Milgram for being “no help at all” for solving international disputes. OK, fine. I haven’t solved any international disputes either. Milgram, though, . . . he conducted an imaginative experiment whose results stu

4 0.91574889 1925 andrew gelman stats-2013-07-04-“Versatile, affordable chicken has grown in popularity”

Introduction: From two years ago : Awhile ago I was cleaning out the closet and found some old unread magazines. Good stuff. As we’ve discussed before , lots of things are better read a few years late. Today I was reading the 18 Nov 2004 issue of the London Review of Books, which contained (among other things) the following: - A review by Jenny Diski of a biography of Stanley Milgram. Diski appears to want to debunk: Milgram was a whiz at devising sexy experiments, but barely interested in any theoretical basis for them. They all have the same instant attractiveness of style, and then an underlying emptiness. Huh? Michael Jordan couldn’t hit the curveball and he was reportedly an easy mark for golf hustlers but that doesn’t diminish his greatness on the basketball court. She also criticizes Milgram for being “no help at all” for solving international disputes. OK, fine. I haven’t solved any international disputes either. Milgram, though, . . . he conducted an imaginative exp

5 0.9112289 1748 andrew gelman stats-2013-03-04-PyStan!

Introduction: Stan is written in C++ and can be run from the command line and from R. We’d like for Python users to be able to run Stan as well. If anyone is interested in doing this, please let us know and we’d be happy to work with you on it. Stan, like Python, is completely free and open-source. P.S. Because Stan is open-source, it of course would also be possible for people to translate Stan into Python, or to take whatever features they like from Stan and incorporate them into a Python package. That’s fine too. But we think it would make sense in addition for users to be able to run Stan directly from Python, in the same way that it can be run from R.

6 0.88617909 7 andrew gelman stats-2010-04-27-Should Mister P be allowed-encouraged to reside in counter-factual populations?

7 0.88154507 794 andrew gelman stats-2011-07-09-The quest for the holy graph

8 0.87434208 801 andrew gelman stats-2011-07-13-On the half-Cauchy prior for a global scale parameter

9 0.87132353 2346 andrew gelman stats-2014-05-24-Buzzfeed, Porn, Kansas…That Can’t Be Good

10 0.8680976 280 andrew gelman stats-2010-09-16-Meet Hipmunk, a really cool flight-finder that doesn’t actually work

11 0.86001694 940 andrew gelman stats-2011-10-03-It depends upon what the meaning of the word “firm” is.

12 0.85483468 497 andrew gelman stats-2011-01-02-Hipmunk update

13 0.85297155 2238 andrew gelman stats-2014-03-09-Hipmunk worked

14 0.85282612 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!

15 0.85229862 335 andrew gelman stats-2010-10-11-How to think about Lou Dobbs

16 0.85209048 1963 andrew gelman stats-2013-07-31-Response by Jessica Tracy and Alec Beall to my critique of the methods in their paper, “Women Are More Likely to Wear Red or Pink at Peak Fertility”

17 0.85084647 1488 andrew gelman stats-2012-09-08-Annals of spam

18 0.84894001 178 andrew gelman stats-2010-08-03-(Partisan) visualization of health care legislation

19 0.84718704 1094 andrew gelman stats-2011-12-31-Using factor analysis or principal components analysis or measurement-error models for biological measurements in archaeology?

20 0.84708226 1661 andrew gelman stats-2013-01-08-Software is as software does