andrew_gelman_stats andrew_gelman_stats-2011 andrew_gelman_stats-2011-555 knowledge-graph by maker-knowledge-mining

555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients


meta info for this blog

Source: html

Introduction: This post is an (unpaid) advertisement for the following extremely useful resource: Petersen, K. B. and M. S. Pedersen. 2008. The Matrix Cookbook. Technical Report, Technical University of Denmark. It contains 70+ pages of useful relations and derivations involving matrices. What grabbed my eye was the computation of gradients for matrix operations ranging from eigenvalues and determinants to multivariate normal density functions. I had no idea the multivariate normal had such a clean gradient (see section 8). We’ve been playing around with Hamiltonian (aka Hybrid) Monte Carlo for sampling from the posterior of hierarchical generalized linear models with lots of interactions. HMC speeds up Metropolis sampling by using the gradient of the log probability to drive samples in the direction of higher probability density, which is particularly useful for correlated parameters that mix slowly with standard Gibbs sampling. Matt “III” Hoffman’s already got it workin
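
For reference, the “clean gradient” mentioned above is the standard pair of identities for the multivariate normal log density (the notation below is mine, ignoring the symmetry constraint on Sigma; section 8 of the Cookbook collects these and many related results):

\log p(x \mid \mu, \Sigma) = -\tfrac{1}{2}\log\lvert 2\pi\Sigma\rvert - \tfrac{1}{2}(x-\mu)^\top \Sigma^{-1} (x-\mu),

\frac{\partial}{\partial \mu} \log p(x \mid \mu, \Sigma) = \Sigma^{-1}(x-\mu),

\frac{\partial}{\partial \Sigma} \log p(x \mid \mu, \Sigma) = \tfrac{1}{2}\bigl(\Sigma^{-1}(x-\mu)(x-\mu)^\top \Sigma^{-1} - \Sigma^{-1}\bigr).

The gradient with respect to the mean is just the precision-weighted residual, and the gradient with respect to the covariance is built from the same pieces.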


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 This post is an (unpaid) advertisement for the following extremely useful resource: Petersen, K. [sent-1, score-0.213]

2 It contains 70+ pages of useful relations and derivations involving matrices. [sent-9, score-0.321]

3 What grabbed my eye was the computation of gradients for matrix operations ranging from eigenvalues and determinants to multivariate normal density functions. [sent-10, score-1.262]

4 I had no idea the multivariate normal had such a clean gradient (see section 8). [sent-11, score-0.526]

5 We’ve been playing around with Hamiltonian (aka Hybrid) Monte Carlo for sampling from the posterior of hierarchical generalized linear models with lots of interactions. [sent-12, score-0.097]

6 HMC speeds up Metropolis sampling by using the gradient of the log probability to drive samples in the direction of higher probability density, which is particularly useful for correlated parameters that mix slowly with standard Gibbs sampling. [sent-13, score-0.827]

7 To really get this going, we need to be able to handle the varying intercept/varying slope models used in the Red State/Blue State analysis of Gelman, Shor, Bafumi and Park, which is also explained by Andrew and Jennifer in their regression book. [sent-15, score-0.193]

8 The slopes and intercepts get multivariate normal priors, the covariance matrices of which require hyperpriors. [sent-16, score-0.757]

9 That means log probability functions involving operations like matrix-inverse products or eigenvalues (depending on how you model the covariance prior). [sent-17, score-0.949]

10 I’ve been following up on some earlier suggestions on this blog about automatic differentiation. [sent-18, score-0.327]

11 I’ve pretty much settled on using David Gay’s elegant little Reverse Automatic Differentiation (RAD) package, which is a very straightforward C++ template-based library for computing gradients. [sent-19, score-0.52]

12 All you need to do is replace the floating point calcs in code with templated versions. [sent-20, score-0.497]

13 This in turn means we need a templated C++ vector/matrix library with BLAS/LAPACK-like functionality. [sent-21, score-0.64]

14 An exchange on Justin Domke’s blog led me to the Eigen package, which is a beautifully templated implementation of BLAS and at least what I need from LAPACK. [sent-22, score-0.647]
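
Sentences 11-14 describe the approach: write the log probability code once, templated on the scalar type, and let a reverse-mode autodiff scalar stand in for double. Below is a minimal sketch of what that looks like for a multivariate normal log density using Eigen. It is illustrative only: the function name is mine, the autodiff type is left abstract, and a real AD scalar would need log/sqrt overloads plus Eigen NumTraits glue. It is not code from the RAD package or from this project.

#include <Eigen/Dense>
#include <cmath>

// Multivariate normal log density, written once for any scalar type T.
// With T = double this is plain floating point; substituting a reverse-mode
// autodiff scalar for T is what makes the gradient come out "for free".
template <typename T>
T multi_normal_log_density(const Eigen::Matrix<T, Eigen::Dynamic, 1>& y,
                           const Eigen::Matrix<T, Eigen::Dynamic, 1>& mu,
                           const Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic>& Sigma) {
  using std::log;  // unqualified log() so ADL can pick up an autodiff overload
  typedef Eigen::Matrix<T, Eigen::Dynamic, 1> Vec;
  typedef Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> Mat;
  const int K = static_cast<int>(y.size());
  const double LOG_TWO_PI = 1.8378770664093453;      // log(2 * pi)

  Eigen::LLT<Mat> llt(Sigma);                        // Sigma = L * L^T (Cholesky)
  Mat L = llt.matrixL();

  T log_det = T(0);                                  // log|Sigma| = 2 * sum(log(diag(L)))
  for (int k = 0; k < K; ++k)
    log_det += log(L(k, k));

  // Solve L z = y - mu, so z.squaredNorm() = (y - mu)' Sigma^{-1} (y - mu).
  Vec z = L.template triangularView<Eigen::Lower>().solve(y - mu);

  return T(-0.5) * (T(K * LOG_TWO_PI) + T(2) * log_det + z.squaredNorm());
}

Instantiated with T = double this is an ordinary density evaluation; instantiated with an AD scalar, a single evaluation records everything needed to pull out the gradient with respect to y, mu, and Sigma.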
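
To connect this back to sentence 6: the gradient is consumed by the leapfrog integrator inside HMC. A bare-bones, purely illustrative version of one leapfrog step, assuming a unit mass matrix and a user-supplied grad_log_p function (names are mine, not from any particular package):

#include <Eigen/Dense>
#include <functional>

// One leapfrog step for HMC with an identity mass matrix. grad_log_p(theta)
// is assumed to return the gradient of the log posterior at theta; this is
// where the Matrix Cookbook / autodiff gradients get used.
void leapfrog_step(Eigen::VectorXd& theta,      // position (model parameters)
                   Eigen::VectorXd& momentum,   // auxiliary momentum variable
                   double eps,                  // integrator step size
                   const std::function<Eigen::VectorXd(const Eigen::VectorXd&)>& grad_log_p) {
  momentum += 0.5 * eps * grad_log_p(theta);    // half step for the momentum
  theta    += eps * momentum;                   // full step for the position
  momentum += 0.5 * eps * grad_log_p(theta);    // half step for the momentum
}

A full HMC transition strings together a number of these steps and then applies a Metropolis accept/reject correction based on the change in the Hamiltonian, which is what preserves detailed balance.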


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('templated', 0.304), ('cookbook', 0.212), ('eigenvalues', 0.203), ('multivariate', 0.199), ('differentiation', 0.174), ('gradient', 0.171), ('operations', 0.161), ('normal', 0.156), ('automatic', 0.153), ('covariance', 0.149), ('library', 0.141), ('density', 0.135), ('matrix', 0.133), ('log', 0.132), ('package', 0.118), ('involving', 0.116), ('need', 0.114), ('led', 0.111), ('useful', 0.107), ('probability', 0.107), ('speeds', 0.106), ('advertisement', 0.106), ('eigen', 0.106), ('petersen', 0.106), ('unpaid', 0.101), ('determinants', 0.098), ('hybrid', 0.098), ('derivations', 0.098), ('sampling', 0.097), ('bafumi', 0.093), ('hoffman', 0.093), ('elegant', 0.09), ('intercepts', 0.09), ('gradients', 0.09), ('settled', 0.089), ('resource', 0.087), ('aka', 0.087), ('grabbed', 0.087), ('shor', 0.085), ('slopes', 0.085), ('doc', 0.085), ('iii', 0.084), ('straightforward', 0.082), ('means', 0.081), ('floating', 0.079), ('slope', 0.079), ('hmc', 0.079), ('metropolis', 0.078), ('matrices', 0.078), ('gibbs', 0.076)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients


2 0.33817959 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

Introduction: We need help picking out an automatic differentiation package for Hamiltonian Monte Carlo sampling from the posterior of a generalized linear model with deep interactions. Specifically, we need to compute gradients for log probability functions with thousands of parameters that involve matrix (determinants, eigenvalues, inverses), stats (distributions), and math (log gamma) functions. Any suggestions? The Application: Hybrid Monte Carlo for Posteriors We’re getting serious about implementing posterior sampling using Hamiltonian Monte Carlo. HMC speeds up mixing by including gradient information to help guide the Metropolis proposals toward areas of high probability. In practice, the algorithm requires a handful or so of gradient calculations per sample, but there are many dimensions and the functions are hairy enough we don’t want to compute derivatives by hand. Auto Diff: Perhaps not What you Think It may not have been clear to readers of this blog that automatic diffe

3 0.18553028 1477 andrew gelman stats-2012-08-30-Visualizing Distributions of Covariance Matrices

Introduction: Since we’ve been discussing prior distributions on covariance matrices, I will recommend this recent article (coauthored with Tomoki Tokuda, Ben Goodrich, Iven Van Mechelen, and Francis Tuerlinckx) on their visualization: We present some methods for graphing distributions of covariance matrices and demonstrate them on several models, including the Wishart, inverse-Wishart, and scaled inverse-Wishart families in different dimensions. Our visualizations follow the principle of decomposing a covariance matrix into scale parameters and correlations, pulling out marginal summaries where possible and using two and three-dimensional plots to reveal multivariate structure. Visualizing a distribution of covariance matrices is a step beyond visualizing a single covariance matrix or a single multivariate dataset. Our visualization methods are available through the R package VisCov.

4 0.18338671 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

Introduction: In response to our recent posting of Amazon’s offer of Bayesian Data Analysis 3rd edition at 40% off, some people asked what was in this new edition, with more information beyond the beautiful cover image and the brief paragraph I’d posted earlier. Here’s the table of contents. The following sections have all-new material: 1.4 New introduction of BDA principles using a simple spell checking example 2.9 Weakly informative prior distributions 5.7 Weakly informative priors for hierarchical variance parameters 7.1-7.4 Predictive accuracy for model evaluation and comparison 10.6 Computing environments 11.4 Split R-hat 11.5 New measure of effective number of simulation draws 13.7 Variational inference 13.8 Expectation propagation 13.9 Other approximations 14.6 Regularization for regression models C.1 Getting started with R and Stan C.2 Fitting a hierarchical model in Stan C.4 Programming Hamiltonian Monte Carlo in R And the new chapters: 20 Basis function models 2

5 0.17856513 1753 andrew gelman stats-2013-03-06-Stan 1.2.0 and RStan 1.2.0

Introduction: Stan 1.2.0 and RStan 1.2.0 are now available for download. See: http://mc-stan.org/ Here are the highlights. Full Mass Matrix Estimation during Warmup Yuanjun Gao, a first-year grad student here at Columbia (!), built a regularized mass-matrix estimator. This helps for posteriors with high correlation among parameters and varying scales. We’re still testing this ourselves, so the estimation procedure may change in the future (don’t worry — it satisfies detailed balance as is, but we might be able to make it more computationally efficient in terms of time per effective sample). It’s not the default option. The major reason is the matrix operations required are expensive, raising the algorithm cost to O(L · I · N^2), where L is the average number of leapfrog steps, I is the number of iterations, and N is the number of parameters. Yuanjun did a great job with the Cholesky factorizations and implemented this about as efficiently as is possible. (His homework for Andrew’s class w

6 0.15229549 1710 andrew gelman stats-2013-02-06-The new Stan 1.1.1, featuring Gaussian processes!

7 0.13739911 419 andrew gelman stats-2010-11-18-Derivative-based MCMC as a breakthrough technique for implementing Bayesian statistics

8 0.13513473 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!

9 0.12976527 1627 andrew gelman stats-2012-12-17-Stan and RStan 1.1.0

10 0.1296154 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories

11 0.12935165 1682 andrew gelman stats-2013-01-19-R package for Bayes factors

12 0.12632433 1772 andrew gelman stats-2013-03-20-Stan at Google this Thurs and at Berkeley this Fri noon

13 0.12572314 1799 andrew gelman stats-2013-04-12-Stan 1.3.0 and RStan 1.3.0 Ready for Action

14 0.12292041 1726 andrew gelman stats-2013-02-18-What to read to catch up on multivariate statistics?

15 0.11502188 2258 andrew gelman stats-2014-03-21-Random matrices in the news

16 0.1107408 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability

17 0.10978796 1749 andrew gelman stats-2013-03-04-Stan in L.A. this Wed 3:30pm

18 0.10353951 1739 andrew gelman stats-2013-02-26-An AI can build and try out statistical models using an open-ended generative grammar

19 0.10252427 2161 andrew gelman stats-2014-01-07-My recent debugging experience

20 0.099275626 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.14), (1, 0.112), (2, 0.002), (3, 0.064), (4, 0.063), (5, 0.043), (6, 0.036), (7, -0.103), (8, -0.074), (9, -0.025), (10, -0.018), (11, -0.033), (12, -0.026), (13, -0.011), (14, 0.041), (15, -0.02), (16, -0.018), (17, 0.051), (18, 0.006), (19, -0.023), (20, 0.028), (21, 0.009), (22, -0.009), (23, 0.028), (24, 0.068), (25, 0.03), (26, -0.031), (27, 0.123), (28, 0.071), (29, -0.008), (30, -0.02), (31, 0.034), (32, -0.006), (33, 0.016), (34, 0.017), (35, -0.078), (36, 0.002), (37, -0.002), (38, -0.041), (39, 0.016), (40, -0.065), (41, 0.0), (42, 0.035), (43, -0.037), (44, 0.014), (45, -0.016), (46, -0.077), (47, 0.032), (48, 0.063), (49, -0.165)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9621191 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients


2 0.82115203 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

Introduction: We need help picking out an automatic differentiation package for Hamiltonian Monte Carlo sampling from the posterior of a generalized linear model with deep interactions. Specifically, we need to compute gradients for log probability functions with thousands of parameters that involve matrix (determinants, eigenvalues, inverses), stats (distributions), and math (log gamma) functions. Any suggestions? The Application: Hybrid Monte Carlo for Posteriors We’re getting serious about implementing posterior sampling using Hamiltonian Monte Carlo. HMC speeds up mixing by including gradient information to help guide the Metropolis proposals toward areas of high probability. In practice, the algorithm requires a handful or so of gradient calculations per sample, but there are many dimensions and the functions are hairy enough we don’t want to compute derivatives by hand. Auto Diff: Perhaps not What you Think It may not have been clear to readers of this blog that automatic diffe

3 0.81029642 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories

Introduction: Tomas Iesmantas had asked me for advice on a regression problem with 50 parameters, and I’d recommended Hamiltonian Monte Carlo. A few weeks later he reported back: After trying several modifications (HMC for all parameters at once, HMC just for first level parameters and Riemann manifold Hamiltonian Monte Carlo method), I finally got it running with HMC just for first level parameters and for others using direct sampling, since conditional distributions turned out to have closed form. However, even in this case it is quite tricky, since I had to employ mass matrix and not just diagonal but at the beginning of algorithm generated it randomly (ensuring it is positive definite). Such random generation of mass matrix is quite blind step, but it proved to be quite helpful. Riemann manifold HMC is quite vagarious, or to be more specific, metric of manifold is very sensitive. In my model log-likelihood I had exponents and values of metrics matrix elements was very large and wh

4 0.80931228 1477 andrew gelman stats-2012-08-30-Visualizing Distributions of Covariance Matrices

Introduction: Since we’ve been discussing prior distributions on covariance matrices, I will recommend this recent article (coauthored with Tomoki Tokuda, Ben Goodrich, Iven Van Mechelen, and Francis Tuerlinckx) on their visualization: We present some methods for graphing distributions of covariance matrices and demonstrate them on several models, including the Wishart, inverse-Wishart, and scaled inverse-Wishart families in different dimensions. Our visualizations follow the principle of decomposing a covariance matrix into scale parameters and correlations, pulling out marginal summaries where possible and using two and three-dimensional plots to reveal multivariate structure. Visualizing a distribution of covariance matrices is a step beyond visualizing a single covariance matrix or a single multivariate dataset. Our visualization methods are available through the R package VisCov.

5 0.78909612 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

Introduction: In response to our recent posting of Amazon’s offer of Bayesian Data Analysis 3rd edition at 40% off, some people asked what was in this new edition, with more information beyond the beautiful cover image and the brief paragraph I’d posted earlier. Here’s the table of contents. The following sections have all-new material: 1.4 New introduction of BDA principles using a simple spell checking example 2.9 Weakly informative prior distributions 5.7 Weakly informative priors for hierarchical variance parameters 7.1-7.4 Predictive accuracy for model evaluation and comparison 10.6 Computing environments 11.4 Split R-hat 11.5 New measure of effective number of simulation draws 13.7 Variational inference 13.8 Expectation propagation 13.9 Other approximations 14.6 Regularization for regression models C.1 Getting started with R and Stan C.2 Fitting a hierarchical model in Stan C.4 Programming Hamiltonian Monte Carlo in R And the new chapters: 20 Basis function models 2

6 0.73738599 1339 andrew gelman stats-2012-05-23-Learning Differential Geometry for Hamiltonian Monte Carlo

7 0.72781003 1682 andrew gelman stats-2013-01-19-R package for Bayes factors

8 0.72713512 1710 andrew gelman stats-2013-02-06-The new Stan 1.1.1, featuring Gaussian processes!

9 0.71968967 1753 andrew gelman stats-2013-03-06-Stan 1.2.0 and RStan 1.2.0

10 0.71096307 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo

11 0.69395989 2231 andrew gelman stats-2014-03-03-Running into a Stan Reference by Accident

12 0.67780596 1799 andrew gelman stats-2013-04-12-Stan 1.3.0 and RStan 1.3.0 Ready for Action

13 0.65397388 501 andrew gelman stats-2011-01-04-A new R package for fititng multilevel models

14 0.65046549 419 andrew gelman stats-2010-11-18-Derivative-based MCMC as a breakthrough technique for implementing Bayesian statistics

15 0.64040309 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!

16 0.62337768 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!

17 0.6191743 2003 andrew gelman stats-2013-08-30-Stan Project: Continuous Relaxations for Discrete MRFs

18 0.61724311 2332 andrew gelman stats-2014-05-12-“The results (not shown) . . .”

19 0.61696577 2258 andrew gelman stats-2014-03-21-Random matrices in the news

20 0.57535619 1466 andrew gelman stats-2012-08-22-The scaled inverse Wishart prior distribution for a covariance matrix in a hierarchical model


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(5, 0.014), (15, 0.013), (16, 0.055), (21, 0.035), (24, 0.095), (51, 0.02), (57, 0.034), (59, 0.049), (60, 0.115), (61, 0.015), (82, 0.048), (85, 0.014), (86, 0.04), (89, 0.052), (95, 0.025), (99, 0.241)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93680161 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients


2 0.92364621 567 andrew gelman stats-2011-02-10-English-to-English translation

Introduction: It’s not just for Chaucer (or Mad Max) anymore. Peter Frase writes: It’s a shame that we neglect to re-translate older works into English merely because they were originally written in English. Languages change, and our reactions to words and formulations change. This is obvious when you read something like Chaucer, but it’s true to a more subtle degree of more recent writings. There is a pretty good chance that something written in the 19th century won’t mean the same thing to us that it meant to its contemporary readers. Thus it would make sense to re-translate Huckleberry Finn into modern language, in the same way we periodically get new translations of Homer or Dante or Thomas Mann. This is a point that applies equally well to non-fiction and social theory: in some ways, English-speaking sociologists are lucky that our canonical trio of classical theorists-Marx, Weber, and Durkheim-all wrote in another language. The most recent translation of Capital is eminently more readable

3 0.91790223 1448 andrew gelman stats-2012-08-07-Scientific fraud, double standards and institutions protecting themselves

Introduction: Ole Rogeberg writes: After reading your recent post, I thought you might find this interesting – especially the scanned interview that is included at the bottom of the posting. It’s an old OMNI interview with Walter Stewart that was the first thing I read (at a young and impressionable age ;) about the prevalence of errors, fraud and cheating in science, the institutional barriers to tackling it, the often high personal costs to whistleblowers, the difficulty of accessing scientific data to repeat published analyses, and the surprisingly negative attitude towards criticism within scientific communities. Highly recommended entertaining reading – with some good examples of scientific investigations into implausible effects. The post itself contains the info I once dug up about what happened to him later – he seems like an interesting and very determined guy: when the NIH tried to stop him from investigating scientific errors and fraud he went on a hunger strike. No idea what’s h

4 0.91732204 1548 andrew gelman stats-2012-10-25-Health disparities are associated with low life expectancy

Introduction: Lee Seachrest points to an article, “Life expectancy and disparity: an international comparison of life table data,” by James Vaupel, Zhen Zhang, and Alyson van Raalte. This paper has killer graphs. Here are their results: In 89 of the 170 years from 1840 to 2009, the country with the highest male life expectancy also had the lowest male life disparity. This was true in 86 years for female life expectancy and disparity. In all years, the top several life expectancy leaders were also the top life disparity leaders. Although only 38% of deaths were premature, fully 84% of the increase in life expectancy resulted from averting premature deaths. The reduction in life disparity resulted from reductions in early-life disparity, that is, disparity caused by premature deaths; late-life disparity levels remained roughly constant. The authors also note: Reducing early-life disparities helps people plan their less-uncertain lifetimes. A higher likelihood of surviving to old age

5 0.91458583 968 andrew gelman stats-2011-10-21-Could I use a statistics coach?

Introduction: In a thought-provoking article subtitled “Top athletes and singers have coaches. Should you?,” surgeon/journalist Atul Gawande describes how, even after eight years and more than two thousand operations, he benefited from coaching (from a retired surgeon), just as pro athletes and accomplished musicians do. He then talks about proposals to institute coaching for teachers to help them perform better. This all makes sense to me—except that I’m a little worried about expansion of the teacher coaching program. I can imagine it could work pretty well for teachers who are motivated to be coached—for example, I think I would get a lot out of it—but I’m afraid that if teacher coaching became a big business, it would get taken over by McKinsey-style scam artists. But could I use a coach? First, let me get rid of the easy questions. 1. Yes, I could use a squash coach. I enjoy squash and play when I can, but I’m terrible at it. I’m sure a coach would help. On the other hand, I’m h

6 0.90878206 191 andrew gelman stats-2010-08-08-Angry about the soda tax

7 0.90225941 1040 andrew gelman stats-2011-12-03-Absolutely last Niall Ferguson post ever, in which I offer him serious advice

8 0.90193188 1053 andrew gelman stats-2011-12-11-This one is so dumb it makes me want to barf

9 0.8991248 949 andrew gelman stats-2011-10-10-Grrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr

10 0.89538056 1680 andrew gelman stats-2013-01-18-“If scientists wrote horoscopes, this is what yours would say”

11 0.88737398 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves

12 0.88670421 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

13 0.88046861 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!

14 0.88024265 326 andrew gelman stats-2010-10-07-Peer pressure, selection, and educational reform

15 0.87543011 1963 andrew gelman stats-2013-07-31-Response by Jessica Tracy and Alec Beall to my critique of the methods in their paper, “Women Are More Likely to Wear Red or Pink at Peak Fertility”

16 0.87498307 231 andrew gelman stats-2010-08-24-Yet another Bayesian job opportunity

17 0.87482351 2355 andrew gelman stats-2014-05-31-Jessica Tracy and Alec Beall (authors of the fertile-women-wear-pink study) comment on our Garden of Forking Paths paper, and I comment on their comments

18 0.87473464 901 andrew gelman stats-2011-09-12-Some thoughts on academic cheating, inspired by Frey, Wegman, Fischer, Hauser, Stapel

19 0.87432188 623 andrew gelman stats-2011-03-21-Baseball’s greatest fielders

20 0.87401652 67 andrew gelman stats-2010-06-03-More on that Dartmouth health care study