andrew_gelman_stats andrew_gelman_stats-2011 andrew_gelman_stats-2011-804 knowledge-graph by maker-knowledge-mining

804 andrew gelman stats-2011-07-15-Static sensitivity analysis


meta info for this blog

Source: html

Introduction: This is one of my favorite ideas. I used it in an application but have never formally studied it or written it up as a general method. Sensitivity analysis is when you check how inferences change when you fit several different models or when you vary inputs within a model. Sensitivity analysis is often recommended but is typically difficult to do, what with the hassle of carrying around all these different estimates. In Bayesian inference, sensitivity analysis is associated with varying the prior distribution, which irritates me: why not consider sensitivity to the likelihood, as that’s typically just as arbitrary as the prior while having a much larger effect on the inferences. So we came up with static sensitivity analysis, which is a way to assess sensitivity to assumptions while fitting only one model. The idea is that Bayesian posterior simulation gives you a range of parameter values, and from these you can learn about sensitivity directly. The published exampl
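
The post describes the method only in words, so here is a minimal sketch of the basic move, assuming posterior draws are already available from a single fitted model: scatter draws of a quantity of interest against draws of a parameter, and read the strength of that relationship as sensitivity to assumptions about the parameter. Everything below (names, numbers, the linear relationship) is invented for illustration; it is not the PERC analysis from the post.

```python
# Sketch of a single static-sensitivity panel, assuming posterior draws are
# already available from one fitted model.  The "model" below is simulated
# stand-in data, not the PERC toxicokinetic analysis described in the post.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_draws = 1000

# Hypothetical joint posterior draws: a parameter (x-axis) and a quantity of
# interest (y-axis) that depends on it.
partition_coef = rng.normal(85.0, 10.0, size=n_draws)          # parameter draws
pct_metabolized = (25.0 + 0.3 * (partition_coef - 85.0)
                   + rng.normal(0.0, 3.0, size=n_draws))        # quantity of interest

# One panel: quantity of interest vs. parameter, one dot per posterior draw.
plt.scatter(partition_coef, pct_metabolized, s=5, alpha=0.4)
plt.xlabel("parameter (posterior draws)")
plt.ylabel("quantity of interest (posterior draws)")
plt.title("static sensitivity panel (simulated)")

# A strong trend means the inference would shift if the prior on this
# parameter were shifted; a flat cloud means it would not.
print("posterior correlation:", np.corrcoef(partition_coef, pct_metabolized)[0, 1])
plt.show()
```

Repeating such a panel for each parameter of interest gives the grid of plots described in the summary below, all obtained from a single model fit.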


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I used it in an application but have never formally studied it or written it up as a general method. [sent-2, score-0.216]

2 Sensitivity analysis is when you check how inferences change when you fit several different models or when you vary inputs within a model. [sent-3, score-0.42]

3 Sensitivity analysis is often recommended but is typically difficult to do, what with the hassle of carrying around all these different estimates. [sent-4, score-0.306]

4 In Bayesian inference, sensitivity analysis is associated with varying the prior distribution, which irritates me: why not consider sensitivity to the likelihood, as that’s typically just as arbitrary as the prior while having a much larger effect on the inferences. [sent-5, score-1.363]

5 So we came up with static sensitivity analysis, which is a way to assess sensitivity to assumptions while fitting only one model. [sent-6, score-1.185]

6 The idea is that Bayesian posterior simulation gives you a range of parameter values, and from these you can learn about sensitivity directly. [sent-7, score-0.645]

7 One of the products of the analysis was estimation of the percent of PERC metabolized at high and low doses. [sent-9, score-1.082]

8 We fit a multilevel model to data from six experimental subjects, so we obtained inference for the percent metabolized at each dose for each person and the distribution of these percents over the general population. [sent-10, score-1.347]

9 Here’s the static sensitivity analysis: Each plot shows inferences for two quantities of interest–percent metabolized at each of the two doses–with the dots representing different draws from the fitted posterior distribution. [sent-11, score-1.367]

10 (The percent metabolized is lower at high doses (an effect of saturation of the metabolic process in the liver), so in this case it’s possible to “cheat” and display two summaries on each plot.) [sent-12, score-1.212]

11 The four graphs show percent metabolized as a function of four different variables in the model. [sent-13, score-0.993]

12 (It would be possible to look at the other five subjects, but the set of graphs here gives the general idea.) [sent-15, score-0.235]

13 To understand the static sensitivity analysis, consider the upper-left graph. [sent-16, score-0.611]

14-15 The simulations reveal some posterior uncertainty about the percent metabolized (it is estimated to be between about 10-40% at low dose and 0.5-2% at high dose) and also on the fat-blood partition coefficient displayed on the x-axis (it is estimated to be somewhere between 65 and 110). [sent-17, score-1.308] [sent-18, score-0.495]

16 More to the point, the fat-blood partition coefficient influences the inference for metabolism at low dose but not at high dose. [sent-19, score-1.042]

17 This result can be directly interpreted as sensitivity to the prior distribution for this parameter: if you shift the prior to the left or right, you will shift the inferences up or down for percent metabolized at low dose, but not at high dose. [sent-20, score-1.91]

18 The scaling coefficient strongly influences the percent metabolized at high dose but has essentially no effect on the low-dose rate. [sent-22, score-1.558]

19 Then you’ll want to get good information about the fat-blood partition coefficient (if possible) but it’s not so important to get more precise on the scaling coefficient. [sent-24, score-0.379]

20 It’s a fun problem: it has applied importance but also links to a huge theoretical literature on sensitivity analysis. [sent-27, score-0.468]
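
The bracketed [sent, score] annotations above come from a tfidf model, but the exact scoring rule is not shown here. The following is only a plausible reconstruction, assuming a sentence is scored by the total tfidf weight of its terms; the sentences and the scikit-learn pipeline are placeholders, not the actual mining code.

```python
# Sketch of tfidf sentence scoring, assuming a sentence's score is the sum of
# the tfidf weights of its terms.  This is an assumed scoring rule, not the
# one used to produce the [sent, score] pairs above.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Sensitivity analysis is often recommended but is typically difficult to do.",
    "Static sensitivity analysis assesses sensitivity to assumptions while fitting only one model.",
    "This is one of my favorite ideas.",
]

vectorizer = TfidfVectorizer(stop_words="english")
weights = vectorizer.fit_transform(sentences)            # rows = sentences, cols = terms

scores = np.asarray(weights.sum(axis=1)).ravel()         # one aggregate score per sentence
for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent}")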


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('metabolized', 0.541), ('sensitivity', 0.468), ('dose', 0.312), ('percent', 0.225), ('partition', 0.156), ('static', 0.143), ('perc', 0.135), ('coefficient', 0.132), ('doses', 0.111), ('high', 0.11), ('analysis', 0.106), ('low', 0.1), ('influences', 0.092), ('scaling', 0.091), ('inferences', 0.09), ('prior', 0.084), ('inference', 0.078), ('posterior', 0.076), ('shift', 0.072), ('graphs', 0.068), ('studied', 0.067), ('subjects', 0.067), ('vary', 0.066), ('six', 0.064), ('distribution', 0.064), ('general', 0.063), ('metabolism', 0.062), ('metabolic', 0.058), ('toxicokinetics', 0.058), ('possible', 0.056), ('liver', 0.056), ('saturation', 0.056), ('effect', 0.055), ('four', 0.055), ('estimated', 0.054), ('frederic', 0.054), ('parameter', 0.053), ('hassle', 0.052), ('carrying', 0.052), ('irritates', 0.051), ('bois', 0.05), ('different', 0.049), ('gives', 0.048), ('typically', 0.047), ('cheat', 0.046), ('written', 0.045), ('primarily', 0.044), ('displayed', 0.043), ('inputs', 0.043), ('formally', 0.041)]
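
The (word, weight) pairs above are the post’s tfidf vector. The similar-blogs list that follows is presumably built by comparing such vectors across posts, most likely with cosine similarity (which would explain why the post matches itself with similarity close to 1). A sketch under that assumption, with a toy corpus standing in for the real blog archive:

```python
# Sketch of tfidf-based document similarity, assuming cosine similarity between
# tfidf vectors.  The three "documents" are placeholders for the blog archive.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "static sensitivity analysis percent metabolized dose prior posterior",
    "philosophy of bayesian statistics priors likelihoods inference",
    "baseball stats foul balls minor league",
]

tfidf = TfidfVectorizer().fit_transform(docs)

# Similarity of the first document to every document (including itself).
sims = cosine_similarity(tfidf[0], tfidf).ravel()
for idx in np.argsort(-sims):
    print(f"{sims[idx]:.8f}  doc {idx}")   # the query document scores ~1.0 against itself
```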

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999976 804 andrew gelman stats-2011-07-15-Static sensitivity analysis


2 0.13643049 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

Introduction: The journal Rationality, Markets and Morals has finally posted all the articles in their special issue on the philosophy of Bayesian statistics. My contribution is called Induction and Deduction in Bayesian Data Analysis. I’ll also post my reactions to the other articles. I wrote these notes a few weeks ago and could post them all at once, but I think it will be easier if I post my reactions to each article separately. To start with my best material, here’s my reaction to David Cox and Deborah Mayo, “A Statistical Scientist Meets a Philosopher of Science.” I recommend you read all the way through my long note below; there’s good stuff throughout: 1. Cox: “[Philosophy] forces us to say what it is that we really want to know when we analyze a situation statistically.” This reminds me of a standard question that Don Rubin (who, unlike me, has little use for philosophy in his research) asks in virtually any situation: “What would you do if you had all the data?” For

3 0.12618972 1941 andrew gelman stats-2013-07-16-Priors

Introduction: Nick Firoozye writes: While I am absolutely sympathetic to the Bayesian agenda I am often troubled by the requirement of having priors. We must have priors on the parameter of an infinite number of model we have never seen before and I find this troubling. There is a similarly troubling problem in economics of utility theory. Utility is on consumables. To be complete a consumer must assign utility to all sorts of things they never would have encountered. More recent versions of utility theory instead make consumption goods a portfolio of attributes. Cadillacs are x many units of luxury y of transport etc etc. And we can automatically have personal utilities to all these attributes. I don’t ever see parameters. Some model have few and some have hundreds. Instead, I see data. So I don’t know how to have an opinion on parameters themselves. Rather I think it far more natural to have opinions on the behavior of models. The prior predictive density is a good and sensible notion. Also

4 0.12348846 342 andrew gelman stats-2010-10-14-Trying to be precise about vagueness

Introduction: I recently saw this article that Stephen Senn wrote a couple of years ago, criticizing Bayesian sensitivity analyses that relied on vague prior distributions. I’m moving more and more toward the idea that Bayesian analysis should include actual prior information, so I generally agree with his points. As I used to say when teaching Bayesian data analysis, a Bayesian model is modular, and different pieces can be swapped in and out as needed. So you might start with an extremely weak prior distribution, but if it makes a difference it’s time to bite the bullet and include more information. My only disagreement with Senn’s paper is in its recommendation to try the so-called fixed-effects analysis. Beyond the difficulties with terminology (the expressions “fixed” and “random” effects are defined in different ways by different people in the literature; see here for a rant on the topic which made its way into some of my articles and books), there is the problem that, when a model ge

5 0.11059441 1155 andrew gelman stats-2012-02-05-What is a prior distribution?

Introduction: Some recent blog discussion revealed some confusion that I’ll try to resolve here. I wrote that I’m not a big fan of subjective priors. Various commenters had difficulty with this point, and I think the issue was most clearly stated by Bill Jefferys, who wrote: It seems to me that your prior has to reflect your subjective information before you look at the data. How can it not? But this does not mean that the (subjective) prior that you choose is irrefutable; Surely a prior that reflects prior information just does not have to be inconsistent with that information. But that still leaves a range of priors that are consistent with it, the sort of priors that one would use in a sensitivity analysis, for example. I think I see what Bill is getting at. A prior represents your subjective belief, or some approximation to your subjective belief, even if it’s not perfect. That sounds reasonable but I don’t think it works. Or, at least, it often doesn’t work. Let’s start

6 0.10281181 529 andrew gelman stats-2011-01-21-“City Opens Inquiry on Grading Practices at a Top-Scoring Bronx School”

7 0.10180142 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

8 0.10147231 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

9 0.099634022 726 andrew gelman stats-2011-05-22-Handling multiple versions of an outcome variable

10 0.094981432 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

11 0.091836326 974 andrew gelman stats-2011-10-26-NYC jobs in applied statistics, psychometrics, and causal inference!

12 0.085458428 673 andrew gelman stats-2011-04-20-Upper-income people still don’t realize they’re upper-income

13 0.084918343 846 andrew gelman stats-2011-08-09-Default priors update?

14 0.083043315 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

15 0.080414027 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

16 0.080309413 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability

17 0.079552405 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

18 0.079299964 923 andrew gelman stats-2011-09-24-What is the normal range of values in a medical test?

19 0.07836476 775 andrew gelman stats-2011-06-21-Fundamental difficulty of inference for a ratio when the denominator could be positive or negative

20 0.076948278 2201 andrew gelman stats-2014-02-06-Bootstrap averaging: Examples where it works and where it doesn’t work


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.137), (1, 0.099), (2, 0.026), (3, 0.0), (4, -0.0), (5, -0.051), (6, 0.022), (7, 0.031), (8, -0.04), (9, 0.012), (10, -0.021), (11, 0.01), (12, 0.009), (13, 0.005), (14, 0.017), (15, 0.032), (16, 0.016), (17, -0.001), (18, 0.01), (19, 0.023), (20, -0.006), (21, 0.011), (22, 0.034), (23, -0.008), (24, 0.023), (25, 0.026), (26, -0.044), (27, -0.006), (28, -0.009), (29, 0.019), (30, 0.008), (31, 0.014), (32, -0.007), (33, 0.025), (34, -0.014), (35, 0.021), (36, 0.027), (37, -0.008), (38, 0.01), (39, 0.026), (40, 0.016), (41, -0.002), (42, -0.036), (43, -0.011), (44, 0.003), (45, -0.043), (46, -0.025), (47, 0.005), (48, -0.007), (49, 0.012)]
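
The 50 (topicId, topicWeight) pairs above are the post’s coordinates in an LSI (latent semantic indexing) space. LSI amounts to a truncated SVD of the tfidf matrix, so a sketch using scikit-learn’s TruncatedSVD on a toy corpus follows; the real pipeline, corpus, and topic count are unknown here and are assumptions.

```python
# Sketch of LSI-style topic weights and similarity, assuming LSI is implemented
# as a truncated SVD of the tfidf matrix.  Corpus and n_components are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "static sensitivity analysis percent metabolized dose prior posterior",
    "philosophy of bayesian statistics priors likelihoods inference",
    "baseball stats foul balls minor league",
]

tfidf = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0)       # "topics" = SVD components
topic_weights = lsi.fit_transform(tfidf)                 # rows = docs, cols = topicWeight

print("topic weights for the query post:", topic_weights[0])
sims = cosine_similarity(topic_weights[[0]], topic_weights).ravel()
print("similarities to the query post:", sims.round(4))
```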

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97421324 804 andrew gelman stats-2011-07-15-Static sensitivity analysis


2 0.75503045 1941 andrew gelman stats-2013-07-16-Priors

Introduction: Nick Firoozye writes: While I am absolutely sympathetic to the Bayesian agenda I am often troubled by the requirement of having priors. We must have priors on the parameter of an infinite number of model we have never seen before and I find this troubling. There is a similarly troubling problem in economics of utility theory. Utility is on consumables. To be complete a consumer must assign utility to all sorts of things they never would have encountered. More recent versions of utility theory instead make consumption goods a portfolio of attributes. Cadillacs are x many units of luxury y of transport etc etc. And we can automatically have personal utilities to all these attributes. I don’t ever see parameters. Some model have few and some have hundreds. Instead, I see data. So I don’t know how to have an opinion on parameters themselves. Rather I think it far more natural to have opinions on the behavior of models. The prior predictive density is a good and sensible notion. Also

3 0.73933071 2180 andrew gelman stats-2014-01-21-Everything I need to know about Bayesian statistics, I learned in eight schools.

Introduction: This post is by Phil. I’m aware that there are some people who use a Bayesian approach largely because it allows them to provide a highly informative prior distribution based on subjective judgment, but that is not the appeal of Bayesian methods for a lot of us practitioners. It’s disappointing and surprising, twenty years after my initial experiences, to still hear highly informed professional statisticians who think that what distinguishes Bayesian statistics from Frequentist statistics is “subjectivity” (as seen in a recent blog post and its comments). My first encounter with Bayesian statistics was just over 20 years ago. I was a postdoc at Lawrence Berkeley National Laboratory, with a new PhD in theoretical atomic physics but working on various problems related to the geographical and statistical distribution of indoor radon (a naturally occurring radioactive gas that can be dangerous if present at high concentrations). One of the issues I ran into right at the start was th

4 0.73431116 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things

Introduction: Dan Lakeland writes: I have some questions about some basic statistical ideas and would like your opinion on them: 1) Parameters that manifestly DON’T exist: It makes good sense to me to think about Bayesian statistics as narrowing in on the value of parameters based on a model and some data. But there are cases where “the parameter” simply doesn’t make sense as an actual thing. Yet, it’s not really a complete fiction, like unicorns either, it’s some kind of “effective” thing maybe. Here’s an example of what I mean. I did a simple toy experiment where we dropped crumpled up balls of paper and timed their fall times. (see here: http://models.street-artists.org/?s=falling+ball ) It was pretty instructive actually, and I did it to figure out how to in a practical way use an ODE to get a likelihood in MCMC procedures. One of the parameters in the model is the radius of the spherical ball of paper. But the ball of paper isn’t a sphere, not even approximately. There’s no single valu

5 0.71364814 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

Introduction: Following up on Christian’s post [link fixed] on the topic, I’d like to offer a few thoughts of my own. In BDA, we express the idea that a noninformative prior is a placeholder: you can use the noninformative prior to get the analysis started, then if your posterior distribution is less informative than you would like, or if it does not make sense, you can go back and add prior information. Same thing for the data model (the “likelihood”), for that matter: it often makes sense to start with something simple and conventional and then go from there. So, in that sense, noninformative priors are no big deal, they’re just a way to get started. Just don’t take them too seriously. Traditionally in statistics we’ve worked with the paradigm of a single highly informative dataset with only weak external information. But if the data are sparse and prior information is strong, we have to think differently. And, when you increase the dimensionality of a problem, both these things hap

6 0.70837379 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

7 0.70491964 2027 andrew gelman stats-2013-09-17-Christian Robert on the Jeffreys-Lindley paradox; more generally, it’s good news when philosophical arguments can be transformed into technical modeling issues

8 0.70434695 669 andrew gelman stats-2011-04-19-The mysterious Gamma (1.4, 0.4)

9 0.70405263 1989 andrew gelman stats-2013-08-20-Correcting for multiple comparisons in a Bayesian regression model

10 0.70339566 2129 andrew gelman stats-2013-12-10-Cross-validation and Bayesian estimation of tuning parameters

11 0.70175409 1409 andrew gelman stats-2012-07-08-Is linear regression unethical in that it gives more weight to cases that are far from the average?

12 0.69799238 2208 andrew gelman stats-2014-02-12-How to think about “identifiability” in Bayesian inference?

13 0.6954028 63 andrew gelman stats-2010-06-02-The problem of overestimation of group-level variance parameters

14 0.69328535 2099 andrew gelman stats-2013-11-13-“What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science?”

15 0.69240803 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

16 0.68993825 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

17 0.68923491 810 andrew gelman stats-2011-07-20-Adding more information can make the variance go up (depending on your model)

18 0.68886393 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

19 0.68852305 1723 andrew gelman stats-2013-02-15-Wacky priors can work well?

20 0.67481929 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.032), (16, 0.073), (24, 0.158), (27, 0.169), (55, 0.011), (63, 0.027), (72, 0.041), (73, 0.018), (86, 0.015), (99, 0.298)]
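
The pairs above are the post’s LDA topic mixture: weights over a handful of latent topics that sum to roughly 1. A sketch of how such a mixture and the similarity list below could be computed, assuming scikit-learn’s LatentDirichletAllocation on word counts and cosine similarity between topic mixtures (both assumptions; the actual pipeline isn’t specified):

```python
# Sketch of LDA topic mixtures and similarity, assuming LatentDirichletAllocation
# on raw word counts and cosine similarity between the resulting topic mixtures.
# Documents and topic count are placeholders for the real blog corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "static sensitivity analysis percent metabolized dose prior posterior",
    "philosophy of bayesian statistics priors likelihoods inference",
    "baseball stats foul balls minor league",
]

counts = CountVectorizer().fit_transform(docs)           # LDA expects counts, not tfidf
lda = LatentDirichletAllocation(n_components=3, random_state=0)
topic_mix = lda.fit_transform(counts)                    # each row: weight on each topic, summing to ~1

print("topic mixture for the query post:", topic_mix[0].round(3))
sims = cosine_similarity(topic_mix[[0]], topic_mix).ravel()
print("similarities to the query post:", sims.round(4))
```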

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97751105 134 andrew gelman stats-2010-07-08-“What do you think about curved lines connecting discrete data-points?”

Introduction: John Keltz writes: What do you think about curved lines connecting discrete data-points? (For example, here .) The problem with the smoothed graph is it seems to imply that something is going on in between the discrete data points, which is false. However, the straight-line version isn’t representing actual events either- it is just helping the eye connect each point. So maybe the curved version is also just helping the eye connect each point, and looks better doing it. In my own work (value-added modeling of achievement test scores) I use straight lines, but I guess I am not too bothered when people use smoothing. I’d appreciate your input. Regular readers will be unsurprised that, yes, I have an opinion on this one, and that this opinion is connected to some more general ideas about statistical graphics. In general I’m not a fan of the curved lines. They’re ok, but I don’t really see the point. I can connect the dots just fine without the curves. The more general id

2 0.97114509 930 andrew gelman stats-2011-09-28-Wiley Wegman chutzpah update: Now you too can buy a selection of garbled Wikipedia articles, for a mere $1400-$2800 per year!

Introduction: Someone passed on to me a message from his university library announcing that the journal “Wiley Interdisciplinary Reviews: Computational Statistics” is no longer free. Librarians have to decide what to do, so I thought I’d offer the following consumer guide (Wiley Computational Statistics journal vs. Wikipedia): Frequency: 6 issues per year vs. continuously updated. Includes articles from Wikipedia? Yes vs. Yes. Cites the Wikipedia sources it uses? No vs. Yes. Edited by recipient of ASA Founders Award? Yes vs. No. Articles are subject to rigorous review? No vs. Yes. Errors, when discovered, get fixed? No vs. Yes. Number of vertices in n-dimensional hypercube? 2n vs. 2^n. Easy access to Brady Bunch trivia? No vs. Yes. Cost (North America): $1400-$2800 vs. $0. Cost (UK): £986-£1972 vs. £0. Cost (Europe): €1213-€2426 vs. €0. The choice seems pretty clear to me! It’s funny for the Wiley journal to start charging now

3 0.97089779 802 andrew gelman stats-2011-07-13-Super Sam Fuld Needs Your Help (with Foul Ball stats)

Introduction: I was pleasantly surprised to have my recreational reading about baseball in the New Yorker interrupted by a digression on statistics. Sam Fuld of the Tampa Bay Rays was the subject of a Ben McGrath profile in the 4 July 2011 issue of the New Yorker, in an article titled Super Sam. After quoting a minor-league trainer who described Fuld as “a bit of a geek” (who isn’t these days?), McGrath gets into that lovely New Yorker detail: One could have pointed out the more persuasive and telling examples, such as the fact that in 2005, after his first pro season, with the Class-A Peoria Chiefs, Fuld applied for a fall internship with Stats, Inc., the research firm that supplies broadcasters with much of the data and analysis that you hear in sports telecasts. After a description of what they had him doing, reviewing footage of games and cataloguing, he said “I thought, They have a stat for everything, but they don’t have any stats regarding foul balls.” Fuld’s

4 0.9621377 1472 andrew gelman stats-2012-08-28-Migrating from dot to underscore

Introduction: My C-oriented Stan collaborators have convinced me to use underscore (_) rather than dot (.) as much as possible in expressions in R. For example, I can name a variable n_years rather than n.years. This is fine. But I’m getting annoyed because I need to press the shift key every time I type the underscore. What do people do about this? I know that it’s easy enough to reassign keys (I could, for example, assign underscore to backslash, which I never use). I’m just wondering what C programmers actually do. Do they reassign the key or do they just get used to pressing Shift? P.S. In comments, Ben Hyde points to Google’s R style guide, which recommends that variable names use dots, not underscore or camel case, for variable names (for example, “avg.clicks” rather than “avg_Clicks” or “avgClicks”). I think they’re recommending this to be consistent with R coding conventions . I am switching to underscores in R variable names to be consistent with C. Otherwise we were run

5 0.96182287 465 andrew gelman stats-2010-12-13-$3M health care prediction challenge

Introduction: I received the following press release from the Heritage Provider Network, “the largest limited Knox-Keene licensed managed care organization in California.” I have no idea what this means, but I assume it’s some sort of HMO. In any case, this looks like it could be interesting: Participants in the Health Prize challenge will be given a data set comprised of the de-identified medical records of 100,000 individuals who are members of HPN. The teams will then need to predict the hospitalization of a set percentage of those members who went to the hospital during the year following the start date, and do so with a defined accuracy rate. The winners will receive the $3 million prize. . . . the contest is designed to spur involvement by others involved in analytics, such as those involved in data mining and predictive modeling who may not currently be working in health care. “We believe that doing so will bring innovative thinking to health analytics and may allow us to solve at

6 0.95873857 708 andrew gelman stats-2011-05-12-Improvement of 5 MPG: how many more auto deaths?

7 0.95751715 343 andrew gelman stats-2010-10-15-?

8 0.95475125 1238 andrew gelman stats-2012-03-31-Dispute about ethics of data sharing

same-blog 9 0.95423269 804 andrew gelman stats-2011-07-15-Static sensitivity analysis

10 0.95270389 347 andrew gelman stats-2010-10-17-Getting arm and lme4 running on the Mac

11 0.94848293 173 andrew gelman stats-2010-07-31-Editing and clutch hitting

12 0.94537151 652 andrew gelman stats-2011-04-07-Minor-league Stats Predict Major-league Performance, Sarah Palin, and Some Differences Between Baseball and Politics

13 0.94001424 66 andrew gelman stats-2010-06-03-How can news reporters avoid making mistakes when reporting on technical issues? Or, Data used to justify “Data Used to Justify Health Savings Can Be Shaky” can be shaky

14 0.93310583 1982 andrew gelman stats-2013-08-15-Blaming scientific fraud on the Kuhnians

15 0.93145442 341 andrew gelman stats-2010-10-14-Confusion about continuous probability densities

16 0.9241724 1113 andrew gelman stats-2012-01-11-Toshiro Kageyama on professionalism

17 0.92370927 120 andrew gelman stats-2010-06-30-You can’t put Pandora back in the box

18 0.92133629 2132 andrew gelman stats-2013-12-13-And now, here’s something that would make Ed Tufte spin in his . . . ummm, Tufte’s still around, actually, so let’s just say I don’t think he’d like it!

19 0.91943896 2177 andrew gelman stats-2014-01-19-“The British amateur who debunked the mathematics of happiness”

20 0.91109997 1988 andrew gelman stats-2013-08-19-BDA3 still (I hope) at 40% off! (and a link to one of my favorite papers)