andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1299 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: I saw an analysis recently that I didn’t like. I won’t go into the details, but basically it was a dose-response inference, where a continuous exposure was binned into three broad categories (terciles of the data) and the probability of an adverse event was computed for each tercile. The effect and the sample size were large enough that the terciles were statistically-significantly different from each other in probability of adverse event, with the probabilities increasing from low to mid to high exposure, as one would predict. I didn’t like this analysis because it is equivalent to fitting a step function. There is a tendency for people to interpret the (arbitrary) tercile boundaries as being meaningful thresholds even though the underlying dose-response relation has to be continuous. I’d prefer to start with a linear model and then add nonlinearity from there with a spline or whatever. At this point I stepped back and thought: Hey, the divide-into-three analysis does not literally assume a step function. It doesn’t assume anything at all; it’s just a data summary!
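The binned analysis can be sketched with simulated data (my illustration, not from the post: the exposure distribution, effect size, and the resulting cut points A and B are all made up). Binning plus per-bin event rates is exactly a step-function fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000
exposure = rng.exponential(scale=1.0, size=n)         # continuous dose
p_true = 1 / (1 + np.exp(-(-1.5 + 1.0 * exposure)))   # smooth, increasing dose-response
event = rng.random(n) < p_true

# Tercile analysis: cut at the empirical 1/3 and 2/3 quantiles (the
# "arbitrary" thresholds A and B), then compute the event rate per bin.
A, B = np.quantile(exposure, [1 / 3, 2 / 3])
tercile = np.digitize(exposure, [A, B])               # 0 = low, 1 = mid, 2 = high
rates = np.array([event[tercile == k].mean() for k in range(3)])
print(rates)                                          # increasing from low to mid to high

# The implied fit is a step function: every exposure within a tercile is
# assigned that tercile's average event rate.
step_fit = rates[tercile]
```

Whether you call `step_fit` a model or just a summary, the fitted values jump at A and B and are flat in between, which is exactly what invites the threshold interpretation.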
[…] First on the grounds of interpretation: my applied colleagues really were interpreting the three-category model in terms of thresholds. The three categories were: “0 to A”, “A to B”, and “B to infinity”. And somebody really was saying something about the effect of exposure A or exposure B. […] You can say that the categorical-input model is nothing but a summary, an estimate of averages—but by binning like this, you lose statistical efficiency. And you become the slave to “statistical significance”; there’s the temptation to butcher your analysis and throw away tons of information, just so you can get a single clean, statistically significant result. […] The more categories you have, the less of a concern it is to discretize. And sometimes your data come in discrete form (see here, for example).
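The efficiency loss from binning can be seen in a small simulation (again my own sketch with made-up numbers: a linear dose-response, OLS for the continuous analysis, and a rescaled top-vs-bottom-tercile contrast for the binned one). Both estimators target the same slope, but the binned one discards the within-bin variation:

```python
import numpy as np

rng = np.random.default_rng(1)

def one_draw(n=300, slope=0.5):
    x = rng.normal(size=n)
    y = slope * x + rng.normal(size=n)
    # Continuous analysis: OLS slope on the raw exposure.
    beta = np.cov(x, y, bias=True)[0, 1] / x.var()
    # Binned analysis: difference in mean outcome, top vs bottom tercile,
    # rescaled by the difference in mean exposure so that both estimators
    # target the same slope.
    a, b = np.quantile(x, [1 / 3, 2 / 3])
    lo, hi = x <= a, x >= b
    beta_binned = (y[hi].mean() - y[lo].mean()) / (x[hi].mean() - x[lo].mean())
    return beta, beta_binned

draws = np.array([one_draw() for _ in range(2000)])
sd_cont, sd_binned = draws.std(axis=0)
print(sd_cont, sd_binned)  # the binned estimator has the larger standard error
```

Both estimators are centered on the true slope; the binned one is simply noisier, which is the temptation the post warns about: needing a bigger, cleaner contrast to reach significance after throwing information away.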
simIndex simValue blogId blogTitle
same-blog 1 1.0 1299 andrew gelman stats-2012-05-04-Models, assumptions, and data summaries
2 0.11911795 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?
3 0.113289 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes
4 0.11164895 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis
6 0.097236216 451 andrew gelman stats-2010-12-05-What do practitioners need to know about regression?
8 0.093244717 1315 andrew gelman stats-2012-05-12-Question 2 of my final exam for Design and Analysis of Sample Surveys
10 0.090710968 1506 andrew gelman stats-2012-09-21-Building a regression model . . . with only 27 data points
11 0.088893764 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes
12 0.088799089 782 andrew gelman stats-2011-06-29-Putting together multinomial discrete regressions by combining simple logits
14 0.086835943 899 andrew gelman stats-2011-09-10-The statistical significance filter
15 0.085126616 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability
16 0.083406538 797 andrew gelman stats-2011-07-11-How do we evaluate a new and wacky claim?
17 0.082959257 1605 andrew gelman stats-2012-12-04-Write This Book
18 0.082257465 1735 andrew gelman stats-2013-02-24-F-f-f-fake data
19 0.081583194 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?
20 0.08083684 1228 andrew gelman stats-2012-03-25-Continuous variables in Bayesian networks
simIndex simValue blogId blogTitle
same-blog 1 0.96971899 1299 andrew gelman stats-2012-05-04-Models, assumptions, and data summaries
2 0.83981234 1195 andrew gelman stats-2012-03-04-Multiple comparisons dispute in the tabloids
4 0.79246247 706 andrew gelman stats-2011-05-11-The happiness gene: My bottom line (for now)
5 0.78703123 2342 andrew gelman stats-2014-05-21-Models with constraints
6 0.7863096 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?
7 0.78590542 783 andrew gelman stats-2011-06-30-Don’t stop being a statistician once the analysis is done
8 0.78583515 1070 andrew gelman stats-2011-12-19-The scope for snooping
12 0.77942723 1971 andrew gelman stats-2013-08-07-I doubt they cheated
13 0.76093096 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things
14 0.75850606 2364 andrew gelman stats-2014-06-08-Regression and causality and variable ordering
15 0.75804436 1575 andrew gelman stats-2012-11-12-Thinking like a statistician (continuously) rather than like a civilian (discretely)
16 0.75756729 757 andrew gelman stats-2011-06-10-Controversy over the Christakis-Fowler findings on the contagion of obesity
17 0.75722772 898 andrew gelman stats-2011-09-10-Fourteen magic words: an update
18 0.75525659 2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?
19 0.75384241 1171 andrew gelman stats-2012-02-16-“False-positive psychology”
20 0.74994093 791 andrew gelman stats-2011-07-08-Censoring on one end, “outliers” on the other, what can we do with the middle?
simIndex simValue blogId blogTitle
1 0.95623875 1617 andrew gelman stats-2012-12-11-Math Talks :: Action Movies
2 0.9554863 1463 andrew gelman stats-2012-08-19-It is difficult to convey intonation in typed speech
3 0.94462293 1896 andrew gelman stats-2013-06-13-Against the myth of the heroic visualization
4 0.92761713 1131 andrew gelman stats-2012-01-20-Stan: A (Bayesian) Directed Graphical Model Compiler
same-blog 5 0.91466141 1299 andrew gelman stats-2012-05-04-Models, assumptions, and data summaries
6 0.90347576 1107 andrew gelman stats-2012-01-08-More on essentialism
7 0.8861469 168 andrew gelman stats-2010-07-28-Colorless green, and clueless
8 0.87924308 2019 andrew gelman stats-2013-09-12-Recently in the sister blog
9 0.87804163 269 andrew gelman stats-2010-09-10-R vs. Stata, or, Different ways to estimate multilevel models
10 0.87528592 688 andrew gelman stats-2011-04-30-Why it’s so relaxing to think about social issues
11 0.86428988 333 andrew gelman stats-2010-10-10-Psychiatric drugs and the reduction in crime
12 0.8635906 620 andrew gelman stats-2011-03-19-Online James?
13 0.86171538 997 andrew gelman stats-2011-11-07-My contribution to the discussion on “Should voting be mandatory?”
14 0.85265303 874 andrew gelman stats-2011-08-27-What’s “the definition of a professional career”?
15 0.84598637 50 andrew gelman stats-2010-05-25-Looking for Sister Right
16 0.8350203 1406 andrew gelman stats-2012-07-05-Xiao-Li Meng and Xianchao Xie rethink asymptotics
17 0.80942112 13 andrew gelman stats-2010-04-30-Things I learned from the Mickey Kaus for Senate campaign
18 0.80919892 706 andrew gelman stats-2011-05-11-The happiness gene: My bottom line (for now)
19 0.80822718 1520 andrew gelman stats-2012-10-03-Advice that’s so eminently sensible but so difficult to follow
20 0.80258477 201 andrew gelman stats-2010-08-12-Are all rich people now liberals?