
86 andrew gelman stats-2010-06-14-“Too much data”?


meta info for this blog post

Source: html

Introduction: Chris Hane writes: I am a scientist needing to model a treatment effect on a population of ~500 people. The dependent variable in the model is the difference between a person’s pre-treatment 12-month total medical cost and the post-treatment cost. So there is large variation in costs, though less so when using the difference between the pre- and post-treatment costs. The issue I’d like some advice on is that the treatment has already occurred, so there is no possibility of creating a fully randomized control now. I do have a very large population of people to use as possible controls via propensity scoring or exact matching. If I had a few thousand people to possibly match, then I would use standard techniques. However, I have a potential population of over a hundred thousand people. An exact match of the possible controls to age, gender and region of the country still leaves a population of 10,000 controls. Even if I use propensity scores to weight the 10,000 observations (understanding the problems that poses), I am concerned there are too many controls to see the effect of the treatment. …
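To make the setup concrete, here is a minimal sketch (not from the post) of the propensity-score weighting Hane mentions, in Python with scikit-learn. The file name and the columns treated, cost_diff, age, sex, and region are hypothetical stand-ins:

# A minimal sketch of propensity-score weighting for the setup above:
# ~500 treated people and a much larger pool of potential controls.
# File name and column names are hypothetical, not from the post.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("cohort.csv")  # hypothetical file: one row per person

# Model P(treated = 1 | pre-treatment covariates).
X = pd.get_dummies(df[["age", "sex", "region"]], drop_first=True)
t = df["treated"].to_numpy()
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# ATT-style weights: treated units get weight 1; each control gets
# ps/(1 - ps), so the large control pool is reweighted to resemble
# the treated group.
w = np.where(t == 1, 1.0, ps / (1.0 - ps))

# Outcome: difference between post- and pre-treatment 12-month costs.
y = df["cost_diff"].to_numpy()
att = y[t == 1].mean() - np.average(y[t == 0], weights=w[t == 0])
print(f"weighted treatment-effect estimate: {att:.1f}")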


Summary: the most important sentences generated by the TF-IDF model

sentIndex sentText sentNum sentScore

1 Chris Hane writes: I am a scientist needing to model a treatment effect on a population of ~500 people. [sent-1, score-0.627]

2 The dependent variable in the model is the difference between a person’s pre-treatment 12-month total medical cost and the post-treatment cost. [sent-2, score-0.392]

3 So there is large variation in costs, though less so when using the difference between the pre- and post-treatment costs. [sent-3, score-0.551]

4 The issue I’d like some advice on is that the treatment has already occurred, so there is no possibility of creating a fully randomized control now. [sent-4, score-0.937]

5 I do have a very large population of people to use as possible controls via propensity scoring or exact matching. [sent-5, score-1.374]

6 If I had a few thousand people to possibly match, then I would use standard techniques. [sent-6, score-0.397]

7 However, I have a potential population of over a hundred thousand people. [sent-7, score-0.476]

8 An exact match of the possible controls to age, gender and region of the country still leaves a population of 10,000 controls. [sent-8, score-1.484]

9 Even if I use propensity scores to weight the 10,000 observations (understanding the problems that poses), I am concerned there are too many controls to see the effect of the treatment. [sent-9, score-1.268]

10 Would you suggest using narrower matching criteria to get the “best” matches, would weighting the observations be enough, or should I also consider creating many models by sampling from both treatment and control and averaging their results? [sent-10, score-1.405]

11 If you could point me to some papers that tackle similar issues that would be great. [sent-11, score-0.19]

12 My reply: Others know more about this than me, but my quick reaction is . . . [sent-12, score-0.068]

13 In a regression analysis, having more controls shouldn’t create any problems. [sent-17, score-0.511]

14 Don’t just control for age, sex, and region; control for as many relevant pre-treatment variables as you can get. [sent-19, score-0.52]
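A hedged sketch of the regression adjustment this reply recommends: regress the cost difference on the treatment indicator plus many pre-treatment covariates. It assumes the same hypothetical columns as the sketch above, with pre_cost standing in for the additional pre-treatment variables one would collect:

# A sketch of the regression adjustment suggested in the reply. The
# treatment coefficient is estimated conditional on all observed
# pre-treatment covariates; pre_cost is a hypothetical extra covariate.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical file, as above

fit = smf.ols(
    "cost_diff ~ treated + age + C(sex) + C(region) + pre_cost",
    data=df,
).fit()

# The coefficient on 'treated' is the adjusted effect estimate; having
# many more control units than treated units is not a problem here.
print(fit.params["treated"], fit.bse["treated"])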


similar blogs computed by the TF-IDF model

TF-IDF weights for this blog:

wordName wordTfidf (topN-words)

[('controls', 0.445), ('treatment', 0.235), ('match', 0.234), ('control', 0.222), ('population', 0.208), ('propensity', 0.204), ('region', 0.187), ('thousand', 0.174), ('creating', 0.163), ('exact', 0.153), ('observations', 0.152), ('narrower', 0.138), ('pre', 0.138), ('age', 0.124), ('poses', 0.12), ('scoring', 0.117), ('tackle', 0.114), ('matches', 0.109), ('needing', 0.102), ('difference', 0.095), ('hundred', 0.094), ('occurred', 0.092), ('leaves', 0.091), ('dependent', 0.087), ('matching', 0.087), ('averaging', 0.086), ('weighting', 0.085), ('criteria', 0.085), ('possible', 0.084), ('large', 0.083), ('effect', 0.082), ('gender', 0.082), ('versus', 0.08), ('use', 0.08), ('randomized', 0.079), ('possibility', 0.078), ('concerned', 0.078), ('would', 0.076), ('many', 0.076), ('weight', 0.076), ('costs', 0.075), ('scores', 0.075), ('sex', 0.071), ('chris', 0.069), ('fully', 0.068), ('reaction', 0.068), ('possibly', 0.067), ('month', 0.066), ('create', 0.066), ('total', 0.064)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 86 andrew gelman stats-2010-06-14-“Too much data”?

Introduction: (same as the introduction at the top of this page)

2 0.21981256 213 andrew gelman stats-2010-08-17-Matching at two levels

Introduction: Steve Porter writes with a question about matching for inferences in a hierarchical data structure. I’ve never thought about this particular issue, but it seems potentially important. Maybe one or more of you have some useful suggestions? Porter writes: After immersing myself in the relatively sparse literature on propensity scores with clustered data, it seems as if people take one of two approaches. If the treatment is at the cluster-level (like school policies), they match on only the cluster-level covariates. If the treatment is at the individual level, they match on individual-level covariates. (I have also found some papers that match on individual-level covariates when it seems as if the treatment is really at the cluster-level.) But what if there is a selection process at both levels? For my research question (effect of tenure systems on faculty behavior) there is a two-step selection process: first colleges choose whether to have a tenure system for faculty; then f

3 0.18046817 375 andrew gelman stats-2010-10-28-Matching for preprocessing data for causal inference

Introduction: Chris Blattman writes: Matching is not an identification strategy or a solution to your endogeneity problem; it is a weighting scheme. Saying matching will reduce endogeneity bias is like saying that the best way to get thin is to weigh yourself in kilos. The statement makes no sense. It confuses technique with substance. . . . When you run a regression, you control for the X you can observe. When you match, you are simply matching based on those same X. . . . I see what Chris is getting at–matching, like regression, won’t help for the variables you’re not controlling for–but I disagree with his characterization of matching as a weighting scheme. I see matching as a way to restrict your analysis to comparable cases. The statistical motivation: robustness. If you had a good enough model, you wouldn’t need to match, you’d just fit the model to the data. But in common practice we often use simple regression models and so it can be helpful to do some matching first before regression. (A short code sketch of this match-then-regress idea appears after this list.)

4 0.18043518 7 andrew gelman stats-2010-04-27-Should Mister P be allowed-encouraged to reside in counter-factual populations?

Introduction: Let’s say you are repeatedly going to receive unselected sets of well done RCTs on various, say, medical treatments. One reasonable assumption with all of these treatments is that they are monotonic – either helpful or harmful for all. The treatment effect will (as always) vary for subgroups in the population – these will not be explicitly identified in the studies – but each study very likely will enroll different percentages of the various patient subgroups. Being all randomized studies, these subgroups will be balanced in the treatment versus control arms – but each study will (as always) be estimating a different – but exchangeable – treatment effect (Exchangeable due to the ignorance about the subgroup memberships of the enrolled patients.) That reasonable assumption – monotonicity – will be to some extent (as always) wrong, but given that it is a risk believed well worth taking – if the average effect in any population is positive (versus negative) the average effect in any other

5 0.14659004 972 andrew gelman stats-2011-10-25-How do you interpret standard errors from a regression fit to the entire population?

Introduction: David Radwin asks a question which comes up fairly often in one form or another: How should one respond to requests for statistical hypothesis tests for population (or universe) data? I [Radwin] first encountered this issue as an undergraduate when a professor suggested a statistical significance test for my paper comparing roll call votes between freshman and veteran members of Congress. Later I learned that such tests apply only to samples because their purpose is to tell you whether the difference in the observed sample is likely to exist in the population. If you have data for the whole population, like all members of the 103rd House of Representatives, you do not need a test to discern the true difference in the population. Sometimes researchers assume some sort of superpopulation like “all possible Congresses” or “Congresses across all time” and that the members of any given Congress constitute a sample. In my current work in education research, it is sometimes asserted t

6 0.14144641 251 andrew gelman stats-2010-09-02-Interactions of predictors in a causal model

7 0.1187346 1310 andrew gelman stats-2012-05-09-Varying treatment effects, again

8 0.11792828 2 andrew gelman stats-2010-04-23-Modeling heterogenous treatment effects

9 0.11706369 936 andrew gelman stats-2011-10-02-Covariate Adjustment in RCT - Model Overfitting in Multilevel Regression

10 0.11375404 1455 andrew gelman stats-2012-08-12-Probabilistic screening to get an approximate self-weighted sample

11 0.10605227 1732 andrew gelman stats-2013-02-22-Evaluating the impacts of welfare reform?

12 0.10539912 1891 andrew gelman stats-2013-06-09-“Heterogeneity of variance in experimental studies: A challenge to conventional interpretations”

13 0.10507508 796 andrew gelman stats-2011-07-10-Matching and regression: two great tastes etc etc

14 0.10387731 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

15 0.10121967 352 andrew gelman stats-2010-10-19-Analysis of survey data: Design based models vs. hierarchical modeling?

16 0.10026923 1267 andrew gelman stats-2012-04-17-Hierarchical-multilevel modeling with “big data”

17 0.099776745 1523 andrew gelman stats-2012-10-06-Comparing people from two surveys, one of which is a simple random sample and one of which is not

18 0.098172858 553 andrew gelman stats-2011-02-03-is it possible to “overstratify” when assigning a treatment in a randomized control trial?

19 0.09756846 388 andrew gelman stats-2010-11-01-The placebo effect in pharma

20 0.096880041 2176 andrew gelman stats-2014-01-19-Transformations for non-normal data
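As flagged under entry 3 above, here is a small self-contained sketch of “matching as preprocessing” in the sense Gelman describes there: restrict the analysis to comparable cases first, then run the regression on the matched subset. Everything (file name, columns) is hypothetical, as in the earlier sketches:

# Match each treated unit to its nearest control on the propensity
# score, then keep only the matched cases for the regression step.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("cohort.csv")  # hypothetical file, as before
X = pd.get_dummies(df[["age", "sex", "region"]], drop_first=True)
t = df["treated"].to_numpy()
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# 1-nearest-neighbor match each treated unit to a control (with
# replacement) on the estimated propensity score.
ctrl = np.flatnonzero(t == 0)
nn = NearestNeighbors(n_neighbors=1).fit(ps[ctrl].reshape(-1, 1))
_, j = nn.kneighbors(ps[t == 1].reshape(-1, 1))

# Matched sample: treated units plus their nearest-neighbor controls.
keep = np.concatenate([np.flatnonzero(t == 1), ctrl[j.ravel()]])
df_matched = df.iloc[keep]
# A regression like the one sketched earlier can then be fit to
# df_matched, so the model only compares genuinely comparable cases.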


similar blogs computed by the LSI model

LSI topic weights for this blog:

topicId topicWeight

[(0, 0.171), (1, 0.055), (2, 0.092), (3, -0.103), (4, 0.069), (5, 0.021), (6, -0.007), (7, -0.012), (8, 0.076), (9, 0.053), (10, -0.019), (11, -0.0), (12, 0.033), (13, -0.014), (14, 0.029), (15, 0.019), (16, 0.025), (17, 0.013), (18, -0.007), (19, 0.057), (20, -0.039), (21, 0.032), (22, -0.028), (23, 0.0), (24, -0.02), (25, 0.071), (26, -0.037), (27, 0.005), (28, -0.031), (29, 0.05), (30, -0.024), (31, 0.011), (32, -0.036), (33, 0.07), (34, -0.025), (35, 0.023), (36, -0.01), (37, 0.013), (38, -0.031), (39, 0.06), (40, 0.006), (41, -0.086), (42, 0.024), (43, -0.011), (44, 0.039), (45, 0.003), (46, 0.043), (47, -0.011), (48, -0.002), (49, 0.05)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97041881 86 andrew gelman stats-2010-06-14-“Too much data”?

Introduction: (same as the introduction at the top of this page)

2 0.83212638 1017 andrew gelman stats-2011-11-18-Lack of complete overlap

Introduction: Evens Salies writes: I have a question regarding a randomizing constraint in my current funded electricity experiment. After elimination of missing data we have 110 voluntary households from a larger population (resource constraints do not allow us to have more households!). I randomly assign them to treated and non-treated, where the treatment variable is some ICT that allows the treated to track their electricity consumption in real time. The ICT is made of two devices, one that is plugged into the household’s modem and the other on the electric meter. A necessary condition for being treated is that the distance between the box and the meter be below some threshold (d), the value of which is approximately 20 meters. 50 ICTs can be installed. 60 households will be in the control group. But, I can only assign 6 households in the control group for whom d is less than 20. Therefore, I have only 6 households in the control group who have a counterfactual in the group of treated.

3 0.81439579 251 andrew gelman stats-2010-09-02-Interactions of predictors in a causal model

Introduction: Michael Bader writes: What is the best way to examine interactions of independent variables in a propensity weights framework? Let’s say we are interested in estimating breathing difficulty (measured on a continuous scale) and our main predictor is age of housing. The object is to estimate whether living in housing 20 years or older is associated with breathing difficulty compared counterfactually to those living in housing less than 20 years old; as a secondary question, we want to know whether that effect differs for those in poverty compared to those not in poverty. In our first-stage propensity model, we include whether the respondent lives in poverty. The weights applied to the other covariates in the propensity model are similar to those living in poverty compared to those who are not. Now, can I simply interact the poverty variable with the age of construction variable to look at the interaction of age of housing and poverty on breathing difficulty? My thought is no —

4 0.78852922 1523 andrew gelman stats-2012-10-06-Comparing people from two surveys, one of which is a simple random sample and one of which is not

Introduction: Juli writes: I’m helping a professor out with an analysis, and I was hoping that you might be able to point me to some relevant literature… She has two studies that have been completed already (so we can’t go back to the planning stage in terms of sampling, unfortunately). Both studies are based around the population of adults in LA who attended LA public high schools at some point, so that is the same for both studies. Study #1 uses random digit dialing, so I consider that one to be SRS. Study #2, however, is a convenience sample in which all participants were involved with one of eight community-based organizations (CBOs). Of course, both studies can be analyzed independently, but she was hoping for there to be some way to combine/compare the two studies. Specifically, I am working on looking at the civic engagement of the adults in both studies. In study #1, this means looking at factors such as involvement in student government. In study #2, this means looking at involv

5 0.78769261 213 andrew gelman stats-2010-08-17-Matching at two levels

Introduction: (same as entry 2 in the TF-IDF similar-blogs list above)

6 0.75945878 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

7 0.75743419 287 andrew gelman stats-2010-09-20-Paul Rosenbaum on those annoying pre-treatment variables that are sort-of instruments and sort-of covariates

8 0.75253397 1294 andrew gelman stats-2012-05-01-Modeling y = a + b + c

9 0.73711818 553 andrew gelman stats-2011-02-03-is it possible to “overstratify” when assigning a treatment in a randomized control trial?

10 0.73579723 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

11 0.73273736 393 andrew gelman stats-2010-11-04-Estimating the effect of A on B, and also the effect of B on A

12 0.72428435 936 andrew gelman stats-2011-10-02-Covariate Adjustment in RCT - Model Overfitting in Multilevel Regression

13 0.71891063 7 andrew gelman stats-2010-04-27-Should Mister P be allowed-encouraged to reside in counter-factual populations?

14 0.70783919 1732 andrew gelman stats-2013-02-22-Evaluating the impacts of welfare reform?

15 0.70495838 375 andrew gelman stats-2010-10-28-Matching for preprocessing data for causal inference

16 0.70492089 2274 andrew gelman stats-2014-03-30-Adjudicating between alternative interpretations of a statistical interaction?

17 0.70115066 791 andrew gelman stats-2011-07-08-Censoring on one end, “outliers” on the other, what can we do with the middle?

18 0.69843054 753 andrew gelman stats-2011-06-09-Allowing interaction terms to vary

19 0.69638371 2193 andrew gelman stats-2014-01-31-Into the thicket of variation: More on the political orientations of parents of sons and daughters, and a return to the tradeoff between internal and external validity in design and interpretation of research studies

20 0.68744886 708 andrew gelman stats-2011-05-12-Improvement of 5 MPG: how many more auto deaths?


similar blogs computed by the LDA model

LDA topic weights for this blog:

topicId topicWeight

[(2, 0.03), (15, 0.014), (16, 0.022), (21, 0.028), (24, 0.256), (53, 0.018), (61, 0.01), (63, 0.011), (73, 0.055), (85, 0.011), (86, 0.059), (95, 0.051), (99, 0.33)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98363233 86 andrew gelman stats-2010-06-14-“Too much data”?

Introduction: (same as the introduction at the top of this page)

2 0.97655839 801 andrew gelman stats-2011-07-13-On the half-Cauchy prior for a global scale parameter

Introduction: Nick Polson and James Scott write: We generalize the half-Cauchy prior for a global scale parameter to the wider class of hypergeometric inverted-beta priors. We derive expressions for posterior moments and marginal densities when these priors are used for a top-level normal variance in a Bayesian hierarchical model. Finally, we prove a result that characterizes the frequentist risk of the Bayes estimators under all priors in the class. These arguments provide an alternative, classical justification for the use of the half-Cauchy prior in Bayesian hierarchical models, complementing the arguments in Gelman (2006). This makes me happy, of course. It’s great to be validated. The only thing I didn’t catch is how they set the scale parameter for the half-Cauchy prior. In my 2006 paper I frame it as a weakly informative prior and recommend that the scale be set based on actual prior knowledge. But Polson and Scott are talking about a default choice. I used to think that such a

3 0.97639596 63 andrew gelman stats-2010-06-02-The problem of overestimation of group-level variance parameters

Introduction: John Lawson writes: I have been experimenting using Bayesian methods to estimate variance components, and I have noticed that even when I use a noninformative prior, my estimates are never close to the method of moments or REML estimates. In every case I have tried, the sum of the Bayesian estimated variance components is always larger than the sum of the estimates obtained by method of moments or REML. For data sets I have used that arise from a simple one-way random effects model, the Bayesian estimates of the between-groups variance component are usually larger than the method of moments or REML estimates. When I use a uniform prior on the between standard deviation (as you recommended in your 2006 paper) rather than an inverse gamma prior on the between variance component, the between variance component is usually reduced. However, for the dyestuff data in Davies (1949, p. 74), the opposite appears to be the case. I am worried that the Bayesian estimators of the varian

4 0.97638923 1240 andrew gelman stats-2012-04-02-Blogads update

Introduction: A few months ago I reported on someone who wanted to insert text links into the blog. I asked her how much they would pay and got no answer. Yesterday, though, I received this reply: Hello Andrew, I am sorry for the delay in getting back to you. I’d like to make a proposal for your site. Please refer below. We would like to place a simple text link ad on page http://andrewgelman.com/2011/07/super_sam_fuld/ to link to *** with the key phrase ***. We will incorporate the key phrase into a sentence so it would read well. Rest assured it won’t sound obnoxious or advertorial. We will then process the final text link code as soon as you agree to our proposal. We can offer you $200 for this with the assumption that you will keep the link “live” on that page for 12 months or longer if you prefer. Please get back to us with a quick reply on your thoughts on this and include your Paypal ID for payment process. Hoping for a positive response from you. I wrote back: Hi,

5 0.97580367 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

Introduction: I’ve had a couple of email conversations in the past couple days on dependence in multivariate prior distributions. Modeling the degrees of freedom and scale parameters in the t distribution First, in our Stan group we’ve been discussing the choice of priors for the degrees-of-freedom parameter in the t distribution. I wrote that also there’s the question of parameterization. It does not necessarily make sense to have independent priors on the df and scale parameters. In some sense, the meaning of the scale parameter changes with the df. Prior dependence between correlation and scale parameters in the scaled inverse-Wishart model The second case of parameterization in prior distribution arose from an email I received from Chris Chatham pointing me to this exploration by Matt Simpson of the scaled inverse-Wishart prior distribution for hierarchical covariance matrices. Simpson writes: A popular prior for Σ is the inverse-Wishart distribution [ not the same as the

6 0.97402501 2129 andrew gelman stats-2013-12-10-Cross-validation and Bayesian estimation of tuning parameters

7 0.97371805 494 andrew gelman stats-2010-12-31-Type S error rates for classical and Bayesian single and multiple comparison procedures

8 0.97320914 1941 andrew gelman stats-2013-07-16-Priors

9 0.97305095 899 andrew gelman stats-2011-09-10-The statistical significance filter

10 0.97244394 1465 andrew gelman stats-2012-08-21-D. Buggin

11 0.97237784 669 andrew gelman stats-2011-04-19-The mysterious Gamma (1.4, 0.4)

12 0.97180593 2099 andrew gelman stats-2013-11-13-“What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science?”

13 0.97153521 1367 andrew gelman stats-2012-06-05-Question 26 of my final exam for Design and Analysis of Sample Surveys

14 0.97115517 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

15 0.97057223 1792 andrew gelman stats-2013-04-07-X on JLP

16 0.97039181 301 andrew gelman stats-2010-09-28-Correlation, prediction, variation, etc.

17 0.97010636 1150 andrew gelman stats-2012-02-02-The inevitable problems with statistical significance and 95% intervals

18 0.9698754 1644 andrew gelman stats-2012-12-30-Fixed effects, followed by Bayes shrinkage?

19 0.96921933 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

20 0.96838725 1944 andrew gelman stats-2013-07-18-You’ll get a high Type S error rate if you use classical statistical methods to analyze data from underpowered studies