andrew_gelman_stats-2010-472: knowledge graph by maker-knowledge-mining

472 andrew gelman stats-2010-12-17-So-called fixed and random effects


meta information for this blog

Source: html

Introduction: Someone writes: I am hoping you can give me some advice about when to use fixed and random effects models. I am currently working on a paper that examines the effect of . . . by comparing states . . . It got reviewed . . . by three economists and all suggest that we run a fixed effects model. We ran a hierarchical model in the paper that allows the intercept and slope to vary before and after . . . My question is which is correct? We have run it both ways, and really it makes no difference which model you run; the results are very similar. But for my own learning, I would really like to understand which to use under what circumstances. Is the fact that we use the whole population reason enough to just run a fixed effects model? Perhaps you can suggest a good reference on the question of when to run a fixed vs. random effects model. I’m not always sure what is meant by a “fixed effects model”; see my paper on Anova for discussion of the problems with this terminology: http://w


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Someone writes: I am hoping you can give me some advice about when to use fixed and random effects models. [sent-1, score-1.078]

2 I am currently working on a paper that examines the effect of . [sent-2, score-0.538]

3 by three economists and all suggest that we run a fixed effects model. [sent-11, score-1.091]

4 We ran a hierarchical model in the paper that allows the intercept and slope to vary before and after . [sent-12, score-1.017]

5 We have run it both ways and really it makes no difference which model you run; the results are very similar. [sent-16, score-0.385]

6 But for my own learning, I would really like to understand which to use under what circumstances. [sent-17, score-0.165]

7 Is the fact that we use the whole population reason enough to just run a fixed effects model? [sent-18, score-1.397]

8 Perhaps you can suggest a good reference to this question of when to run a fixed vs. [sent-19, score-1.013]

9 I’m not always sure what is meant by a “fixed effects model”; see my paper on Anova for discussion of the problems with this terminology: http://www. [sent-21, score-0.665]

10 pdf Sometimes there is a concern about fitting multilevel models when there are correlations; see this paper for discussion of how to deal with this: http://www. [sent-25, score-0.676]

11 pdf The short answer to your question is that, no, the fact that you use the whole population should not determine the model you fit. [sent-29, score-1.01]

12 In particular, there is no reason for you to use a model with group-level variance equal to infinity. [sent-30, score-0.641]

13 There is various literature with conflicting recommendations on the topic (see my Anova paper for references), but, as I discuss in that paper, a lot of these recommendations are less coherent than they might seem at first. [sent-31, score-0.921]
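Sentence 12 above makes the key point: a classical fixed-effects model is the special case of a multilevel model in which the group-level variance is taken to infinity, so that group estimates get no pooling at all. A minimal sketch of that limit, using the standard partial-pooling weight and purely hypothetical numbers (the function name and data are illustrative, not from the post):

```python
# Partial pooling: the multilevel estimate of each group mean shrinks the
# raw group mean toward the grand mean by the factor
#   w = tau^2 / (tau^2 + sigma^2 / n),
# where tau^2 is the group-level (between) variance, sigma^2 the data-level
# (within) variance, and n the group size. As tau^2 -> infinity, w -> 1 and
# we recover the unpooled, so-called "fixed effects" estimates.

def pooled_estimates(group_means, grand_mean, sigma2_within, tau2_between, n_per_group):
    """Shrink each raw group mean toward the grand mean by the pooling factor w."""
    if tau2_between == float("inf"):
        w = 1.0  # no pooling: the fixed-effects limit
    else:
        w = tau2_between / (tau2_between + sigma2_within / n_per_group)
    return [w * ybar + (1 - w) * grand_mean for ybar in group_means]

group_means = [2.0, 5.0, 8.0]  # raw (unpooled) group means, toy values
grand = 5.0

# Moderate tau^2: estimates are pulled partway toward the grand mean.
print(pooled_estimates(group_means, grand, sigma2_within=4.0, tau2_between=1.0, n_per_group=4))
# → [3.5, 5.0, 6.5]

# tau^2 -> infinity: no shrinkage, identical to one free parameter per group.
print(pooled_estimates(group_means, grand, sigma2_within=4.0, tau2_between=float("inf"), n_per_group=4))
# → [2.0, 5.0, 8.0]
```

This is why "we use the whole population" does not by itself force the infinite-variance model: the pooling factor is a modeling choice about how much groups inform each other, not a statement about sampling.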


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('fixed', 0.402), ('run', 0.26), ('anova', 0.244), ('paper', 0.213), ('effects', 0.212), ('model', 0.202), ('recommendations', 0.2), ('ran', 0.183), ('use', 0.165), ('http', 0.154), ('examines', 0.144), ('suggest', 0.144), ('conflicting', 0.141), ('random', 0.13), ('population', 0.129), ('terminology', 0.128), ('intercept', 0.126), ('whole', 0.125), ('question', 0.121), ('slope', 0.121), ('fact', 0.111), ('reviewed', 0.106), ('reason', 0.104), ('coherent', 0.102), ('effect', 0.101), ('hoping', 0.096), ('equal', 0.093), ('references', 0.093), ('vary', 0.092), ('correlations', 0.091), ('determine', 0.091), ('meant', 0.088), ('concern', 0.087), ('reference', 0.086), ('discussion', 0.084), ('comparing', 0.081), ('fitting', 0.08), ('currently', 0.08), ('allow', 0.08), ('variance', 0.077), ('advice', 0.073), ('economists', 0.073), ('learning', 0.073), ('deal', 0.073), ('multilevel', 0.071), ('see', 0.068), ('states', 0.067), ('correct', 0.067), ('short', 0.066), ('discuss', 0.065)]
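The word weights above come from a tfidf model. A minimal sketch of the usual weighting, assuming tf = term frequency within the document and idf = log(N / df); the toy corpus and resulting scores are illustrative, not the ones behind the table:

```python
import math

def tfidf(term, doc, corpus):
    """Toy tfidf: relative term frequency times log inverse document frequency."""
    tf = doc.count(term) / len(doc)               # frequency of the term in this doc
    df = sum(1 for d in corpus if term in d)      # number of docs containing the term
    idf = math.log(len(corpus) / df)              # rarer terms get larger idf
    return tf * idf

corpus = [
    ["fixed", "effects", "model"],
    ["random", "effects", "model"],
    ["anova", "paper"],
]

# "fixed" appears in only one of three docs, so it scores higher than
# "effects", which appears in two.
print(tfidf("fixed", corpus[0], corpus))
print(tfidf("effects", corpus[0], corpus))
```

Terms like "fixed" and "anova" dominate the table above for the same reason: they are frequent in this post but rare across the blog corpus.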

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 472 andrew gelman stats-2010-12-17-So-called fixed and random effects


2 0.29945976 1241 andrew gelman stats-2012-04-02-Fixed effects and identification

Introduction: Tom Clark writes: Drew Linzer and I [Tom] have been working on a paper about the use of modeled (“random”) and unmodeled (“fixed”) effects. Not directly in response to the paper, but in conversations about the topic over the past few months, several people have said to us things to the effect of “I prefer fixed effects over random effects because I care about identification.” Neither Drew nor I has any idea what this comment is supposed to mean. Have you come across someone saying something like this? Do you have any thoughts about what these people could possibly mean? I want to respond to this concern when people raise it, but I have failed thus far to inquire what is meant and so do not know what to say. My reply: I have a “cultural” reply, which is that so-called fixed effects are thought to make fewer assumptions, and making fewer assumptions is considered a generally good thing that serious people do, and identification is considered a concern of serious people, so they g

3 0.28568819 1644 andrew gelman stats-2012-12-30-Fixed effects, followed by Bayes shrinkage?

Introduction: Stuart Buck writes: I have a question about fixed effects vs. random effects . Amongst economists who study teacher value-added, it has become common to see people saying that they estimated teacher fixed effects (via least squares dummy variables, so that there is a parameter for each teacher), but that they then applied empirical Bayes shrinkage so that the teacher effects are brought closer to the mean. (See this paper by Jacob and Lefgren, for example.) Can that really be what they are doing? Why wouldn’t they just run random (modeled) effects in the first place? I feel like there’s something I’m missing. My reply: I don’t know the full story here, but I’m thinking there are two goals, first to get an unbiased estimate of an overall treatment effect (and there the econometricians prefer so-called fixed effects; I disagree with them on this but I know where they’re coming from) and second to estimate individual teacher effects (and there it makes sense to use so-called

4 0.28141224 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

Introduction: A research psychologist writes in with a question that’s so long that I’ll put my answer first, then put the question itself below the fold. Here’s my reply: As I wrote in my Anova paper and in my book with Jennifer Hill, I do think that multilevel models can completely replace Anova. At the same time, I think the central idea of Anova should persist in our understanding of these models. To me the central idea of Anova is not F-tests or p-values or sums of squares, but rather the idea of predicting an outcome based on factors with discrete levels, and understanding these factors using variance components. The continuous or categorical response thing doesn’t really matter so much to me. I have no problem using a normal linear model for continuous outcomes (perhaps suitably transformed) and a logistic model for binary outcomes. I don’t want to throw away interactions just because they’re not statistically significant. I’d rather partially pool them toward zero using an inform

5 0.25873703 1194 andrew gelman stats-2012-03-04-Multilevel modeling even when you’re not interested in predictions for new groups

Introduction: Fred Wu writes: I work at National Prescribing Services in Australia. I have a database representing, say, antidiabetic drug utilisation for the entire Australia in the past few years. I planned to do a longitudinal analysis across the GP Division Network (112 divisions in AUS) using mixed-effects models (or, as you call them in your book, varying intercept and varying slope) on this data. The problem here is: as the data actually represent the population who use antidiabetic drugs in AUS, should I use 112 fixed dummy variables to capture the random variations, or use varying intercept and varying slope for the model? Because someone may argue that, since divisions in AUS or states in the USA can hardly be considered draws from a “superpopulation”, fixed dummies should be used. What I think is that the population is those who use the drugs; what will happen when the rest need to use them? In terms of exchangeability, using varying intercept and varying slopes can be justified. Also you provided in y

6 0.19095907 653 andrew gelman stats-2011-04-08-Multilevel regression with shrinkage for “fixed” effects

7 0.18695271 342 andrew gelman stats-2010-10-14-Trying to be precise about vagueness

8 0.18390471 2145 andrew gelman stats-2013-12-24-Estimating and summarizing inference for hierarchical variance parameters when the number of groups is small

9 0.1834217 383 andrew gelman stats-2010-10-31-Analyzing the entire population rather than a sample

10 0.16709195 753 andrew gelman stats-2011-06-09-Allowing interaction terms to vary

11 0.16484305 1786 andrew gelman stats-2013-04-03-Hierarchical array priors for ANOVA decompositions

12 0.15948689 1267 andrew gelman stats-2012-04-17-Hierarchical-multilevel modeling with “big data”

13 0.15418218 2086 andrew gelman stats-2013-11-03-How best to compare effects measured in two different time periods?

14 0.1490286 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

15 0.14132926 759 andrew gelman stats-2011-06-11-“2 level logit with 2 REs & large sample. computational nightmare – please help”

16 0.13932472 972 andrew gelman stats-2011-10-25-How do you interpret standard errors from a regression fit to the entire population?

17 0.13762057 417 andrew gelman stats-2010-11-17-Clutering and variance components

18 0.13431656 464 andrew gelman stats-2010-12-12-Finite-population standard deviation in a hierarchical model

19 0.13024877 1891 andrew gelman stats-2013-06-09-“Heterogeneity of variance in experimental studies: A challenge to conventional interpretations”

20 0.12934805 1120 andrew gelman stats-2012-01-15-Fun fight over the Grover search algorithm
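The simValue column can plausibly be read as a cosine similarity between the blogs' tfidf vectors; under that assumption, a minimal sketch (toy vectors, not the actual blog vectors):

```python
import math

def cosine(u, v):
    """Cosine similarity: dot product divided by the product of the norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

doc = [0.402, 0.26, 0.244]   # toy tfidf weight vector
print(cosine(doc, doc))       # a blog compared with itself: 1.0 up to floating point
```

This also explains the same-blog row scoring 1.0000001 rather than exactly 1: floating-point rounding in the similarity computation.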


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.227), (1, 0.134), (2, 0.075), (3, -0.089), (4, 0.077), (5, 0.005), (6, 0.022), (7, -0.105), (8, 0.125), (9, 0.065), (10, 0.019), (11, 0.048), (12, 0.014), (13, -0.045), (14, 0.062), (15, 0.009), (16, -0.054), (17, 0.045), (18, -0.078), (19, 0.065), (20, -0.017), (21, -0.017), (22, -0.019), (23, -0.027), (24, -0.084), (25, -0.086), (26, -0.13), (27, 0.132), (28, -0.01), (29, 0.002), (30, -0.03), (31, -0.035), (32, -0.052), (33, -0.037), (34, 0.017), (35, -0.041), (36, -0.048), (37, -0.021), (38, -0.007), (39, 0.033), (40, 0.049), (41, -0.057), (42, 0.052), (43, 0.058), (44, -0.057), (45, 0.013), (46, 0.044), (47, -0.097), (48, -0.038), (49, 0.052)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97289449 472 andrew gelman stats-2010-12-17-So-called fixed and random effects


2 0.8885836 1241 andrew gelman stats-2012-04-02-Fixed effects and identification


3 0.85323876 1194 andrew gelman stats-2012-03-04-Multilevel modeling even when you’re not interested in predictions for new groups


4 0.83682919 1644 andrew gelman stats-2012-12-30-Fixed effects, followed by Bayes shrinkage?


5 0.82638431 2086 andrew gelman stats-2013-11-03-How best to compare effects measured in two different time periods?

Introduction: I received the following email from someone who wishes to remain anonymous: My colleague and I are trying to understand the best way to approach a problem involving measuring a group of individuals’ abilities across time, and are hoping you can offer some guidance. We are trying to analyze the combined effect of two distinct groups of people (A and B, with no overlap between A and B) who collaborate to produce a binary outcome, using a mixed logistic regression along the lines of the following. Outcome ~ (1 | A) + (1 | B) + Other variables What we’re interested in testing was whether the observed A random effects in period 1 are predictive of the A random effects in the following period 2. Our idea being create two models, each using a different period’s worth of data, to create two sets of A coefficients, then observe the relationship between the two. If the A’s have a persistent ability across periods, the coefficients should be correlated or show a linear-ish relationshi

6 0.81854039 1267 andrew gelman stats-2012-04-17-Hierarchical-multilevel modeling with “big data”

7 0.80570734 464 andrew gelman stats-2010-12-12-Finite-population standard deviation in a hierarchical model

8 0.80501556 1891 andrew gelman stats-2013-06-09-“Heterogeneity of variance in experimental studies: A challenge to conventional interpretations”

9 0.79776108 1686 andrew gelman stats-2013-01-21-Finite-population Anova calculations for models with interactions

10 0.79360378 2145 andrew gelman stats-2013-12-24-Estimating and summarizing inference for hierarchical variance parameters when the number of groups is small

11 0.77122372 653 andrew gelman stats-2011-04-08-Multilevel regression with shrinkage for “fixed” effects

12 0.75882131 417 andrew gelman stats-2010-11-17-Clutering and variance components

13 0.7536723 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

14 0.74802864 753 andrew gelman stats-2011-06-09-Allowing interaction terms to vary

15 0.74596632 77 andrew gelman stats-2010-06-09-Sof[t]

16 0.73708105 759 andrew gelman stats-2011-06-11-“2 level logit with 2 REs & large sample. computational nightmare – please help”

17 0.733441 851 andrew gelman stats-2011-08-12-year + (1|year)

18 0.7311461 269 andrew gelman stats-2010-09-10-R vs. Stata, or, Different ways to estimate multilevel models

19 0.71390229 1786 andrew gelman stats-2013-04-03-Hierarchical array priors for ANOVA decompositions

20 0.71057314 823 andrew gelman stats-2011-07-26-Including interactions or not


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(7, 0.036), (15, 0.026), (16, 0.076), (24, 0.125), (64, 0.023), (76, 0.012), (79, 0.045), (84, 0.044), (95, 0.049), (99, 0.458)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99020147 472 andrew gelman stats-2010-12-17-So-called fixed and random effects


2 0.98230672 1807 andrew gelman stats-2013-04-17-Data problems, coding errors…what can be done?

Introduction: This post is by Phil A recent post on this blog discusses a prominent case of an Excel error leading to substantially wrong results from a statistical analysis. Excel is notorious for this because it is easy to add a row or column of data (or intermediate results) but forget to update equations so that they correctly use the new data. That particular error is less common in a language like R because R programmers usually refer to data by variable name (or by applying functions to a named variable), so the same code works even if you add or remove data. Still, there is plenty of opportunity for errors no matter what language one uses. Andrew ran into problems fairly recently, and also blogged about another instance. I’ve never had to retract a paper, but that’s partly because I haven’t published a whole lot of papers. Certainly I have found plenty of substantial errors pretty late in some of my data analyses, and I obviously don’t have sufficient mechanisms in place to be sure

3 0.98224497 1213 andrew gelman stats-2012-03-15-Economics now = Freudian psychology in the 1950s: More on the incoherence of “economics exceptionalism”

Introduction: What follows is a long response to a comment on someone else’s blog . The quote is, “Thinking like an economist simply means that you scientifically approach human social behavior. . . .” I’ll give the context in a bit, but first let me say that I thought this topic might be worth one more discussion because I suspect that the sort of economics exceptionalism that I will discuss is widely disseminated in college econ courses as well as in books such as the Freakonomics series. It’s great to have pride in human achievements but at some point too much group self-regard can be distorting. My best analogy to economics exceptionalism is Freudianism in the 1950s: Back then, Freudian psychiatrists were on the top of the world. Not only were they well paid, well respected, and secure in their theoretical foundations, they were also at the center of many important conversations. Even those people who disagreed with them felt the need to explain why the Freudians were wrong. Freudian

4 0.9818393 785 andrew gelman stats-2011-07-02-Experimental reasoning in social science

Introduction: As a statistician, I was trained to think of randomized experimentation as representing the gold standard of knowledge in the social sciences, and, despite having seen occasional arguments to the contrary, I still hold that view, expressed pithily by Box, Hunter, and Hunter (1978) that “To find out what happens when you change something, it is necessary to change it.” At the same time, in my capacity as a social scientist, I’ve published many applied research papers, almost none of which have used experimental data. In the present article, I’ll address the following questions: 1. Why do I agree with the consensus characterization of randomized experimentation as a gold standard? 2. Given point 1 above, why does almost all my research use observational data? In confronting these issues, we must consider some general issues in the strategy of social science research. We also take from the psychology methods literature a more nuanced perspective that considers several differen

5 0.98101723 822 andrew gelman stats-2011-07-26-Any good articles on the use of error bars?

Introduction: Hadley Wickham asks: I was wondering if you knew of any good articles on the use of error bars. I’m particularly looking for articles that discuss the difference between error of means and error of difference in the context of models (e.g. mixed models) where they are very different. I suspect every applied field has a couple of good articles, but it’s really hard to search for them. Can anyone help on this? My only advice is to get rid of those horrible crossbars at the ends of the error bars. The crossbars draw attention to the error bars’ endpoints, which are generally not important at all. See, for example, my Anova paper , for some examples of how I like error bars to look.

6 0.98066324 236 andrew gelman stats-2010-08-26-Teaching yourself mathematics

7 0.97981358 277 andrew gelman stats-2010-09-14-In an introductory course, when does learning occur?

8 0.97973537 1761 andrew gelman stats-2013-03-13-Lame Statistics Patents

9 0.97968704 1336 andrew gelman stats-2012-05-22-Battle of the Repo Man quotes: Reid Hastie’s turn

10 0.97966647 757 andrew gelman stats-2011-06-10-Controversy over the Christakis-Fowler findings on the contagion of obesity

11 0.97933233 750 andrew gelman stats-2011-06-07-Looking for a purpose in life: Update on that underworked and overpaid sociologist whose “main task as a university professor was self-cultivation”

12 0.97906792 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

13 0.97901058 1372 andrew gelman stats-2012-06-08-Stop me before I aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

14 0.97882926 2279 andrew gelman stats-2014-04-02-Am I too negative?

15 0.97851628 2030 andrew gelman stats-2013-09-19-Is coffee a killer? I don’t think the effect is as high as was estimated from the highest number that came out of a noisy study

16 0.9783501 1289 andrew gelman stats-2012-04-29-We go to war with the data we have, not the data we want

17 0.97814023 186 andrew gelman stats-2010-08-04-“To find out what happens when you change something, it is necessary to change it.”

18 0.9780426 2008 andrew gelman stats-2013-09-04-Does it matter that a sample is unrepresentative? It depends on the size of the treatment interactions

19 0.97804183 2151 andrew gelman stats-2013-12-27-Should statistics have a Nobel prize?

20 0.97792292 1527 andrew gelman stats-2012-10-10-Another reason why you can get good inferences from a bad model