andrew_gelman_stats andrew_gelman_stats-2010 andrew_gelman_stats-2010-328 knowledge-graph by maker-knowledge-mining

328 andrew gelman stats-2010-10-08-Displaying a fitted multilevel model


meta info for this blog

Source: html

Introduction: Elissa Brown writes: I’m working on some data using a multinomial model (3 categories for the response & 2 predictors-1 continuous and 1 binary), and I’ve been looking and looking for some sort of nice graphical way to show my model at work. Something like a predicted probabilities plot. I know you can do this for the levels of Y with just one covariate, but is this still a valid way to describe the multinomial model (just doing a pred plot for each covariate)? What’s the deal, is there really no way to graphically represent a successful multinomial model? Also, is it unreasonable to break down your model into a binary response just to get some ROC curves? This seems like cheating. From what I’ve found so far, it seems that people just avoid graphical support when discussing their fitted multinomial models. My reply: It’s hard for me to think about this sort of thing in the abstract with no context. We do have one example in chapter 6 of ARM where we display data and fitted m
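
One way to make the requested "predicted probabilities plot" concrete is to fit the multinomial model and plot each category's fitted probability against the continuous predictor, with a separate panel (or line type) for each level of the binary predictor. The sketch below is only illustrative: it uses simulated data and nnet::multinom, not Elissa's data or the storable-votes display from ARM chapter 6.

# Minimal sketch: 3-category outcome y, continuous predictor x, binary predictor z
library(nnet)   # multinom()

set.seed(1)
n <- 500
x <- rnorm(n)
z <- rbinom(n, 1, 0.5)
eta2 <- -0.5 + 1.0 * x + 0.8 * z          # linear predictor, category 2 vs. 1
eta3 <-  0.3 - 0.7 * x + 0.4 * z          # linear predictor, category 3 vs. 1
p <- cbind(1, exp(eta2), exp(eta3))
p <- p / rowSums(p)
y <- factor(apply(p, 1, function(pr) sample(1:3, 1, prob = pr)))

fit <- multinom(y ~ x + z, trace = FALSE)

# Predicted probabilities over a grid of x, separately for z = 0 and z = 1
x_grid <- seq(min(x), max(x), length.out = 100)
newdat <- expand.grid(x = x_grid, z = 0:1)
pred <- predict(fit, newdata = newdat, type = "probs")

# One panel per level of z, one curve per response category
par(mfrow = c(1, 2))
for (zz in 0:1) {
  rows <- newdat$z == zz
  matplot(x_grid, pred[rows, ], type = "l", lty = 1, ylim = c(0, 1),
          xlab = "x", ylab = "Pr(y = k)", main = paste("z =", zz))
}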


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Elissa Brown writes: I’m working on some data using a multinomial model (3 categories for the response & 2 predictors-1 continuous and 1 binary), and I’ve been looking and looking for some sort of nice graphical way to show my model at work. [sent-1, score-1.926]

2 I know you can do this for the levels of Y with just one covariate, but is this still a valid way to describe the multinomial model (just doing a pred plot for each covariate)? [sent-3, score-1.209]

3 What’s the deal, is there really no way to graphically represent a successful multinomial model? [sent-4, score-0.919]

4 Also, is it unreasonable to break down your model into a binary response just to get some ROC curves? [sent-5, score-0.91]

5 From what I’ve found so far, it seems that people just avoid graphical support when discussing their fitted multinomial models. [sent-7, score-1.182]

6 My reply: It’s hard for me to think about this sort of thing in the abstract with no context. [sent-8, score-0.149]

7 We do have one example in chapter 6 of ARM where we display data and fitted model together in a plot–it’s from our storable votes project–but maybe it’s not quite general enough for your problem. [sent-9, score-0.886]

8 I’m sure, though, that there is a good solution, and likely it’s a solution that’s worth programming and writing up in a journal article. [sent-10, score-0.232]

9 I certainly agree that it’s a bad idea to break up your response into binary just to use some convenient binary-data tools. [sent-11, score-0.76]

10 If you must dichotomize your data, please throw out the middle third or half. [sent-12, score-0.373]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('multinomial', 0.555), ('binary', 0.268), ('covariate', 0.242), ('model', 0.189), ('break', 0.179), ('fitted', 0.179), ('graphical', 0.169), ('response', 0.161), ('plot', 0.153), ('storable', 0.15), ('solution', 0.144), ('roc', 0.144), ('graphically', 0.131), ('unreasonable', 0.113), ('curves', 0.111), ('brown', 0.105), ('looking', 0.102), ('arm', 0.093), ('convenient', 0.092), ('categories', 0.091), ('votes', 0.091), ('programming', 0.088), ('valid', 0.086), ('predicted', 0.085), ('throw', 0.082), ('successful', 0.082), ('discussing', 0.081), ('display', 0.081), ('continuous', 0.08), ('probabilities', 0.08), ('middle', 0.079), ('abstract', 0.076), ('levels', 0.076), ('represent', 0.076), ('third', 0.076), ('way', 0.075), ('describe', 0.075), ('nice', 0.075), ('sort', 0.073), ('avoid', 0.073), ('half', 0.071), ('project', 0.068), ('deal', 0.068), ('chapter', 0.067), ('seems', 0.065), ('please', 0.065), ('data', 0.065), ('together', 0.064), ('certainly', 0.06), ('support', 0.06)]
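
For readers unfamiliar with the weights above, a toy illustration of per-document tf-idf scoring (not the actual pipeline used to build this page) might look like the following; the tiny corpus is made up.

# Toy tf-idf: term frequency in one document times log inverse document frequency
docs <- list(
  c("multinomial", "model", "binary", "plot", "model"),
  c("model", "regression", "binary"),
  c("plot", "graphics")
)
vocab <- unique(unlist(docs))
tf  <- sapply(vocab, function(w) sum(docs[[1]] == w) / length(docs[[1]]))
idf <- sapply(vocab, function(w)
  log(length(docs) / sum(sapply(docs, function(d) w %in% d))))
round(sort(tf * idf, decreasing = TRUE), 3)   # top-weighted words for document 1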

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 328 andrew gelman stats-2010-10-08-Displaying a fitted multilevel model


2 0.23178348 2163 andrew gelman stats-2014-01-08-How to display multinominal logit results graphically?

Introduction: Adriana Lins de Albuquerque writes: Do you have any suggestions for the best way to represent multinominal logit results graphically? I am using stata. My reply: I don’t know from Stata, but here are my suggestions: 1. If the categories are unordered, break them up into a series of binary choices in a tree structure (for example, non-voter or voter, then voting for left or right, then voting for left party A or B, then voting for right party C or D). Each of these is a binary split and so can be displayed using the usual techniques for logit (as in chapters 3 and 4 of ARM). 2. If the categories are ordered, see Figure 6.4 of ARM for an example (from our analysis of storable votes).
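
Suggestion 1 (a tree of binary splits, each treated as an ordinary logit) might be coded roughly as below; the voting categories, predictors, and data are all hypothetical, and this is only a sketch of the idea, not the analysis from the post.

# Hypothetical unordered outcome with a tree of binary splits,
# each split fit and displayed as a plain binary logit.
set.seed(1)
n <- 1000
d <- data.frame(x = rnorm(n), z = rbinom(n, 1, 0.5))
d$vote <- sample(c("nonvoter", "left_A", "left_B", "right_C", "right_D"),
                 n, replace = TRUE)        # purely illustrative, no real signal

# split 1: voter vs. nonvoter (all respondents)
fit_vote <- glm(I(vote != "nonvoter") ~ x + z, family = binomial, data = d)

# split 2: left vs. right (voters only)
voters <- subset(d, vote != "nonvoter")
fit_side <- glm(I(vote %in% c("left_A", "left_B")) ~ x + z,
                family = binomial, data = voters)

# split 3a: left party A vs. B (left voters only)
left <- subset(voters, vote %in% c("left_A", "left_B"))
fit_left <- glm(I(vote == "left_A") ~ x + z, family = binomial, data = left)

# split 3b: right party C vs. D (right voters only)
right <- subset(voters, vote %in% c("right_C", "right_D"))
fit_right <- glm(I(vote == "right_C") ~ x + z, family = binomial, data = right)

# Each fit is an ordinary logit and can be displayed with the usual
# logistic-regression plots (as in chapters 3 and 4 of ARM).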

3 0.17001744 782 andrew gelman stats-2011-06-29-Putting together multinomial discrete regressions by combining simple logits

Introduction: When predicting 0/1 data we can use logit (or probit or robit or some other robust model such as invlogit (0.01 + 0.98*X*beta)). Logit is simple enough and we can use bayesglm to regularize and avoid the problem of separation. What if there are more than 2 categories? If they’re ordered (1, 2, 3, etc), we can do ordered logit (and use bayespolr() to avoid separation). If the categories are unordered (vanilla, chocolate, strawberry), there are unordered multinomial logit and probit models out there. But it’s not so easy to fit these multinomial models in a multilevel setting (with coefficients that vary by group), especially if the computation is embedded in an iterative routine such as mi where you have real-time constraints at each step. So this got me wondering whether we could kluge it with logits. Here’s the basic idea (in the ordered and unordered forms): - If you have a variable that goes 1, 2, 3, etc., set up a series of logits: 1 vs. 2,3,…; 2 vs. 3,…; and so forth
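
A rough sketch of the ordered version of this kluge (the series of logits "1 vs. 2,3,…; 2 vs. 3,…"), with made-up data and plain glm() in place of bayesglm() or a multilevel fit:

# Ordered outcome y in {1, 2, 3}, approximated by a sequence of binary logits
set.seed(1)
n <- 800
x <- rnorm(n)
y <- cut(x + rlogis(n), breaks = c(-Inf, -0.5, 0.5, Inf), labels = FALSE)

# logit 1: category 1 vs. {2, 3}, using everyone
fit1 <- glm(I(y == 1) ~ x, family = binomial)

# logit 2: category 2 vs. 3, using only units with y >= 2
keep <- y >= 2
fit2 <- glm(I(y[keep] == 2) ~ x[keep], family = binomial)

# Recombine into category probabilities on a grid of x
x_new <- seq(-3, 3, length.out = 50)
p1 <- plogis(coef(fit1)[1] + coef(fit1)[2] * x_new)            # Pr(y = 1)
p2 <- (1 - p1) * plogis(coef(fit2)[1] + coef(fit2)[2] * x_new) # Pr(y = 2)
p3 <- 1 - p1 - p2                                              # Pr(y = 3)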

4 0.16244975 154 andrew gelman stats-2010-07-18-Predictive checks for hierarchical models

Introduction: Daniel Corsi writes: I was wondering if you could help me with some code to set up a posterior predictive check for an unordered multinomial multilevel model. In this case the outcome is categories of bmi (underweight, normal weight, and overweight) based on individuals from 360 different areas. What I would like to do is set up a replicated dataset to see how the number of overweight/underweight/normal weight individuals based on the model compares to the actual data and some kind of a graphical summary. I am following along with chapter 24 of the ARM book but I want to verify that the replicated data accounts for the multilevel structure of the data of people within areas. I am attaching the code I used to run a simple model with only 2 predictors (area wealth and urban/rural designation). My reply: The Bugs code is a bit much for me to look at–but I do recommend that you run it from R, which will give you more flexibility in preprocessing and postprocessing the data. Beyon
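
The recipe in the reply (simulate replicated datasets from the fitted model, keeping the people-within-areas structure, then compare replicated category counts to the observed counts graphically) might be sketched as follows; the placeholder data and posterior draws stand in for whatever the actual Bugs/R fit produces.

# Placeholder data and posterior draws (in practice these come from the real
# dataset and the fitted model; the real example has 360 areas)
set.seed(1)
n_areas <- 20; n_people <- 500; n_draws <- 100
d <- data.frame(area = sample(1:n_areas, n_people, replace = TRUE))
d$bmi_cat <- sample(1:3, n_people, replace = TRUE, prob = c(0.2, 0.6, 0.2))
p_draw <- array(NA, c(n_draws, n_areas, 3))   # per-draw, per-area category probs
for (s in 1:n_draws) {
  for (j in 1:n_areas) {
    g <- rgamma(3, shape = c(2, 6, 2))
    p_draw[s, j, ] <- g / sum(g)
  }
}

# Posterior predictive replication: for each draw, simulate a replicated
# dataset with the same people-within-areas structure, then tally categories
obs_counts <- tabulate(d$bmi_cat, nbins = 3)
rep_counts <- matrix(NA, n_draws, 3)
for (s in 1:n_draws) {
  y_rep <- integer(nrow(d))
  for (j in 1:n_areas) {
    rows <- which(d$area == j)
    y_rep[rows] <- sample(1:3, length(rows), replace = TRUE, prob = p_draw[s, j, ])
  }
  rep_counts[s, ] <- tabulate(y_rep, nbins = 3)
}

# Graphical summary: replicated counts per category, observed count overlaid
par(mfrow = c(1, 3))
for (k in 1:3) {
  hist(rep_counts[, k], main = paste("category", k), xlab = "replicated count")
  abline(v = obs_counts[k], lwd = 2)
}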

5 0.15201712 852 andrew gelman stats-2011-08-13-Checking your model using fake data

Introduction: Someone sent me the following email: I tried to do a logistic regression . . . I programmed the model in different ways and got different answers . . . can’t get the results to match . . . What am I doing wrong? . . . Here’s my code . . . I didn’t have the time to look at his code so I gave the following general response: One way to check things is to try simulating data from the fitted model, then fit your model again to the simulated data and see what happens. P.S. He followed my suggestion and responded a few days later: Yeah, that did the trick! I was treating a factor variable as a covariate! I love it when generic advice works out!
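
The general advice here ("simulate data from the fitted model, then fit your model again to the simulated data") can be sketched for a logistic regression like this; the data are made up.

# Fake-data check for a logistic regression:
# 1. fit the model, 2. simulate new outcomes from the fit, 3. refit and compare
set.seed(1)
n <- 1000
x <- rnorm(n)                            # stand-in for the real predictor(s)
y <- rbinom(n, 1, plogis(-1 + 2 * x))

fit <- glm(y ~ x, family = binomial)

# Simulate outcomes from the fitted model, holding the predictors fixed
y_fake <- rbinom(n, 1, fitted(fit))

# Refit the same model to the fake data; the estimates should be close
fit_fake <- glm(y_fake ~ x, family = binomial)
cbind(original = coef(fit), refit_on_fake = coef(fit_fake))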

6 0.14218342 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

7 0.1321805 1462 andrew gelman stats-2012-08-18-Standardizing regression inputs

8 0.12486663 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

9 0.11104077 935 andrew gelman stats-2011-10-01-When should you worry about imputed data?

10 0.10982943 2145 andrew gelman stats-2013-12-24-Estimating and summarizing inference for hierarchical variance parameters when the number of groups is small

11 0.1097181 780 andrew gelman stats-2011-06-27-Bridges between deterministic and probabilistic models for binary data

12 0.10801905 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

13 0.10617535 1196 andrew gelman stats-2012-03-04-Piss-poor monocausal social science

14 0.10451462 24 andrew gelman stats-2010-05-09-Special journal issue on statistical methods for the social sciences

15 0.102033 772 andrew gelman stats-2011-06-17-Graphical tools for understanding multilevel models

16 0.10040827 1047 andrew gelman stats-2011-12-08-I Am Too Absolutely Heteroskedastic for This Probit Model

17 0.099721439 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

18 0.099501729 1228 andrew gelman stats-2012-03-25-Continuous variables in Bayesian networks

19 0.099387631 324 andrew gelman stats-2010-10-07-Contest for developing an R package recommendation system

20 0.099107534 1431 andrew gelman stats-2012-07-27-Overfitting


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.168), (1, 0.066), (2, 0.031), (3, 0.044), (4, 0.081), (5, -0.017), (6, -0.013), (7, -0.04), (8, 0.09), (9, 0.03), (10, 0.068), (11, 0.029), (12, -0.054), (13, -0.017), (14, -0.041), (15, -0.007), (16, 0.063), (17, -0.027), (18, -0.013), (19, -0.007), (20, 0.016), (21, -0.013), (22, -0.008), (23, -0.071), (24, -0.042), (25, -0.012), (26, 0.026), (27, -0.055), (28, -0.025), (29, -0.015), (30, -0.019), (31, -0.017), (32, 0.004), (33, 0.031), (34, -0.025), (35, -0.008), (36, 0.017), (37, 0.02), (38, -0.023), (39, 0.011), (40, 0.024), (41, -0.037), (42, -0.007), (43, 0.021), (44, -0.005), (45, 0.031), (46, -0.034), (47, -0.02), (48, 0.021), (49, 0.062)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95868206 328 andrew gelman stats-2010-10-08-Displaying a fitted multilevel model


2 0.87212956 782 andrew gelman stats-2011-06-29-Putting together multinomial discrete regressions by combining simple logits

Introduction: When predicting 0/1 data we can use logit (or probit or robit or some other robust model such as invlogit (0.01 + 0.98*X*beta)). Logit is simple enough and we can use bayesglm to regularize and avoid the problem of separation. What if there are more than 2 categories? If they’re ordered (1, 2, 3, etc), we can do ordered logit (and use bayespolr() to avoid separation). If the categories are unordered (vanilla, chocolate, strawberry), there are unordered multinomial logit and probit models out there. But it’s not so easy to fit these multinomial models in a multilevel setting (with coefficients that vary by group), especially if the computation is embedded in an iterative routine such as mi where you have real-time constraints at each step. So this got me wondering whether we could kluge it with logits. Here’s the basic idea (in the ordered and unordered forms): - If you have a variable that goes 1, 2, 3, etc., set up a series of logits: 1 vs. 2,3,…; 2 vs. 3,…; and so forth

3 0.85480016 852 andrew gelman stats-2011-08-13-Checking your model using fake data

Introduction: Someone sent me the following email: I tried to do a logistic regression . . . I programmed the model in different ways and got different answers . . . can’t get the results to match . . . What am I doing wrong? . . . Here’s my code . . . I didn’t have the time to look at his code so I gave the following general response: One way to check things is to try simulating data from the fitted model, then fit your model again to the simulated data and see what happens. P.S. He followed my suggestion and responded a few days later: Yeah, that did the trick! I was treating a factor variable as a covariate! I love it when generic advice works out!

4 0.85303408 1875 andrew gelman stats-2013-05-28-Simplify until your fake-data check works, then add complications until you can figure out where the problem is coming from

Introduction: I received the following email: I am trying to develop a Bayesian model to represent the process through which individual consumers make online product rating decisions. In my model each individual faces total J product options and for each product option (j) each individual (i) needs to make three sequential decisions: - First he decides whether to consume a specific product option (j) or not (choice decision) - If he decides to consume a product option j, then after consumption he decides whether to rate it or not (incidence decision) - If he decides to rate product j then what finally he decides what rating (k) to assign to it (evaluation decision) We model this decision sequence in terms of three equations. A binary response variable in the first equation represents the choice decision. Another binary response variable in the second equation represents the incidence decision that is observable only when first selection decision is 1. Finally, an ordered response v

5 0.80408692 1141 andrew gelman stats-2012-01-28-Using predator-prey models on the Canadian lynx series

Introduction: The “Canadian lynx data” is one of the famous examples used in time series analysis. And the usual models that are fit to these data in the statistics time-series literature don’t work well. Cavan Reilly and Angelique Zeringue write: Reilly and Zeringue then present their analysis. Their simple little predator-prey model with a weakly informative prior way outperforms the standard big-ass autoregression models. Check this out: Or, to put it into numbers, when they fit their model to the first 80 years and predict to the next 34, their root mean square out-of-sample error is 1480 (see scale of data above). In contrast, the standard model fit to these data (the SETAR model of Tong, 1990) has more than twice as many parameters but gets a worse-performing root mean square error of 1600, even when that model is fit to the entire dataset. (If you fit the SETAR or any similar autoregressive model to the first 80 years and use it to predict the next 34, the predictions

6 0.79134554 1735 andrew gelman stats-2013-02-24-F-f-f-fake data

7 0.78799438 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

8 0.78583992 20 andrew gelman stats-2010-05-07-Bayesian hierarchical model for the prediction of soccer results

9 0.78413355 935 andrew gelman stats-2011-10-01-When should you worry about imputed data?

10 0.77421391 823 andrew gelman stats-2011-07-26-Including interactions or not

11 0.77285057 1431 andrew gelman stats-2012-07-27-Overfitting

12 0.77014601 1395 andrew gelman stats-2012-06-27-Cross-validation (What is it good for?)

13 0.76882029 2133 andrew gelman stats-2013-12-13-Flexibility is good

14 0.76774311 1392 andrew gelman stats-2012-06-26-Occam

15 0.76194841 1999 andrew gelman stats-2013-08-27-Bayesian model averaging or fitting a larger model

16 0.76056224 1047 andrew gelman stats-2011-12-08-I Am Too Absolutely Heteroskedastic for This Probit Model

17 0.75539982 448 andrew gelman stats-2010-12-03-This is a footnote in one of my papers

18 0.75453687 24 andrew gelman stats-2010-05-09-Special journal issue on statistical methods for the social sciences

19 0.75311095 929 andrew gelman stats-2011-09-27-Visual diagnostics for discrete-data regressions

20 0.7530427 1468 andrew gelman stats-2012-08-24-Multilevel modeling and instrumental variables


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(15, 0.012), (16, 0.036), (24, 0.201), (27, 0.014), (31, 0.013), (34, 0.061), (37, 0.029), (50, 0.142), (51, 0.015), (55, 0.01), (63, 0.013), (86, 0.024), (99, 0.323)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95602554 328 andrew gelman stats-2010-10-08-Displaying a fitted multilevel model


2 0.95431507 1793 andrew gelman stats-2013-04-08-The Supreme Court meets the fallacy of the one-sided bet

Introduction: Doug Hartmann writes (link from Jay Livingston): Justice Antonin Scalia’s comment in the Supreme Court hearings on the U.S. law defining marriage that “there’s considerable disagreement among sociologists as to what the consequences of raising a child in a single-sex family, whether that is harmful to the child or not.” Hartmann argues that Scalia is factually incorrect—there is not actually “considerable disagreement among sociologists” on this issue—and quotes a recent report from the American Sociological Association to this effect. Assuming there’s no other considerable group of sociologists (Hartmann knows of only one small group) arguing otherwise, it seems that Hartmann has a point. Scalia would’ve been better off omitting the phrase “among sociologists”—then he’d have been on safe ground, because you can always find somebody to take a position on the issue. Jerry Falwell’s no longer around but there’s a lot more where he came from. Even among scientists, there’s

3 0.94328827 1805 andrew gelman stats-2013-04-16-Memo to Reinhart and Rogoff: I think it’s best to admit your errors and go on from there

Introduction: Jeff Ratto points me to this news article by Dean Baker reporting the work of three economists, Thomas Herndon, Michael Ash, and Robert Pollin, who found errors in a much-cited article by Carmen Reinhart and Kenneth Rogoff analyzing historical statistics of economic growth and public debt. Mike Konczal provides a clear summary; that’s where I got the above image. Errors in data processing and data analysis: It turns out that Reinhart and Rogoff flubbed it. Herndon et al. write of “spreadsheet errors, omission of available data, weighting, and transcription.” The spreadsheet errors are the most embarrassing, but the other choices in data analysis seem pretty bad too. It can be tough to work with small datasets, so I have sympathy for Reinhart and Rogoff, but it does look like they were jumping to conclusions in their paper. Perhaps the urgency of the topic moved them to publish as fast as possible rather than carefully considering the impact of their data-analytic choi

4 0.94207907 818 andrew gelman stats-2011-07-23-Parallel JAGS RNGs

Introduction: As a matter of convention, we usually run 3 or 4 chains in JAGS. By default, this gives rise to chains that draw samples from 3 or 4 distinct pseudorandom number generators. I didn’t go and check whether it does things 111,222,333 or 123,123,123, but in any event the “parallel chains” in JAGS are samples drawn from distinct RNGs computed on a single processor core. But we all have multiple cores now, or we’re computing on a cluster or the cloud! So the behavior we’d like from rjags is to use the foreach package with each JAGS chain using a parallel-safe RNG. The default behavior with n.chain=1 will be that each parallel instance will use .RNG.name[1], the Wichmann-Hill RNG. JAGS 2.2.0 includes a new lecuyer module (along with the glm module, which everyone should probably always use, and doesn’t have many undocumented tricks that I know of). But lecuyer is completely undocumented! I tried .RNG.name=”lecuyer::Lecuyer”, .RNG.name=”lecuyer::lecuyer”, and .RNG.name=
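
A rough sketch of the pattern being described (one JAGS chain per parallel worker via foreach, each chain initialized with its own RNG) is below. It uses the four documented base RNGs rather than the undocumented lecuyer name the post is trying to pin down, and a throwaway model and dataset stand in for a real analysis.

library(rjags)
library(foreach)
library(doParallel)
library(coda)

# Throwaway model and data, just so the sketch runs end to end
model_string <- "
model {
  for (i in 1:N) { y[i] ~ dnorm(beta, 1) }
  beta ~ dnorm(0, 0.0001)
}"
writeLines(model_string, "model.bug")
data_list <- list(y = rnorm(50, mean = 3), N = 50)

registerDoParallel(cores = 4)

# Documented base RNGs; one per chain so the parallel chains do not share streams
rng_names <- c("base::Wichmann-Hill", "base::Marsaglia-Multicarry",
               "base::Super-Duper", "base::Mersenne-Twister")

chains <- foreach(i = 1:4, .packages = "rjags") %dopar% {
  inits <- list(.RNG.name = rng_names[i], .RNG.seed = 1000 + i)
  m <- jags.model("model.bug", data = data_list, inits = inits,
                  n.chains = 1, quiet = TRUE)
  coda.samples(m, variable.names = "beta", n.iter = 2000)
}

# Combine the per-worker chains into a single mcmc.list for diagnostics
samples <- do.call(mcmc.list, lapply(chains, function(x) x[[1]]))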

5 0.93540132 541 andrew gelman stats-2011-01-27-Why can’t I be more like Bill James, or, The use of default and default-like models

Introduction: During our discussion of estimates of teacher performance, Steve Sailer wrote: I suspect we’re going to take years to work the kinks out of overall rating systems. By way of analogy, Bill James kicked off the modern era of baseball statistics analysis around 1975. But he stuck to doing smaller scale analyses and avoided trying to build one giant overall model for rating players. In contrast, other analysts such as Pete Palmer rushed into building overall ranking systems, such as his 1984 book, but they tended to generate curious results such as the greatness of Roy Smalley Jr. James held off until 1999 before unveiling his win share model for overall rankings. I remember looking at Pete Palmer’s book many years ago and being disappointed that he did everything through his Linear Weights formula. A hit is worth X, a walk is worth Y, etc. Some of this is good–it’s presumably an improvement on counting walks as 0 or 1 hits, also an improvement on counting doubles and triples a

6 0.93499899 1981 andrew gelman stats-2013-08-14-The robust beauty of improper linear models in decision making

7 0.93488562 374 andrew gelman stats-2010-10-27-No matter how famous you are, billions of people have never heard of you.

8 0.93118787 1636 andrew gelman stats-2012-12-23-Peter Bartlett on model complexity and sample size

9 0.92993832 210 andrew gelman stats-2010-08-16-What I learned from those tough 538 commenters

10 0.92657816 729 andrew gelman stats-2011-05-24-Deviance as a difference

11 0.92576468 936 andrew gelman stats-2011-10-02-Covariate Adjustment in RCT - Model Overfitting in Multilevel Regression

12 0.92452961 1792 andrew gelman stats-2013-04-07-X on JLP

13 0.92382503 61 andrew gelman stats-2010-05-31-A data visualization manifesto

14 0.92329228 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves

15 0.92312032 1723 andrew gelman stats-2013-02-15-Wacky priors can work well?

16 0.92282808 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards

17 0.92127788 970 andrew gelman stats-2011-10-24-Bell Labs

18 0.92125201 2305 andrew gelman stats-2014-04-25-Revised statistical standards for evidence (comments to Val Johnson’s comments on our comments on Val’s comments on p-values)

19 0.92102373 466 andrew gelman stats-2010-12-13-“The truth wears off: Is there something wrong with the scientific method?”

20 0.92089069 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability