andrew_gelman_stats andrew_gelman_stats-2011 andrew_gelman_stats-2011-610 knowledge-graph by maker-knowledge-mining

610 andrew gelman stats-2011-03-13-Secret weapon with rare events


meta info for this blog

Source: html

Introduction: Gregory Eady writes: I’m working on a paper examining the effect of superpower alliance on a binary DV (war). I hypothesize that the size of the effect is much higher during the Cold War than it is afterwards. I’m going to run a Chow test to check whether this effect differs significantly between 1960-1989 and 1990-2007 (Scott Long also has a method using predicted probabilities), but I’d also like to show the trend graphically, and thought that your “Secret Weapon” would be useful here. I wonder if there is anything I should be concerned about when doing this with a (rare-events) logistic regression. I was thinking to graph the coefficients in 5-year periods, moving a single year at a time (1960-64, 1961-65, 1962-66, and so on), reporting the coefficient in the graph for the middle year of each 5-year range. My reply: I don’t know nuthin bout no Chow test but, sure, I’d think the secret weapon would work. If you’re analyzing 5-year periods, it might be cleaner just to keep the periods disjoint. Set the boundaries of these periods in a reasonable way (if necessary using periods of unequal lengths so that your intervals don’t straddle important potential change points). I suppose in this case you could do 1960-64, 65-69, …, and this would break at 1989/90 so it would be fine. If you’re really running into rare events, though, you might want 10-year periods rather than 5-year.
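
To make this concrete, here is a minimal R sketch of the secret weapon in this setting: fit the same logistic regression within each disjoint period and plot the coefficient of interest against time. The data frame d and its columns year, war, and alliance are hypothetical placeholders, not Eady’s actual variables; a real analysis would add controls and, with genuinely rare events, might replace glm() with a penalized-likelihood fit (e.g., the logistf package).

# Disjoint periods with a boundary at 1989/90, as suggested in the reply
breaks <- c(1960, 1965, 1970, 1975, 1980, 1985, 1990, 1995, 2000, 2008)
period <- cut(d$year, breaks, right = FALSE)

# Fit the same model separately within each period
fits <- lapply(split(d, period), function(dp)
  glm(war ~ alliance, family = binomial, data = dp))

est <- sapply(fits, function(f) coef(summary(f))["alliance", "Estimate"])
se  <- sapply(fits, function(f) coef(summary(f))["alliance", "Std. Error"])
mid <- (head(breaks, -1) + tail(breaks, -1)) / 2   # midpoint of each period

# Coefficient of interest over time, with +/- 2 standard-error bars
plot(mid, est, pch = 19, ylim = range(est - 2 * se, est + 2 * se),
     xlab = "year", ylab = "alliance coefficient (logit scale)")
segments(mid, est - 2 * se, mid, est + 2 * se)
abline(h = 0, lty = 2)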


Summary: the most important sentences, as selected by the tfidf model

sentIndex sentText sentNum sentScore

1 Gregory Eady writes: I’m working on a paper examining the effect of superpower alliance on a binary DV (war). [sent-1, score-0.46]

2 I hypothesize that the size of the effect is much higher during the Cold War than it is afterwards. [sent-2, score-0.379]

3 I’m going to run a Chow test to check whether this effect differs significantly between 1960-1989 and 1990-2007 (Scott Long also has a method using predicted probabilities), but I’d also like to show the trend graphically, and thought that your “Secret Weapon” would be useful here. [sent-3, score-0.806]

4 I wonder if there is anything I should be concerned about when doing this with a (rare-events) logistic regression. [sent-4, score-0.16]

5 I was thinking to graph the coefficients in 5-year periods, moving a single year at a time (1960-64, 1961-65, 1962-66, and so on), reporting the coefficient in the graph for the middle year of each 5-year range. [sent-5, score-0.76]

6 My reply: I don’t know nuthin bout no Chow test but, sure, I’d think the secret weapon would work. [sent-6, score-0.701]

7 If you’re analyzing 5-year periods, it might be cleaner just to keep the periods disjoint. [sent-7, score-0.778]

8 Set the boundaries of these periods in a reasonable way (if necessary using periods of unequal lengths so that your intervals don’t straddle important potential change points). [sent-8, score-1.927]

9 I suppose in this case you could do 1960-64, 65-69, …, and this would break at 1989/90 so it would be fine. [sent-9, score-0.213]

10 If you’re really running into rare events, though, you might want 10-year periods rather than 5-year. [sent-10, score-0.726]


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('periods', 0.583), ('chow', 0.28), ('weapon', 0.229), ('secret', 0.18), ('war', 0.158), ('straddle', 0.14), ('alliance', 0.134), ('effect', 0.132), ('dv', 0.129), ('gregory', 0.129), ('hypothesize', 0.129), ('graphically', 0.122), ('differs', 0.119), ('lengths', 0.117), ('unequal', 0.117), ('bout', 0.117), ('boundaries', 0.117), ('cleaner', 0.115), ('examining', 0.111), ('test', 0.11), ('cold', 0.109), ('graph', 0.102), ('significantly', 0.093), ('year', 0.092), ('scott', 0.088), ('trend', 0.087), ('concerned', 0.084), ('binary', 0.083), ('break', 0.083), ('analyzing', 0.08), ('coefficient', 0.08), ('predicted', 0.079), ('intervals', 0.079), ('events', 0.078), ('rare', 0.078), ('logistic', 0.076), ('coefficients', 0.075), ('probabilities', 0.074), ('moving', 0.074), ('middle', 0.074), ('necessary', 0.07), ('range', 0.069), ('reporting', 0.069), ('would', 0.065), ('running', 0.065), ('using', 0.062), ('size', 0.06), ('potential', 0.059), ('check', 0.059), ('higher', 0.058)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 610 andrew gelman stats-2011-03-13-Secret weapon with rare events


2 0.22195713 417 andrew gelman stats-2010-11-17-Clutering and variance components

Introduction: Raymond Lim writes: Do you have any recommendations on clustering and binary models? My particular problem is I’m running a firm fixed effect logit and want to cluster by industry-year (every combination of industry-year). My control variable of interest is measured by industry-year and when I cluster by industry-year, the standard errors are 300x larger than when I don’t cluster. Strangely, this problem only occurs when doing logit and not OLS (linear probability). Also, clustering just by field doesn’t blow up the errors. My hunch is it has something to do with the non-nested structure of year, but I don’t understand why this is only problematic under logit and not OLS. My reply: I’d recommend including four multilevel variance parameters, one for firm, one for industry, one for year, and one for industry-year. (In lmer, that’s (1 | firm) + (1 | industry) + (1 | year) + (1 | industry.year)). No need to include (1 | firm.year) since in your data this is the error term. Try
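
For reference, a minimal lme4 sketch of the suggested specification, assuming a hypothetical data frame d with columns y (the binary outcome), x (the industry-year control of interest), firm, industry, and year; the reply quotes lmer syntax, but with a binary outcome the analogous call is glmer() with a binomial family.

library(lme4)

# Build the industry-by-year grouping factor, then include all four variance components
d$industry.year <- interaction(d$industry, d$year)
fit <- glmer(y ~ x + (1 | firm) + (1 | industry) + (1 | year) + (1 | industry.year),
             family = binomial, data = d)
summary(fit)   # reports the firm, industry, year, and industry-year variances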

3 0.17617834 2120 andrew gelman stats-2013-12-02-Does a professor’s intervention in online discussions have the effect of prolonging discussion or cutting it off?

Introduction: Usually I don’t post answers to questions right away, but Mark Liberman was kind enough to answer my question yesterday so I think I should reciprocate. Mark asks: I’ve been playing around with data from Coursera transaction logs, for an economics course and a modern poetry course so far. For the Modern Poetry course, where there’s quite a bit of activity in the forums, the instructor (Al Filreis) is interested in what the factors are that lead to discussion threads being longer or shorter. For example, he wonders whether his own (fairly frequent) interventions have the effect of prolonging discussion or cutting it off. Some background explorations are here with the relevant stuff mostly at the end, including this . With respect to Al’s specific question, my thought was to look at each of his comments, each one being the nth in some sequence, and to look at the empirical probability of continuing (at all, or perhaps for at least 1,2,3,… additional turns) in those cases c

4 0.15725452 2319 andrew gelman stats-2014-05-05-Can we make better graphs of global temperature history?

Introduction: Chris Gittins sends along this post by Gavin Schmidt, who writes: Some editors at Wikipedia have made an attempt to produce a complete record for the Phanerozoic: But these collations are imperfect in many ways. On the last figure the time axis is a rather confusing mix of linear segments and logarithmic scaling, there is no calibration during overlap periods, and the scaling and baselining of the individual, differently sourced data is a little ad hoc. Wikipedia has figures for other time periods that have not been updated in years and treatment of uncertainties is haphazard (many originally from  GlobalWarmingArt ). I think this could all be done better. However, creating good graphics takes time and some skill, especially when the sources of data are so disparate. So this might be usefully done using some crowd-sourcing . . . In general, I’d give the advice that multiple graphs are a good idea, and that many graphics difficulties come from people trying to come up w

5 0.13123092 2086 andrew gelman stats-2013-11-03-How best to compare effects measured in two different time periods?

Introduction: I received the following email from someone who wishes to remain anonymous: My colleague and I are trying to understand the best way to approach a problem involving measuring a group of individuals’ abilities across time, and are hoping you can offer some guidance. We are trying to analyze the combined effect of two distinct groups of people (A and B, with no overlap between A and B) who collaborate to produce a binary outcome, using a mixed logistic regression along the lines of the following. Outcome ~ (1 | A) + (1 | B) + Other variables What we’re interested in testing is whether the observed A random effects in period 1 are predictive of the A random effects in the following period 2. Our idea is to create two models, each using a different period’s worth of data, to create two sets of A coefficients, then observe the relationship between the two. If the A’s have a persistent ability across periods, the coefficients should be correlated or show a linear-ish relationshi
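
A minimal sketch of the comparison described above, assuming a hypothetical data frame d with columns outcome (0/1), A, B, and period (1 or 2); this only illustrates the idea of fitting the model separately by period and comparing the estimated A effects, not the correspondent’s actual analysis.

library(lme4)

fit1 <- glmer(outcome ~ (1 | A) + (1 | B), family = binomial,
              data = subset(d, period == 1))
fit2 <- glmer(outcome ~ (1 | A) + (1 | B), family = binomial,
              data = subset(d, period == 2))

# Estimated A random effects from each period, matched on group labels
re1 <- ranef(fit1)$A
re2 <- ranef(fit2)$A
common <- intersect(rownames(re1), rownames(re2))

plot(re1[common, 1], re2[common, 1],
     xlab = "A effect, period 1", ylab = "A effect, period 2")
cor(re1[common, 1], re2[common, 1])   # persistence of ability across periods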

6 0.10069853 1311 andrew gelman stats-2012-05-10-My final exam for Design and Analysis of Sample Surveys

7 0.091994435 2294 andrew gelman stats-2014-04-17-If you get to the point of asking, just do it. But some difficulties do arise . . .

8 0.089690536 1690 andrew gelman stats-2013-01-23-When are complicated models helpful in psychology research and when are they overkill?

9 0.088777937 328 andrew gelman stats-2010-10-08-Displaying a fitted multilevel model

10 0.082415134 296 andrew gelman stats-2010-09-26-A simple semigraphic display

11 0.080945231 1605 andrew gelman stats-2012-12-04-Write This Book

12 0.074844211 1069 andrew gelman stats-2011-12-19-I got one of these letters once and was so irritated that I wrote back to the journal withdrawing my paper

13 0.07255777 315 andrew gelman stats-2010-10-03-He doesn’t trust the fit . . . r=.999

14 0.069813401 1607 andrew gelman stats-2012-12-05-The p-value is not . . .

15 0.068431854 1933 andrew gelman stats-2013-07-10-Please send all comments to -dev-ripley

16 0.066880129 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

17 0.066425785 2163 andrew gelman stats-2014-01-08-How to display multinominal logit results graphically?

18 0.06588921 851 andrew gelman stats-2011-08-12-year + (1|year)

19 0.064179495 502 andrew gelman stats-2011-01-04-Cash in, cash out graph

20 0.063908815 878 andrew gelman stats-2011-08-29-Infovis, infographics, and data visualization: Where I’m coming from, and where I’d like to go


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.128), (1, 0.005), (2, 0.051), (3, -0.022), (4, 0.072), (5, -0.055), (6, 0.008), (7, 0.018), (8, 0.028), (9, -0.004), (10, -0.005), (11, 0.009), (12, 0.032), (13, -0.052), (14, 0.02), (15, 0.009), (16, 0.011), (17, 0.004), (18, -0.012), (19, 0.006), (20, 0.002), (21, 0.036), (22, 0.007), (23, 0.003), (24, 0.02), (25, -0.016), (26, -0.005), (27, 0.01), (28, -0.001), (29, -0.016), (30, 0.024), (31, -0.017), (32, -0.033), (33, -0.015), (34, -0.001), (35, -0.042), (36, -0.009), (37, -0.007), (38, -0.006), (39, 0.04), (40, 0.018), (41, 0.015), (42, 0.028), (43, -0.035), (44, -0.016), (45, 0.016), (46, 0.007), (47, -0.022), (48, -0.001), (49, 0.016)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96359009 610 andrew gelman stats-2011-03-13-Secret weapon with rare events


2 0.74023861 507 andrew gelman stats-2011-01-07-Small world: MIT, asymptotic behavior of differential-difference equations, Susan Assmann, subgroup analysis, multilevel modeling

Introduction: A colleague recently sent me a copy of some articles on the estimation of treatment interactions (a topic that’s interested me for awhile). One of the articles, which appeared in the Lancet in 2000, was called “ Subgroup analysis and other (mis)uses of baseline data in clinical trials ,” by Susan F. Assmann, Stuart J. Pocock, Laura E. Enos, and Linda E. Kasten. . . . Hey, wait a minute–I know Susan Assmann! Well, I sort of know her. When I was a freshman in college, I asked my adviser, who was an applied math prof, if I could do some research. He connected me to Susan, who was one of his Ph.D. students, and she gave me a tiny part of her thesis to work on. The problem went as follows. You have a function f(x), for x going from 0 to infinity, that is defined as follows. Between 0 and 1, f(x)=x. Then, for x higher than 1, f’(x) = f(x) – f(x-1). The goal is to figure out what f(x) does. I think I’m getting this right here, but I might be getting confused on some of the detai
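
The function described here is easy to explore numerically; the R sketch below just steps a crude Euler scheme through f(x) = x on [0, 1] and f'(x) = f(x) - f(x - 1) for x > 1, as an illustration of the problem rather than anything like Assmann’s asymptotic analysis.

h   <- 0.001
x   <- seq(0, 20, by = h)
f   <- numeric(length(x))
f[x <= 1] <- x[x <= 1]            # initial segment: f(x) = x on [0, 1]
lag <- round(1 / h)               # index offset corresponding to x - 1

for (i in which(x > 1)) {
  f[i] <- f[i - 1] + h * (f[i - 1] - f[i - 1 - lag])   # Euler step for f'(x) = f(x) - f(x - 1)
}

plot(x, f, type = "l", xlab = "x", ylab = "f(x)")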

3 0.72778273 134 andrew gelman stats-2010-07-08-“What do you think about curved lines connecting discrete data-points?”

Introduction: John Keltz writes: What do you think about curved lines connecting discrete data-points? (For example, here .) The problem with the smoothed graph is it seems to imply that something is going on in between the discrete data points, which is false. However, the straight-line version isn’t representing actual events either- it is just helping the eye connect each point. So maybe the curved version is also just helping the eye connect each point, and looks better doing it. In my own work (value-added modeling of achievement test scores) I use straight lines, but I guess I am not too bothered when people use smoothing. I’d appreciate your input. Regular readers will be unsurprised that, yes, I have an opinion on this one, and that this opinion is connected to some more general ideas about statistical graphics. In general I’m not a fan of the curved lines. They’re ok, but I don’t really see the point. I can connect the dots just fine without the curves. The more general id
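
For what it’s worth, the two choices are easy to compare directly; the made-up data below are there only to show straight segments connecting the points next to a smooth interpolating curve through the same points.

x <- 1:8
y <- c(3, 5, 4, 7, 6, 9, 8, 10)

plot(x, y, pch = 19)
lines(x, y, lty = 2)                           # straight segments connecting the points
lines(spline(x, y, n = 200), col = "gray40")   # smooth curve through the same points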

4 0.72150475 417 andrew gelman stats-2010-11-17-Clutering and variance components


5 0.71898174 1470 andrew gelman stats-2012-08-26-Graphs showing regression uncertainty: the code!

Introduction: After our discussion of visual displays of regression uncertainty, I asked Solomon Hsiang and Lucas Leeman to send me their code. Both of them replied. Solomon wrote: The matlab and stata functions I wrote, as well as the script that replicates my figures, are all posted on my website . Also, I just added options to the main matlab function (vwregress.m) to make it display the spaghetti plot (similar to what Lucas did, but a simple bootstrap) and the shaded CI that you suggested (see figs below). They’re good suggestions. Personally, I [Hsiang] like the shaded CI better, since I think that all the visual activity in the spaghetti plot is a little distracting and sometimes adds visual weight in places where I wouldn’t want it. But the option is there in case people like it. Solomon then followed up with: I just thought of this small adjustment to your filled CI idea that seems neat. Cartographers like map projections that conserve area. We can do som
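
The Matlab and Stata code referred to here lives on Hsiang’s website; the base-R sketch below is not that code, just a small illustration of the two displays being compared (a bootstrap spaghetti plot and a shaded pointwise confidence band) for an ordinary linear regression on simulated data.

set.seed(1)
x <- runif(100)
y <- 1 + 2 * x + rnorm(100, sd = 0.5)
grid <- seq(0, 1, length.out = 50)

# Spaghetti plot: refit on bootstrap resamples and overplot the fitted lines
plot(x, y, col = "gray70")
for (b in 1:100) {
  i <- sample(seq_along(x), replace = TRUE)
  abline(lm(y[i] ~ x[i]), col = rgb(0, 0, 1, 0.1))
}

# Shaded band: pointwise confidence interval from predict()
fit <- lm(y ~ x)
ci  <- predict(fit, newdata = data.frame(x = grid), interval = "confidence")
plot(x, y, col = "gray70")
polygon(c(grid, rev(grid)), c(ci[, "lwr"], rev(ci[, "upr"])),
        col = rgb(0, 0, 0, 0.15), border = NA)
lines(grid, ci[, "fit"])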

6 0.71843749 1609 andrew gelman stats-2012-12-06-Stephen Kosslyn’s principles of graphics and one more: There’s no need to cram everything into a single plot

7 0.71529406 560 andrew gelman stats-2011-02-06-Education and Poverty

8 0.71238214 1746 andrew gelman stats-2013-03-02-Fishing for cherries

9 0.70911467 1215 andrew gelman stats-2012-03-16-The “hot hand” and problems with hypothesis testing

10 0.69978464 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

11 0.69903946 843 andrew gelman stats-2011-08-07-Non-rant

12 0.69881707 2243 andrew gelman stats-2014-03-11-The myth of the myth of the myth of the hot hand

13 0.69658375 1253 andrew gelman stats-2012-04-08-Technology speedup graph

14 0.69494575 808 andrew gelman stats-2011-07-18-The estimated effect size is implausibly large. Under what models is this a piece of evidence that the true effect is small?

15 0.69466162 2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?

16 0.69263822 245 andrew gelman stats-2010-08-31-Predicting marathon times

17 0.68544519 726 andrew gelman stats-2011-05-22-Handling multiple versions of an outcome variable

18 0.68521887 502 andrew gelman stats-2011-01-04-Cash in, cash out graph

19 0.68422252 1116 andrew gelman stats-2012-01-13-Infographic on the economy

20 0.68281168 1357 andrew gelman stats-2012-06-01-Halloween-Valentine’s update


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(6, 0.016), (9, 0.029), (15, 0.016), (16, 0.024), (20, 0.011), (24, 0.203), (27, 0.016), (37, 0.016), (53, 0.012), (58, 0.015), (63, 0.063), (79, 0.015), (85, 0.118), (86, 0.019), (90, 0.017), (97, 0.018), (99, 0.28)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9661603 610 andrew gelman stats-2011-03-13-Secret weapon with rare events


2 0.95832562 533 andrew gelman stats-2011-01-23-The scalarization of America

Introduction: Mark Palko writes : You lose information when you go from a vector to a scalar. But what about this trick, which they told me about in high school? Combine two dimensions into one by interleaving the decimals. For example, if a=.11111 and b=.22222, then (a,b) = .1212121212.
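
As a quick illustration of the quoted trick (not Palko’s own code), here is a small R function that interleaves the decimal digits of two numbers in [0, 1):

interleave <- function(a, b, digits = 5) {
  da <- as.integer(strsplit(sprintf("%.*f", digits, a), "")[[1]][-(1:2)])  # decimal digits of a
  db <- as.integer(strsplit(sprintf("%.*f", digits, b), "")[[1]][-(1:2)])  # decimal digits of b
  sum(c(rbind(da, db)) * 10^(-(1:(2 * digits))))                           # alternate the digits
}

interleave(0.11111, 0.22222)   # 0.1212121212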

3 0.95650864 843 andrew gelman stats-2011-08-07-Non-rant

Introduction: Dave Backus writes: I would love to see a better version of this [from Steve Hsu] if you have time. My reply: I actually think the graph is ok. It’s not perfect but it’s displaying a small set of numbers in a reasonably clear and coherent way! Everybody thinks I’m a curmudgeon but I like to mix it up on occasion and say something nice.

4 0.95043898 1534 andrew gelman stats-2012-10-15-The strange reappearance of Matthew Klam

Introduction: A few years ago I asked what happened to Matthew Klam, a talented writer who has a bizarrely professional-looking webpage but didn’t seem to be writing anymore. Good news! He published a new story in the New Yorker! Confusingly, he wrote it under the name “Justin Taylor,” but I’m not fooled (any more than I was fooled when that posthumous Updike story was published under the name “ Antonya Nelson “). I’m glad to see that Klam is back in action and look forward to seeing some stories under his own name as well.

5 0.94190168 375 andrew gelman stats-2010-10-28-Matching for preprocessing data for causal inference

Introduction: Chris Blattman writes: Matching is not an identification strategy or a solution to your endogeneity problem; it is a weighting scheme. Saying matching will reduce endogeneity bias is like saying that the best way to get thin is to weigh yourself in kilos. The statement makes no sense. It confuses technique with substance. . . . When you run a regression, you control for the X you can observe. When you match, you are simply matching based on those same X. . . . I see what Chris is getting at–matching, like regression, won’t help for the variables you’re not controlling for–but I disagree with his characterization of matching as a weighting scheme. I see matching as a way to restrict your analysis to comparable cases. The statistical motivation: robustness. If you had a good enough model, you wouldn’t need to match, you’d just fit the model to the data. But in common practice we often use simple regression models and so it can be helpful to do some matching first before regress

6 0.94136006 417 andrew gelman stats-2010-11-17-Clutering and variance components

7 0.93152672 2086 andrew gelman stats-2013-11-03-How best to compare effects measured in two different time periods?

8 0.92798209 1374 andrew gelman stats-2012-06-11-Convergence Monitoring for Non-Identifiable and Non-Parametric Models

9 0.92747831 2300 andrew gelman stats-2014-04-21-Ticket to Baaaath

10 0.92743438 2319 andrew gelman stats-2014-05-05-Can we make better graphs of global temperature history?

11 0.92705917 1966 andrew gelman stats-2013-08-03-Uncertainty in parameter estimates using multilevel models

12 0.9253884 2010 andrew gelman stats-2013-09-06-Would today’s captains of industry be happier in a 1950s-style world?

13 0.92288226 1176 andrew gelman stats-2012-02-19-Standardized writing styles and standardized graphing styles

14 0.92175186 1240 andrew gelman stats-2012-04-02-Blogads update

15 0.92116231 102 andrew gelman stats-2010-06-21-Why modern art is all in the mind

16 0.92105281 970 andrew gelman stats-2011-10-24-Bell Labs

17 0.92089283 2365 andrew gelman stats-2014-06-09-I hate polynomials

18 0.92084694 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability

19 0.92070818 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

20 0.920672 912 andrew gelman stats-2011-09-15-n = 2