andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1425 knowledge-graph by maker-knowledge-mining

1425 andrew gelman stats-2012-07-23-Examples of the use of hierarchical modeling to generalize to new settings


meta information for this blog post

Source: html

Introduction: In a link to our back-and-forth on causal inference and the use of hierarchical models to bridge between different inferential settings, Elias Bareinboim (a computer scientist who is working with Judea Pearl) writes: In the past week, I have been engaged in a discussion with Andrew Gelman and his blog readers regarding causal inference, selection bias, confounding, and generalizability. I was trying to understand how his method which he calls “hierarchical modeling” would handle these issues and what guarantees it provides. . . . If anyone understands how “hierarchical modeling” can solve a simple toy problem (e.g., M-bias, control of confounding, mediation, generalizability), please share with us. In his post, Bareinboim raises a direct question about hierarchical modeling and also indirectly brings up larger questions about what is convincing evidence when evaluating a statistical method. As I wrote earlier, Bareinboim believes that “The only way investigators can decide w


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 I was trying to understand how his method which he calls “hierarchical modeling” would handle these issues and what guarantees it provides. [sent-2, score-0.31]

2 If anyone understands how “hierarchical modeling” can solve a simple toy problem (e.g., M-bias, control of confounding, mediation, generalizability), please share with us. [sent-6, score-0.31]

3 In his post, Bareinboim raises a direct question about hierarchical modeling and also indirectly brings up larger questions about what is convincing evidence when evaluating a statistical method. [sent-9, score-0.744]

4 As I wrote earlier, Bareinboim believes that “The only way investigators can decide whether ‘hierarchical modeling is the way to go’ is for someone to demonstrate the method on a toy example,” whereas I am more convinced by live applications. [sent-10, score-0.699]

5 Other people are convinced by theorems, while there is yet another set of researchers who are most convinced by performance on benchmark problems. [sent-11, score-0.303]

6 For now, let me answer Bareinboim’s immediate question about hierarchical modeling and inference. [sent-13, score-0.719]

7 I did not supply examples in that blog post but many many examples of hierarchical models appear in three of my four books and in many of my research articles. [sent-17, score-0.838]

8 This can be framed in a hierarchical model in which the J cases in your training set are a sample from population 1 and your new case is drawn from population 2. [sent-22, score-0.748]

9 And, as with hierarchical models in general, the more information you have in the observed X’s, the less variation you would hope to have in the thetas. [sent-25, score-0.429]

10 My recommended approach is to build a hierarchical model in which one component of variance represents this difference. [sent-28, score-0.628]

11 People don’t always think of hierarchical modeling here because in this version of the problem it might seem that J (the number of groups) is only 2. [sent-29, score-0.6]

12 But in many settings (such as the buildings example above), I think existing data has enough multiplicity that a researcher can learn about this variance component. [sent-30, score-0.454]

13 Even if not, even if J really is only 2, I like the idea of doing hierarchical modeling using a reasonable guess of the key variance parameter. [sent-31, score-0.69]

14 As I said, lots and lots, including our model for evaluating electoral systems and redistricting plans, our model for population toxicokinetics, missing data in multiple surveys, home radon, and income and voting. [sent-33, score-0.849]

15 When estimating effects of redistricting, or low-dose metabolism of perchloroethylene, or missing survey responses, or radon levels, or voting patterns, there are no guarantees, we just have to do our best. [sent-37, score-0.28]

16 The multilevel modeling approach focuses on quantifying sources of variation, which is just what I’m looking for in the sorts of generalizations I want to make. [sent-40, score-0.431]

17 In one of his comments, Barenboim writes, “the assurance we have that the result must hold as long as the assumptions in the model are correct should be regarded as a guarantee.” [sent-44, score-0.289]

18 It is fundamental to Bayesian inference that the result must hold if the assumptions in the model are correct. [sent-46, score-0.292]

19 Arguably, many of the examples in Bayesian Data Analysis (for example, the 8 schools example in chapter 5) can be seen as toy problems. [sent-49, score-0.426]

20 As I wrote earlier, I don’t think theoretical proofs or toy problems are useless, I just find applied examples to be more convincing. [sent-50, score-0.421]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('bareinboim', 0.418), ('hierarchical', 0.37), ('toy', 0.252), ('modeling', 0.23), ('guarantees', 0.213), ('buildings', 0.144), ('convinced', 0.12), ('examples', 0.113), ('model', 0.105), ('confounding', 0.104), ('redistricting', 0.104), ('population', 0.1), ('settings', 0.099), ('method', 0.097), ('radon', 0.095), ('lindley', 0.094), ('variance', 0.09), ('focuses', 0.083), ('evaluating', 0.081), ('lots', 0.078), ('cases', 0.073), ('provided', 0.067), ('metabolism', 0.066), ('thetas', 0.066), ('barenboim', 0.066), ('novick', 0.066), ('hold', 0.066), ('voting', 0.063), ('another', 0.063), ('approach', 0.063), ('inference', 0.063), ('question', 0.063), ('toxicokinetics', 0.062), ('toys', 0.062), ('conditional', 0.062), ('many', 0.061), ('assurance', 0.06), ('multiplicity', 0.06), ('variation', 0.059), ('four', 0.059), ('gelman', 0.059), ('assumptions', 0.058), ('dempster', 0.058), ('understands', 0.058), ('elias', 0.058), ('term', 0.057), ('theoretical', 0.056), ('let', 0.056), ('missing', 0.056), ('multilevel', 0.055)]
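
The word weights above and the per-sentence scores in the summary section come from a tf-idf model. Below is a minimal sketch of how such scores could be computed, assuming scikit-learn, a plain Python list of the post's sentences, and a sum-of-weights scoring rule; the variable names and preprocessing choices are illustrative assumptions, not the mining pipeline's actual code.

# Illustrative tf-idf sentence scoring and top-word extraction.
# Assumes scikit-learn; the sentences and the scoring rule are placeholders,
# not the pipeline's real inputs or configuration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "I was trying to understand how his method which he calls hierarchical modeling would handle these issues.",
    "In his post, Bareinboim raises a direct question about hierarchical modeling.",
    "My recommended approach is to build a hierarchical model in which one component of variance represents this difference.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)            # rows: sentences, columns: terms

# Score each sentence by the sum of its tf-idf weights (one plausible rule).
sent_scores = np.asarray(X.sum(axis=1)).ravel()
ranked_sentences = np.argsort(sent_scores)[::-1]   # analogous to sentIndex / sentScore above

# Rank terms for the whole post by total tf-idf mass, as in the wordTfidf list.
term_scores = np.asarray(X.sum(axis=0)).ravel()
terms = vectorizer.get_feature_names_out()
top_words = sorted(zip(terms, term_scores), key=lambda t: -t[1])[:10]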

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000004 1425 andrew gelman stats-2012-07-23-Examples of the use of hierarchical modeling to generalize to new settings


2 0.4091779 1469 andrew gelman stats-2012-08-25-Ways of knowing

Introduction: In this discussion from last month, computer science student and Judea Pearl collaborator Elias Barenboim expressed an attitude that hierarchical Bayesian methods might be fine in practice but that they lack theory, that Bayesians can’t succeed in toy problems. I posted a P.S. there which might not have been noticed so I will put it here: I now realize that there is some disagreement about what constitutes a “guarantee.” In one of his comments, Barenboim writes, “the assurance we have that the result must hold as long as the assumptions in the model are correct should be regarded as a guarantee.” In that sense, yes, we have guarantees! It is fundamental to Bayesian inference that the result must hold if the assumptions in the model are correct. We have lots of that in Bayesian Data Analysis (particularly in the first four chapters but implicitly elsewhere as well), and this is also covered in the classic books by Lindley, Jaynes, and others. This sort of guarantee is indeed p

3 0.39779475 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

Introduction: Elias Bareinboim asked what I thought about his comment on selection bias in which he referred to a paper by himself and Judea Pearl, “Controlling Selection Bias in Causal Inference.” I replied that I have no problem with what he wrote, but that from my perspective I find it easier to conceptualize such problems in terms of multilevel models. I elaborated on that point in a recent post , “Hierarchical modeling as a framework for extrapolation,” which I think was read by only a few people (I say this because it received only two comments). I don’t think Bareinboim objected to anything I wrote, but like me he is comfortable working within his own framework. He wrote the following to me: In some sense, “not ad hoc” could mean logically consistent. In other words, if one agrees with the assumptions encoded in the model, one must also agree with the conclusions entailed by these assumptions. I am not aware of any other way of doing mathematics. As it turns out, to get causa

4 0.30224949 1383 andrew gelman stats-2012-06-18-Hierarchical modeling as a framework for extrapolation

Introduction: Phil recently posted on the challenge of extrapolation of inferences to new data. After telling the story of a colleague who flat-out refused to make predictions from his model of buildings to new data, Phil wrote, “This is an interesting problem because it is sort of outside the realm of statistics, and into some sort of meta-statistical area. How can you judge whether your results can be extrapolated to the ‘real world,’ if you can’t get a real-world sample to compare to?” In reply, I wrote: I agree that this is an important and general problem, but I don’t think it is outside the realm of statistics! I think that one useful statistical framework here is multilevel modeling. Suppose you are applying a procedure to J cases and want to predict case J+1 (in this case, the cases are buildings and J=52). Let the parameters be theta_1,…,theta_{J+1}, with data y_1,…,y_{J+1}, and case-level predictors X_1,…,X_{J+1}. The question is how to generalize from (theta_1,…,theta_J) to theta_{
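
The paragraph above lays out the model explicitly: case-level parameters theta_1,…,theta_{J+1}, data y_1,…,y_J, and predictors X_1,…,X_{J+1}, with case J+1 (the new building) unobserved. Below is a minimal sketch of that partial-pooling setup in PyMC, with simulated data; the priors, the linear use of X, and the library choice are my assumptions (the posts themselves point to Stan and BUGS), not code from the blog.

# Sketch of the hierarchical extrapolation model described above:
# J observed cases plus one new case J+1, partially pooled toward a
# regression on the case-level predictors X. Data and priors are
# illustrative placeholders.
import numpy as np
import pymc as pm

J = 52                                   # observed cases (buildings, in Phil's example)
rng = np.random.default_rng(0)
X = rng.normal(size=J + 1)               # case-level predictor, including the new case
y_obs = rng.normal(size=J)               # outcomes observed for the first J cases only

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 5.0)
    beta = pm.Normal("beta", 0.0, 5.0)
    tau = pm.HalfNormal("tau", 2.0)      # between-case variation left over after using X
    sigma = pm.HalfNormal("sigma", 2.0)  # within-case error

    # theta_1, ..., theta_{J+1}: partially pooled toward alpha + beta * X
    theta = pm.Normal("theta", mu=alpha + beta * X, sigma=tau, shape=J + 1)

    # Likelihood uses only the J observed cases; theta[J] (case J+1) is
    # inferred from the hierarchical structure and its predictor X[J].
    pm.Normal("y", mu=theta[:J], sigma=sigma, observed=y_obs)

    idata = pm.sample(1000, tune=1000, chains=2)

The posterior for theta[J] is the extrapolation to the new case; as the summary sentences above put it, the more information carried by the observed X's, the less variation is left in the thetas (i.e., the smaller tau needs to be).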

5 0.17194185 2273 andrew gelman stats-2014-03-29-References (with code) for Bayesian hierarchical (multilevel) modeling and structural equation modeling

Introduction: A student writes: I am new to Bayesian methods. While I am reading your book, I have some questions for you. I am interested in doing Bayesian hierarchical (multi-level) linear regression (e.g., random-intercept model) and Bayesian structural equation modeling (SEM)—for causality. Do you happen to know if I could find some articles, where authors could provide data w/ R and/or BUGS codes that I could replicate them? My reply: For Bayesian hierarchical (multi-level) linear regression and causal inference, see my book with Jennifer Hill. For Bayesian structural equation modeling, try google and you’ll find some good stuff. Also, I recommend Stan (http://mc-stan.org/) rather than Bugs.

6 0.17148866 1999 andrew gelman stats-2013-08-27-Bayesian model averaging or fitting a larger model

7 0.16708659 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

8 0.15798391 1270 andrew gelman stats-2012-04-19-Demystifying Blup

9 0.15117322 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion

10 0.15094471 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

11 0.14711942 383 andrew gelman stats-2010-10-31-Analyzing the entire population rather than a sample

12 0.1456693 2170 andrew gelman stats-2014-01-13-Judea Pearl overview on causal inference, and more general thoughts on the reexpression of existing methods by considering their implicit assumptions

13 0.14383809 1482 andrew gelman stats-2012-09-04-Model checking and model understanding in machine learning

14 0.14079599 295 andrew gelman stats-2010-09-25-Clusters with very small numbers of observations

15 0.13931665 255 andrew gelman stats-2010-09-04-How does multilevel modeling affect the estimate of the grand mean?

16 0.13803296 1575 andrew gelman stats-2012-11-12-Thinking like a statistician (continuously) rather than like a civilian (discretely)

17 0.13782594 2007 andrew gelman stats-2013-09-03-Popper and Jaynes

18 0.13721505 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

19 0.12847644 2145 andrew gelman stats-2013-12-24-Estimating and summarizing inference for hierarchical variance parameters when the number of groups is small

20 0.12751491 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.269), (1, 0.17), (2, 0.0), (3, -0.008), (4, 0.0), (5, 0.06), (6, -0.067), (7, -0.011), (8, 0.09), (9, 0.062), (10, 0.012), (11, -0.0), (12, 0.012), (13, 0.029), (14, 0.024), (15, 0.032), (16, -0.04), (17, -0.015), (18, -0.003), (19, 0.041), (20, -0.007), (21, -0.055), (22, 0.014), (23, 0.052), (24, -0.002), (25, 0.014), (26, -0.005), (27, 0.006), (28, -0.004), (29, 0.013), (30, 0.015), (31, -0.038), (32, 0.002), (33, 0.011), (34, -0.045), (35, 0.012), (36, -0.012), (37, -0.039), (38, -0.036), (39, 0.048), (40, -0.057), (41, -0.005), (42, -0.025), (43, -0.053), (44, -0.089), (45, -0.007), (46, -0.009), (47, -0.019), (48, -0.061), (49, 0.005)]
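
The topicId/topicWeight pairs above (50 topics in this listing) come from an LSI model fit to the blog corpus. A hedged sketch of that kind of computation with scikit-learn's TruncatedSVD follows, using a placeholder corpus of post titles from this page and a toy number of components; nothing here reproduces the mining pipeline's actual settings.

# Illustrative LSI similarity: tf-idf, then truncated SVD into a topic space,
# then cosine similarity between posts. Corpus and component count are
# placeholders, not the pipeline's real configuration.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Examples of the use of hierarchical modeling to generalize to new settings",
    "Long discussion about causal inference and the use of hierarchical models",
    "Hierarchical modeling as a framework for extrapolation",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0)   # the listing above uses 50 topics
topic_weights = lsi.fit_transform(tfidf)             # rows: posts, columns: topicWeight

# Similarity of the first post (1425) to every post, as in the simValue column below.
sims = cosine_similarity(topic_weights[:1], topic_weights).ravel()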

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96955431 1425 andrew gelman stats-2012-07-23-Examples of the use of hierarchical modeling to generalize to new settings


2 0.84183592 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

Introduction: Elias Bareinboim asked what I thought about his comment on selection bias in which he referred to a paper by himself and Judea Pearl, “Controlling Selection Bias in Causal Inference.” I replied that I have no problem with what he wrote, but that from my perspective I find it easier to conceptualize such problems in terms of multilevel models. I elaborated on that point in a recent post , “Hierarchical modeling as a framework for extrapolation,” which I think was read by only a few people (I say this because it received only two comments). I don’t think Bareinboim objected to anything I wrote, but like me he is comfortable working within his own framework. He wrote the following to me: In some sense, “not ad hoc” could mean logically consistent. In other words, if one agrees with the assumptions encoded in the model, one must also agree with the conclusions entailed by these assumptions. I am not aware of any other way of doing mathematics. As it turns out, to get causa

3 0.84035957 1383 andrew gelman stats-2012-06-18-Hierarchical modeling as a framework for extrapolation

Introduction: Phil recently posted on the challenge of extrapolation of inferences to new data. After telling the story of a colleague who flat-out refused to make predictions from his model of buildings to new data, Phil wrote, “This is an interesting problem because it is sort of outside the realm of statistics, and into some sort of meta-statistical area. How can you judge whether your results can be extrapolated to the ‘real world,’ if you can’t get a real-world sample to compare to?” In reply, I wrote: I agree that this is an important and general problem, but I don’t think it is outside the realm of statistics! I think that one useful statistical framework here is multilevel modeling. Suppose you are applying a procedure to J cases and want to predict case J+1 (in this case, the cases are buildings and J=52). Let the parameters be theta_1,…,theta_{J+1}, with data y_1,…,y_{J+1}, and case-level predictors X_1,…,X_{J+1}. The question is how to generalize from (theta_1,…,theta_J) to theta_{

4 0.8072322 1763 andrew gelman stats-2013-03-14-Everyone’s trading bias for variance at some point, it’s just done at different places in the analyses

Introduction: Some things I respect When it comes to meta-models of statistics, here are two philosophies that I respect: 1. (My) Bayesian approach, which I associate with E. T. Jaynes, in which you construct models with strong assumptions, ride your models hard, check their fit to data, and then scrap them and improve them as necessary. 2. At the other extreme, model-free statistical procedures that are designed to work well under very weak assumptions—for example, instead of assuming a distribution is Gaussian, you would just want the procedure to work well under some conditions on the smoothness of the second derivative of the log density function. Both the above philosophies recognize that (almost) all important assumptions will be wrong, and they resolve this concern via aggressive model checking or via robustness. And of course there are intermediate positions, such as working with Bayesian models that have been shown to be robust, and then still checking them. Or, to flip it arou

5 0.80572641 2170 andrew gelman stats-2014-01-13-Judea Pearl overview on causal inference, and more general thoughts on the reexpression of existing methods by considering their implicit assumptions

Introduction: This material should be familiar to many of you but could be helpful to newcomers. Pearl writes: ALL causal conclusions in nonexperimental settings must be based on untested, judgmental assumptions that investigators are prepared to defend on scientific grounds. . . . To understand what the world should be like for a given procedure to work is of no lesser scientific value than seeking evidence for how the world works . . . Assumptions are self-destructive in their honesty. The more explicit the assumption, the more criticism it invites . . . causal diagrams invite the harshest criticism because they make assumptions more explicit and more transparent than other representation schemes. As regular readers know (for example, search this blog for “Pearl”), I have not got much out of the causal-diagrams approach myself, but in general I think that when there are multiple, mathematically equivalent methods of getting the same answer, we tend to go with the framework we are used

6 0.79209638 2033 andrew gelman stats-2013-09-23-More on Bayesian methods and multilevel modeling

7 0.79033518 1270 andrew gelman stats-2012-04-19-Demystifying Blup

8 0.76687479 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion

9 0.75547206 948 andrew gelman stats-2011-10-10-Combining data from many sources

10 0.75311077 1468 andrew gelman stats-2012-08-24-Multilevel modeling and instrumental variables

11 0.74685061 1732 andrew gelman stats-2013-02-22-Evaluating the impacts of welfare reform?

12 0.74660456 1934 andrew gelman stats-2013-07-11-Yes, worry about generalizing from data to population. But multilevel modeling is the solution, not the problem

13 0.74607193 383 andrew gelman stats-2010-10-31-Analyzing the entire population rather than a sample

14 0.73829669 1469 andrew gelman stats-2012-08-25-Ways of knowing

15 0.73665106 1395 andrew gelman stats-2012-06-27-Cross-validation (What is it good for?)

16 0.73323017 690 andrew gelman stats-2011-05-01-Peter Huber’s reflections on data analysis

17 0.7311464 1406 andrew gelman stats-2012-07-05-Xiao-Li Meng and Xianchao Xie rethink asymptotics

18 0.72762156 288 andrew gelman stats-2010-09-21-Discussion of the paper by Girolami and Calderhead on Bayesian computation

19 0.72670788 393 andrew gelman stats-2010-11-04-Estimating the effect of A on B, and also the effect of B on A

20 0.7209962 1136 andrew gelman stats-2012-01-23-Fight! (also a bit of reminiscence at the end)


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.015), (15, 0.023), (16, 0.035), (21, 0.032), (24, 0.087), (39, 0.011), (43, 0.011), (77, 0.017), (89, 0.011), (99, 0.66)]
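
The weights above are per-topic proportions from an LDA model (the topic ids shown run up to 99, so the real model appears to use on the order of 100 topics). A minimal sketch with scikit-learn's LatentDirichletAllocation follows, again with a placeholder corpus of post titles from this page and a small topic count chosen only for illustration.

# Illustrative LDA similarity: word counts -> per-post topic proportions,
# then cosine similarity between the topic vectors. All settings are
# placeholders, not the pipeline's real configuration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Examples of the use of hierarchical modeling to generalize to new settings",
    "Overfitting",
    "More on the correlation between statistical and political ideology",
]

counts = CountVectorizer(stop_words="english").fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)        # rows: posts, columns: topicWeight per topicId

# Similarity of post 1425 to each post, as in the simValue column below.
sims = cosine_similarity(theta[:1], theta).ravel()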

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99900347 1425 andrew gelman stats-2012-07-23-Examples of the use of hierarchical modeling to generalize to new settings


2 0.99822307 1952 andrew gelman stats-2013-07-23-Christakis response to my comment on his comments on social science (or just skip to the P.P.P.S. at the end)

Introduction: The other day, Nicholas Christakis wrote an article in the newspaper criticizing academic social science departments: The social sciences have stagnated. . . . This is not only boring but also counterproductive, constraining engagement with the scientific cutting edge and stifling the creation of new and useful knowledge. . . . I’m not suggesting that social scientists stop teaching and investigating classic topics like monopoly power, racial profiling and health inequality. But everyone knows that monopoly power is bad for markets, that people are racially biased and that illness is unequally distributed by social class. There are diminishing returns from the continuing study of many such topics. And repeatedly observing these phenomena does not help us fix them. I disagreed , saying that Christakis wasn’t giving social science research enough credit: I’m no economist so I can let others discuss the bit about “monopoly power is bad for markets.” I assume that the study by

3 0.99799228 1431 andrew gelman stats-2012-07-27-Overfitting

Introduction: Ilya Esteban writes: In traditional machine learning and statistical learning techniques, you spend a lot of time selecting your input features, fiddling with model parameter values, etc., all of which leads to the problem of overfitting the data and producing overly optimistic estimates for how good the model really is. You can use techniques such as cross-validation and out-of-sample validation data to try to limit the damage, but they are imperfect solutions at best. While Bayesian models have the great advantage of not forcing you to manually select among the various weights and input features, you still often end up trying different priors and model structures (especially with hierarchical models), before coming up with a “final” model. When applying Bayesian modeling to real world data sets, how should you evaluate alternate priors and topologies for the model without falling into the same overfitting trap as you do with non-Bayesian models? If you try several different

4 0.99785078 809 andrew gelman stats-2011-07-19-“One of the easiest ways to differentiate an economist from almost anyone else in society”

Introduction: I think I’m starting to resolve a puzzle that’s been bugging me for awhile. Pop economists (or, at least, pop micro-economists) are often making one of two arguments: 1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist. 2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient. Argument 1 is associated with “why do they do that?” sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there’s some rational reason for what seems like silly or self-destructive behavior. Argument 2 is associated with “we can do better” claims such as why we should fire 80% of public-schools teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficienc

5 0.99757934 638 andrew gelman stats-2011-03-30-More on the correlation between statistical and political ideology

Introduction: This is a chance for me to combine two of my interests–politics and statistics–and probably to irritate both halves of the readership of this blog. Anyway… I recently wrote about the apparent correlation between Bayes/non-Bayes statistical ideology and liberal/conservative political ideology: The Bayes/non-Bayes fissure had a bit of a political dimension–with anti-Bayesians being the old-line conservatives (for example, Ronald Fisher) and Bayesians having a more of a left-wing flavor (for example, Dennis Lindley). Lots of counterexamples at an individual level, but my impression is that on average the old curmudgeonly, get-off-my-lawn types were (with some notable exceptions) more likely to be anti-Bayesian. This was somewhat based on my experiences at Berkeley. Actually, some of the cranky anti-Bayesians were probably Democrats as well, but when they were being anti-Bayesian they seemed pretty conservative. Recently I received an interesting item from Gerald Cliff, a pro

6 0.99755496 740 andrew gelman stats-2011-06-01-The “cushy life” of a University of Illinois sociology professor

7 0.99712026 589 andrew gelman stats-2011-02-24-On summarizing a noisy scatterplot with a single comparison of two points

8 0.99710888 521 andrew gelman stats-2011-01-17-“the Tea Party’s ire, directed at Democrats and Republicans alike”

9 0.99677396 507 andrew gelman stats-2011-01-07-Small world: MIT, asymptotic behavior of differential-difference equations, Susan Assmann, subgroup analysis, multilevel modeling

10 0.99660611 1585 andrew gelman stats-2012-11-20-“I know you aren’t the plagiarism police, but . . .”

11 0.99605399 1315 andrew gelman stats-2012-05-12-Question 2 of my final exam for Design and Analysis of Sample Surveys

12 0.99601227 726 andrew gelman stats-2011-05-22-Handling multiple versions of an outcome variable

13 0.99570745 1536 andrew gelman stats-2012-10-16-Using economics to reduce bike theft

14 0.99558365 772 andrew gelman stats-2011-06-17-Graphical tools for understanding multilevel models

15 0.99554414 1096 andrew gelman stats-2012-01-02-Graphical communication for legal scholarship

16 0.99539363 1434 andrew gelman stats-2012-07-29-FindTheData.org

17 0.99537003 756 andrew gelman stats-2011-06-10-Christakis-Fowler update

18 0.99504477 180 andrew gelman stats-2010-08-03-Climate Change News

19 0.99450785 1949 andrew gelman stats-2013-07-21-Defensive political science responds defensively to an attack on social science

20 0.99402207 1813 andrew gelman stats-2013-04-19-Grad students: Participate in an online survey on statistics education