andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-1769 knowledge-graph by maker-knowledge-mining

1769 andrew gelman stats-2013-03-18-Tibshirani announces new research result: A significance test for the lasso


meta info for this blog

Source: html

Introduction: Lasso and me. For a long time I was wrong about lasso. Lasso (“least absolute shrinkage and selection operator”) is a regularization procedure that shrinks regression coefficients toward zero, and in its basic form is equivalent to maximum penalized likelihood estimation with a penalty function that is proportional to the sum of the absolute values of the regression coefficients. I first heard about lasso from a talk that Rob Tibshirani gave at Berkeley in 1994 or 1995. He demonstrated that it shrunk regression coefficients to zero. I wasn’t impressed, first because it seemed like no big deal (if that’s the prior you use, that’s the shrinkage you get) and second because, from a Bayesian perspective, I don’t want to shrink things all the way to zero. In the sorts of social and environmental science problems I’ve worked on, just about nothing is zero. I’d like to control my noisy estimates but there’s nothing special about zero. At the end of the talk I stood
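
For reference, a minimal sketch of the estimator described above (an editorial addition, not text from the post): in the 1996 formulation the lasso estimate solves

\hat{\beta}^{\text{lasso}} = \arg\min_{\beta} \; \frac{1}{2} \sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{p} x_{ij} \beta_j \Big)^2 + \lambda \sum_{j=1}^{p} |\beta_j| ,

which is the maximum penalized likelihood estimate referred to in the introduction; equivalently, with Gaussian errors it is the posterior mode under independent double-exponential (Laplace) priors on the coefficients, which is the Bayesian reading the post alludes to. A small simulated illustration of the behavior demonstrated in that talk, coefficients shrunk toward zero and some set exactly to zero, follows; it is a hedged sketch using scikit-learn on made-up data, and the penalty value and variable names are illustrative assumptions rather than anything from the post.

# Hedged illustration on simulated data: compare unpenalized least squares
# with the lasso; the lasso shrinks coefficients and sets some exactly to zero.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
n, p = 60, 15
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -0.8, 0.5]            # only a few nonzero true effects
y = X @ beta_true + rng.normal(scale=2.0, size=n)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.5).fit(X, y)          # alpha is scikit-learn's penalty weight (illustrative choice)

print("largest |OLS coefficient|:  ", round(float(np.abs(ols.coef_).max()), 2))
print("largest |lasso coefficient|:", round(float(np.abs(lasso.coef_).max()), 2))
print("lasso coefficients exactly zero:", int(np.sum(lasso.coef_ == 0.0)))

With these settings the least squares fit returns fifteen nonzero, fairly noisy estimates, while the lasso sets a number of coefficients exactly to zero and shrinks the rest: useful for selection, but, from the author’s Bayesian perspective, too drastic when “just about nothing is zero.”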


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I first heard about lasso from a talk that Rob Tibshirani gave at Berkeley in 1994 or 1995. [sent-3, score-0.685]

2 He demonstrated that it shrunk regression coefficients to zero. [sent-4, score-0.289]

3 At the end of the talk I stood up and asked Rob why he thought it was a good idea to have zero estimates and Trevor Hastie stood up and said something about how it can be costly to keep track of lots of predictors so it could be efficient to set a bunch of coefficients to zero. [sent-8, score-0.336]

4 I didn’t buy it: if cost is a consideration, I’d think cost should be in the calculation, the threshold for setting a coefficient to zero should depend on the cost of the variable, and so on. [sent-9, score-0.279]

5 What I mean is, I think my reactions made sense: lasso corresponds to one particular penalty function and, if the goal is reducing cost of saving variables in a typical social-science regression problem, there’s no reason to use some single threshold. [sent-11, score-0.956]

6 First, whether or not lasso is just an implementation of Bayes, the fact is that mainstream Bayesians weren’t doing much of it. [sent-13, score-0.736]

7 We didn’t have anything like lasso in the first or the second edition of Bayesian Data Analysis, and in my applied work I had real difficulties with regression coefficients getting out of control (see Table 2 of this article from 2003, for example). [sent-14, score-1.018]

8 I could go around smugly thinking that lasso was a trivial implementation of a prior distribution, coupled with a silly posterior-mode summary. [sent-15, score-0.904]

9 So, yes, lasso was a great idea, and I didn’t get the point. [sent-23, score-0.685]

10 I see routine regression analysis all the time that does no regularization and as a result suffers from the usual problem of noisy estimates and dramatic overestimates of the magnitudes of effect. [sent-28, score-0.346]

11 Just look at the tables of regression coefficients in just about any quantitative empirical paper in political science or economics or public health. [sent-29, score-0.38]

12 I’d been thinking about the last paragraph here and how lasso has been so important and how I was so slow to catch on, and it seemed worth writing about. [sent-33, score-0.838]

13 We have discovered a test statistic for the lasso that has a very simple Exp(1) asymptotic distribution, accounting for the adaptive fitting. [sent-40, score-0.836] (A sketch of this statistic appears after this list.)

14 It also could help bring the lasso into the mainstream. [sent-42, score-0.685]

15 It shows how a basic adaptive (frequentist) inference—difficult to do in standard least squares regression—falls out naturally in the lasso paradigm. [sent-43, score-0.775]

16 I looked it up on Google scholar and the 1996 lasso paper has over 7000 citations, with dozens of other papers on lasso having hundreds of citations each. [sent-45, score-1.49]

17 What fascinates me is that the lasso world is a sort of parallel Bayesian world. [sent-47, score-0.685]

18 So one thing I like about the lasso world is that it frees a whole group of researchers—those who, for whatever reason, feel uncomfortable with Bayesian methods—to act in what I consider an open-ended Bayesian way. [sent-50, score-0.732]

19 To get back to the paper that Rob sent me: I already think that lasso (by itself, and as an inspiration for Bayesian regularization) is a great idea. [sent-52, score-0.748]

20 But if it will help a new group of applied researchers hop on to the regularization bandwagon, I’m all for it. [sent-55, score-0.324]
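
Sentence 13 refers to the result announced in the post’s title. As a hedged sketch of what the statistic looks like (my paraphrase of the covariance test of Lockhart, Taylor, Tibshirani, and Tibshirani, which I take to be the paper being announced; this is not text from the post): when the lasso path adds a new predictor at knot \lambda_k, with A the active set just before that knot, the test compares the fitted covariance with and without the new variable,

T_k = \frac{ \langle y, X \hat{\beta}(\lambda_{k+1}) \rangle - \langle y, X_A \tilde{\beta}_A(\lambda_{k+1}) \rangle }{ \sigma^2 } ,

where \hat{\beta}(\lambda_{k+1}) is the lasso fit on all predictors at the next knot and \tilde{\beta}_A(\lambda_{k+1}) is the lasso fit using only the variables in A. Under the null hypothesis that A already contains every truly relevant predictor, T_k is asymptotically Exp(1); that is the simple null distribution mentioned in sentence 13, and the construction is what accounts for the adaptive fitting.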


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('lasso', 0.685), ('tibshirani', 0.256), ('hastie', 0.173), ('rob', 0.158), ('bayesians', 0.145), ('regression', 0.137), ('trevor', 0.121), ('regularization', 0.115), ('coefficients', 0.109), ('adaptive', 0.09), ('cost', 0.076), ('prior', 0.073), ('tables', 0.071), ('ideas', 0.07), ('bayesian', 0.067), ('stood', 0.066), ('researchers', 0.065), ('paper', 0.063), ('shrinkage', 0.061), ('statistic', 0.061), ('penalty', 0.058), ('didn', 0.058), ('citations', 0.057), ('absolute', 0.056), ('term', 0.055), ('new', 0.054), ('worked', 0.053), ('parameters', 0.052), ('thinking', 0.052), ('zero', 0.051), ('slow', 0.051), ('implementation', 0.051), ('ve', 0.051), ('seemed', 0.05), ('noisy', 0.05), ('frequentist', 0.048), ('regressions', 0.048), ('group', 0.047), ('late', 0.045), ('somehow', 0.045), ('distribution', 0.045), ('always', 0.044), ('estimates', 0.044), ('work', 0.044), ('really', 0.043), ('applied', 0.043), ('smugly', 0.043), ('shrinks', 0.043), ('sidebar', 0.043), ('shrunk', 0.043)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 1769 andrew gelman stats-2013-03-18-Tibshirani announces new research result: A significance test for the lasso

Introduction: Lasso and me. For a long time I was wrong about lasso. Lasso (“least absolute shrinkage and selection operator”) is a regularization procedure that shrinks regression coefficients toward zero, and in its basic form is equivalent to maximum penalized likelihood estimation with a penalty function that is proportional to the sum of the absolute values of the regression coefficients. I first heard about lasso from a talk that Rob Tibshirani gave at Berkeley in 1994 or 1995. He demonstrated that it shrunk regression coefficients to zero. I wasn’t impressed, first because it seemed like no big deal (if that’s the prior you use, that’s the shrinkage you get) and second because, from a Bayesian perspective, I don’t want to shrink things all the way to zero. In the sorts of social and environmental science problems I’ve worked on, just about nothing is zero. I’d like to control my noisy estimates but there’s nothing special about zero. At the end of the talk I stood

2 0.276216 1849 andrew gelman stats-2013-05-09-Same old same old

Introduction: In an email I sent to a colleague who’s writing about lasso and Bayesian regression for R users: The one thing you might want to add, to fit with your pragmatic perspective, is to point out that these different methods are optimal under different assumptions about the data. However, these assumptions are never true (even in the rare cases where you have a believable prior, it won’t really follow the functional form assumed by bayesglm ; even in the rare cases where you have a real loss function, it won’t really follow the mathematical form assumed by lasso etc), but these methods can still be useful and be given the interpretation of regularized estimates. Another thing that someone might naively think is that regularization is fine but “ unbiased ” is somehow the most honest. In practice, if you stick to “unbiased” methods such as least squares, you’ll restrict the number of variables you can include in your model. So in reality you suffer from omitted-variable bias. So th

3 0.24832439 1877 andrew gelman stats-2013-05-30-Infill asymptotics and sprawl asymptotics

Introduction: Anirban Bhattacharya, Debdeep Pati, Natesh Pillai, and David Dunson write : Penalized regression methods, such as L1 regularization, are routinely used in high-dimensional applications, and there is a rich literature on optimality properties under sparsity assumptions. In the Bayesian paradigm, sparsity is routinely induced through two-component mixture priors having a probability mass at zero, but such priors encounter daunting computational problems in high dimensions. This has motivated an amazing variety of continuous shrinkage priors, which can be expressed as global-local scale mixtures of Gaussians, facilitating computation. In sharp contrast to the corresponding frequentist literature, very little is known about the properties of such priors. Focusing on a broad class of shrinkage priors, we provide precise results on prior and posterior concentration. Interestingly, we demonstrate that most commonly used shrinkage priors, including the Bayesian Lasso, are suboptimal in hig

4 0.21286954 1763 andrew gelman stats-2013-03-14-Everyone’s trading bias for variance at some point, it’s just done at different places in the analyses

Introduction: Some things I respect When it comes to meta-models of statistics, here are two philosophies that I respect: 1. (My) Bayesian approach, which I associate with E. T. Jaynes, in which you construct models with strong assumptions, ride your models hard, check their fit to data, and then scrap them and improve them as necessary. 2. At the other extreme, model-free statistical procedures that are designed to work well under very weak assumptions—for example, instead of assuming a distribution is Gaussian, you would just want the procedure to work well under some conditions on the smoothness of the second derivative of the log density function. Both the above philosophies recognize that (almost) all important assumptions will be wrong, and they resolve this concern via aggressive model checking or via robustness. And of course there are intermediate positions, such as working with Bayesian models that have been shown to be robust, and then still checking them. Or, to flip it arou

5 0.21028158 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

Introduction: Robert Bell pointed me to this post by Brad De Long on Bayesian statistics, and then I also noticed this from Noah Smith, who wrote: My impression is that although the Bayesian/Frequentist debate is interesting and intellectually fun, there’s really not much “there” there… despite being so-hip-right-now, Bayesian is not the Statistical Jesus. I’m happy to see the discussion going in this direction. Twenty-five years ago or so, when I got into this biz, there were some serious anti-Bayesian attitudes floating around in mainstream statistics. Discussions in the journals sometimes devolved into debates of the form, “Bayesians: knaves or fools?”. You’d get all sorts of free-floating skepticism about any prior distribution at all, even while people were accepting without question (and doing theory on) logistic regressions, proportional hazards models, and all sorts of strong strong models. (In the subfield of survey sampling, various prominent researchers would refuse to mode

6 0.14289251 2136 andrew gelman stats-2013-12-16-Whither the “bet on sparsity principle” in a nonsparse world?

7 0.14201261 1319 andrew gelman stats-2012-05-14-I hate to get all Gerd Gigerenzer on you here, but . . .

8 0.13809107 247 andrew gelman stats-2010-09-01-How does Bayes do it?

9 0.13625517 2245 andrew gelman stats-2014-03-12-More on publishing in journals

10 0.1362469 2357 andrew gelman stats-2014-06-02-Why we hate stepwise regression

11 0.13034311 1267 andrew gelman stats-2012-04-17-Hierarchical-multilevel modeling with “big data”

12 0.12297497 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

13 0.11948039 1469 andrew gelman stats-2012-08-25-Ways of knowing

14 0.11929592 1999 andrew gelman stats-2013-08-27-Bayesian model averaging or fitting a larger model

15 0.11840938 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

16 0.11639038 2129 andrew gelman stats-2013-12-10-Cross-validation and Bayesian estimation of tuning parameters

17 0.1157151 2151 andrew gelman stats-2013-12-27-Should statistics have a Nobel prize?

18 0.1154391 2127 andrew gelman stats-2013-12-08-The never-ending (and often productive) race between theory and practice

19 0.1154246 1560 andrew gelman stats-2012-11-03-Statistical methods that work in some settings but not others

20 0.1145271 2277 andrew gelman stats-2014-03-31-The most-cited statistics papers ever


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.243), (1, 0.096), (2, -0.042), (3, 0.031), (4, -0.007), (5, -0.017), (6, 0.036), (7, -0.01), (8, -0.007), (9, 0.024), (10, 0.027), (11, -0.011), (12, 0.018), (13, 0.007), (14, 0.029), (15, -0.011), (16, -0.015), (17, -0.023), (18, 0.003), (19, 0.01), (20, 0.015), (21, -0.009), (22, 0.028), (23, 0.028), (24, 0.009), (25, 0.008), (26, 0.023), (27, -0.038), (28, 0.003), (29, -0.004), (30, 0.052), (31, 0.049), (32, 0.028), (33, -0.026), (34, 0.025), (35, -0.074), (36, -0.001), (37, 0.027), (38, -0.009), (39, -0.032), (40, -0.01), (41, 0.052), (42, 0.002), (43, -0.001), (44, 0.066), (45, -0.016), (46, 0.001), (47, 0.02), (48, -0.002), (49, -0.007)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97175676 1769 andrew gelman stats-2013-03-18-Tibshirani announces new research result: A significance test for the lasso

Introduction: Lasso and me. For a long time I was wrong about lasso. Lasso (“least absolute shrinkage and selection operator”) is a regularization procedure that shrinks regression coefficients toward zero, and in its basic form is equivalent to maximum penalized likelihood estimation with a penalty function that is proportional to the sum of the absolute values of the regression coefficients. I first heard about lasso from a talk that Rob Tibshirani gave at Berkeley in 1994 or 1995. He demonstrated that it shrunk regression coefficients to zero. I wasn’t impressed, first because it seemed like no big deal (if that’s the prior you use, that’s the shrinkage you get) and second because, from a Bayesian perspective, I don’t want to shrink things all the way to zero. In the sorts of social and environmental science problems I’ve worked on, just about nothing is zero. I’d like to control my noisy estimates but there’s nothing special about zero. At the end of the talk I stood

2 0.84741098 1849 andrew gelman stats-2013-05-09-Same old same old

Introduction: In an email I sent to a colleague who’s writing about lasso and Bayesian regression for R users: The one thing you might want to add, to fit with your pragmatic perspective, is to point out that these different methods are optimal under different assumptions about the data. However, these assumptions are never true (even in the rare cases where you have a believable prior, it won’t really follow the functional form assumed by bayesglm ; even in the rare cases where you have a real loss function, it won’t really follow the mathematical form assumed by lasso etc), but these methods can still be useful and be given the interpretation of regularized estimates. Another thing that someone might naively think is that regularization is fine but “ unbiased ” is somehow the most honest. In practice, if you stick to “unbiased” methods such as least squares, you’ll restrict the number of variables you can include in your model. So in reality you suffer from omitted-variable bias. So th

3 0.84715629 327 andrew gelman stats-2010-10-07-There are never 70 distinct parameters

Introduction: Sam Seaver writes: I’m a graduate student in computational biology, and I’m relatively new to advanced statistics, and am trying to teach myself how best to approach a problem I have. My dataset is a small sparse matrix of 150 cases and 70 predictors, it is sparse as in many zeros, not many ‘NA’s. Each case is a nutrient that is fed into an in silico organism, and its response is whether or not it stimulates growth, and each predictor is one of 70 different pathways that the nutrient may or may not belong to. Because all of the nutrients do not belong to all of the pathways, there are thus many zeros in my matrix. My goal is to be able to use the pathways themselves to predict whether or not a nutrient could stimulate growth, thus I wanted to compute regression coefficients for each pathway, with which I could apply to other nutrients for other species. There are quite a few singularities in the dataset (summary(glm) reports that 14 coefficients are not defined because of sin

4 0.81098938 788 andrew gelman stats-2011-07-06-Early stopping and penalized likelihood

Introduction: Maximum likelihood gives the best fit to the training data but in general overfits, yielding overly-noisy parameter estimates that don’t perform so well when predicting new data. A popular solution to this overfitting problem takes advantage of the iterative nature of most maximum likelihood algorithms by stopping early. In general, an iterative optimization algorithm goes from a starting point to the maximum of some objective function. If the starting point has some good properties, then early stopping can work well, keeping some of the virtues of the starting point while respecting the data. This trick can be performed the other way, too, starting with the data and then processing it to move it toward a model. That’s how the iterative proportional fitting algorithm of Deming and Stephan (1940) works to fit multivariate categorical data to known margins. In any case, the trick is to stop at the right point–not so soon that you’re ignoring the data but not so late that you en

5 0.80829269 833 andrew gelman stats-2011-07-31-Untunable Metropolis

Introduction: Michael Margolis writes: What are we to make of it when a Metropolis-Hastings step just won’t tune? That is, the acceptance rate is zero at expected-jump-size X, and way above 1/2 at X-exp(-16) (i.e., machine precision ). I’ve solved my practical problem by writing that I would have liked to include results from a diffuse prior, but couldn’t. But I’m bothered by the poverty of my intuition. And since everything I’ve read says this is an issue of efficiency, rather than accuracy, I wonder if I could solve it just by running massive and heavily thinned chains. My reply: I can’t see how this could happen in a well-specified problem! I suspect it’s a bug. Otherwise try rescaling your variables so that your parameters will have values on the order of magnitude of 1. To which Margolis responded: I hardly wrote any of the code, so I can’t speak to the bug question — it’s binomial kriging from the R package geoRglm. And there are no covariates to scale — just the zero and one

6 0.78538191 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

7 0.78473192 1157 andrew gelman stats-2012-02-07-Philosophy of Bayesian statistics: my reactions to Hendry

8 0.77717793 2129 andrew gelman stats-2013-12-10-Cross-validation and Bayesian estimation of tuning parameters

9 0.76978284 247 andrew gelman stats-2010-09-01-How does Bayes do it?

10 0.76777226 248 andrew gelman stats-2010-09-01-Ratios where the numerator and denominator both change signs

11 0.76543587 1466 andrew gelman stats-2012-08-22-The scaled inverse Wishart prior distribution for a covariance matrix in a hierarchical model

12 0.75853336 10 andrew gelman stats-2010-04-29-Alternatives to regression for social science predictions

13 0.75799423 1520 andrew gelman stats-2012-10-03-Advice that’s so eminently sensible but so difficult to follow

14 0.75467771 421 andrew gelman stats-2010-11-19-Just chaid

15 0.75443929 1750 andrew gelman stats-2013-03-05-Watership Down, thick description, applied statistics, immutability of stories, and playing tennis with a net

16 0.75309998 776 andrew gelman stats-2011-06-22-Deviance, DIC, AIC, cross-validation, etc

17 0.74940443 1445 andrew gelman stats-2012-08-06-Slow progress

18 0.74855483 726 andrew gelman stats-2011-05-22-Handling multiple versions of an outcome variable

19 0.73964 1409 andrew gelman stats-2012-07-08-Is linear regression unethical in that it gives more weight to cases that are far from the average?

20 0.73841894 738 andrew gelman stats-2011-05-30-Works well versus well understood


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(6, 0.012), (15, 0.04), (16, 0.068), (21, 0.017), (24, 0.153), (42, 0.012), (55, 0.013), (57, 0.011), (63, 0.01), (69, 0.12), (76, 0.012), (86, 0.059), (89, 0.015), (99, 0.308)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98102009 406 andrew gelman stats-2010-11-10-Translating into Votes: The Electoral Impact of Spanish-Language Ballots

Introduction: Dan Hopkins sends along this article: [Hopkins] uses regression discontinuity design to estimate the turnout and election impacts of Spanish-language assistance provided under Section 203 of the Voting Rights Act. Analyses of two different data sets – the Latino National Survey and California 1998 primary election returns – show that Spanish-language assistance increased turnout for citizens who speak little English. The California results also demonstrate that election procedures can influence outcomes, as support for ending bilingual education dropped markedly in heavily Spanish-speaking neighborhoods with Spanish-language assistance. The California analyses find hints of backlash among non-Hispanic white precincts, but not with the same size or certainty. Small changes in election procedures can influence who votes as well as what wins. Beyond the direct relevance of these results, I find this paper interesting as an example of research that is fundamentally quantitative. Th

2 0.97601426 89 andrew gelman stats-2010-06-16-A historical perspective on financial bailouts

Introduction: Thomas Ferguson and Robert Johnson write : Financial crises are staggeringly costly. Only major wars rival them in the burdens they place on public finances. Taxpayers typically transfer enormous resources to banks, their stockholders, and creditors, while public debt explodes and the economy runs below full employment for years. This paper compares how relatively large, developed countries have handled bailouts over time. It analyzes why some have done better than others at containing costs and protecting taxpayers. The paper argues that political variables – the nature of competition within party systems and voting turnout – help explain why some countries do more than others to limit the moral hazards of bailouts. I know next to nothing about this topic, so I’ll just recommend you click through and read the article yourself. Here’s a bit more: Many recent papers have analyzed financial crises using large data bases filled with cases from all over the world. Our [Ferguson

3 0.97286022 158 andrew gelman stats-2010-07-22-Tenants and landlords

Introduction: Matthew Yglesias and Megan McArdle argue about the economics of landlord/tenant laws in D.C., a topic I know nothing about. But it did remind me of a few stories . . . 1. In grad school, I shared half of a two-family house with three other students. At some point, our landlord (who lived in the other half of the house) decided he wanted to sell the place, so he had a real estate agent coming by occasionally to show the house to people. She was just a flat-out liar (which I guess fits my impression based on screenings of Glengarry Glen Ross). I could never decide, when I was around and she was lying to a prospective buyer, whether to call her on it. Sometimes I did, sometimes I didn’t. 2. A year after I graduated, the landlord actually did sell the place but then, when my friends moved out, he refused to pay back their security deposit. There was some debate about getting the place repainted, I don’t remember the details. So they sued the landlord in Mass. housing court

4 0.97033721 923 andrew gelman stats-2011-09-24-What is the normal range of values in a medical test?

Introduction: Geoffrey Sheean writes: I am having trouble thinking Bayesianly about the so-called ‘normal’ or ‘reference’ values that I am supposed to use in some of the tests I perform. These values are obtained from purportedly healthy people. Setting aside concerns about ascertainment bias, non-parametric distributions, and the like, the values are usually obtained by setting the limits at ± 2SD from the mean. In some cases, supposedly because of a non-normal distribution, the third highest and lowest value observed in the healthy group sets the limits, on the assumption that no more than 2 results (out of 20 samples) are allowed to exceed these values: if there are 3 or more, then the test is assumed to be abnormal and the reference range is said to reflect the 90th percentile. The results are binary – normal, abnormal. The relevance to the diseased state is this. People who are known unequivocally to have condition X show Y abnormalities in these tests. Therefore, when people suspected

5 0.96310228 1909 andrew gelman stats-2013-06-21-Job openings at conservative political analytics firm!

Introduction: After posting that announcement about Civis Analytics, I wrote, “If a reconstituted Romney Analytics team is hiring, let me know and I’ll post that ad too.” Adam Schaeffer obliged: Not sure about Romney’s team, but Evolving Strategies is looking for sharp folks who lean right: Evolving Strategies is a political communications research firm specializing in randomized controlled experiments in the “lab” and in the “field.” ES is bringing a scientific revolution to free-market/conservative politics. We are looking for people who are obsessive about getting things right and creative in their work. An ideal candidate will have a deep understanding of the academic literature in their field, highly developed skills, a commitment to academic rigor, but an intuitive understanding of practical political concerns and objectives as well. We’re looking for new talent to help with our fast-growing portfolio in these areas: High-level data processing, statistical analysis and modelin

same-blog 6 0.96280265 1769 andrew gelman stats-2013-03-18-Tibshirani announces new research result: A significance test for the lasso

7 0.95978808 1357 andrew gelman stats-2012-06-01-Halloween-Valentine’s update

8 0.9579075 656 andrew gelman stats-2011-04-11-Jonathan Chait and I agree about the importance of the fundamentals in determining presidential elections

9 0.95235372 749 andrew gelman stats-2011-06-06-“Sampling: Design and Analysis”: a course for political science graduate students

10 0.95202303 1267 andrew gelman stats-2012-04-17-Hierarchical-multilevel modeling with “big data”

11 0.95183039 898 andrew gelman stats-2011-09-10-Fourteen magic words: an update

12 0.94625908 1310 andrew gelman stats-2012-05-09-Varying treatment effects, again

13 0.9460234 198 andrew gelman stats-2010-08-11-Multilevel modeling in R on a Mac

14 0.94576728 384 andrew gelman stats-2010-10-31-Two stories about the election that I don’t believe

15 0.94348001 1167 andrew gelman stats-2012-02-14-Extra babies on Valentine’s Day, fewer on Halloween?

16 0.94346082 518 andrew gelman stats-2011-01-15-Regression discontinuity designs: looking for the keys under the lamppost?

17 0.94324869 265 andrew gelman stats-2010-09-09-Removing the blindfold: visualising statistical models

18 0.93963099 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

19 0.93898863 1337 andrew gelman stats-2012-05-22-Question 12 of my final exam for Design and Analysis of Sample Surveys

20 0.93841553 1162 andrew gelman stats-2012-02-11-Adding an error model to a deterministic model