
1739 andrew gelman stats-2013-02-26-An AI can build and try out statistical models using an open-ended generative grammar


meta info for this blog

Source: html

Introduction: David Duvenaud writes: I’ve been following your recent discussions about how an AI could do statistics [see also here]. I was especially excited about your suggestion for new statistical methods using “a language-like approach to recursively creating new models from a specified list of distributions and transformations, and an automatic approach to checking model fit.” Your discussion of these ideas was exciting to me and my colleagues because we recently did some work taking a step in this direction, automatically searching through a grammar over Gaussian process regression models. Roger Grosse previously did the same thing, but over matrix decomposition models using held-out predictive likelihood to check model fit. These are both examples of automatic Bayesian model-building by a search over more and more complex models, as you suggested. One nice thing is that both grammars include lots of standard models for free, and they seem to work pretty well, although the search is of course computationally expensive.
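To make the automatic model-checking step concrete, here is a minimal sketch of scoring a candidate model by held-out predictive likelihood, the criterion the Grosse search uses. It is an illustration only: it leans on scikit-learn’s Gaussian process implementation rather than either project’s code, and heldout_log_likelihood is a name I made up.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def heldout_log_likelihood(kernel, X_train, y_train, X_test, y_test):
    """Fit a GP with the candidate kernel, then score the model by the
    log predictive density of held-out data (higher is better)."""
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_train, y_train)
    mean, std = gp.predict(X_test, return_std=True)
    # Pointwise Gaussian predictive density; scoring with the full
    # predictive covariance would be tighter, but this keeps it short.
    return np.sum(-0.5 * np.log(2 * np.pi * std**2)
                  - 0.5 * ((y_test - mean) / std) ** 2)
```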


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I was especially excited about your suggestion for new statistical methods using “a language-like approach to recursively creating new models from a specified list of distributions and transformations, and an automatic approach to checking model fit.” [sent-2, score-0.856]

2 Your discussion of these ideas was exciting to me and my colleagues because we recently did some work taking a step in this direction, automatically searching through a grammar over Gaussian process regression models. [sent-3, score-0.549]

3 These are both examples of automatic Bayesian model-building by a search over more and more complex models, as you suggested. [sent-5, score-0.424]

4 One nice thing is that both grammars include lots of standard models for free, and they seem to work pretty well, although the search is of course computationally expensive. [sent-6, score-0.567]

5 But when you consider the size and scope of the space of models that is searched, and the fact that all steps of model construction, evaluation and search are automatic, it doesn’t seem like such an expensive process. [sent-8, score-0.642]

6 In my experience, working statisticians, machine learners, and data scientists rarely if ever explore such a space so systematically in large part because it seems impractically expensive to do so (in terms of both their own time and computation time, as well as perhaps other scarce resources). [sent-9, score-0.439]

7 Of course the “AI” in our work is still quite primitive and naive, both in terms of good modeling methods as you have developed and taught, and in terms of human intelligence more generally. [sent-10, score-0.397]

8 And the space of models we can consider automatically is still quite limited compared to what humans can do. [sent-11, score-0.42]

9 We define a space of kernel structures which are built compositionally by adding and multiplying a small number of base kernels. [sent-15, score-0.72]

10 We present a method for searching over this space of structures which mirrors the scientific discovery process. [sent-16, score-0.732]
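Sentences 9 and 10 are the heart of the method: compose kernels with + and ×, then search the resulting space greedily. Here is a minimal sketch, assuming scikit-learn’s kernel algebra in place of the paper’s own code; expand, greedy_search, and score_fn are placeholder names, not the authors’ interface.

```python
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, DotProduct

# Base kernels: squared-exponential (smooth), periodic, and linear.
BASE_KERNELS = [RBF(), ExpSineSquared(), DotProduct()]

def expand(kernel):
    """All one-step elaborations of a kernel expression: k + b and k * b."""
    return [combo for b in BASE_KERNELS for combo in (kernel + b, kernel * b)]

def greedy_search(score_fn, depth=3):
    """Greedily grow the best-scoring structure, mirroring how a scientist
    refines a model: keep the current best, propose elaborations, re-score.
    score_fn maps a kernel to a penalized fit criterion (lower is better,
    e.g. BIC after optimizing the GP marginal likelihood)."""
    best = min(BASE_KERNELS, key=score_fn)
    for _ in range(depth):
        challenger = min(expand(best), key=score_fn)
        if score_fn(challenger) >= score_fn(best):
            break  # no elaboration improves the score; stop
        best = challenger
    return best
```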

11 The learned structures can often decompose functions into interpretable components and enable long-range extrapolation on time-series datasets. [sent-17, score-0.56]

12 Our structure search method outperforms many widely used kernels and kernel combination methods on a variety of prediction tasks. [sent-18, score-1.005]

13 I can’t comment on the details, especially as this sort of predictive regression problem isn’t the thing I typically work on, but I like the general idea of constructing models through some sort of generative grammar. [sent-19, score-0.444]

14 It seems to me a big step forward from the previous graphical-model paradigm in which the model is a static mixture of a bunch of conditional independence structures on a fixed set of variables. [sent-20, score-0.492]

15 As I’ve written many times (for example, with Shalizi in our instant-classic paper, rejoinder here), I think discrete Bayesian model averaging is a poor model for science and a poor model for statistical inference. [sent-21, score-0.418]

16 […] Adams), introducing covariance kernels which enable automatic pattern discovery and extrapolation with Gaussian processes. [sent-31, score-0.938]

17 Our method is computationally simple (comparable to using standard smoothing kernels), and is grounded in modelling a spectral density with a Gaussian mixture. [sent-33, score-0.478]

18 With enough components in the mixture, we can approximate any spectral density (and thus any stationary covariance kernel) with arbitrary accuracy. [sent-34, score-0.394]
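For reference, here is the standard one-dimensional form such a kernel takes; this is my reconstruction of the usual spectral mixture expression, not a quote from the paper. Each Gaussian component of the spectral density, with weight w_q, mean mu_q, and variance v_q, contributes a cosine damped by a squared-exponential envelope, and by Bochner’s theorem a rich enough mixture can match any stationary kernel’s spectral density.

```latex
% Spectral mixture kernel on tau = x - x' (one-dimensional case, my notation):
k(\tau) = \sum_{q=1}^{Q} w_q \,
          \exp\!\left(-2\pi^2 \tau^2 v_q\right)
          \cos\!\left(2\pi \tau \mu_q\right)
```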

19 We show that the proposed method can automatically discover complex structure and extrapolate over long ranges. [sent-35, score-0.385]

20 However, our approach to automatic structure discovery is fundamentally different from the discussed “grammar of kernels” approach and previous related kernel composition approaches. [sent-36, score-1.042]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('kernel', 0.301), ('kernels', 0.277), ('automatic', 0.236), ('structures', 0.191), ('computationally', 0.164), ('ai', 0.164), ('space', 0.163), ('models', 0.139), ('spectral', 0.13), ('grammar', 0.13), ('gaussian', 0.126), ('search', 0.124), ('discovery', 0.122), ('expensive', 0.12), ('automatically', 0.118), ('extrapolation', 0.111), ('approach', 0.11), ('generative', 0.105), ('structure', 0.102), ('method', 0.101), ('methods', 0.1), ('enable', 0.1), ('model', 0.096), ('covariance', 0.092), ('components', 0.089), ('searching', 0.086), ('density', 0.083), ('terms', 0.082), ('mixture', 0.08), ('exciting', 0.078), ('machine', 0.074), ('work', 0.071), ('decompose', 0.069), ('formalisms', 0.069), ('mirrors', 0.069), ('shoehorn', 0.069), ('grammars', 0.069), ('regression', 0.066), ('wald', 0.065), ('multiplying', 0.065), ('recursively', 0.065), ('poor', 0.065), ('complex', 0.064), ('forward', 0.064), ('predictive', 0.063), ('learners', 0.062), ('primitive', 0.062), ('previous', 0.061), ('hopeless', 0.06), ('loom', 0.06)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000004 1739 andrew gelman stats-2013-02-26-An AI can build and try out statistical models using an open-ended generative grammar

Introduction: David Duvenaud writes: I’ve been following your recent discussions about how an AI could do statistics [see also here]. I was especially excited about your suggestion for new statistical methods using “a language-like approach to recursively creating new models from a specified list of distributions and transformations, and an automatic approach to checking model fit.” Your discussion of these ideas was exciting to me and my colleagues because we recently did some work taking a step in this direction, automatically searching through a grammar over Gaussian process regression models. Roger Grosse previously did the same thing, but over matrix decomposition models using held-out predictive likelihood to check model fit. These are both examples of automatic Bayesian model-building by a search over more and more complex models, as you suggested. One nice thing is that both grammars include lots of standard models for free, and they seem to work pretty well, although the

2 0.20160452 1482 andrew gelman stats-2012-09-04-Model checking and model understanding in machine learning

Introduction: Last month I wrote: Computer scientists are often brilliant but they can be unfamiliar with what is done in the worlds of data collection and analysis. This goes the other way too: statisticians such as myself can look pretty awkward, reinventing (or failing to reinvent) various wheels when we write computer programs or, even worse, try to design software. Andrew MacNamara followed up with some thoughts: I [MacNamara] had some basic statistics training through my MBA program, after having completed an undergrad degree in computer science. Since then I’ve been very interested in learning more about statistical techniques, including things like GLM and censored data analyses as well as machine learning topics like neural nets, SVMs, etc. I began following your blog after some research into Bayesian analysis topics and I am trying to dig deeper on that side of things. One thing I have noticed is that there seems to be a distinction between data analysi

3 0.19666083 1740 andrew gelman stats-2013-02-26-“Is machine learning a subset of statistics?”

Introduction: Following up on our previous post, Andrew Wilson writes: I agree we are in a really exciting time for statistics and machine learning. There has been a lot of talk lately comparing machine learning with statistics. I am curious whether you think there are many fundamental differences between the fields, or just superficial differences — different popular approximate inference methods, slightly different popular application areas, etc. Is machine learning a subset of statistics? In the paper we discuss how we think machine learning is fundamentally about pattern discovery, and ultimately, fully automating the learning and decision making process. In other words, whatever a human does when he or she uses tools to analyze data, can be written down algorithmically and automated on a computer. I am not sure if the ambitions are similar in statistics — and I don’t have any conventional statistics background, which makes it harder to tell. I think it’s an interesting discussion.

4 0.1922949 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

Introduction: I’ve been writing a lot about my philosophy of Bayesian statistics and how it fits into Popper’s ideas about falsification and Kuhn’s ideas about scientific revolutions. Here’s my long, somewhat technical paper with Cosma Shalizi. Here’s our shorter overview for the volume on the philosophy of social science. Here’s my latest try (for an online symposium), focusing on the key issues. I’m pretty happy with my approach–the familiar idea that Bayesian data analysis iterates the three steps of model building, inference, and model checking–but it does have some unresolved (maybe unresolvable) problems. Here are a couple mentioned in the third of the above links. Consider a simple model with independent data y_1, y_2, .., y_10 ~ N(θ,σ^2), with a prior distribution θ ~ N(0,10^2) and σ known and taking on some value of approximately 10. Inference about θ is straightforward, as is model checking, whether based on graphs or numerical summaries such as the sample variance and skewn

5 0.19164093 2332 andrew gelman stats-2014-05-12-“The results (not shown) . . .”

Introduction: Pro tip: Don’t believe any claims about results not shown in a paper. Even if the paper has been published. Even if it’s been cited hundreds of times. If the results aren’t shown, they haven’t been checked. I learned this the hard way after receiving this note from Bin Liu, who wrote: Today I saw a paper [by Ziheng Yang and Carlos Rodríguez] titled “Searching for efficient Markov chain Monte Carlo proposal kernels.” The authors cited your work: “Gelman A, Roberts GO, Gilks WR (1996) Bayesian Statistics 5, eds Bernardo JM, et al. (Oxford Univ Press, Oxford), Vol 5, pp 599-607”, i.e. ref. 6 in the paper. In the last sentence of pp.19310, the authors write that “… virtually no study has examined alternative kernels; this appears to be due to the influence of ref. 6, which claimed that different kernels had nearly identical performance. This conclusion is incorrect.” Here’s our paper, and here’s the offending quote, which appeared after we discussed results for the no

6 0.16909626 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

7 0.14482844 1908 andrew gelman stats-2013-06-21-Interpreting interactions in discrete-data regression

8 0.14060548 1788 andrew gelman stats-2013-04-04-When is there “hidden structure in data” to be discovered?

9 0.14041811 2156 andrew gelman stats-2014-01-01-“Though They May Be Unaware, Newlyweds Implicitly Know Whether Their Marriage Will Be Satisfying”

10 0.13551171 1459 andrew gelman stats-2012-08-15-How I think about mixture models

11 0.13412613 774 andrew gelman stats-2011-06-20-The pervasive twoishness of statistics; in particular, the “sampling distribution” and the “likelihood” are two different models, and that’s a good thing

12 0.13048306 1392 andrew gelman stats-2012-06-26-Occam

13 0.13017146 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

14 0.12860906 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

15 0.12711528 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

16 0.12601739 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

17 0.12429907 1188 andrew gelman stats-2012-02-28-Reference on longitudinal models?

18 0.11606069 1431 andrew gelman stats-2012-07-27-Overfitting

19 0.11564583 1406 andrew gelman stats-2012-07-05-Xiao-Li Meng and Xianchao Xie rethink asymptotics

20 0.1149279 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.234), (1, 0.134), (2, -0.052), (3, 0.048), (4, 0.018), (5, 0.042), (6, -0.064), (7, -0.045), (8, 0.045), (9, 0.052), (10, 0.015), (11, -0.006), (12, -0.063), (13, -0.013), (14, -0.033), (15, 0.003), (16, 0.034), (17, -0.028), (18, -0.013), (19, -0.018), (20, 0.03), (21, -0.035), (22, -0.016), (23, -0.006), (24, 0.015), (25, 0.011), (26, -0.027), (27, 0.052), (28, 0.011), (29, -0.023), (30, -0.016), (31, 0.023), (32, 0.031), (33, -0.024), (34, 0.019), (35, -0.032), (36, -0.016), (37, -0.006), (38, -0.013), (39, -0.018), (40, -0.025), (41, 0.014), (42, 0.035), (43, 0.051), (44, 0.029), (45, 0.011), (46, -0.059), (47, 0.008), (48, 0.016), (49, -0.071)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97250265 1739 andrew gelman stats-2013-02-26-An AI can build and try out statistical models using an open-ended generative grammar

Introduction: David Duvenaud writes: I’ve been following your recent discussions about how an AI could do statistics [see also here]. I was especially excited about your suggestion for new statistical methods using “a language-like approach to recursively creating new models from a specified list of distributions and transformations, and an automatic approach to checking model fit.” Your discussion of these ideas was exciting to me and my colleagues because we recently did some work taking a step in this direction, automatically searching through a grammar over Gaussian process regression models. Roger Grosse previously did the same thing, but over matrix decomposition models using held-out predictive likelihood to check model fit. These are both examples of automatic Bayesian model-building by a search over more and more complex models, as you suggested. One nice thing is that both grammars include lots of standard models for free, and they seem to work pretty well, although the

2 0.87502187 964 andrew gelman stats-2011-10-19-An interweaving-transformation strategy for boosting MCMC efficiency

Introduction: Yaming Yu and Xiao-Li Meng write in with a cool new idea for improving the efficiency of Gibbs and Metropolis in multilevel models: For a broad class of multilevel models, there exist two well-known competing parameterizations, the centered parameterization (CP) and the non-centered parameterization (NCP), for effective MCMC implementation. Much literature has been devoted to the questions of when to use which and how to compromise between them via partial CP/NCP. This article introduces an alternative strategy for boosting MCMC efficiency via simply interweaving—but not alternating—the two parameterizations. This strategy has the surprising property that failure of both the CP and NCP chains to converge geometrically does not prevent the interweaving algorithm from doing so. It achieves this seemingly magical property by taking advantage of the discordance of the two parameterizations, namely, the sufficiency of CP and the ancillarity of NCP, to substantially reduce the Markovian

3 0.84334642 1482 andrew gelman stats-2012-09-04-Model checking and model understanding in machine learning

Introduction: Last month I wrote: Computer scientists are often brilliant but they can be unfamiliar with what is done in the worlds of data collection and analysis. This goes the other way too: statisticians such as myself can look pretty awkward, reinventing (or failing to reinvent) various wheels when we write computer programs or, even worse, try to design software. Andrew MacNamara followed up with some thoughts: I [MacNamara] had some basic statistics training through my MBA program, after having completed an undergrad degree in computer science. Since then I’ve been very interested in learning more about statistical techniques, including things like GLM and censored data analyses as well as machine learning topics like neural nets, SVMs, etc. I began following your blog after some research into Bayesian analysis topics and I am trying to dig deeper on that side of things. One thing I have noticed is that there seems to be a distinction between data analysi

4 0.83245581 1406 andrew gelman stats-2012-07-05-Xiao-Li Meng and Xianchao Xie rethink asymptotics

Introduction: In an article catchily entitled, “I got more data, my model is more refined, but my estimator is getting worse! Am I just dumb?”, Meng and Xie write: Possibly, but more likely you are merely a victim of conventional wisdom. More data or better models by no means guarantee better estimators (e.g., with a smaller mean squared error), when you are not following probabilistically principled methods such as MLE (for large samples) or Bayesian approaches. Estimating equations are particularly vulnerable in this regard, almost a necessary price for their robustness. These points will be demonstrated via common tasks of estimating regression parameters and correlations, under simple models such as bivariate normal and ARCH(1). Some general strategies for detecting and avoiding such pitfalls are suggested, including checking for self-efficiency (Meng, 1994, Statistical Science) and adopting a guiding working model. Using the example of estimating the autocorrelation ρ under a statio

5 0.82463384 421 andrew gelman stats-2010-11-19-Just chaid

Introduction: Reading somebody else’s statistics rant made me realize the inherent contradictions in much of my own statistical advice. Jeff Lax sent along this article by Philip Schrodt, along with the cryptic comment: Perhaps of interest to you. perhaps not. Not meant to be an excuse for you to rant against hypothesis testing again. In his article, Schrodt makes a reasonable and entertaining argument against the overfitting of data and the overuse of linear models. He states that his article is motivated by the quantitative papers he has been sent to review for journals or conferences, and he explicitly excludes “studies of United States voting behavior,” so at least I think Mister P is off the hook. I notice a bit of incoherence in Schrodt’s position–on one hand, he criticizes “kitchen-sink models” for overfitting and he criticizes “using complex methods without understanding the underlying assumptions” . . . but then later on he suggests that political scientists in this countr

6 0.81573242 496 andrew gelman stats-2011-01-01-Tukey’s philosophy

7 0.81278682 1392 andrew gelman stats-2012-06-26-Occam

8 0.80721313 1374 andrew gelman stats-2012-06-11-Convergence Monitoring for Non-Identifiable and Non-Parametric Models

9 0.80319089 1682 andrew gelman stats-2013-01-19-R package for Bayes factors

10 0.80301231 1856 andrew gelman stats-2013-05-14-GPstuff: Bayesian Modeling with Gaussian Processes

11 0.80212492 575 andrew gelman stats-2011-02-15-What are the trickiest models to fit?

12 0.8001703 778 andrew gelman stats-2011-06-24-New ideas on DIC from Martyn Plummer and Sumio Watanabe

13 0.79697269 1690 andrew gelman stats-2013-01-23-When are complicated models helpful in psychology research and when are they overkill?

14 0.78213185 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

15 0.7809248 243 andrew gelman stats-2010-08-30-Computer models of the oil spill

16 0.78088337 1983 andrew gelman stats-2013-08-15-More on AIC, WAIC, etc

17 0.78062254 214 andrew gelman stats-2010-08-17-Probability-processing hardware

18 0.77944177 1740 andrew gelman stats-2013-02-26-“Is machine learning a subset of statistics?”

19 0.778938 1459 andrew gelman stats-2012-08-15-How I think about mixture models

20 0.77503371 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.012), (15, 0.057), (16, 0.06), (17, 0.012), (21, 0.033), (24, 0.142), (30, 0.04), (43, 0.012), (53, 0.022), (61, 0.135), (73, 0.011), (79, 0.013), (86, 0.037), (99, 0.264)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.96824503 1028 andrew gelman stats-2011-11-26-Tenure lets you handle students who cheat

Introduction: The other day, a friend of mine who is an untenured professor (not in statistics or political science) was telling me about a class where many of the students seemed to be resubmitting papers that they had already written for previous classes. (The supposition was based on internal evidence of the topics of the submitted papers.) It would be possible to check this and then kick the cheating students out of the program—but why do it? It would be a lot of work, also some of the students who are caught might complain, then word would get around that my friend is a troublemaker. And nobody likes a troublemaker. Once my friend has tenure it would be possible to do the right thing. But . . . here’s the hitch: most college instructors do not have tenure, and one result, I suspect, is a decline in ethical standards. This is something I hadn’t thought of in our earlier discussion of job security for teachers: tenure gives you the freedom to kick out cheating students.

2 0.96673131 1975 andrew gelman stats-2013-08-09-Understanding predictive information criteria for Bayesian models

Introduction: Jessy, Aki, and I write: We review the Akaike, deviance, and Watanabe-Akaike information criteria from a Bayesian perspective, where the goal is to estimate expected out-of-sample-prediction error using a bias-corrected adjustment of within-sample error. We focus on the choices involved in setting up these measures, and we compare them in three simple examples, one theoretical and two applied. The contribution of this review is to put all these information criteria into a Bayesian predictive context and to better understand, through small examples, how these methods can apply in practice. I like this paper. It came about as a result of preparing Chapter 7 for the new BDA. I had difficulty understanding AIC, DIC, WAIC, etc., but I recognized that these methods served a need. My first plan was to just apply DIC and WAIC on a couple of simple examples (a linear regression and the 8 schools) and leave it at that. But when I did the calculations, I couldn’t understand the resu

3 0.96106631 16 andrew gelman stats-2010-05-04-Burgess on Kipling

Introduction: This is my last entry derived from Anthony Burgess’s book reviews, and it’ll be short. His review of Angus Wilson’s “The Strange Ride of Rudyard Kipling: His Life and Works” is a wonderfully balanced little thing. Nothing incredibly deep–like most items in the collection, the review is only two pages long–but I give it credit for being a rare piece of Kipling criticism I’ve seen that (a) seriously engages with the politics, without (b) congratulating itself on bravely going against the fashions of the politically incorrect chattering classes by celebrating Kipling’s magnificent achievement blah blah blah. Instead, Burgess shows respect for Kipling’s work and puts it in historical, biographical, and literary context. Burgess concludes that Wilson’s book “reminds us, in John Gross’s words, that Kipling ‘remains a haunting, unsettling presence, with whom we still have to come to terms.’ Still.” Well put, and generous of Burgess to end his review with another’s quote. Other cri

4 0.95623219 1662 andrew gelman stats-2013-01-09-The difference between “significant” and “non-significant” is not itself statistically significant

Introduction: Commenter Rahul asked what I thought of this note by Scott Firestone (link from Tyler Cowen) criticizing a recent discussion by Kevin Drum suggesting that lead exposure causes violent crime. Firestone writes: It turns out there was in fact a prospective study done—but its implications for Drum’s argument are mixed. The study was a cohort study done by researchers at the University of Cincinnati. Between 1979 and 1984, 376 infants were recruited. Their parents consented to have lead levels in their blood tested over time; this was matched with records over subsequent decades of the individuals’ arrest records, and specifically arrest for violent crime. Ultimately, some of these individuals were dropped from the study; by the end, 250 were selected for the results. The researchers found that for each increase of 5 micrograms of lead per deciliter of blood, there was a higher risk for being arrested for a violent crime, but a further look at the numbers shows a more mixe

5 0.95594347 1370 andrew gelman stats-2012-06-07-Duncan Watts and the Titanic

Introduction: Daniel Mendelsohn recently asked, “Why do we love the Titanic?”, seeking to understand how it has happened that: It may not be true that ‘the three most written-about subjects of all time are Jesus, the Civil War, and the Titanic,’ as one historian has put it, but it’s not much of an exaggeration. . . . The inexhaustible interest suggests that the Titanic’s story taps a vein much deeper than the morbid fascination that has attached to other disasters. The explosion of the Hindenburg, for instance, and even the torpedoing, just three years after the Titanic sank, of the Lusitania, another great liner whose passenger list boasted the rich and the famous, were calamities that shocked the world but have failed to generate an obsessive preoccupation. . . . If the Titanic has gripped our imagination so forcefully for the past century, it must be because of something bigger than any fact of social or political or cultural history. To get to the bottom of why we can’t forget it, yo

6 0.95554042 714 andrew gelman stats-2011-05-16-NYT Labs releases Openpaths, a utility for saving your iphone data

7 0.94786239 827 andrew gelman stats-2011-07-28-Amusing case of self-defeating science writing

same-blog 8 0.94778293 1739 andrew gelman stats-2013-02-26-An AI can build and try out statistical models using an open-ended generative grammar

9 0.9473114 2349 andrew gelman stats-2014-05-26-WAIC and cross-validation in Stan!

10 0.94585764 1558 andrew gelman stats-2012-11-02-Not so fast on levees and seawalls for NY harbor?

11 0.93886423 776 andrew gelman stats-2011-06-22-Deviance, DIC, AIC, cross-validation, etc

12 0.93739116 21 andrew gelman stats-2010-05-07-Environmentally induced cancer “grossly underestimated”? Doubtful.

13 0.92972827 2156 andrew gelman stats-2014-01-01-“Though They May Be Unaware, Newlyweds Implicitly Know Whether Their Marriage Will Be Satisfying”

14 0.92459077 561 andrew gelman stats-2011-02-06-Poverty, educational performance – and can be done about it

15 0.92077422 1714 andrew gelman stats-2013-02-09-Partial least squares path analysis

16 0.91928178 9 andrew gelman stats-2010-04-28-But it all goes to pay for gas, car insurance, and tolls on the turnpike

17 0.91727453 2033 andrew gelman stats-2013-09-23-More on Bayesian methods and multilevel modeling

18 0.91525537 1433 andrew gelman stats-2012-07-28-LOL without the CATS

19 0.9122743 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

20 0.91162193 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper