andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1309 knowledge-graph by maker-knowledge-mining

1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!


meta info for this blog

Source: html

Introduction: From August 1990. It was in the form of a note sent to all the people in the statistics group of Bell Labs, where I’d worked that summer. To all: Here’s the abstract of the work I’ve done this summer. It’s stored in the file, /fs5/gelman/abstract.bell, and copies of the Figures 1-3 are on Trevor’s desk. Any comments are of course appreciated; I’m at gelman@stat.berkeley.edu. On the Routine Use of Markov Chains for Simulation Andrew Gelman and Donald Rubin, 6 August 1990 corrected version: 8 August 1990 1. Simulation In probability and statistics we can often specify multivariate distributions many of whose properties we do not fully understand–perhaps, as in the Ising model of statistical physics, we can write the joint density function, up to a multiplicative constant that cannot be expressed in closed form. For an example in statistics, consider the Normal random effects model in the analysis of variance, which can be easily placed in a Bayesian framework


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Markov chain methods: Drawing independent random samples is a wonderful tool that is unfortunately not available for every distribution; in particular, the Ising model and random effects posterior distributions mentioned above do not permit direct simulation. [sent-20, score-0.738]

2 , that is a sample from an ergodic Markov chain whose stationary distribution is F(x). [sent-28, score-0.568]

3 These samples xj are not independent; however, the stationary distribution of the Markov chain is correct, so if we take a long enough series, the set of values {x1, . [sent-30, score-0.741]

4 , xn} takes the place of the distribution just as an independent random sample does (although of course an independent sample carries more information than a Markov chain sample of the same length). [sent-33, score-1.008]

5 The Gibbs sampler is a similar algorithm, which produces a Markov chain that converges to the desired distribution, this time requiring draws from all the univariate conditional densities at each iteration. [sent-34, score-0.494]
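To make the sentence above concrete, here is a minimal Gibbs sampler sketch for a bivariate Normal with correlation rho, a standard textbook target (not the example in the 1990 note) where both univariate conditional densities have closed form; the function name and setup are my own illustration.

```python
import random

def gibbs_bivariate_normal(rho, n_steps, x0=(0.0, 0.0), seed=None):
    """Gibbs sampler for a bivariate Normal with correlation rho.

    Each iteration draws from both univariate conditional densities:
        x1 | x2 ~ N(rho * x2, 1 - rho**2), and symmetrically for x2.
    The resulting Markov chain converges to the joint distribution.
    """
    rng = random.Random(seed)
    x1, x2 = x0
    cond_sd = (1.0 - rho ** 2) ** 0.5  # sd of each conditional density
    chain = []
    for _ in range(n_steps):
        x1 = rng.gauss(rho * x2, cond_sd)  # draw from p(x1 | x2)
        x2 = rng.gauss(rho * x1, cond_sd)  # draw from p(x2 | x1)
        chain.append((x1, x2))
    return chain
```

The successive draws are dependent, but run long enough the empirical distribution of the chain approximates the target, exactly as the extracted sentences describe.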

6 Unfortunately, using a sample of a Markov chain to estimate a distribution raises an immediate question: how long a series is needed? [sent-38, score-0.783]

7 , r(x2000)} from the simulated Markov chain can serve as a substitute for the marginal distribution of r. [sent-51, score-0.573]

8 For comparison we ran the Gibbs sampler again for 2000 steps, but this time starting at a point x0 for which r(x0) = 1; Figure 3 displays the series r(xj), which again seems to have converged nicely. [sent-57, score-0.684]

9 The answer: parallel Markov chains. To restate the general problem: we wish to summarize an intractable distribution F(x) by running the Gibbs sampler (or a similar method such as the Metropolis algorithm) until the distribution of the set of Markov chain iterates is close to F. [sent-65, score-0.899]
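For the "similar method" mentioned here, a minimal random-walk Metropolis sketch looks like the following; it needs the target density only up to that troublesome multiplicative constant. This is a generic illustration under my own naming, not the note's implementation.

```python
import math
import random

def metropolis(log_density, x0, n_steps, step_sd=1.0, seed=None):
    """Random-walk Metropolis: requires log_density only up to an
    additive constant (i.e., the density up to a multiplicative one)."""
    rng = random.Random(seed)
    x = x0
    logp = log_density(x)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step_sd)   # symmetric Normal proposal
        logp_prop = log_density(prop)
        # accept with probability min(1, density ratio), in log space
        if math.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
        chain.append(x)
    return chain
```

Running several such chains from dispersed starting points is exactly the setup the parallel-series diagnostic below is built for.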

10 Again, we focus attention on a univariate summary, say r(x); we want to use the observed simulations rij to determine whether the series of r’s are close to convergence after n steps. [sent-80, score-0.697]

11 First assume for simplicity that the starting points of the simulated series are themselves independent random samples from F(x). [sent-83, score-0.798]

12 ) With independent starting points, all values of any series are independent of all the values of any other series, and the unconditional variance of any point rij is just the marginal variance var r under the distribution F. [sent-85, score-1.509]

13 ) Given the assumption of initial independence, this “between” estimate of variance (not the same as the usual “between” estimate in ANOVA) is unbiased for finite series of any length. [sent-88, score-0.489]

14 The discrepancy between the two estimates of var r suggests a test: declare the Markov chain to have converged when the within mean square is close to the variance estimate between series, with confidence intervals derived from classical ANOVA theory. [sent-94, score-0.753]

15 Once we are close enough to convergence to be satisfied, the variance estimates and degrees of freedom corrections alluded to above allow us to estimate the marginal summaries E r, var r, and Normal-theory confidence intervals for our Monte Carlo approximations. [sent-97, score-0.618]

16 We can run the series longer if more precision is desired, and can repeat the process to study the marginal distributions of other parameters (without, of course, having to simulate any new series of x’s). [sent-98, score-0.771]

17 In practice, the starting points of the parallel series can never be sampled independently with distribution F(x); the simulated series are thus no longer stationary for any finite n, formally invalidating the above analysis. [sent-99, score-1.191]

18 The m parallel series should then start far apart and grow closer as they approach stationarity, as in Figures 1 and 3; since the variance between series declines with n, the comparison-of-variances test should be conservative. [sent-102, score-0.791]

19 Second, we reduce the effect of the starting values by crudely throwing away the first half of each simulated series until approximate convergence has been reached. [sent-103, score-0.686]

20 The idea of comparing parallel simulations is not new; for example, Fosdick (1959) applied the Metropolis algorithm to the Ising model by simulating four series independently, from each of two different starting points. [sent-106, score-0.581]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('markov', 0.365), ('series', 0.298), ('chain', 0.252), ('ising', 0.201), ('em', 0.185), ('distribution', 0.157), ('bf', 0.149), ('converged', 0.136), ('gibbs', 0.136), ('independent', 0.133), ('starting', 0.13), ('sampler', 0.12), ('convergence', 0.118), ('simulation', 0.115), ('variance', 0.114), ('fosdick', 0.111), ('xj', 0.111), ('carlo', 0.11), ('random', 0.105), ('monte', 0.104), ('var', 0.104), ('marginal', 0.097), ('geman', 0.095), ('metropolis', 0.088), ('pickard', 0.083), ('rij', 0.083), ('stationary', 0.083), ('figures', 0.082), ('parallel', 0.081), ('steps', 0.081), ('within', 0.079), ('distributions', 0.078), ('finite', 0.077), ('sample', 0.076), ('ss', 0.076), ('values', 0.073), ('journal', 0.073), ('simulations', 0.072), ('close', 0.068), ('simulated', 0.067), ('lattice', 0.066), ('samples', 0.065), ('method', 0.064), ('desired', 0.064), ('blah', 0.059), ('freedom', 0.059), ('degrees', 0.058), ('univariate', 0.058), ('handscomb', 0.055), ('kinderman', 0.055)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!


2 0.2877481 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

Introduction: Over the years I’ve written a dozen or so journal articles that have appeared with discussions, and I’ve participated in many published discussions of others’ articles as well. I get a lot out of these article-discussion-rejoinder packages, in all three of my roles as reader, writer, and discussant. Part 1: The story of an unsuccessful discussion The first time I had a discussion article was the result of an unfortunate circumstance. I had a research idea that resulted in an article with Don Rubin on monitoring the mixing of Markov chain simulations. I knew the idea was great, but back then we worked pretty slowly so it was a while before we had a final version to submit to a journal. (In retrospect I wish I’d just submitted the draft version as it was.) In the meantime I presented the paper at a conference. Our idea was very well received (I had a sheet of paper so people could write their names and addresses to get preprints, and we got either 50 or 150 (I can’t remembe

3 0.26976004 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo

Introduction: Galin Jones, Steve Brooks, Xiao-Li Meng and I edited a handbook of Markov Chain Monte Carlo that has just been published. My chapter (with Kenny Shirley) is here, and it begins like this: Convergence of Markov chain simulations can be monitored by measuring the diffusion and mixing of multiple independently-simulated chains, but different levels of convergence are appropriate for different goals. When considering inference from stochastic simulation, we need to separate two tasks: (1) inference about parameters and functions of parameters based on broad characteristics of their distribution, and (2) more precise computation of expectations and other functions of probability distributions. For the first task, there is a natural limit to precision beyond which additional simulations add essentially nothing; for the second task, the appropriate precision must be decided from external considerations. We illustrate with an example from our current research, a hierarchical model of t

4 0.19732666 112 andrew gelman stats-2010-06-27-Sampling rate of human-scaled time series

Introduction: Bill Harris writes with two interesting questions involving time series analysis: I used to work in an organization that designed and made signal processing equipment. Antialiasing and windowing of time series was a big deal in performing analysis accurately. Now I’m in a place where I have to make inferences about human-scaled time series. It has dawned on me that the two are related. I’m not sure we often have data sampled at a rate at least twice the highest frequency present (not just the highest frequency of interest). The only articles I’ve seen about aliasing as applied to social science series are from Hinich or from related works. Box and Jenkins hint at it in section 13.3 of Time Series Analysis, but the analysis seems to be mostly heuristic. Yet I can imagine all sorts of time series subject to similar problems, from analyses of stock prices based on closing prices (mentioned in the latter article) to other economic series measured on a monthly basis to en

5 0.1854258 2157 andrew gelman stats-2014-01-02-2013

Introduction: There’s lots of overlap but I put each paper into only one category.  Also, I’ve included work that has been published in 2013 as well as work that has been completed this year and might appear in 2014 or later.  So you can think of this list as representing roughly two years’ work. Political science: [2014] The twentieth-century reversal: How did the Republican states switch to the Democrats and vice versa? {\em Statistics and Public Policy}.  (Andrew Gelman) [2013] Hierarchical models for estimating state and demographic trends in U.S. death penalty public opinion. {\em Journal of the Royal Statistical Society A}.  (Kenneth Shirley and Andrew Gelman) [2013] Deep interactions with MRP: Election turnout and voting patterns among small electoral subgroups. {\em American Journal of Political Science}.  (Yair Ghitza and Andrew Gelman) [2013] Charles Murray’s {\em Coming Apart} and the measurement of social and political divisions. {\em Statistics, Politics and Policy}.

6 0.17954822 2081 andrew gelman stats-2013-10-29-My talk in Amsterdam tomorrow (Wed 29 Oct): Can we use Bayesian methods to resolve the current crisis of statistically-significant research findings that don’t hold up?

7 0.17290711 2034 andrew gelman stats-2013-09-23-My talk Tues 24 Sept at 12h30 at Université de Technologie de Compiègne

8 0.15413494 779 andrew gelman stats-2011-06-25-Avoiding boundary estimates using a prior distribution as regularization

9 0.14866157 246 andrew gelman stats-2010-08-31-Somewhat Bayesian multilevel modeling

10 0.14442679 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

11 0.13325058 1527 andrew gelman stats-2012-10-10-Another reason why you can get good inferences from a bad model

12 0.13211381 1469 andrew gelman stats-2012-08-25-Ways of knowing

13 0.12809525 1986 andrew gelman stats-2013-08-17-Somebody’s looking for a book on time series analysis in the style of Angrist and Pischke, or Gelman and Hill

14 0.12769288 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

15 0.12748747 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

16 0.12498727 984 andrew gelman stats-2011-11-01-David MacKay sez . . . 12??

17 0.12479657 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

18 0.12291903 2277 andrew gelman stats-2014-03-31-The most-cited statistics papers ever

19 0.12090819 1267 andrew gelman stats-2012-04-17-Hierarchical-multilevel modeling with “big data”

20 0.11937599 961 andrew gelman stats-2011-10-16-The “Washington read” and the algebra of conditional distributions


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.231), (1, 0.153), (2, 0.01), (3, -0.001), (4, 0.038), (5, 0.009), (6, 0.013), (7, -0.058), (8, -0.059), (9, -0.04), (10, 0.024), (11, -0.036), (12, -0.023), (13, 0.008), (14, 0.008), (15, -0.031), (16, -0.018), (17, 0.01), (18, -0.011), (19, -0.032), (20, 0.034), (21, -0.003), (22, 0.083), (23, 0.026), (24, 0.094), (25, 0.037), (26, -0.098), (27, 0.099), (28, 0.081), (29, -0.022), (30, -0.012), (31, 0.034), (32, -0.032), (33, 0.059), (34, 0.044), (35, -0.035), (36, -0.047), (37, 0.031), (38, 0.003), (39, -0.018), (40, -0.003), (41, -0.001), (42, -0.036), (43, 0.001), (44, -0.039), (45, 0.003), (46, 0.002), (47, -0.047), (48, 0.052), (49, -0.039)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96420407 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!


2 0.78070033 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo


3 0.73996216 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories

Introduction: Tomas Iesmantas had asked me for advice on a regression problem with 50 parameters, and I’d recommended Hamiltonian Monte Carlo. A few weeks later he reported back: After trying several modifications (HMC for all parameters at once, HMC just for first level parameters and Riemman manifold Hamiltonian Monte Carlo method), I finally got it running with HMC just for first level parameters and for others using direct sampling, since conditional distributions turned out to have closed form. However, even in this case it is quite tricky, since I had to employ mass matrix and not just diagonal but at the beginning of algorithm generated it randomly (ensuring it is positive definite). Such random generation of mass matrix is quite blind step, but it proved to be quite helpful. Riemman manifold HMC is quite vagarious, or to be more specific, metric of manifold is very sensitive. In my model log-likelihood I had exponents and values of metrics matrix elements was very large and wh

4 0.73839957 2201 andrew gelman stats-2014-02-06-Bootstrap averaging: Examples where it works and where it doesn’t work

Introduction: Aki and I write: The very generality of the bootstrap creates both opportunity and peril, allowing researchers to solve otherwise intractable problems but also sometimes leading to an answer with an inappropriately high level of certainty. We demonstrate with two examples from our own research: one problem where bootstrap smoothing was effective and led us to an improved method, and another case where bootstrap smoothing would not solve the underlying problem. Our point in these examples is not to disparage bootstrapping but rather to gain insight into where it will be more or less effective as a smoothing tool. An example where bootstrap smoothing works well Bayesian posterior distributions are commonly summarized using Monte Carlo simulations, and inferences for scalar parameters or quantities of interest can be summarized using 50% or 95% intervals. An interval for a continuous quantity is typically constructed either as a central probability interval (with probabili

5 0.73301303 2332 andrew gelman stats-2014-05-12-“The results (not shown) . . .”

Introduction: Pro tip: Don’t believe any claims about results not shown in a paper. Even if the paper has been published. Even if it’s been cited hundreds of times. If the results aren’t shown, they haven’t been checked. I learned this the hard way after receiving this note from Bin Liu, who wrote: Today I saw a paper [by Ziheng Yang and Carlos Rodríguez] titled “Searching for efficient Markov chain Monte Carlo proposal kernels.” The authors cited your work: “Gelman A, Roberts GO, Gilks WR (1996) Bayesian Statistics 5, eds Bernardo JM, et al. (Oxford Univ Press, Oxford), Vol 5, pp 599-607″, i.e. ref.6 in the paper. In the last sentence of pp.19310, the authors write that “… virtually no study has examined alternative kernels; this appears to be due to the influence of ref. 6, which claimed that different kernels had nearly identical performance. This conclusion is incorrect.” Here’s our paper, and here’s the offending quote, which appeared after we discussed results for the no

6 0.73087662 2277 andrew gelman stats-2014-03-31-The most-cited statistics papers ever

7 0.72110546 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients

8 0.71565729 778 andrew gelman stats-2011-06-24-New ideas on DIC from Martyn Plummer and Sumio Watanabe

9 0.70958763 650 andrew gelman stats-2011-04-05-Monitor the efficiency of your Markov chain sampler using expected squared jumped distance!

10 0.70739901 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

11 0.70655304 246 andrew gelman stats-2010-08-31-Somewhat Bayesian multilevel modeling

12 0.69898194 2258 andrew gelman stats-2014-03-21-Random matrices in the news

13 0.69799256 2311 andrew gelman stats-2014-04-29-Bayesian Uncertainty Quantification for Differential Equations!

14 0.69683224 269 andrew gelman stats-2010-09-10-R vs. Stata, or, Different ways to estimate multilevel models

15 0.68780214 1363 andrew gelman stats-2012-06-03-Question about predictive checks

16 0.68744802 1374 andrew gelman stats-2012-06-11-Convergence Monitoring for Non-Identifiable and Non-Parametric Models

17 0.66708565 2180 andrew gelman stats-2014-01-21-Everything I need to know about Bayesian statistics, I learned in eight schools.

18 0.66690344 1339 andrew gelman stats-2012-05-23-Learning Differential Geometry for Hamiltonian Monte Carlo

19 0.6587311 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

20 0.65015191 2231 andrew gelman stats-2014-03-03-Running into a Stan Reference by Accident


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.012), (13, 0.01), (15, 0.055), (16, 0.063), (21, 0.035), (24, 0.137), (42, 0.01), (52, 0.013), (57, 0.034), (63, 0.017), (75, 0.119), (84, 0.026), (86, 0.062), (89, 0.018), (99, 0.239)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95353293 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!


2 0.95025086 28 andrew gelman stats-2010-05-12-Alert: Incompetent colleague wastes time of hardworking Wolfram Research publicist

Introduction: Marty McKee at Wolfram Research appears to have a very very stupid colleague. McKee wrote to Christian Robert: Your article, “Evidence and Evolution: A review”, caught the attention of one of my colleagues, who thought that it could be developed into an interesting Demonstration to add to the Wolfram Demonstrations Project. As Christian points out, adapting his book review into a computer demonstration would be quite a feat! I wonder what McKee’s colleague could be thinking? I recommend that Wolfram fire McKee’s colleague immediately: what an idiot! P.S. I’m not actually sure that McKee was the author of this email; I’m guessing this was the case because this other very similar email was written under his name. P.P.S. To head off the inevitable comments: Yes, yes, I know this is no big deal and I shouldn’t get bent out of shape about it. But . . . Wolfram Research has contributed such great things to the world, that I hate to think of them wasting any money paying

3 0.94616866 522 andrew gelman stats-2011-01-18-Problems with Haiti elections?

Introduction: Mark Weisbrot points me to this report trashing a recent OAS report on Haiti’s elections. Weisbrot writes: The two simplest things that are wrong with the OAS analysis are: (1) By looking only at a sample of the tally sheets and not using any statistical test, they have no idea how many other tally sheets would also be thrown out by the same criteria that they used, and how that would change the result and (2) The missing/quarantined tally sheets are much greater in number than the ones that they threw out; our analysis indicates that if these votes had been counted, the result would go the other way. I have not had a chance to take a look at this myself but I’m posting it here so that experts on election irregularities can see this and give their judgments. P.S. Weisbrot updates: We [Weisbrot et al.] published our actual paper on the OAS Mission’s Report today. The press release is here and gives a very good summary of the major problems with the OAS Mission rep

4 0.93972158 1396 andrew gelman stats-2012-06-27-Recently in the sister blog

Introduction: If Paul Krugman is right and it’s 1931, what happens next? What’s with Niall Ferguson? Hey, this reminds me of the Democrats in the U.S. . . . Would President Romney contract the economy? Inconsistency with prior knowledge triggers children’s causal explanatory reasoning

5 0.93271762 893 andrew gelman stats-2011-09-06-Julian Symons on Frances Newman

Introduction: “She was forty years old when she died. It is possible that her art might have developed to include a wider area of human experience, just as possible that the chilling climate of the thirties might have withered it altogether. But what she actually wrote was greatly talented. She deserves a place, although obviously not a foremost one, in any literary history of the years between the wars. The last letter she wrote, or rather dictated, to the printer of the Laforgue translations shows the invariable fastidiousness of her talent, a fastidiousness which is often infuriating but just as often impressive, and is in any case rare enough to be worth remembrance: To the Printer of Six Moral Tales This book is to be spelled and its words are to be hyphenated according to the usage of the Concise Oxford Dictionary. Page introduction continuously with the tales. Do not put brackets around the numbers of the pages. All the ‘todays’ and all the ‘tomorrows’ should be spelled w

6 0.91796935 1003 andrew gelman stats-2011-11-11-$

7 0.91772795 946 andrew gelman stats-2011-10-07-Analysis of Power Law of Participation

8 0.91637164 1067 andrew gelman stats-2011-12-18-Christopher Hitchens was a Bayesian

9 0.91105711 1808 andrew gelman stats-2013-04-17-Excel-bashing

10 0.90844697 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

11 0.90739983 2277 andrew gelman stats-2014-03-31-The most-cited statistics papers ever

12 0.905981 2034 andrew gelman stats-2013-09-23-My talk Tues 24 Sept at 12h30 at Université de Technologie de Compiègne

13 0.9013778 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

14 0.89991426 2201 andrew gelman stats-2014-02-06-Bootstrap averaging: Examples where it works and where it doesn’t work

15 0.89905274 2318 andrew gelman stats-2014-05-04-Stan (& JAGS) Tutorial on Linear Mixed Models

16 0.89889371 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

17 0.89880788 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

18 0.89875436 2040 andrew gelman stats-2013-09-26-Difficulties in making inferences about scientific truth from distributions of published p-values

19 0.89861751 1162 andrew gelman stats-2012-02-11-Adding an error model to a deterministic model

20 0.89844859 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes