andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-2003 knowledge-graph by maker-knowledge-mining

2003 andrew gelman stats-2013-08-30-Stan Project: Continuous Relaxations for Discrete MRFs


meta info for this blog

Source: html

Introduction: Hamiltonian Monte Carlo (HMC), as used by Stan, is only defined for continuous parameters. We’d love to be able to do discrete sampling. So I was excited when I saw this: Yichuan Zhang, Charles Sutton, Amos J Storkey, and Zoubin Ghahramani. 2012. Continuous Relaxations for Discrete Hamiltonian Monte Carlo. NIPS 25. Abstract: Continuous relaxations play an important role in discrete optimization, but have not seen much use in approximate probabilistic inference. Here we show that a general form of the Gaussian Integral Trick makes it possible to transform a wide class of discrete variable undirected models into fully continuous systems. The continuous representation allows the use of gradient-based Hamiltonian Monte Carlo for inference, results in new ways of estimating normalization constants (partition functions), and in general opens up a number of new avenues for inference in difficult discrete systems. We demonstrate some of these continuous relaxation inference algorithms on a number of illustrative problems.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Hamiltonian Monte Carlo (HMC), as used by Stan , is only defined for continuous parameters. [sent-1, score-0.305]

2 Abstract: Continuous relaxations play an important role in discrete optimization, but have not seen much use in approximate probabilistic inference. [sent-7, score-0.573]

3 Here we show that a general form of the Gaussian Integral Trick makes it possible to transform a wide class of discrete variable undirected models into fully continuous systems. [sent-8, score-0.814]

4 The continuous representation allows the use of gradient-based Hamiltonian Monte Carlo for inference, results in new ways of estimating normalization constants (partition functions), and in general opens up a number of new avenues for inference in difficult discrete systems. [sent-9, score-1.007]

5 We demonstrate some of these continuous relaxation inference algorithms on a number of illustrative problems. [sent-10, score-0.546]

6 The paper applies the “Gaussian integral trick” to “relax” a discrete Markov random field (MRF) distribution to a continuous one by adding auxiliary parameters (their formula 11). [sent-11, score-1.439]

7 From there, the discrete parameters are distributed as an easy-to-compute Bernoulli conditional on the auxiliary parameters (their formula 12). [sent-12, score-1.044]

8 They provide a simulated example for both standard and “frustrated” Boltzmann machines (a kind of MRF) and also for a natural language example involving a skip-chain conditional random field (CRF). [sent-13, score-0.391]

9 Stan already supports HMC and calculates all the derivatives automatically, so that part is already done. [sent-14, score-0.214]

10 We don’t have time right now, but it would be super awesome if someone could implement this model in Stan and write it up in such a form we could blog about it and include it in the Stan manual (with attribution, of course!) [sent-16, score-0.262]

11 From my quick read, it looks like formula (11) can be used directly to parameterize and define the model and their formula (12) could be implemented in the generated quantities block to generate discrete outputs. [sent-18, score-0.921]

12 I’m sure Zhang and crew would be happy to supply more details of the CRF, but I’d be happy with the simple simulated Boltzmann machine example and even happier with another real example that’s not as complex as a skip-chain CRF. [sent-19, score-0.328]
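Sentences 6 and 7 above describe the relaxation itself. To make that concrete, here is a rough sketch of the Gaussian integral trick for a Boltzmann machine, written in my own notation; it is only an assumption about the general form of the paper's formulas (11) and (12) and may differ from them by a change of variables. For binary $x \in \{0,1\}^K$ with $p(x) \propto \exp(a^\top x + \tfrac{1}{2} x^\top W x)$, choose a diagonal matrix $D = \mathrm{diag}(d)$ so that $A = W + D$ is positive definite (since $x_k^2 = x_k$, adding $D$ only shifts the biases), and introduce a continuous auxiliary vector $s$:

$$p(x, s) \;\propto\; \exp\!\Big(-\tfrac{1}{2}\, s^\top A^{-1} s + s^\top x + \big(a - \tfrac{1}{2} d\big)^\top x\Big).$$

Summing out $x$ leaves a fully continuous density that HMC can sample (the analogue of their formula 11),

$$p(s) \;\propto\; \exp\!\Big(-\tfrac{1}{2}\, s^\top A^{-1} s\Big) \prod_{k=1}^{K} \Big(1 + \exp\big(s_k + a_k - \tfrac{1}{2} d_k\big)\Big),$$

and, conditional on $s$, the discrete variables are independent Bernoulli (the analogue of their formula 12), with

$$\Pr(x_k = 1 \mid s) \;=\; \operatorname{logit}^{-1}\!\big(s_k + a_k - \tfrac{1}{2} d_k\big).$$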
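As a starting point for sentence 11, here is a minimal, untested Stan sketch of that idea for the Boltzmann-machine case. The data names (K, a, W, d) and the exact form of the log density are my own, taken from the sketch above rather than a verified transcription of the paper's formula (11); the generated quantities block plays the role of formula (12).

```stan
// Hypothetical sketch, not a verified transcription of Zhang et al.'s formulas.
// Continuous relaxation of a Boltzmann machine p(x) propto exp(a'x + 0.5 * x'Wx).
data {
  int<lower=1> K;               // number of binary variables
  vector[K] a;                  // biases
  matrix[K, K] W;               // symmetric couplings, zero diagonal
  vector[K] d;                  // diagonal shift so A = W + diag(d) is positive definite
}
transformed data {
  matrix[K, K] A = W + diag_matrix(d);
  matrix[K, K] A_inv = inverse_spd(A);
  vector[K] b = a - 0.5 * d;    // adjusted biases after absorbing the diagonal shift
}
parameters {
  vector[K] s;                  // continuous auxiliary variables (the relaxation)
}
model {
  // log p(s) = -0.5 * s' A^{-1} s + sum_k log(1 + exp(s_k + b_k)) + const
  target += -0.5 * quad_form(A_inv, s);
  for (k in 1:K)
    target += log1p_exp(s[k] + b[k]);
}
generated quantities {
  // x | s are independent Bernoulli, so discrete draws come for free here
  array[K] int<lower=0, upper=1> x;
  for (k in 1:K)
    x[k] = bernoulli_logit_rng(s[k] + b[k]);
}
```

On a small K one could check the draws of x against exact enumeration or a long Gibbs run before moving on to the frustrated Boltzmann machine or the skip-chain CRF examples from the paper.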


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('discrete', 0.383), ('continuous', 0.305), ('crf', 0.285), ('formula', 0.226), ('mrf', 0.19), ('relaxations', 0.19), ('boltzmann', 0.173), ('hamiltonian', 0.17), ('zhang', 0.163), ('carlo', 0.161), ('monte', 0.153), ('stan', 0.15), ('auxiliary', 0.146), ('integral', 0.142), ('hmc', 0.122), ('simulated', 0.115), ('gaussian', 0.105), ('trick', 0.104), ('parameters', 0.104), ('sutton', 0.086), ('calculates', 0.086), ('parameterize', 0.086), ('inference', 0.082), ('avenues', 0.081), ('opens', 0.081), ('nips', 0.081), ('illustrative', 0.081), ('conditional', 0.081), ('relaxation', 0.078), ('constants', 0.075), ('amos', 0.075), ('bernoulli', 0.073), ('partition', 0.073), ('crew', 0.073), ('happy', 0.07), ('derivatives', 0.069), ('super', 0.069), ('relax', 0.068), ('field', 0.068), ('manual', 0.067), ('random', 0.065), ('form', 0.064), ('frustrated', 0.063), ('transform', 0.062), ('awesome', 0.062), ('machines', 0.062), ('markov', 0.06), ('nuts', 0.06), ('attribution', 0.059), ('supports', 0.059)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 2003 andrew gelman stats-2013-08-30-Stan Project: Continuous Relaxations for Discrete MRFs


2 0.23294294 1228 andrew gelman stats-2012-03-25-Continuous variables in Bayesian networks

Introduction: Antti Rasinen writes: I’m a former undergrad machine learning student and a current software engineer with a Bayesian hobby. Today my two worlds collided. I ask for some enlightenment. On your blog you’ve repeatedly advocated continuous distributions with Bayesian models. Today I read this article by Ricky Ho, who writes: The strength of Bayesian network is it is highly scalable and can learn incrementally because all we do is to count the observed variables and update the probability distribution table. Similar to Neural Network, Bayesian network expects all data to be binary, categorical variable will need to be transformed into multiple binary variable as described above. Numeric variable is generally not a good fit for Bayesian network. The last sentence seems to be at odds with what you’ve said. Sadly, I don’t have enough expertise to say which view of the world is correct. During my undergrad years our team wrote an implementation of the Junction Tree algorithm. We r

3 0.1797767 1772 andrew gelman stats-2013-03-20-Stan at Google this Thurs and at Berkeley this Fri noon

Introduction: Michael Betancourt will be speaking at Google and at the University of California, Berkeley. The Google talk is closed to outsiders (but if you work at Google, you should go!); the Berkeley talk is open to all: Friday March 22, 12:10 pm, Evans Hall 1011. Title of talk: Stan : Practical Bayesian Inference with Hamiltonian Monte Carlo Abstract: Practical implementations of Bayesian inference are often limited to approximation methods that only slowly explore the posterior distribution. By taking advantage of the curvature of the posterior, however, Hamiltonian Monte Carlo (HMC) efficiently explores even the most highly contorted distributions. In this talk I will review the foundations of and recent developments within HMC, concluding with a discussion of Stan, a powerful inference engine that utilizes HMC, automatic differentiation, and adaptive methods to minimize user input. This is cool stuff. And he’ll be showing the whirlpool movie!

4 0.17227744 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

Introduction: I’ve been writing a lot about my philosophy of Bayesian statistics and how it fits into Popper’s ideas about falsification and Kuhn’s ideas about scientific revolutions. Here’s my long, somewhat technical paper with Cosma Shalizi. Here’s our shorter overview for the volume on the philosophy of social science. Here’s my latest try (for an online symposium), focusing on the key issues. I’m pretty happy with my approach–the familiar idea that Bayesian data analysis iterates the three steps of model building, inference, and model checking–but it does have some unresolved (maybe unresolvable) problems. Here are a couple mentioned in the third of the above links. Consider a simple model with independent data y_1, y_2, ..., y_10 ~ N(θ,σ^2), with a prior distribution θ ~ N(0,10^2) and σ known and taking on some value of approximately 10. Inference about θ is straightforward, as is model checking, whether based on graphs or numerical summaries such as the sample variance and skewn

5 0.16861972 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories

Introduction: Tomas Iesmantas had asked me for advice on a regression problem with 50 parameters, and I’d recommended Hamiltonian Monte Carlo. A few weeks later he reported back: After trying several modifications (HMC for all parameters at once, HMC just for first level parameters and Riemann manifold Hamiltonian Monte Carlo method), I finally got it running with HMC just for first level parameters and for others using direct sampling, since conditional distributions turned out to have closed form. However, even in this case it is quite tricky, since I had to employ mass matrix and not just diagonal but at the beginning of algorithm generated it randomly (ensuring it is positive definite). Such random generation of mass matrix is quite blind step, but it proved to be quite helpful. Riemann manifold HMC is quite vagarious, or to be more specific, metric of manifold is very sensitive. In my model log-likelihood I had exponents and values of metrics matrix elements was very large and wh

6 0.16645314 1529 andrew gelman stats-2012-10-11-Bayesian brains?

7 0.15919037 1749 andrew gelman stats-2013-03-04-Stan in L.A. this Wed 3:30pm

8 0.15805553 1475 andrew gelman stats-2012-08-30-A Stan is Born

9 0.14507723 811 andrew gelman stats-2011-07-20-Kind of Bayesian

10 0.14463565 2291 andrew gelman stats-2014-04-14-Transitioning to Stan

11 0.13606007 1339 andrew gelman stats-2012-05-23-Learning Differential Geometry for Hamiltonian Monte Carlo

12 0.13120614 2161 andrew gelman stats-2014-01-07-My recent debugging experience

13 0.13094263 1950 andrew gelman stats-2013-07-22-My talks that were scheduled for Tues at the Data Skeptics meetup and Wed at the Open Statistical Programming meetup

14 0.12867606 217 andrew gelman stats-2010-08-19-The “either-or” fallacy of believing in discrete models: an example of folk statistics

15 0.12626721 2231 andrew gelman stats-2014-03-03-Running into a Stan Reference by Accident

16 0.12318274 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!

17 0.1160453 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!

18 0.10960512 653 andrew gelman stats-2011-04-08-Multilevel regression with shrinkage for “fixed” effects

19 0.10832721 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

20 0.10684731 1627 andrew gelman stats-2012-12-17-Stan and RStan 1.1.0


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.128), (1, 0.112), (2, -0.027), (3, 0.062), (4, 0.052), (5, 0.055), (6, 0.014), (7, -0.129), (8, -0.026), (9, -0.079), (10, -0.079), (11, -0.007), (12, -0.111), (13, -0.033), (14, 0.041), (15, -0.032), (16, 0.016), (17, 0.032), (18, -0.014), (19, 0.009), (20, -0.005), (21, -0.015), (22, -0.035), (23, -0.015), (24, 0.051), (25, 0.046), (26, -0.005), (27, 0.035), (28, -0.007), (29, -0.001), (30, 0.011), (31, 0.013), (32, -0.027), (33, -0.005), (34, -0.006), (35, 0.005), (36, 0.004), (37, -0.003), (38, -0.01), (39, 0.013), (40, -0.032), (41, 0.006), (42, 0.009), (43, 0.008), (44, -0.004), (45, -0.035), (46, 0.014), (47, 0.024), (48, 0.031), (49, -0.001)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96560621 2003 andrew gelman stats-2013-08-30-Stan Project: Continuous Relaxations for Discrete MRFs


2 0.84982783 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!

Introduction: We interrupt our usual program of Ed Wegman Gregg Easterbrook Niall Ferguson mockery to deliver a serious update on our statistical computing project. Stan (“Sampling Through Adaptive Neighborhoods”) is our new C++ program (written mostly by Bob Carpenter) that draws samples from Bayesian models. Stan can take different sorts of inputs: you can write the model in a Bugs-like syntax and it goes from there, or you can write the log-posterior directly as a C++ function. Most of the computation is done using Hamiltonian Monte Carlo. HMC requires some tuning, so Matt Hoffman up and wrote a new algorithm, Nuts (the “No-U-Turn Sampler”) which optimizes HMC adaptively. In many settings, Nuts is actually more computationally efficient than the optimal static HMC! When the Nuts paper appeared on Arxiv, Christian Robert noticed it and had some reactions. In response to Xian’s comments, Matt writes: Christian writes: I wonder about the computing time (and the “una

3 0.84748596 1753 andrew gelman stats-2013-03-06-Stan 1.2.0 and RStan 1.2.0

Introduction: Stan 1.2.0 and RStan 1.2.0 are now available for download. See: http://mc-stan.org/ Here are the highlights. Full Mass Matrix Estimation during Warmup Yuanjun Gao, a first-year grad student here at Columbia (!), built a regularized mass-matrix estimator. This helps for posteriors with high correlation among parameters and varying scales. We’re still testing this ourselves, so the estimation procedure may change in the future (don’t worry — it satisfies detailed balance as is, but we might be able to make it more computationally efficient in terms of time per effective sample). It’s not the default option. The major reason is the matrix operations required are expensive, raising the algorithm cost to grow with the average number of leapfrog steps, the number of iterations, and the number of parameters. Yuanjun did a great job with the Cholesky factorizations and implemented this about as efficiently as is possible. (His homework for Andrew’s class w

4 0.84679043 1627 andrew gelman stats-2012-12-17-Stan and RStan 1.1.0

Introduction: We’re happy to announce the availability of Stan and RStan versions 1.1.0, which are general tools for performing model-based Bayesian inference using the no-U-turn sampler, an adaptive form of Hamiltonian Monte Carlo. Information on downloading and installing and using them is available as always from Stan Home Page: http://mc-stan.org/ Let us know if you have any problems on the mailing lists or at the e-mails linked on the home page (please don’t use this web page). The full release notes follow. (R)Stan Version 1.1.0 Release Notes =================================== -- Backward Compatibility Issue * Categorical distribution recoded to match documentation; it now has support {1,...,K} rather than {0,...,K-1}. * (RStan) change default value of permuted flag from FALSE to TRUE for Stan fit S4 extract() method -- New Features * Conditional (if-then-else) statements * While statements -- New Functions * generalized multiply_lower_tri

5 0.84291559 2150 andrew gelman stats-2013-12-27-(R-Py-Cmd)Stan 2.1.0

Introduction: We’re happy to announce the release of Stan C++, CmdStan, RStan, and PyStan 2.1.0.  This is a minor feature release, but it is also an important bug fix release.  As always, the place to start is the (all new) Stan web pages: http://mc-stan.org   Major Bug in 2.0.0, 2.0.1 Stan 2.0.0 and Stan 2.0.1 introduced a bug in the implementation of the NUTS criterion that led to poor tail exploration and thus biased the posterior uncertainty downward.  There was no bug in NUTS in Stan 1.3 or earlier, and 2.1 has been extensively tested and tests put in place so this problem will not recur. If you are using Stan 2.0.0 or 2.0.1, you should switch to 2.1.0 as soon as possible and rerun any models you care about.   New Target Acceptance Rate Default for Stan 2.1.0 Another big change aimed at reducing posterior estimation bias was an increase in the target acceptance rate during adaptation from 0.65 to 0.80.  The bad news is that iterations will take around 50% longer

6 0.83210146 2242 andrew gelman stats-2014-03-10-Stan Model of the Week: PK Calculation of IV and Oral Dosing

7 0.82237291 2161 andrew gelman stats-2014-01-07-My recent debugging experience

8 0.81831872 1710 andrew gelman stats-2013-02-06-The new Stan 1.1.1, featuring Gaussian processes!

9 0.81331497 1799 andrew gelman stats-2013-04-12-Stan 1.3.0 and RStan 1.3.0 Ready for Action

10 0.81297708 2020 andrew gelman stats-2013-09-12-Samplers for Big Science: emcee and BAT

11 0.81027865 1580 andrew gelman stats-2012-11-16-Stantastic!

12 0.79837513 1475 andrew gelman stats-2012-08-30-A Stan is Born

13 0.77903056 2231 andrew gelman stats-2014-03-03-Running into a Stan Reference by Accident

14 0.75680417 2291 andrew gelman stats-2014-04-14-Transitioning to Stan

15 0.756082 2209 andrew gelman stats-2014-02-13-CmdStan, RStan, PyStan v2.2.0

16 0.74583513 1528 andrew gelman stats-2012-10-10-My talk at MIT on Thurs 11 Oct

17 0.72161579 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

18 0.72139984 1855 andrew gelman stats-2013-05-13-Stan!

19 0.70798546 1748 andrew gelman stats-2013-03-04-PyStan!

20 0.70554113 2349 andrew gelman stats-2014-05-26-WAIC and cross-validation in Stan!


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.015), (6, 0.032), (8, 0.015), (16, 0.064), (21, 0.013), (24, 0.083), (30, 0.066), (35, 0.017), (44, 0.015), (63, 0.018), (68, 0.033), (75, 0.015), (77, 0.021), (79, 0.012), (82, 0.157), (86, 0.079), (89, 0.015), (99, 0.194)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.93208134 1772 andrew gelman stats-2013-03-20-Stan at Google this Thurs and at Berkeley this Fri noon

Introduction: Michael Betancourt will be speaking at Google and at the University of California, Berkeley. The Google talk is closed to outsiders (but if you work at Google, you should go!); the Berkeley talk is open to all: Friday March 22, 12:10 pm, Evans Hall 1011. Title of talk: Stan : Practical Bayesian Inference with Hamiltonian Monte Carlo Abstract: Practical implementations of Bayesian inference are often limited to approximation methods that only slowly explore the posterior distribution. By taking advantage of the curvature of the posterior, however, Hamiltonian Monte Carlo (HMC) efficiently explores even the most highly contorted distributions. In this talk I will review the foundations of and recent developments within HMC, concluding with a discussion of Stan, a powerful inference engine that utilizes HMC, automatic differentiation, and adaptive methods to minimize user input. This is cool stuff. And he’ll be showing the whirlpool movie!

same-blog 2 0.92277801 2003 andrew gelman stats-2013-08-30-Stan Project: Continuous Relaxations for Discrete MRFs


3 0.91529471 1749 andrew gelman stats-2013-03-04-Stan in L.A. this Wed 3:30pm

Introduction: Michael Betancourt will be speaking at UCLA: The location for refreshment is in room 51-254 CHS at 3:00 PM. The place for the seminar is at CHS 33-105A at 3:30pm – 4:30pm, Wed 6 Mar. ["CHS" stands for Center for Health Sciences, the building of the UCLA schools of medicine and public health. Here's a map with directions .] Title of talk: Stan : Practical Bayesian Inference with Hamiltonian Monte Carlo Abstract: Practical implementations of Bayesian inference are often limited to approximation methods that only slowly explore the posterior distribution. By taking advantage of the curvature of the posterior, however, Hamiltonian Monte Carlo (HMC) efficiently explores even the most highly contorted distributions. In this talk I will review the foundations of and recent developments within HMC, concluding with a discussion of Stan, a powerful inference engine that utilizes HMC, automatic differentiation, and adaptive methods to minimize user input. This is cool stuff.

4 0.90309757 940 andrew gelman stats-2011-10-03-It depends upon what the meaning of the word “firm” is.

Introduction: David Hogg pointed me to this news article by Angela Saini: It’s not often that the quiet world of mathematics is rocked by a murder case. But last summer saw a trial that sent academics into a tailspin, and has since swollen into a fevered clash between science and the law. At its heart, this is a story about chance. And it begins with a convicted killer, “T”, who took his case to the court of appeal in 2010. Among the evidence against him was a shoeprint from a pair of Nike trainers, which seemed to match a pair found at his home. While appeals often unmask shaky evidence, this was different. This time, a mathematical formula was thrown out of court. The footwear expert made what the judge believed were poor calculations about the likelihood of the match, compounded by a bad explanation of how he reached his opinion. The conviction was quashed. . . . “The impact will be quite shattering,” says Professor Norman Fenton, a mathematician at Queen Mary, University of London.

5 0.8858391 335 andrew gelman stats-2010-10-11-How to think about Lou Dobbs

Introduction: I was unsurprised to read that Lou Dobbs, the former CNN host who crusaded against illegal immigrants, had actually hired a bunch of them himself to maintain his large house and his horse farm. (OK, I have to admit I was surprised by the part about the horse farm.) But I think most of the reactions to this story missed the point. Isabel Macdonald’s article that broke the story was entitled, “Lou Dobbs, American Hypocrite,” and most of the discussion went from there, with some commenters piling on Dobbs and others defending him by saying that Dobbs hired his laborers through contractors and may not have known they were in the country illegally. To me, though, the key issue is slightly different. And Macdonald’s story is relevant whether or not Dobbs knew he was hiring illegals. My point is not that Dobbs is a bad guy, or a hypocrite, or whatever. My point is that, in his setting, it would take an extraordinary effort to not hire illegal immigrants to take care of his house

6 0.88366765 178 andrew gelman stats-2010-08-03-(Partisan) visualization of health care legislation

7 0.8701998 1094 andrew gelman stats-2011-12-31-Using factor analysis or principal components analysis or measurement-error models for biological measurements in archaeology?

8 0.86974478 340 andrew gelman stats-2010-10-13-Randomized experiments, non-randomized experiments, and observational studies

9 0.86874372 1958 andrew gelman stats-2013-07-27-Teaching is hard

10 0.85795033 1440 andrew gelman stats-2012-08-02-“A Christmas Carol” as applied to plagiarism

11 0.85618675 699 andrew gelman stats-2011-05-06-Another stereotype demolished

12 0.84775978 67 andrew gelman stats-2010-06-03-More on that Dartmouth health care study

13 0.84473479 326 andrew gelman stats-2010-10-07-Peer pressure, selection, and educational reform

14 0.84439665 193 andrew gelman stats-2010-08-09-Besag

15 0.84079266 359 andrew gelman stats-2010-10-21-Applied Statistics Center miniconference: Statistical sampling in developing countries

16 0.84014678 1134 andrew gelman stats-2012-01-21-Lessons learned from a recent R package submission

17 0.83906138 1488 andrew gelman stats-2012-09-08-Annals of spam

18 0.83496344 1553 andrew gelman stats-2012-10-30-Real rothko, fake rothko

19 0.83159178 1963 andrew gelman stats-2013-07-31-Response by Jessica Tracy and Alec Beall to my critique of the methods in their paper, “Women Are More Likely to Wear Red or Pink at Peak Fertility”

20 0.83015686 366 andrew gelman stats-2010-10-24-Mankiw tax update