
2340 andrew gelman stats-2014-05-20-Thermodynamic Monte Carlo: Michael Betancourt’s new method for simulating from difficult distributions and evaluating normalizing constants


meta info for this blog

Source: html

Introduction: I hate to keep bumping our scheduled posts but this is just too important and too exciting to wait. So it’s time to jump the queue. The news is a paper from Michael Betancourt that presents a super-cool new way to compute normalizing constants: A common strategy for inference in complex models is the relaxation of a simple model into the more complex target model, for example the prior into the posterior in Bayesian inference. Existing approaches that attempt to generate such transformations, however, are sensitive to the pathologies of complex distributions and can be difficult to implement in practice. Leveraging the geometry of thermodynamic processes I introduce a principled and robust approach to deforming measures that presents a powerful new tool for inference. The idea is to generalize Hamiltonian Monte Carlo so that it moves through a family of distributions (that is, it transitions through an “inverse temperature” variable called beta that indexes the family) as a way of continuously sweeping through to efficiently compute the normalizing function.


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 The idea is to generalize Hamiltonian Monte Carlo so that it moves through a family of distributions (that is, it transitions through an “inverse temperature” variable called beta that indexes the family) as a way of continuously sweeping through to efficiently compute the normalizing function. [sent-6, score-0.532]
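
To make the construction in sentence 1 concrete, here is a minimal sketch (my own illustration, not code from Betancourt’s paper) of the standard geometric family indexed by the inverse temperature beta, together with the path-sampling identity that turns a sweep over beta into an estimate of the log normalizing constant. All names (log_p_beta, log_z_ratio, draws_at_beta) are hypothetical, and the trapezoid-rule quadrature is one choice among many.

import numpy as np

# Geometric bridge between a base density p0 (e.g. the prior) and an
# unnormalized target p1 (e.g. the posterior), indexed by the inverse
# temperature beta:
#   log p_beta(theta) = (1 - beta) * log p0(theta) + beta * log p1(theta)
# Path sampling / thermodynamic integration then identifies the log
# normalizing-constant ratio as an integral over the sweep:
#   log(Z1/Z0) = integral_0^1 E_beta[log p1(theta) - log p0(theta)] d beta

def log_p_beta(theta, beta, log_p0, log_p1):
    return (1.0 - beta) * log_p0(theta) + beta * log_p1(theta)

def log_z_ratio(log_p0, log_p1, draws_at_beta, betas):
    # draws_at_beta: hypothetical helper mapping beta to an iterable of
    # draws from p_beta (from whatever sampler is available).
    means = [np.mean([log_p1(t) - log_p0(t) for t in draws_at_beta(b)])
             for b in betas]
    return np.trapz(means, betas)  # trapezoid rule over the beta grid

In practice the hard part is producing good draws at each beta; that is what the tempering and thermodynamic schemes discussed below are about.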

2 The graphs above show a single HMC path with the temperature going from 0 to 1 on a simple test example. [sent-7, score-0.511]

3 Methods such as simulated annealing and simulated tempering are based on altering a temperature parameter. [sent-12, score-0.968]

4 In simulated annealing or tempering, you gradually increase beta, not too slowly (or you’ll be wasting your time) and not too fast (or you’ll break the smooth operation of your simulations, assuming this is all embedded in some Markov chain simulation algorithm). [sent-15, score-0.484]
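
A minimal sketch of that recipe, under my own naming assumptions (annealed_sweep, a random-walk Metropolis inner kernel), where n_steps_per_rung is exactly the “not too slowly, not too fast” knob:

import numpy as np

def annealed_sweep(log_p, theta0, betas, n_steps_per_rung=50, step=0.5, seed=0):
    # log_p(theta, beta): log density of the tempered family at beta.
    # Gradually raise beta, re-equilibrating with random-walk Metropolis
    # at each rung; too few steps per rung breaks equilibrium, too many
    # wastes time.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for beta in betas:
        for _ in range(n_steps_per_rung):
            prop = theta + step * rng.normal(size=theta.shape)
            if np.log(rng.uniform()) < log_p(prop, beta) - log_p(theta, beta):
                theta = prop
    return theta

# e.g. betas = np.linspace(0.0, 1.0, 101) ramps the prior into the posterior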

5 You’ll want to move fast in areas where the distributions are changing slowly as a function of beta, slowly in areas where the distributions are changing fast, and you’ll want to stay still for a while where there are phase transitions. [sent-16, score-0.605]
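
One standard way to implement that heuristic, borrowed from adaptive tempering in the sequential Monte Carlo literature rather than from the post itself, is to choose the next beta by bisection so that the importance weights between consecutive rungs maintain a target effective sample size. The function below is a hypothetical sketch assuming the geometric family from earlier:

import numpy as np

def next_beta(draws, log_p0, log_p1, beta, target_ess=0.9, tol=1e-6):
    # Pick the largest increment delta such that importance weights from
    # p_beta to p_(beta + delta), under the geometric bridge, keep the
    # effective sample size above target_ess * n: big steps where the
    # family changes slowly, small steps where it changes fast.
    log_ratio = np.array([log_p1(t) - log_p0(t) for t in draws])
    n = len(log_ratio)

    def ess_frac(delta):
        logw = delta * log_ratio
        w = np.exp(logw - logw.max())          # stabilized weights
        return w.sum() ** 2 / (n * (w ** 2).sum())

    lo, hi = 0.0, 1.0 - beta                   # bisect on the increment
    if ess_frac(hi) >= target_ess:
        return 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ess_frac(mid) >= target_ess else (lo, mid)
    return beta + lo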

6 The physical analogy of tempering or annealing is clear—but the interesting thing is that it has problems. [sent-20, score-0.336]

7 In particular, you can’t actually set or change the temperature of a physical system. [sent-21, score-0.593]

8 What you can do is couple your system to a heat bath or cold bath and then let the temperature change naturally. [sent-22, score-1.27]

9 That is, you can connect your system to a heat pump or a refrigerator; you can’t stick it in the microwave oven or in a (nonexistent) “microwave cooler.” [sent-23, score-0.378]

10 What does this imply for statistical algorithms for simulating from difficult distributions and computing normalizing constants? [sent-24, score-0.344]

11 Instead, we should alter the temperature in a more natural, physical way (that is, respecting the underlying differential equations) by mathematically coupling the system with a heat bath or cold bath and then letting the temperature evolve as it will. [sent-26, score-1.814]
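
To fix ideas, here is a deliberately crude cartoon of such a coupling, built from my own assumptions and emphatically not Betancourt’s actual update equations: beta is promoted to a dynamical variable, a heat-bath (Langevin-type) term partially refreshes the momentum, and beta advances only when there is kinetic energy available to pay for the move.

import numpy as np

# A deliberately crude cartoon, NOT Betancourt's actual update equations.
# theta, p are 1-D numpy arrays; grad_U(theta, beta) is the gradient of
# the potential U_beta = -log p_beta.

def thermo_step(theta, p, beta, grad_U, eps=0.01, dbeta=1e-3,
                friction=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # (1) Hamiltonian leapfrog piece at the current inverse temperature
    p = p - 0.5 * eps * grad_U(theta, beta)
    theta = theta + eps * p
    p = p - 0.5 * eps * grad_U(theta, beta)
    # (2) heat bath: partial momentum refresh adds or removes kinetic
    # energy, nudging the system back toward equilibrium
    c = np.exp(-friction * eps)
    p = c * p + np.sqrt(1.0 - c * c) * rng.normal(size=p.shape)
    # (3) temperature evolution: advance beta only if there is kinetic
    # energy to pay for the move; in a tail K ~ 0 and the evolution stalls
    K = 0.5 * float(p @ p)
    cost = dbeta                      # hypothetical energy price per step
    if K > cost and beta < 1.0:
        p = p * np.sqrt((K - cost) / K)   # drain the cost from |p|
        beta = min(1.0, beta + dbeta)
    return theta, p, beta

The real method derives the coupled dynamics from the geometry of the joint system; this sketch only mimics the qualitative feedback described next.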

12 This is what Betancourt’s thermodynamic Monte Carlo algorithm does. [sent-27, score-0.304]

13 And here’s some more intuition from Michael on what the algorithm is doing: In the thermodynamic evolution we have three things going on. [sent-29, score-0.522]

14 Secondly we have a heat bath that adds or removes momentum depending on the current potential energy relative to the average potential energy. [sent-31, score-0.749]

15 Thirdly we have a temperature evolution term that leaches kinetic energy in order to fuel changes in temperature. [sent-32, score-1.269]

16 When the system is in equilibrium the kinetic energy is nonzero, some of it can be drained away to change the temperature, and then the system re-equilibrates by adding/removing kinetic energy to balance the average potential energy and then mixing the positions and momenta again. [sent-33, score-1.693]

17 But when the system is far away from equilibrium the kinetic energy will approach zero and the temperature evolution stalls until the positions and momenta equilibrate: the three evolution terms feed back into each other to maintain equilibrium. [sent-34, score-1.807]

18 When the trajectory is in the bulk of probability mass we have equilibrium and the temperature can evolve forward smoothly. [sent-36, score-0.852]

19 When the trajectory approaches a tail all of the energy converts to potential and there’s little kinetic energy to fuel the temperature evolution, which slows to a crawl. [sent-37, score-1.52]

20 This gives the trajectory a chance to bounce back towards the bulk, converting the potential back into kinetic energy which then restarts the temperature evolution. [sent-38, score-1.232]
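
A hypothetical driver for the thermo_step cartoon above, on a toy one-dimensional family that relaxes a broad N(0, 9) base into a N(0, 1) target, illustrates the stalling behavior just described: beta creeps forward while kinetic energy is plentiful and pauses when the trajectory runs out into a tail.

import numpy as np

# Toy family: geometric bridge from a broad N(0, 9) base to a N(0, 1)
# target, so U_beta(theta) = ((1 - beta)/9 + beta) * theta^2 / 2.
def grad_U(theta, beta):
    return ((1.0 - beta) / 9.0 + beta) * theta

rng = np.random.default_rng(1)
theta, p, beta = np.array([3.0]), np.array([0.0]), 0.0
while beta < 1.0:
    theta, p, beta = thermo_step(theta, p, beta, grad_U, rng=rng)
    # beta creeps forward when kinetic energy is plentiful (the bulk)
    # and stalls when the trajectory runs out into a tail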


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('temperature', 0.456), ('kinetic', 0.307), ('bath', 0.219), ('evolution', 0.218), ('energy', 0.207), ('beta', 0.188), ('algorithm', 0.172), ('normalizing', 0.168), ('theta', 0.142), ('heat', 0.137), ('annealing', 0.132), ('tempering', 0.132), ('thermodynamic', 0.132), ('simulated', 0.124), ('distributions', 0.124), ('trajectory', 0.122), ('equilibrium', 0.112), ('betancourt', 0.106), ('system', 0.106), ('momenta', 0.102), ('potential', 0.093), ('slowly', 0.091), ('microwave', 0.088), ('evolve', 0.081), ('constants', 0.081), ('fuel', 0.081), ('bulk', 0.081), ('positions', 0.081), ('fast', 0.075), ('physical', 0.072), ('cold', 0.068), ('change', 0.065), ('complex', 0.065), ('distribution', 0.064), ('gradually', 0.062), ('carlo', 0.058), ('target', 0.055), ('monte', 0.055), ('path', 0.055), ('presents', 0.054), ('simulations', 0.053), ('difficult', 0.052), ('compute', 0.052), ('changing', 0.05), ('pinocchio', 0.047), ('oven', 0.047), ('bumping', 0.047), ('converts', 0.047), ('pathologies', 0.047), ('restarts', 0.047)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999982 2340 andrew gelman stats-2014-05-20-Thermodynamic Monte Carlo: Michael Betancourt’s new method for simulating from difficult distributions and evaluating normalizing constants


2 0.28704336 1501 andrew gelman stats-2012-09-18-More studies on the economic effects of climate change

Introduction: After writing yesterday’s post , I was going through Solomon Hsiang’s blog and found a post pointing to three studies from researchers at business schools: Severe Weather and Automobile Assembly Productivity Gérard P. Cachon, Santiago Gallino and Marcelo Olivares Abstract: It is expected that climate change could lead to an increased frequency of severe weather. In turn, severe weather intuitively should hamper the productivity of work that occurs outside. But what is the effect of rain, snow, fog, heat and wind on work that occurs indoors, such as the production of automobiles? Using weekly production data from 64 automobile plants in the United States over a ten-year period, we find that adverse weather conditions lead to a significant reduction in production. For example, one additional day of high wind advisory by the National Weather Service (i.e., maximum winds generally in excess of 44 miles per hour) reduces production by 26%, which is comparable in order of magnitude t

3 0.22563145 180 andrew gelman stats-2010-08-03-Climate Change News

Introduction: I. State of the Climate report The National Oceanic and Atmospheric Administration recently released their “State of the Climate Report” for 2009. The report has chapters discussing global climate (temperatures, water vapor, cloudiness, alpine glaciers,…); oceans (ocean heat content, sea level, sea surface temperatures, etc.); the arctic (sea ice extent, permafrost, vegetation, and so on); Antarctica (weather observations, sea ice extent,…), and regional climates. NOAA also provides a nice page that lets you display any of 11 relevant time-series datasets (land-surface air temperature, sea level, ocean heat content, September arctic sea-ice extent, sea-surface temperature, northern hemisphere snow cover, specific humidity, glacier mass balance, marine air temperature, tropospheric temperature, and stratospheric temperature). Each of the plots overlays data from several databases (not necessarily independent of each other), and you can select which ones to include or leave

4 0.20246267 945 andrew gelman stats-2011-10-06-W’man < W’pedia, again

Introduction: Blogger Deep Climate looks at another paper by the 2002 recipient of the American Statistical Association’s Founders award. This time it’s not funny, it’s just sad. Here’s Wikipedia on simulated annealing: By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random “nearby” solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly “downhill” as T goes to zero. The allowance for “uphill” moves saves the method from becoming stuck at local minima—which are the bane of greedier methods. And here’s Wegman: During each step of the algorithm, the variable that will eventually represent the minimum is replaced by a random solution that is chosen according to a temperature

5 0.19273058 2364 andrew gelman stats-2014-06-08-Regression and causality and variable ordering

Introduction: Bill Harris wrote in with a question: David Hogg points out in one of his general articles on data modeling that regression assumptions require one to put the variable with the highest variance in the ‘y’ position and the variable you know best (lowest variance) in the ‘x’ position. As he points out, others speak of independent and dependent variables, as if causality determined the form of a regression formula. In a quick scan of ARM and BDA, I don’t see clear advice, but I do see the use of ‘independent’ and ‘dependent.’ I recently did a model over data in which we know the ‘effect’ pretty well (we measure it), while we know the ’cause’ less well (it’s estimated by people who only need to get it approximately correct). A model of the form ’cause ~ effect’ fit visually much better than one of the form ‘effect ~ cause’, but interpreting it seems challenging. For a simplistic example, let the effect be energy use in a building for cooling (E), and let the cause be outdoor ai

6 0.17254838 1010 andrew gelman stats-2011-11-14-“Free energy” and economic resources

7 0.14950579 1201 andrew gelman stats-2012-03-07-Inference = data + model

8 0.13857055 1801 andrew gelman stats-2013-04-13-Can you write a program to determine the causal order?

9 0.13512452 1402 andrew gelman stats-2012-07-01-Ice cream! and temperature

10 0.11782354 906 andrew gelman stats-2011-09-14-Another day, another stats postdoc

11 0.11397856 1089 andrew gelman stats-2011-12-28-Path sampling for models of varying dimension

12 0.11160648 1500 andrew gelman stats-2012-09-17-“2% per degree Celsius . . . the magic number for how worker productivity responds to warm-hot temperatures”

13 0.1094402 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!

14 0.10450456 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!

15 0.1025049 1941 andrew gelman stats-2013-07-16-Priors

16 0.10150379 1486 andrew gelman stats-2012-09-07-Prior distributions for regression coefficients

17 0.10015166 2321 andrew gelman stats-2014-05-05-On deck this week

18 0.099964477 1018 andrew gelman stats-2011-11-19-Tempering and modes

19 0.095842659 1295 andrew gelman stats-2012-05-02-Selection bias, or, How you can think the experts don’t check their models, if you simply don’t look at what the experts actually are doing

20 0.093603954 547 andrew gelman stats-2011-01-31-Using sample size in the prior distribution


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.128), (1, 0.065), (2, 0.007), (3, 0.044), (4, 0.033), (5, 0.002), (6, 0.043), (7, -0.041), (8, -0.027), (9, -0.043), (10, -0.041), (11, 0.012), (12, -0.018), (13, -0.019), (14, -0.048), (15, -0.015), (16, 0.059), (17, 0.007), (18, 0.014), (19, -0.058), (20, 0.01), (21, 0.03), (22, -0.006), (23, 0.024), (24, 0.061), (25, 0.012), (26, -0.02), (27, 0.005), (28, 0.094), (29, 0.048), (30, -0.0), (31, 0.029), (32, -0.037), (33, -0.016), (34, -0.051), (35, -0.054), (36, -0.023), (37, 0.003), (38, 0.008), (39, -0.004), (40, -0.03), (41, 0.069), (42, -0.112), (43, -0.062), (44, -0.03), (45, -0.085), (46, -0.052), (47, -0.004), (48, -0.047), (49, 0.021)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96752572 2340 andrew gelman stats-2014-05-20-Thermodynamic Monte Carlo: Michael Betancourt’s new method for simulating from difficult distributions and evaluating normalizing constants


2 0.72074956 1501 andrew gelman stats-2012-09-18-More studies on the economic effects of climate change


3 0.72006506 180 andrew gelman stats-2010-08-03-Climate Change News


4 0.65988332 1424 andrew gelman stats-2012-07-22-Extreme events as evidence for differences in distributions

Introduction: I think Lawrence Summers would like this paper by James Hansen, Makiko Sato, and Reto Ruedy (link from Krugman via Palko ). Hansen et al. write: The distribution of seasonal mean temperature anomalies has shifted toward higher temperatures and the range of anomalies has increased. An important change is the emergence of a category of summertime extremely hot outliers, more than three standard deviations (σ) warmer than climatology. This hot extreme, which covered much less than 1% of Earth’s surface in the period of climatology, now typically covers about 10% of the land area. The point is that it makes sense to look at the whole distribution, but extreme events provide information also. P.S. Here are some papers by my Columbia colleague Wolfram Schenkler on potential impacts of global warming on agriculture.

5 0.64422154 1829 andrew gelman stats-2013-04-28-Plain old everyday Bayesianism!

Introduction: Sam Behseta writes: There is a report by Martin Tingley and Peter Huybers in Nature on the unprecedented high temperatures at northern latitudes (Russia, Greenland, etc). What is more interesting is that the authors have used a straightforward hierarchical Bayes model, and for the first time (as far as I can remember) the results are reported with a probability attached to them (P>0.99), as opposed to the usual p-value<0.01 business. This might be a sign that editors of big time science journals are welcoming Bayesian approaches. I agree. This is a good sign for statistical communication. Here are the key sentences from the abstract: Here, using a hierarchical Bayesian analysis of instrumental, tree-ring, ice-core and lake-sediment records, we show that the magnitude and frequency of recent warm temperature extremes at high northern latitudes are unprecedented in the past 600 years. The summers of 2005, 2007, 2010 and 2011 were warmer than those of all prior years back to 1

6 0.63713133 1500 andrew gelman stats-2012-09-17-“2% per degree Celsius . . . the magic number for how worker productivity responds to warm-hot temperatures”

7 0.61970121 1201 andrew gelman stats-2012-03-07-Inference = data + model

8 0.59922755 858 andrew gelman stats-2011-08-17-Jumping off the edge of the world

9 0.59052306 2020 andrew gelman stats-2013-09-12-Samplers for Big Science: emcee and BAT

10 0.58894873 2311 andrew gelman stats-2014-04-29-Bayesian Uncertainty Quantification for Differential Equations!

11 0.5871647 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories

12 0.5840435 650 andrew gelman stats-2011-04-05-Monitor the efficiency of your Markov chain sampler using expected squared jumped distance!

13 0.58207154 945 andrew gelman stats-2011-10-06-W’man < W’pedia, again

14 0.58031458 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo

15 0.54148477 1825 andrew gelman stats-2013-04-25-It’s binless! A program for computing normalizing functions

16 0.53940165 1897 andrew gelman stats-2013-06-13-When’s that next gamma-ray blast gonna come, already?

17 0.5382002 1522 andrew gelman stats-2012-10-05-High temperatures cause violent crime and implications for climate change, also some suggestions about how to better summarize these claims

18 0.53269547 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

19 0.53266102 2231 andrew gelman stats-2014-03-03-Running into a Stan Reference by Accident

20 0.53109843 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(5, 0.021), (13, 0.022), (15, 0.037), (16, 0.042), (21, 0.044), (22, 0.049), (24, 0.167), (36, 0.01), (42, 0.01), (48, 0.024), (53, 0.018), (57, 0.031), (59, 0.021), (77, 0.025), (79, 0.027), (82, 0.013), (86, 0.019), (99, 0.277)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98815227 2340 andrew gelman stats-2014-05-20-Thermodynamic Monte Carlo: Michael Betancourt’s new method for simulating from difficult distributions and evaluating normalizing constants


2 0.9666158 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

Introduction: From my new article in the journal Epidemiology: Sander Greenland and Charles Poole accept that P values are here to stay but recognize that some of their most common interpretations have problems. The casual view of the P value as posterior probability of the truth of the null hypothesis is false and not even close to valid under any reasonable model, yet this misunderstanding persists even in high-stakes settings (as discussed, for example, by Greenland in 2011). The formal view of the P value as a probability conditional on the null is mathematically correct but typically irrelevant to research goals (hence, the popularity of alternative—if wrong—interpretations). A Bayesian interpretation based on a spike-and-slab model makes little sense in applied contexts in epidemiology, political science, and other fields in which true effects are typically nonzero and bounded (thus violating both the “spike” and the “slab” parts of the model). I find Greenland and Poole’s perspective t

3 0.96500731 878 andrew gelman stats-2011-08-29-Infovis, infographics, and data visualization: Where I’m coming from, and where I’d like to go

Introduction: I continue to struggle to convey my thoughts on statistical graphics so I’ll try another approach, this time giving my own story. For newcomers to this discussion: the background is that Antony Unwin and I wrote an article on the different goals embodied in information visualization and statistical graphics, but I have difficulty communicating on this point with the infovis people. Maybe if I tell my own story, and then they tell their stories, this will point a way forward to a more constructive discussion. So here goes. I majored in physics in college and I worked in a couple of research labs during the summer. Physicists graph everything. I did most of my plotting on graph paper–this continued through my second year of grad school–and became expert at putting points at 1/5, 2/5, 3/5, and 4/5 between the x and y grid lines. In grad school in statistics, I continued my physics habits and graphed everything I could. I did notice, though, that the faculty and the other

4 0.96350944 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

Introduction: Benedict Carey writes a follow-up article on ESP studies and Bayesian statistics. ( See here for my previous thoughts on the topic.) Everything Carey writes is fine, and he even uses an example I recommended: The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent. This arbitrary cutoff makes sense when the effect being studied is a large one — for example, when measuring the so-called Stroop effect. This effect predicts that naming the color of a word is faster and more accurate when the word and color match (“red” in red letters) than when they do not (“red” in blue letters), and is very strong in almost everyone. “But if the true effect of what you are measuring is small,” sai

5 0.96328771 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

Introduction: Konrad Scheffler writes: I was interested by your paper “Induction and deduction in Bayesian data analysis” and was wondering if you would entertain a few questions: – Under the banner of objective Bayesianism, I would posit something like this as a description of Bayesian inference: “Objective Bayesian probability is not a degree of belief (which would necessarily be subjective) but a measure of the plausibility of a hypothesis, conditional on a formally specified information state. One way of specifying a formal information state is to specify a model, which involves specifying both a prior distribution (typically for a set of unobserved variables) and a likelihood function (typically for a set of observed variables, conditioned on the values of the unobserved variables). Bayesian inference involves calculating the objective degree of plausibility of a hypothesis (typically the truth value of the hypothesis is a function of the variables mentioned above) given such a

6 0.96290791 1413 andrew gelman stats-2012-07-11-News flash: Probability and statistics are hard to understand

7 0.96290022 2055 andrew gelman stats-2013-10-08-A Bayesian approach for peer-review panels? and a speculation about Bruno Frey

8 0.96268004 963 andrew gelman stats-2011-10-18-Question on Type M errors

9 0.96150756 1150 andrew gelman stats-2012-02-02-The inevitable problems with statistical significance and 95% intervals

10 0.96142447 2080 andrew gelman stats-2013-10-28-Writing for free

11 0.96120244 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

12 0.96092999 262 andrew gelman stats-2010-09-08-Here’s how rumors get started: Lineplots, dotplots, and nonfunctional modernist architecture

13 0.96022916 1792 andrew gelman stats-2013-04-07-X on JLP

14 0.95998305 1933 andrew gelman stats-2013-07-10-Please send all comments to /dev/ripley

15 0.95977008 486 andrew gelman stats-2010-12-26-Age and happiness: The pattern isn’t as clear as you might think

16 0.95955396 2208 andrew gelman stats-2014-02-12-How to think about “identifiability” in Bayesian inference?

17 0.95948535 970 andrew gelman stats-2011-10-24-Bell Labs

18 0.95884168 1502 andrew gelman stats-2012-09-19-Scalability in education

19 0.95861995 35 andrew gelman stats-2010-05-16-Another update on the spam email study

20 0.95845294 1460 andrew gelman stats-2012-08-16-“Real data can be a pain”