andrew_gelman_stats andrew_gelman_stats-2011 andrew_gelman_stats-2011-844 knowledge-graph by maker-knowledge-mining

844 andrew gelman stats-2011-08-07-Update on the new Handbook of MCMC


meta info for this blog

Source: html

Introduction: It’s edited by Steve Brooks, Galin Jones, Xiao-Li Meng, and myself. Here’s the information and some sample chapters (including my own chapter with Ken Shirley on inference and monitoring convergence and Radford’s instant classic on Hamiltonian Monte Carlo). Sorry about the $100 price tag–nobody asked me about that! But if you’re doing these computations as part of your work, I think the book will be well worth it.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 It’s edited by Steve Brooks, Galin Jones, Xiao-Li Meng, and myself. [sent-1, score-0.2]

2 Here’s the information and some sample chapters (including my own chapter with Ken Shirley on inference and monitoring convergence and Radford’s instant classic on Hamiltonian Monte Carlo). [sent-2, score-1.354]

3 Sorry about the $100 price tag–nobody asked me about that! [sent-3, score-0.262]

4 But if you’re doing these computations as part of your work, I think the book will be well worth it. [sent-4, score-0.582]
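The summary above is extractive: each sentence is scored by the tf-idf weights of the words it contains, and the highest-scoring sentences are kept. A minimal sketch of that idea in plain Python, over an invented toy set of sentences (an illustration of the technique, not the actual maker-knowledge-mining code):

```python
import math
from collections import Counter

def rank_sentences(sentences):
    """Score tokenized sentences by the summed tf-idf weight of their words,
    treating each sentence as its own 'document' for the idf term."""
    n = len(sentences)
    df = Counter()
    for s in sentences:
        df.update(set(s))          # document frequency of each word
    scores = []
    for s in sentences:
        tf = Counter(s)            # term frequency within the sentence
        scores.append(sum(tf[w] * math.log(n / df[w]) for w in tf))
    return scores

# invented toy sentences loosely echoing the post
sents = [
    "it is edited by steve brooks galin jones and xiao li meng".split(),
    "here is the information and some sample chapters".split(),
    "sorry about the price tag".split(),
    "the book will be well worth it".split(),
]
scores = rank_sentences(sents)
best = max(range(len(sents)), key=scores.__getitem__)
```

A real pipeline would compute idf over the whole blog corpus rather than within one post, which is why the scores shown above need not match this toy version.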


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('galin', 0.293), ('tag', 0.247), ('radford', 0.231), ('instant', 0.231), ('shirley', 0.231), ('jones', 0.231), ('computations', 0.226), ('ken', 0.222), ('monitoring', 0.219), ('meng', 0.207), ('edited', 0.2), ('convergence', 0.196), ('hamiltonian', 0.192), ('brooks', 0.182), ('carlo', 0.182), ('monte', 0.173), ('chapters', 0.169), ('steve', 0.166), ('sorry', 0.162), ('price', 0.153), ('classic', 0.146), ('nobody', 0.131), ('chapter', 0.124), ('asked', 0.109), ('worth', 0.108), ('sample', 0.099), ('inference', 0.093), ('including', 0.093), ('part', 0.079), ('information', 0.077), ('book', 0.076), ('well', 0.061), ('re', 0.052), ('work', 0.05), ('think', 0.032)]
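The list above gives each word's tf-idf weight within this post; similarity between posts is then the cosine between their tf-idf vectors. A hedged sketch in plain Python, over an invented three-document corpus (not the pipeline's actual implementation):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """tf-idf vectors (as sparse dicts) for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))        # document frequency of each word
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# invented toy corpus
docs = [
    "handbook of markov chain monte carlo".split(),
    "hamiltonian monte carlo handbook chapter".split(),
    "education policy needs systematic evaluation".split(),
]
vecs = tfidf_vectors(docs)
sims = [cosine(vecs[0], v) for v in vecs]   # similarity of doc 0 to each doc
```

Under this scheme the second document, which shares "handbook", "monte", and "carlo" with the first, scores well above the unrelated third one, mirroring why the MCMC handbook posts dominate the lists below.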

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 844 andrew gelman stats-2011-08-07-Update on the new Handbook of MCMC


2 0.33230984 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo

Introduction: Galin Jones, Steve Brooks, Xiao-Li Meng and I edited a handbook of Markov Chain Monte Carlo that has just been published . My chapter (with Kenny Shirley) is here , and it begins like this: Convergence of Markov chain simulations can be monitored by measuring the diffusion and mixing of multiple independently-simulated chains, but different levels of convergence are appropriate for different goals. When considering inference from stochastic simulation, we need to separate two tasks: (1) inference about parameters and functions of parameters based on broad characteristics of their distribution, and (2) more precise computation of expectations and other functions of probability distributions. For the first task, there is a natural limit to precision beyond which additional simulations add essentially nothing; for the second task, the appropriate precision must be decided from external considerations. We illustrate with an example from our current research, a hierarchical model of t

3 0.17121848 1339 andrew gelman stats-2012-05-23-Learning Differential Geometry for Hamiltonian Monte Carlo

Introduction: You can get a taste of Hamiltonian Monte Carlo (HMC) by reading the very gentle introduction in David MacKay’s general text on information theory: MacKay, D. 2003. Information Theory, Inference, and Learning Algorithms . Cambridge University Press. [see Chapter 31, which is relatively standalone and can be downloaded separately.] Follow this up with Radford Neal’s much more thorough introduction to HMC: Neal, R. 2011. MCMC Using Hamiltonian Dynamics . In Brooks, Gelman, Jones and Meng, eds., Handbook of Markov Chain Monte Carlo . Chapman and Hall/CRC Press. To understand why HMC works and set yourself on the path to understanding generalizations like Riemann manifold HMC , you’ll need to know a bit about differential geometry. I really liked the combination of these two books: Magnus, J. R. and H. Neudecker. 2007. Matrix Differential Calculus with Application in Statistics and Econometrics . 3rd Edition. Wiley? and Leimkuhler, B. and S.

4 0.12927213 639 andrew gelman stats-2011-03-31-Bayes: radical, liberal, or conservative?

Introduction: Radford writes : The word “conservative” gets used many ways, for various political purposes, but I would take it’s basic meaning to be someone who thinks there’s a lot of wisdom in traditional ways of doing things, even if we don’t understand exactly why those ways are good, so we should be reluctant to change unless we have a strong argument that some other way is better. This sounds very Bayesian, with a prior reducing the impact of new data. I agree completely, and I think Radford will very much enjoy my article with Aleks Jakulin , “Bayes: radical, liberal, or conservative?” Radford’s comment also fits with my increasing inclination to use informative prior distributions.

5 0.12819815 1772 andrew gelman stats-2013-03-20-Stan at Google this Thurs and at Berkeley this Fri noon

Introduction: Michael Betancourt will be speaking at Google and at the University of California, Berkeley. The Google talk is closed to outsiders (but if you work at Google, you should go!); the Berkeley talk is open to all: Friday March 22, 12:10 pm, Evans Hall 1011. Title of talk: Stan : Practical Bayesian Inference with Hamiltonian Monte Carlo Abstract: Practical implementations of Bayesian inference are often limited to approximation methods that only slowly explore the posterior distribution. By taking advantage of the curvature of the posterior, however, Hamiltonian Monte Carlo (HMC) efficiently explores even the most highly contorted distributions. In this talk I will review the foundations of and recent developments within HMC, concluding with a discussion of Stan, a powerful inference engine that utilizes HMC, automatic differentiation, and adaptive methods to minimize user input. This is cool stuff. And he’ll be showing the whirlpool movie!

6 0.11939935 1729 andrew gelman stats-2013-02-20-My beef with Brooks: the alternative to “good statistics” is not “no statistics,” it’s “bad statistics”

7 0.1122712 1749 andrew gelman stats-2013-03-04-Stan in L.A. this Wed 3:30pm

8 0.099079043 2003 andrew gelman stats-2013-08-30-Stan Project: Continuous Relaxations for Discrete MRFs

9 0.088975966 1458 andrew gelman stats-2012-08-14-1.5 million people were told that extreme conservatives are happier than political moderates. Approximately .0001 million Americans learned that the opposite is true.

10 0.084555209 1271 andrew gelman stats-2012-04-20-Education could use some systematic evaluation

11 0.084525473 2280 andrew gelman stats-2014-04-03-As the boldest experiment in journalism history, you admit you made a mistake

12 0.083421148 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor

13 0.082889535 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories

14 0.080305383 467 andrew gelman stats-2010-12-14-Do we need an integrated Bayesian-likelihood inference?

15 0.080102079 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

16 0.079369456 1025 andrew gelman stats-2011-11-24-Always check your evidence

17 0.077786729 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!

18 0.072508529 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

19 0.071296513 1691 andrew gelman stats-2013-01-25-Extreem p-values!

20 0.068924293 499 andrew gelman stats-2011-01-03-5 books


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.069), (1, 0.018), (2, -0.02), (3, 0.027), (4, -0.001), (5, 0.042), (6, 0.006), (7, -0.021), (8, 0.001), (9, -0.043), (10, -0.015), (11, -0.025), (12, -0.055), (13, 0.024), (14, 0.055), (15, -0.003), (16, -0.023), (17, 0.005), (18, 0.041), (19, -0.017), (20, 0.009), (21, -0.011), (22, 0.048), (23, 0.021), (24, 0.039), (25, 0.029), (26, -0.008), (27, 0.006), (28, 0.045), (29, 0.031), (30, -0.039), (31, 0.002), (32, 0.017), (33, 0.004), (34, -0.017), (35, -0.005), (36, -0.002), (37, -0.035), (38, 0.013), (39, 0.041), (40, -0.029), (41, 0.015), (42, -0.011), (43, -0.022), (44, -0.006), (45, 0.023), (46, -0.004), (47, -0.026), (48, 0.029), (49, -0.022)]
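The weights above place this post in an LSI (latent semantic indexing) topic space: a truncated SVD of the term-document matrix, with documents compared by cosine similarity in the reduced space. A small sketch of the idea, assuming numpy is available and using an invented toy count matrix (not the pipeline's actual code):

```python
import numpy as np

# invented toy term-document count matrix: rows = terms, columns = documents
terms = ["monte", "carlo", "hamiltonian", "squirrel", "dispersal"]
X = np.array([
    [2.0, 1.0, 0.0],   # monte
    [2.0, 1.0, 0.0],   # carlo
    [0.0, 1.0, 0.0],   # hamiltonian
    [0.0, 0.0, 2.0],   # squirrel
    [0.0, 0.0, 1.0],   # dispersal
])

# truncated SVD: X ~ U_k S_k V_k^T; the columns of S_k V_k^T are the
# documents embedded in a k-dimensional topic space
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T       # shape (n_docs, k)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_01 = cos(doc_topics[0], doc_topics[1])     # both Monte Carlo documents
sim_02 = cos(doc_topics[0], doc_topics[2])     # unrelated documents
```

The two Monte Carlo documents land on the same latent direction (similarity near 1), while the squirrel document lands on an orthogonal one (similarity near 0), which is the behavior the simValue column reflects.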

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94150871 844 andrew gelman stats-2011-08-07-Update on the new Handbook of MCMC


2 0.73941588 1339 andrew gelman stats-2012-05-23-Learning Differential Geometry for Hamiltonian Monte Carlo


3 0.6923973 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo


4 0.6017344 1749 andrew gelman stats-2013-03-04-Stan in L.A. this Wed 3:30pm

Introduction: Michael Betancourt will be speaking at UCLA: The location for refreshment is in room 51-254 CHS at 3:00 PM. The place for the seminar is at CHS 33-105A at 3:30pm – 4:30pm, Wed 6 Mar. ["CHS" stands for Center for Health Sciences, the building of the UCLA schools of medicine and public health. Here's a map with directions .] Title of talk: Stan : Practical Bayesian Inference with Hamiltonian Monte Carlo Abstract: Practical implementations of Bayesian inference are often limited to approximation methods that only slowly explore the posterior distribution. By taking advantage of the curvature of the posterior, however, Hamiltonian Monte Carlo (HMC) efficiently explores even the most highly contorted distributions. In this talk I will review the foundations of and recent developments within HMC, concluding with a discussion of Stan, a powerful inference engine that utilizes HMC, automatic differentiation, and adaptive methods to minimize user input. This is cool stuff.

5 0.55935812 2277 andrew gelman stats-2014-03-31-The most-cited statistics papers ever

Introduction: Robert Grant has a list . I’ll just give the ones with more than 10,000 Google Scholar cites: Cox (1972) Regression and life tables: 35,512 citations. Dempster, Laird, Rubin (1977) Maximum likelihood from incomplete data via the EM algorithm: 34,988 Bland & Altman (1986) Statistical methods for assessing agreement between two methods of clinical measurement: 27,181 Geman & Geman (1984) Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images: 15,106 We can find some more via searching Google scholar for familiar names and topics; thus: Metropolis et al. (1953) Equation of state calculations by fast computing machines: 26,000 Benjamini and Hochberg (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing: 21,000 White (1980) A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity: 18,000 Heckman (1977) Sample selection bias as a specification error:

6 0.55649948 1772 andrew gelman stats-2013-03-20-Stan at Google this Thurs and at Berkeley this Fri noon

7 0.52130389 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories

8 0.51518953 127 andrew gelman stats-2010-07-04-Inequality and health

9 0.51503211 2349 andrew gelman stats-2014-05-26-WAIC and cross-validation in Stan!

10 0.51427317 427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials

11 0.51417929 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

12 0.50480157 1642 andrew gelman stats-2012-12-28-New book by Stef van Buuren on missing-data imputation looks really good!

13 0.50222111 2021 andrew gelman stats-2013-09-13-Swiss Jonah Lehrer

14 0.49936324 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients

15 0.49770612 2231 andrew gelman stats-2014-03-03-Running into a Stan Reference by Accident

16 0.49518529 1188 andrew gelman stats-2012-02-28-Reference on longitudinal models?

17 0.49082622 2360 andrew gelman stats-2014-06-05-Identifying pathways for managing multiple disturbances to limit plant invasions

18 0.4906905 984 andrew gelman stats-2011-11-01-David MacKay sez . . . 12??

19 0.48856845 986 andrew gelman stats-2011-11-01-MacKay update: where 12 comes from

20 0.48793235 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(6, 0.04), (15, 0.037), (16, 0.098), (24, 0.086), (29, 0.037), (39, 0.186), (53, 0.044), (82, 0.033), (84, 0.032), (86, 0.067), (99, 0.203)]
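Here the post is represented by its LDA topic mixture, and posts are compared via those mixtures. A compact collapsed Gibbs sampler for LDA fits in pure Python; the corpus, hyperparameters, and iteration count below are all invented for illustration and are not the pipeline's actual settings:

```python
import random
from collections import defaultdict

random.seed(0)

# invented toy corpus and hyperparameters
docs = [
    "monte carlo monte carlo hamiltonian".split(),
    "carlo monte hamiltonian monte".split(),
    "squirrel dispersal squirrel population".split(),
]
K, alpha, beta = 2, 0.1, 0.01
V = len({w for d in docs for w in d})

# initialize topic assignments and count tables for collapsed Gibbs sampling
z = [[random.randrange(K) for _ in d] for d in docs]
ndk = [[0] * K for _ in docs]                 # doc-topic counts
nkw = [defaultdict(int) for _ in range(K)]    # topic-word counts
nk = [0] * K                                  # topic totals
for di, d in enumerate(docs):
    for wi, w in enumerate(d):
        t = z[di][wi]
        ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1

for _ in range(200):                          # Gibbs sweeps
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            t = z[di][wi]                     # remove this token's topic
            ndk[di][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            # full conditional p(z = k | rest), up to a constant
            weights = [(ndk[di][k] + alpha) * (nkw[k][w] + beta)
                       / (nk[k] + V * beta) for k in range(K)]
            t = random.choices(range(K), weights)[0]
            z[di][wi] = t
            ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1

# posterior mean doc-topic mixtures (the analog of the topicWeight list above)
theta = [[(ndk[di][k] + alpha) / (len(d) + K * alpha) for k in range(K)]
         for di, d in enumerate(docs)]
```

Each row of `theta` sums to one, and the cosine or Hellinger distance between rows would give the simValue-style scores listed below.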

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.91737342 844 andrew gelman stats-2011-08-07-Update on the new Handbook of MCMC


2 0.8877666 1343 andrew gelman stats-2012-05-25-And now, here’s something we hope you’ll really like

Introduction: This came in the email: Postdoctoral Researcher (3 years) in State-Space Modeling of Animal Movement and Population Dynamics in Universities of Turku and Helsinki, Finland We seek for a statistician/mathematician with experience in ecological modeling or an ecologist with strong quantitative training to join an interdisciplinary research team focusing on dispersal and dynamics of the Siberian flying squirrel (Pteromys volans). The Postdoctoral Researcher will develop modeling approaches (from individual based models to population level models) to assess the dispersal and population dynamics of the flying squirrel. A key challenge will be the integration of different kinds of data (census data, telemetry data, mark-recapture data, life-history data, and data on environmental covariates such as forest structure) into the modeling framework using Bayesian State-Space models or other such approaches. The project will be supervised by Dr. Vesa Selonen (a flying squirrel specialist;

3 0.86170924 443 andrew gelman stats-2010-12-02-Automating my graphics advice

Introduction: After seeing this graph : I have the following message for Sharad: Rotate the graph 90 degrees so you can see the words. Also you can ditch the lines. Then what you have is a dotplot, following the principles of Cleveland (1985). You can lay out a few on one page to see some interactions with demographics. The real challenge here . . . . . . is to automate this sort of advice. Or maybe we just need a really nice dotplot() function and enough examples, and people will start doing it? P.S. Often a lineplot is better. See here for a discussion of another Sharad example.

4 0.79812688 1622 andrew gelman stats-2012-12-14-Can gambling addicts be identified in gambling venues?

Introduction: Mark Griffiths, a psychologist who apparently is Europe’s only Professor of Gambling Studies, writes: You made the comment about how difficult it is to spot problem gamblers. I and a couple of colleagues [Paul Delfabbro and Daniel Kingjust] published this review of all the research done on spotting problem gamblers in online and offline gaming venues (attached) that I covered in one of my recent blogs .

5 0.7968756 31 andrew gelman stats-2010-05-13-Visualization in 1939

Introduction: Willard Cope Brinton’s second book Graphic Presentation (1939) surprised me with the quality of its graphics. Prof. Michael Stoll has some scans at Flickr . For example: The whole book can be downloaded (in a worse resolution) from Archive.Org .

6 0.78897381 441 andrew gelman stats-2010-12-01-Mapmaking software

7 0.78766114 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo

8 0.78102863 1157 andrew gelman stats-2012-02-07-Philosophy of Bayesian statistics: my reactions to Hendry

9 0.77705133 1927 andrew gelman stats-2013-07-05-“Numbersense: How to use big data to your advantage”

10 0.77229446 334 andrew gelman stats-2010-10-11-Herman Chernoff used to do that too; also, some puzzlement over another’s puzzlement over another’s preferences

11 0.76967931 935 andrew gelman stats-2011-10-01-When should you worry about imputed data?

12 0.76890898 2248 andrew gelman stats-2014-03-15-Problematic interpretations of confidence intervals

13 0.76507103 1681 andrew gelman stats-2013-01-19-Participate in a short survey about the weight of evidence provided by statistics

14 0.76424015 2182 andrew gelman stats-2014-01-22-Spell-checking example demonstrates key aspects of Bayesian data analysis

15 0.76366079 2004 andrew gelman stats-2013-09-01-Post-publication peer review: How it (sometimes) really works

16 0.76191187 155 andrew gelman stats-2010-07-19-David Blackwell

17 0.75873572 1924 andrew gelman stats-2013-07-03-Kuhn, 1-f noise, and the fractal nature of scientific revolutions

18 0.758237 1980 andrew gelman stats-2013-08-13-Test scores and grades predict job performance (but maybe not at Google)

19 0.75755513 1125 andrew gelman stats-2012-01-18-Beautiful Line Charts

20 0.75647986 1278 andrew gelman stats-2012-04-23-“Any old map will do” meets “God is in every leaf of every tree”