andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-1753 knowledge-graph by maker-knowledge-mining

1753 andrew gelman stats-2013-03-06-Stan 1.2.0 and RStan 1.2.0


meta info for this blog

Source: html

Introduction: Stan 1.2.0 and RStan 1.2.0 are now available for download. See: http://mc-stan.org/ Here are the highlights. Full Mass Matrix Estimation during Warmup: Yuanjun Gao, a first-year grad student here at Columbia (!), built a regularized mass-matrix estimator. This helps for posteriors with high correlation among parameters and varying scales. We’re still testing this ourselves, so the estimation procedure may change in the future (don’t worry — it satisfies detailed balance as is, but we might be able to make it more computationally efficient in terms of time per effective sample). It’s not the default option. The major reason is that the matrix operations required are expensive, raising the algorithm cost to O(L N n^2), where L is the average number of leapfrog steps, N is the number of iterations, and n is the number of parameters. Yuanjun did a great job with the Cholesky factorizations and implemented this about as efficiently as is possible. (His homework for Andrew’s class w
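
A rough accounting of where that cost comes from (a sketch, assuming a dense mass matrix whose Cholesky factor is reused across the leapfrog steps of an update; L, N, and n are as defined above):

    % one dense matrix-vector product with the (inverse) mass matrix per leapfrog step
    O(n^2)
    % times N iterations with an average of L leapfrog steps each
    O(L N n^2)
    % plus a Cholesky factorization each time the mass matrix is (re)estimated during warmup
    O(n^3)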


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We’re still testing this ourselves, so the estimation procedure may change in the future (don’t worry — it satisfies detailed balance as is, but we might be able to make it more computationally efficient in terms of time per effective sample). [sent-11, score-0.153]

2 The major reason is that the matrix operations required are expensive, raising the algorithm cost to O(L N n^2), where L is the average number of leapfrog steps, N is the number of iterations, and n is the number of parameters. [sent-13, score-0.494]

3 Yuanjun did a great job with the Cholesky factorizations and implemented this about as efficiently as is possible. [sent-14, score-0.076]

4 Cumulative Distribution Functions: The practical upshot is that Stan supports more truncated distributions, and hence more truncated and censored data models. [sent-17, score-0.21] (a worked note and an RStan sketch follow this summary list)

5 Michael Betancourt did the heavy lifting here, which involved a crazy amount of “special function” derivative calculations and implementations. [sent-18, score-0.142]

6 Everyone knows that the derivative of a distribution function with respect to the variate is the density. [sent-19, score-0.437] (the parameter derivatives are the hard part; see the note after this list)

7 We’ll be documenting all of the functions and derivatives in the manual. [sent-21, score-0.471]

8 Daniel Lee generalized the entire density and distribution function testing framework to generate code for tests. [sent-22, score-0.367]

9 We’re doing much more extensive tests of the vectorizations and derivatives. [sent-23, score-0.06]

10 Also, Daniel implemented efficient vectorized derivatives for many more of the density functions. [sent-24, score-0.361]

11 Model Log Probability and Derivatives in R: Jiqiang Guo, who’s at the helm of RStan, wrote code to allow users to access the log probability function in a Stan model and its gradients directly. [sent-25, score-0.454] (see the R sketch after this list)

12 The functions are parameterized with the unconstrained parameterization of a Stan model with support on all of R^N. [sent-26, score-0.361]

13 He also exposed the model functions to convert back and forth between the constrained and unconstrained parameterizations for initialization and interpretation of the samples. [sent-27, score-0.421]

14 Print Posterior Summary Statistics from Command Line: Daniel Lee wrote a program to print a summary of one or more chains from the command line, mirroring the print() command of RStan. [sent-30, score-0.509] (the mirrored print() call is shown after this list)

15 Bug Fixes: We also fixed a bad memory leak in multivariate operations that was introduced in the last release when we optimized the matrix operations for derivative calculations. [sent-31, score-1.336]

16 We also fixed the Windows issue with conservative matrix resizing, which caused multivariate models to crash under Windows at compiler optimization levels above 0. [sent-32, score-0.742]

17 The Future: There has been a lot of activity in various branches that haven’t been merged into the trunk yet, so stay tuned. [sent-33, score-0.06]
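
To spell out the point of items 4 to 7 above, here is a short worked note in generic notation (y, theta, and a known lower truncation point L are my symbols, not the post's):

    % the derivative of a CDF with respect to the variate is just the density
    \frac{\partial}{\partial y} F(y \mid \theta) = p(y \mid \theta)
    % but truncating below at L changes the log density to
    \log p(y \mid \theta) - \log\bigl(1 - F(L \mid \theta)\bigr)
    % so automatic differentiation also needs the parameter derivatives of the CDF
    \frac{\partial}{\partial \theta} F(L \mid \theta)
    % and those are the special-function derivatives that had to be implemented by hand.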
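
Here is a minimal RStan sketch of the kind of truncated-data model this enables. The model, data values, and names (L, y, mu, sigma) are illustrative rather than taken from the post; the T[L,] suffix is what triggers the CDF-based renormalization of the density.

    library(rstan)

    # Illustrative model: observations known to be truncated below at L.
    # The T[L,] suffix divides the normal density by 1 - F(L | mu, sigma).
    model_code <- "
      data {
        int<lower=1> N;
        real L;                  // known lower truncation point
        real<lower=L> y[N];      // observed values, all at least L
      }
      parameters {
        real mu;
        real<lower=0> sigma;
      }
      model {
        for (n in 1:N)
          y[n] ~ normal(mu, sigma) T[L,];
      }
    "

    # Simulate left-truncated data by rejection, then fit.
    set.seed(1)
    y_all <- rnorm(500, mean = 1, sd = 2)
    y <- y_all[y_all > 0]
    fit <- stan(model_code = model_code,
                data = list(N = length(y), L = 0, y = y),
                chains = 4, iter = 1000)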
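
The log probability and transform access described in items 11 to 13 looks roughly like this from R. Here fit is any stanfit object (for instance the one from the sketch above), and the method names log_prob, grad_log_prob, unconstrain_pars, and constrain_pars are my reading of the RStan interface being described; check the stanfit documentation in your installed RStan if they differ.

    # Pick a point on the constrained (interpretable) scale.
    theta <- list(mu = 0.5, sigma = 1.5)

    # Map it to the unconstrained scale Stan actually samples on
    # (this log-transforms sigma so support is all of R^N).
    upars <- unconstrain_pars(fit, theta)

    lp <- log_prob(fit, upars)        # log posterior density (up to a constant)
    g  <- grad_log_prob(fit, upars)   # gradient with respect to the unconstrained parameters

    # And back again, e.g. for initialization or interpreting draws.
    constrain_pars(fit, upars)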
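
Finally, the command-line summary program in item 14 mirrors RStan's print() method for stanfit objects; a one-line example (the probs and digits_summary values are just illustrative choices):

    # Per-parameter posterior summary: mean, se_mean, sd, quantiles, n_eff, Rhat.
    print(fit, probs = c(0.025, 0.5, 0.975), digits_summary = 2)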


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('matrix', 0.321), ('functions', 0.266), ('function', 0.212), ('rstan', 0.183), ('operations', 0.173), ('fixed', 0.17), ('windows', 0.168), ('print', 0.156), ('derivatives', 0.145), ('derivative', 0.142), ('command', 0.137), ('yuanjun', 0.132), ('stan', 0.131), ('bug', 0.126), ('warmup', 0.114), ('multivariate', 0.107), ('log', 0.106), ('leak', 0.105), ('truncated', 0.105), ('daniel', 0.1), ('preventing', 0.095), ('fixes', 0.095), ('unconstrained', 0.095), ('documentation', 0.093), ('gradient', 0.092), ('cumulative', 0.087), ('estimation', 0.085), ('distribution', 0.083), ('chains', 0.079), ('memory', 0.079), ('probability', 0.076), ('optimization', 0.076), ('implemented', 0.076), ('density', 0.072), ('lee', 0.072), ('efficient', 0.068), ('caused', 0.068), ('mass', 0.068), ('release', 0.066), ('full', 0.065), ('trunk', 0.06), ('documenting', 0.06), ('grave', 0.06), ('helm', 0.06), ('vectorizations', 0.06), ('wrapped', 0.06), ('cholesky', 0.06), ('gao', 0.06), ('initialization', 0.06), ('jiqiang', 0.06)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999982 1753 andrew gelman stats-2013-03-06-Stan 1.2.0 and RStan 1.2.0


2 0.37897328 1799 andrew gelman stats-2013-04-12-Stan 1.3.0 and RStan 1.3.0 Ready for Action

Introduction: The Stan Development Team is happy to announce that Stan 1.3.0 and RStan 1.3.0 are available for download. Follow the links on: Stan home page: http://mc-stan.org/ Please let us know if you have problems updating. Here’s the full set of release notes. v1.3.0 (12 April 2013) ====================================================================== Enhancements ---------------------------------- Modeling Language * forward sampling (random draws from distributions) in generated quantities * better error messages in parser * new distributions: + exp_mod_normal + gumbel + skew_normal * new special functions: + owens_t * new broadcast (repetition) functions for vectors, arrays, matrices + rep_array + rep_matrix + rep_row_vector + rep_vector Command-Line * added option to display autocorrelations in the command-line program to print output * changed default point estimation routine from the command line to

3 0.33378801 1627 andrew gelman stats-2012-12-17-Stan and RStan 1.1.0

Introduction: We’re happy to announce the availability of Stan and RStan versions 1.1.0, which are general tools for performing model-based Bayesian inference using the no-U-turn sampler, an adaptive form of Hamiltonian Monte Carlo. Information on downloading and installing and using them is available as always from Stan Home Page: http://mc-stan.org/ Let us know if you have any problems on the mailing lists or at the e-mails linked on the home page (please don’t use this web page). The full release notes follow. (R)Stan Version 1.1.0 Release Notes =================================== -- Backward Compatibility Issue * Categorical distribution recoded to match documentation; it now has support {1,...,K} rather than {0,...,K-1}. * (RStan) change default value of permuted flag from FALSE to TRUE for Stan fit S4 extract() method -- New Features * Conditional (if-then-else) statements * While statements -- New Functions * generalized multiply_lower_tri

4 0.32984087 1710 andrew gelman stats-2013-02-06-The new Stan 1.1.1, featuring Gaussian processes!

Introduction: We just released Stan 1.1.1 and RStan 1.1.1 As usual, you can find download and install instructions at: http://mc-stan.org/ This is a patch release and is fully backward compatible with Stan and RStan 1.1.0. The main thing you should notice is that the multivariate models should be much faster and all the bugs reported for 1.1.0 have been fixed. We’ve also added a bit more functionality. The substantial changes are listed in the following release notes. v1.1.1 (5 February 2012) ====================================================================== Bug Fixes ———————————- * fixed bug in comparison operators, which swapped operator< with operator<= and swapped operator> with operator>= semantics * auto-initialize all variables to prevent segfaults * atan2 gradient propagation fixed * fixed off-by-one in NUTS treedepth bound so NUTS goes at most to specified tree depth rather than specified depth + 1 * various compiler compatibility and minor consistency issues * f

5 0.26829085 2209 andrew gelman stats-2014-02-13-CmdStan, RStan, PyStan v2.2.0

Introduction: The Stan Development Team is happy to announce CmdStan, RStan, and PyStan v2.2.0. As usual, more info is available on the Stan Home Page. This is a minor release with a mix of bug fixes and features. For a full list of changes, please see the v2.2.0 milestone on stan-dev/stan’s issue tracker. Some of the bug fixes and issues are listed below. Bug Fixes increment_log_prob is now vectorized and compiles with vector arguments multinomial random number generator used the wrong size for the return value fixed memory leaks in auto-diff implementation variables can start with the prefix ‘inf’ fixed parameter output order for arrays when using optimization RStan compatibility issue with latest Rcpp 0.11.0 Features suppress command line output with refresh <= 0 added 1 to treedepth to match usual definition of treedepth added distance, squared_distance, diag_pre_multiply, diag_post_multiply to Stan modeling language added a ‘fixed_param’ sampler for

6 0.23249505 2150 andrew gelman stats-2013-12-27-(R-Py-Cmd)Stan 2.1.0

7 0.20728654 2161 andrew gelman stats-2014-01-07-My recent debugging experience

8 0.19387798 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

9 0.18694586 2296 andrew gelman stats-2014-04-19-Index or indicator variables

10 0.17856513 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients

11 0.15746218 2258 andrew gelman stats-2014-03-21-Random matrices in the news

12 0.14830644 1475 andrew gelman stats-2012-08-30-A Stan is Born

13 0.14665151 1807 andrew gelman stats-2013-04-17-Data problems, coding errors…what can be done?

14 0.13581926 2089 andrew gelman stats-2013-11-04-Shlemiel the Software Developer and Unknown Unknowns

15 0.12194574 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories

16 0.12021074 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

17 0.11132383 1580 andrew gelman stats-2012-11-16-Stantastic!

18 0.11097538 266 andrew gelman stats-2010-09-09-The future of R

19 0.10555406 1423 andrew gelman stats-2012-07-21-Optimizing software in C++

20 0.10500392 1748 andrew gelman stats-2013-03-04-PyStan!


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.146), (1, 0.112), (2, -0.002), (3, 0.073), (4, 0.124), (5, 0.062), (6, 0.043), (7, -0.192), (8, -0.099), (9, -0.1), (10, -0.139), (11, -0.019), (12, -0.108), (13, -0.062), (14, 0.051), (15, -0.02), (16, -0.003), (17, 0.076), (18, -0.03), (19, -0.028), (20, 0.019), (21, 0.005), (22, -0.061), (23, -0.004), (24, 0.034), (25, 0.013), (26, 0.006), (27, 0.088), (28, 0.019), (29, -0.029), (30, -0.017), (31, 0.061), (32, 0.004), (33, 0.002), (34, 0.028), (35, -0.098), (36, -0.036), (37, 0.052), (38, 0.012), (39, 0.038), (40, 0.006), (41, -0.067), (42, -0.011), (43, -0.023), (44, 0.003), (45, 0.041), (46, -0.0), (47, -0.002), (48, 0.035), (49, -0.045)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97017163 1753 andrew gelman stats-2013-03-06-Stan 1.2.0 and RStan 1.2.0


2 0.94843519 1799 andrew gelman stats-2013-04-12-Stan 1.3.0 and RStan 1.3.0 Ready for Action

Introduction: The Stan Development Team is happy to announce that Stan 1.3.0 and RStan 1.3.0 are available for download. Follow the links on: Stan home page: http://mc-stan.org/ Please let us know if you have problems updating. Here’s the full set of release notes. v1.3.0 (12 April 2013) ====================================================================== Enhancements ---------------------------------- Modeling Language * forward sampling (random draws from distributions) in generated quantities * better error messages in parser * new distributions: + exp_mod_normal + gumbel + skew_normal * new special functions: + owens_t * new broadcast (repetition) functions for vectors, arrays, matrices + rep_array + rep_matrix + rep_row_vector + rep_vector Command-Line * added option to display autocorrelations in the command-line program to print output * changed default point estimation routine from the command line to

3 0.93866092 1710 andrew gelman stats-2013-02-06-The new Stan 1.1.1, featuring Gaussian processes!

Introduction: We just released Stan 1.1.1 and RStan 1.1.1 As usual, you can find download and install instructions at: http://mc-stan.org/ This is a patch release and is fully backward compatible with Stan and RStan 1.1.0. The main thing you should notice is that the multivariate models should be much faster and all the bugs reported for 1.1.0 have been fixed. We’ve also added a bit more functionality. The substantial changes are listed in the following release notes. v1.1.1 (5 February 2012) ====================================================================== Bug Fixes ———————————- * fixed bug in comparison operators, which swapped operator< with operator<= and swapped operator> with operator>= semantics * auto-initialize all variables to prevent segfaults * atan2 gradient propagation fixed * fixed off-by-one in NUTS treedepth bound so NUTS goes at most to specified tree depth rather than specified depth + 1 * various compiler compatibility and minor consistency issues * f

4 0.89511681 1627 andrew gelman stats-2012-12-17-Stan and RStan 1.1.0

Introduction: We’re happy to announce the availability of Stan and RStan versions 1.1.0, which are general tools for performing model-based Bayesian inference using the no-U-turn sampler, an adaptive form of Hamiltonian Monte Carlo. Information on downloading and installing and using them is available as always from Stan Home Page: http://mc-stan.org/ Let us know if you have any problems on the mailing lists or at the e-mails linked on the home page (please don’t use this web page). The full release notes follow. (R)Stan Version 1.1.0 Release Notes =================================== -- Backward Compatibility Issue * Categorical distribution recoded to match documentation; it now has support {1,...,K} rather than {0,...,K-1}. * (RStan) change default value of permuted flag from FALSE to TRUE for Stan fit S4 extract() method -- New Features * Conditional (if-then-else) statements * While statements -- New Functions * generalized multiply_lower_tri

5 0.84409916 2150 andrew gelman stats-2013-12-27-(R-Py-Cmd)Stan 2.1.0

Introduction: We’re happy to announce the release of Stan C++, CmdStan, RStan, and PyStan 2.1.0.  This is a minor feature release, but it is also an important bug fix release.  As always, the place to start is the (all new) Stan web pages: http://mc-stan.org   Major Bug in 2.0.0, 2.0.1 Stan 2.0.0 and Stan 2.0.1 introduced a bug in the implementation of the NUTS criterion that led to poor tail exploration and thus biased the posterior uncertainty downward.  There was no bug in NUTS in Stan 1.3 or earlier, and 2.1 has been extensively tested and tests put in place so this problem will not recur. If you are using Stan 2.0.0 or 2.0.1, you should switch to 2.1.0 as soon as possible and rerun any models you care about.   New Target Acceptance Rate Default for Stan 2.1.0 Another big change aimed at reducing posterior estimation bias was an increase in the target acceptance rate during adaptation from 0.65 to 0.80.  The bad news is that iterations will take around 50% longer

6 0.80604291 2209 andrew gelman stats-2014-02-13-CmdStan, RStan, PyStan v2.2.0

7 0.77609897 2003 andrew gelman stats-2013-08-30-Stan Project: Continuous Relaxations for Discrete MRFs

8 0.77150458 1036 andrew gelman stats-2011-11-30-Stan uses Nuts!

9 0.76675129 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

10 0.76360291 2161 andrew gelman stats-2014-01-07-My recent debugging experience

11 0.72772062 1475 andrew gelman stats-2012-08-30-A Stan is Born

12 0.71290773 1580 andrew gelman stats-2012-11-16-Stantastic!

13 0.68610859 2242 andrew gelman stats-2014-03-10-Stan Model of the Week: PK Calculation of IV and Oral Dosing

14 0.68473351 2020 andrew gelman stats-2013-09-12-Samplers for Big Science: emcee and BAT

15 0.6816293 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients

16 0.66505331 2231 andrew gelman stats-2014-03-03-Running into a Stan Reference by Accident

17 0.65748668 1748 andrew gelman stats-2013-03-04-PyStan!

18 0.63754231 2291 andrew gelman stats-2014-04-14-Transitioning to Stan

19 0.63571876 712 andrew gelman stats-2011-05-14-The joys of working in the public domain

20 0.63554394 2332 andrew gelman stats-2014-05-12-“The results (not shown) . . .”


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(6, 0.036), (9, 0.012), (13, 0.015), (15, 0.01), (16, 0.048), (21, 0.024), (24, 0.157), (35, 0.036), (36, 0.029), (44, 0.025), (53, 0.014), (59, 0.026), (65, 0.026), (66, 0.011), (69, 0.023), (73, 0.011), (79, 0.014), (86, 0.046), (91, 0.122), (98, 0.014), (99, 0.161)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93674487 1753 andrew gelman stats-2013-03-06-Stan 1.2.0 and RStan 1.2.0


2 0.8993274 920 andrew gelman stats-2011-09-22-Top 10 blog obsessions

Introduction: I was just thinking about this because we seem to be circling around the same few topics over and over (while occasionally slipping in some new statistical ideas): 10. Wegman 9. Hipmunk 8. Dennis the dentist 7. Freakonomics 6. The difference between significant and non-significant is not itself statistically significant 5. Just use a hierarchical model already! 4. Innumerate journalists who think that presidential elections are just like high school 3. A graph can be pretty but convey essentially no information 2. Stan is coming 1. Clippy! Did I miss anything important?

3 0.88880873 1186 andrew gelman stats-2012-02-27-Confusion from illusory precision

Introduction: When I posted this link to Dean Foster’s rants, some commenters pointed out this linked claim by famed statistician/provocateur Bjorn Lomborg: If [writes Lomborg] you reduce your child’s intake of fruits and vegetables by just 0.03 grams a day (that’s the equivalent of half a grain of rice) when you opt for more expensive organic produce, the total risk of cancer goes up, not down. Omit buying just one apple every 20 years because you have gone organic, and your child is worse off. Let’s unpack Lomborg’s claim. I don’t know anything about the science of pesticides and cancer, but can he really be so sure that the effects are so small as to be comparable to the health effects of eating “just one apple every 20 years”? I can’t believe you could estimate effects to anything like that precision. I can’t believe anyone has such a precise estimate of the health effects of pesticides, and also I can’t believe anyone has such a precise effect of the health effect of eating an app

4 0.88637888 637 andrew gelman stats-2011-03-29-Unfinished business

Introduction: This blog by J. Robert Lennon on abandoned novels made me think of the more general topic of abandoned projects. I seem to recall George V. Higgins writing that he’d written and discarded 14 novels or so before publishing The Friends of Eddie Coyle. I haven’t abandoned any novels but I’ve abandoned lots of research projects (and also have started various projects that there’s no way I’ll finish). If you think about the decisions involved, it really has to be that way. You learn while you’re working on a project whether it’s worth continuing. Sometimes I’ve put in the hard work and pushed a project to completion, published the article, and then I think . . . what was the point? The modal number of citations of our articles is zero, etc.

5 0.88308001 736 andrew gelman stats-2011-05-29-Response to “Why Tables Are Really Much Better Than Graphs”

Introduction: Ellen Barnes writes, in response to my paper and the associated discussion at JCGS , I [Barnes] am an industry statistician. I will agree that a table of numbers is essential in an academic publication. The readers want to be able to sit down with the numbers, and make sure they can replicate the results. However, graphics communicate faster – especially when a group of engineers are trying to figure out what is going on. Or, there are times when I have just a couple minutes to convey a complex relationship to a director or a vice-president. One example from this week: We are putting a new subsystem into some of our vehicles – using new technology. The technical specialist leading the project wanted to double check to make sure the system was working properly and finalize the calibration procedure. He mentioned a concern that was nagging him. I plotted his data in a matrix plot (a matrix of two dimensional scatter plots). We immediately keyed in on one plot that showed s

6 0.86744142 53 andrew gelman stats-2010-05-26-Tumors, on the left, or on the right?

7 0.84902239 1365 andrew gelman stats-2012-06-04-Question 25 of my final exam for Design and Analysis of Sample Surveys

8 0.84532344 1475 andrew gelman stats-2012-08-30-A Stan is Born

9 0.843018 1528 andrew gelman stats-2012-10-10-My talk at MIT on Thurs 11 Oct

10 0.84239745 1367 andrew gelman stats-2012-06-05-Question 26 of my final exam for Design and Analysis of Sample Surveys

11 0.84138089 2209 andrew gelman stats-2014-02-13-CmdStan, RStan, PyStan v2.2.0

12 0.83814073 1799 andrew gelman stats-2013-04-12-Stan 1.3.0 and RStan 1.3.0 Ready for Action

13 0.83631152 953 andrew gelman stats-2011-10-11-Steve Jobs’s cancer and science-based medicine

14 0.83619148 779 andrew gelman stats-2011-06-25-Avoiding boundary estimates using a prior distribution as regularization

15 0.83573395 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

16 0.83533198 846 andrew gelman stats-2011-08-09-Default priors update?

17 0.83453631 1130 andrew gelman stats-2012-01-20-Prior beliefs about locations of decision boundaries

18 0.83366841 1240 andrew gelman stats-2012-04-02-Blogads update

19 0.83317208 1851 andrew gelman stats-2013-05-11-Actually, I have no problem with this graph

20 0.83248776 1368 andrew gelman stats-2012-06-06-Question 27 of my final exam for Design and Analysis of Sample Surveys