andrew_gelman_stats andrew_gelman_stats-2010 andrew_gelman_stats-2010-114 knowledge-graph by maker-knowledge-mining

114 andrew gelman stats-2010-06-28-More on Bayesian deduction-induction


meta info for this blog

Source: html

Introduction: Kevin Bryan wrote: I read your new article on deduction/induction under Bayes. There are a couple interesting papers from economic decision theory which are related that you might find interesting. Samuelson et al have a (very) recent paper about what happens when you have some Bayesian and some non-Bayesian hypotheses. (I mentioned this one on my blog earlier this year.) Essentially, the Bayesian hypotheses are forced to “make predictions” in every future period (“if the unemployment rate is x%, the president is reelected with pr=x), whereas other forms of reasoning (say, analogies: “If the unemployment rate is above 10%, the president will not be reelected”). Imagine you have some prior over, say, the economy and elections, with 99.9% of the hypotheses being Bayesian and the rest being analogies as above. Then 100 years from now, because the analogies are so hard to refute, using deduction will push the proportion of Bayesian hypotheses toward zero. There is a


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 There are a couple interesting papers from economic decision theory which are related that you might find interesting. [sent-2, score-0.255]

2 Samuelson et al have a (very) recent paper about what happens when you have some Bayesian and some non-Bayesian hypotheses. [sent-3, score-0.096]

3 ) Essentially, the Bayesian hypotheses are forced to “make predictions” in every future period (“if the unemployment rate is x%, the president is reelected with pr=x), whereas other forms of reasoning (say, analogies: “If the unemployment rate is above 10%, the president will not be reelected”). [sent-5, score-1.663]

4 Imagine you have some prior over, say, the economy and elections, with 99. [sent-6, score-0.069]

5 9% of the hypotheses being Bayesian and the rest being analogies as above. [sent-7, score-0.648]

6 Then 100 years from now, because the analogies are so hard to refute, using deduction will push the proportion of Bayesian hypotheses toward zero. [sent-8, score-0.903]

7 That is, if scientists are not totally honest (e. [sent-10, score-0.274]

8 , they don’t report negative results), it turns out nonobvious (and in many cases outright false) that deducting on hypotheses leads to increased knowledge about the “true state of the world”. [sent-12, score-0.765]

9 You might find this line of research interesting. [sent-13, score-0.199]

10 These ideas sound appealing but I don’t think they really work. [sent-16, score-0.157]

11 As Cosma and I discuss in our article (and as we further discuss elsewhere, including chapter 6 of BDA), I think that Bayesian inference works within a model, but not so well for comparing models. [sent-17, score-0.21]
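The sentence scores above come from tfidf-based extractive summarization: each sentence is scored by the tfidf weights of its terms, and the top-scoring sentences are kept. A minimal sketch of that idea, assuming scikit-learn (the sentences and the summed-weight scoring rule are illustrative choices, not necessarily what this pipeline used):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "There are a couple interesting papers from economic decision theory.",
    "Samuelson et al have a recent paper on non-Bayesian hypotheses.",
    "These ideas sound appealing but I don't think they really work.",
]

# Fit tfidf over the sentences, then score each sentence by the sum of
# the tfidf weights of its terms (one common scoring choice).
vec = TfidfVectorizer()
X = vec.fit_transform(sentences)
scores = X.sum(axis=1).A1  # one score per sentence

# Rank sentences by score, highest first, as in the summary above.
ranked = sorted(zip(scores, sentences), reverse=True)
for score, sent in ranked:
    print(f"{score:.3f}  {sent}")
```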


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('analogies', 0.332), ('hypotheses', 0.316), ('reelected', 0.275), ('bayesian', 0.25), ('unemployment', 0.16), ('related', 0.155), ('president', 0.144), ('deducting', 0.138), ('northwestern', 0.13), ('samuelson', 0.12), ('bryan', 0.116), ('refute', 0.113), ('premise', 0.111), ('rate', 0.109), ('discuss', 0.105), ('scientists', 0.104), ('descriptions', 0.103), ('find', 0.1), ('outright', 0.1), ('line', 0.099), ('deduction', 0.098), ('forced', 0.097), ('al', 0.096), ('kevin', 0.096), ('pr', 0.09), ('cosma', 0.09), ('supporting', 0.087), ('totally', 0.086), ('bda', 0.086), ('appealing', 0.086), ('elsewhere', 0.086), ('true', 0.085), ('competing', 0.085), ('honest', 0.084), ('push', 0.083), ('writers', 0.083), ('grad', 0.083), ('forms', 0.079), ('results', 0.076), ('proportion', 0.074), ('increased', 0.071), ('turns', 0.071), ('sound', 0.071), ('traditional', 0.07), ('elections', 0.07), ('period', 0.07), ('leads', 0.069), ('economy', 0.069), ('attitude', 0.067), ('worry', 0.067)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 114 andrew gelman stats-2010-06-28-More on Bayesian deduction-induction

Introduction: Kevin Bryan wrote: I read your new article on deduction/induction under Bayes. There are a couple interesting papers from economic decision theory which are related that you might find interesting. Samuelson et al have a (very) recent paper about what happens when you have some Bayesian and some non-Bayesian hypotheses. (I mentioned this one on my blog earlier this year.) Essentially, the Bayesian hypotheses are forced to “make predictions” in every future period (“if the unemployment rate is x%, the president is reelected with pr=x), whereas other forms of reasoning (say, analogies: “If the unemployment rate is above 10%, the president will not be reelected”). Imagine you have some prior over, say, the economy and elections, with 99.9% of the hypotheses being Bayesian and the rest being analogies as above. Then 100 years from now, because the analogies are so hard to refute, using deduction will push the proportion of Bayesian hypotheses toward zero. There is a

2 0.18017548 2263 andrew gelman stats-2014-03-24-Empirical implications of Empirical Implications of Theoretical Models

Introduction: Robert Bloomfield writes: Most of the people in my field (accounting, which is basically applied economics and finance, leavened with psychology and organizational behavior) use ‘positive research methods’, which are typically described as coming to the data with a predefined theory, and using hypothesis testing to accept or reject the theory’s predictions. But a substantial minority use ‘interpretive research methods’ (sometimes called qualitative methods, for those that call positive research ‘quantitative’). No one seems entirely happy with the definition of this method, but I’ve found it useful to think of it as an attempt to see the world through the eyes of your subjects, much as Jane Goodall lived with gorillas and tried to see the world through their eyes.) Interpretive researchers often criticize positive researchers by noting that the latter don’t make the best use of their data, because they come to the data with a predetermined theory, and only test a narrow set of h

3 0.17248073 524 andrew gelman stats-2011-01-19-Data exploration and multiple comparisons

Introduction: Bill Harris writes: I’ve read your paper and presentation showing why you don’t usually worry about multiple comparisons. I see how that applies when you are comparing results across multiple settings (states, etc.). Does the same principle hold when you are exploring data to find interesting relationships? For example, you have some data, and you’re trying a series of models to see which gives you the most useful insight. Do you try your models on a subset of the data so you have another subset for confirmatory analysis later, or do you simply throw all the data against your models? My reply: I’d like to estimate all the relationships at once and use a multilevel model to do partial pooling to handle the mutiplicity issues. That said, in practice, in my applied work I’m always bouncing back and forth between different hypotheses and different datasets, and often I learn a lot when next year’s data come in and I can modify my hypotheses. The trouble with the classical

4 0.16599371 1529 andrew gelman stats-2012-10-11-Bayesian brains?

Introduction: Psychology researcher Alison Gopnik discusses the idea that some of the systematic problems with human reasoning can be explained by systematic flaws in the statistical models we implicitly use. I really like this idea and I’ll return to it in a bit. But first I need to discuss a minor (but, I think, ultimately crucial) disagreement I have with how Gopnik describes Bayesian inference. She writes: The Bayesian idea is simple, but it turns out to be very powerful. It’s so powerful, in fact, that computer scientists are using it to design intelligent learning machines, and more and more psychologists think that it might explain human intelligence. Bayesian inference is a way to use statistical data to evaluate hypotheses and make predictions. These might be scientific hypotheses and predictions or everyday ones. So far, so good. Next comes the problem (as I see it). Gopnik writes: Here’s a simple bit of Bayesian election thinking. In early September, the polls suddenly im

5 0.16599038 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

Introduction: Prasanta Bandyopadhyay and Gordon Brittan write : We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs. I have not read their paper in detai

6 0.16300583 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

7 0.15898705 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

8 0.14709345 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

9 0.14566949 2368 andrew gelman stats-2014-06-11-Bayes in the research conversation

10 0.14373024 110 andrew gelman stats-2010-06-26-Philosophy and the practice of Bayesian statistics

11 0.14299065 1554 andrew gelman stats-2012-10-31-It not necessary that Bayesian methods conform to the likelihood principle

12 0.14011905 1469 andrew gelman stats-2012-08-25-Ways of knowing

13 0.13968642 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

14 0.13477957 2281 andrew gelman stats-2014-04-04-The Notorious N.H.S.T. presents: Mo P-values Mo Problems

15 0.13387449 2112 andrew gelman stats-2013-11-25-An interesting but flawed attempt to apply general forecasting principles to contextualize attitudes toward risks of global warming

16 0.13197616 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

17 0.13047092 544 andrew gelman stats-2011-01-29-Splitting the data

18 0.1298905 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

19 0.12552997 117 andrew gelman stats-2010-06-29-Ya don’t know Bayes, Jack

20 0.11596609 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.228), (1, 0.082), (2, -0.068), (3, 0.018), (4, -0.171), (5, -0.013), (6, -0.056), (7, 0.049), (8, 0.025), (9, -0.041), (10, -0.029), (11, -0.039), (12, -0.003), (13, -0.002), (14, 0.028), (15, 0.031), (16, 0.038), (17, 0.001), (18, -0.036), (19, 0.033), (20, -0.002), (21, 0.051), (22, -0.036), (23, 0.021), (24, -0.006), (25, -0.066), (26, 0.058), (27, -0.041), (28, 0.03), (29, -0.023), (30, 0.06), (31, 0.018), (32, 0.023), (33, -0.046), (34, -0.055), (35, -0.02), (36, 0.01), (37, -0.002), (38, 0.071), (39, -0.063), (40, -0.019), (41, 0.003), (42, 0.018), (43, 0.006), (44, -0.007), (45, -0.002), (46, 0.041), (47, 0.024), (48, 0.031), (49, -0.024)]
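The (topicId, topicWeight) rows above are a latent semantic indexing (LSI) representation: tfidf followed by a truncated SVD, with similar blogs found by nearness in that topic space. A minimal sketch under those assumptions (scikit-learn; corpus and component count are illustrative):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "bayesian deduction and induction",
    "bayesian hypotheses and analogies",
    "unemployment and the president's reelection",
    "philosophy of bayesian statistics",
]

X = TfidfVectorizer().fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0)
Z = lsi.fit_transform(X)  # one topic-weight vector per document

# "similar blogs" are the nearest documents in LSI space; a document
# is maximally similar to itself, as in the same-blog rows above.
sims = cosine_similarity(Z[:1], Z).ravel()
print(sims)
```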

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97853643 114 andrew gelman stats-2010-06-28-More on Bayesian deduction-induction

Introduction: Kevin Bryan wrote: I read your new article on deduction/induction under Bayes. There are a couple interesting papers from economic decision theory which are related that you might find interesting. Samuelson et al have a (very) recent paper about what happens when you have some Bayesian and some non-Bayesian hypotheses. (I mentioned this one on my blog earlier this year.) Essentially, the Bayesian hypotheses are forced to “make predictions” in every future period (“if the unemployment rate is x%, the president is reelected with pr=x), whereas other forms of reasoning (say, analogies: “If the unemployment rate is above 10%, the president will not be reelected”). Imagine you have some prior over, say, the economy and elections, with 99.9% of the hypotheses being Bayesian and the rest being analogies as above. Then 100 years from now, because the analogies are so hard to refute, using deduction will push the proportion of Bayesian hypotheses toward zero. There is a

2 0.815745 792 andrew gelman stats-2011-07-08-The virtues of incoherence?

Introduction: Kent Osband writes: I just read your article The holes in my philosophy of Bayesian data analysis . I agree on the importance of what you flagged as “comparable incoherence in all other statistical philosophies”. The problem arises when a string of unexpected observations persuades that one’s original structural hypothesis (which might be viewed as a parameter describing the type of statistical relationship) was false. However, I would phrase this more positively. Your Bayesian prior actually cedes alternative structural hypotheses, albeit with tiny epsilon weights. Otherwise you would never change your mind. However, these epsilons are so difficult to measure, and small differences can have such a significant impact on speed of adjustment (as in the example in Chapter 7 of Pandora’s Risk), that effectively we all look incoherent. This is a prime example of rational turbulence. Rational turbulence can arise even without a structural break. Any time new evidence arrives that

3 0.81032872 117 andrew gelman stats-2010-06-29-Ya don’t know Bayes, Jack

Introduction: I came across this article on the philosophy of statistics by University of Michigan economist John DiNardo. I don’t have much to say about the substance of the article because most of it is an argument against something called “Bayesian methods” that doesn’t have much in common with the Bayesian data analysis that I do. If an quantitative, empirically-minded economist at a top university doesn’t know about modern Bayesian methods, then it’s a pretty good guess that confusion holds in many other quarters as well, so I thought I’d try to clear a couple of things up. (See also here .) In the short term, I know I have some readers at the University of Michigan, so maybe a couple of you could go over to Prof. DiNardo’s office and discuss this with him? For the rest of you, please spread the word. My point here is not to claim that DiNardo should be using Bayesian methods or to claim that he’s doing anything wrong in his applied work. It’s just that he’s fighting against a bu

4 0.80428725 2078 andrew gelman stats-2013-10-26-“The Bayesian approach to forensic evidence”

Introduction: Mike Zyphur sent along this paper by Corinna Kruse: This article draws attention to communication across professions as an important aspect of forensic evidence. Based on ethnographic fieldwork in the Swedish legal system, it shows how forensic scientists use a particular quantitative approach to evaluating forensic laboratory results, the Bayesian approach, as a means of quantifying uncertainty and communicating it accurately to judges, prosecutors, and defense lawyers, as well as a means of distributing responsibility between the laboratory and the court. This article argues that using the Bayesian approach also brings about a particular type of intersubjectivity; in order to make different types of forensic evidence commensurable and combinable, quantifications must be consistent across forensic specializations, which brings about a transparency based on shared understandings and practices. Forensic scientists strive to keep the black box of forensic evidence – at least partly

5 0.80006576 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2″ to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and ex

6 0.7979061 1529 andrew gelman stats-2012-10-11-Bayesian brains?

7 0.79065144 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

8 0.78991717 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

9 0.78540277 1181 andrew gelman stats-2012-02-23-Philosophy: Pointer to Salmon

10 0.7835834 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

11 0.78341585 2368 andrew gelman stats-2014-06-11-Bayes in the research conversation

12 0.77779061 1262 andrew gelman stats-2012-04-12-“Not only defended but also applied”: The perceived absurdity of Bayesian inference

13 0.76865983 2000 andrew gelman stats-2013-08-28-Why during the 1950-1960′s did Jerry Cornfield become a Bayesian?

14 0.76843899 1781 andrew gelman stats-2013-03-29-Another Feller theory

15 0.76837379 1469 andrew gelman stats-2012-08-25-Ways of knowing

16 0.7673822 110 andrew gelman stats-2010-06-26-Philosophy and the practice of Bayesian statistics

17 0.76710045 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

18 0.76532871 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

19 0.75848961 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

20 0.75333679 1280 andrew gelman stats-2012-04-24-Non-Bayesian analysis of Bayesian agents?


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(9, 0.042), (15, 0.028), (16, 0.069), (18, 0.129), (21, 0.021), (24, 0.185), (31, 0.01), (45, 0.018), (84, 0.03), (86, 0.048), (99, 0.324)]
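The sparse (topicId, topicWeight) rows above are a latent Dirichlet allocation (LDA) topic distribution for the document, with weights summing to one across topics. A sketch of how such weights are obtained, assuming scikit-learn (corpus and topic count are illustrative):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "bayesian deduction induction inference",
    "unemployment president reelection elections",
    "bayesian hypotheses analogies refute",
    "philosophy of bayesian statistics models",
]

# LDA operates on raw term counts, not tfidf weights.
counts = CountVectorizer().fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)  # rows are per-document topic weights

# topic weights for the first document, as (topicId, topicWeight) pairs
print([(i, round(w, 3)) for i, w in enumerate(theta[0])])
```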

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98462659 1967 andrew gelman stats-2013-08-04-What are the key assumptions of linear regression?

Introduction: Andy Cooper writes: A link to an article , “Four Assumptions Of Multiple Regression That Researchers Should Always Test”, has been making the rounds on Twitter. Their first rule is “Variables are Normally distributed.” And they seem to be talking about the independent variables – but then later bring in tests on the residuals (while admitting that the normally-distributed error assumption is a weak assumption). I thought we had long-since moved away from transforming our independent variables to make them normally distributed for statistical reasons (as opposed to standardizing them for interpretability, etc.) Am I missing something? I agree that leverage in a influence is important, but normality of the variables? The article is from 2002, so it might be dated, but given the popularity of the tweet, I thought I’d ask your opinion. My response: There’s some useful advice on that page but overall I think the advice was dated even in 2002. In section 3.6 of my book wit

2 0.97768235 969 andrew gelman stats-2011-10-22-Researching the cost-effectiveness of political lobbying organisations

Introduction: Sally Murray from Giving What We Can writes: We are an organisation that assesses different charitable (/fundable) interventions, to estimate which are the most cost-effective (measured in terms of the improvement of life for people in developing countries gained for every dollar invested). Our research guides and encourages greater donations to the most cost-effective charities we thus identify, and our members have so far pledged a total of $14m to these causes, with many hundreds more relying on our advice in a less formal way. I am specifically researching the cost-effectiveness of political lobbying organisations. We are initially focusing on organisations that lobby for ‘big win’ outcomes such as increased funding of the most cost-effective NTD treatments/ vaccine research, changes to global trade rules (potentially) and more obscure lobbies such as “Keep Antibiotics Working”. We’ve a great deal of respect for your work and the superbly rational way you go about it, and

3 0.97008288 1292 andrew gelman stats-2012-05-01-Colorless green facts asserted resolutely

Introduction: Thomas Basbøll [yes, I've learned how to smoothly do this using alt-o] gives some writing advice : What gives a text presence is our commitment to asserting facts. We have to face the possibility that we may be wrong about them resolutely, and we do this by writing about them as though we are right. This and an earlier remark by Basbøll are closely related in my mind to predictive model checking and to Bayesian statistics : we make strong assumptions and then engage the data and the assumptions in a dialogue: assumptions + data -> inference, and we can then compare the inference to the data which can reveal problems with our model (or problems with the data, but that’s really problems with the model too, in this case problems with the model for the data). I like the idea that a condition for a story to be useful is that we put some belief into it. (One doesn’t put belief into a joke.) And also the converse, that thnking hard about a story and believing it can be the pre

4 0.96835268 2046 andrew gelman stats-2013-10-01-I’ll say it again

Introduction: Milan Valasek writes: Psychology students (and probably students in other disciplines) are often taught that in order to perform ‘parametric’ tests, e.g. independent t-test, the data for each group need to be normally distributed. However, in literature (and various university lecture notes and slides accessible online), I have come across at least 4 different interpretation of what it is that is supposed to be normally distributed when doing a t-test: 1. population 2. sampled data for each group 3. distribution of estimates of means for each group 4. distribution of estimates of the difference between groups I can see how 2 would follow from 1 and 4 from 3 but even then, there are two different sets of interpretations of the normality assumption. Could you please put this issue to rest for me? My quick response is that normality is not so important unless you are focusing on prediction.

same-blog 5 0.96421266 114 andrew gelman stats-2010-06-28-More on Bayesian deduction-induction

Introduction: Kevin Bryan wrote: I read your new article on deduction/induction under Bayes. There are a couple interesting papers from economic decision theory which are related that you might find interesting. Samuelson et al have a (very) recent paper about what happens when you have some Bayesian and some non-Bayesian hypotheses. (I mentioned this one on my blog earlier this year.) Essentially, the Bayesian hypotheses are forced to “make predictions” in every future period (“if the unemployment rate is x%, the president is reelected with pr=x), whereas other forms of reasoning (say, analogies: “If the unemployment rate is above 10%, the president will not be reelected”). Imagine you have some prior over, say, the economy and elections, with 99.9% of the hypotheses being Bayesian and the rest being analogies as above. Then 100 years from now, because the analogies are so hard to refute, using deduction will push the proportion of Bayesian hypotheses toward zero. There is a

6 0.96308041 698 andrew gelman stats-2011-05-05-Shocking but not surprising

7 0.96100813 1691 andrew gelman stats-2013-01-25-Extreem p-values!

8 0.95901161 1319 andrew gelman stats-2012-05-14-I hate to get all Gerd Gigerenzer on you here, but . . .

9 0.95850897 588 andrew gelman stats-2011-02-24-In case you were wondering, here’s the price of milk

10 0.95074069 1074 andrew gelman stats-2011-12-20-Reading a research paper != agreeing with its claims

11 0.94355953 456 andrew gelman stats-2010-12-07-The red-state, blue-state war is happening in the upper half of the income distribution

12 0.94225681 2136 andrew gelman stats-2013-12-16-Whither the “bet on sparsity principle” in a nonsparse world?

13 0.94189763 718 andrew gelman stats-2011-05-18-Should kids be able to bring their own lunches to school?

14 0.94130886 2112 andrew gelman stats-2013-11-25-An interesting but flawed attempt to apply general forecasting principles to contextualize attitudes toward risks of global warming

15 0.94039983 2148 andrew gelman stats-2013-12-25-Spam!

16 0.93965566 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

17 0.93917346 1390 andrew gelman stats-2012-06-23-Traditionalist claims that modern art could just as well be replaced by a “paint-throwing chimp”

18 0.93791199 2239 andrew gelman stats-2014-03-09-Reviewing the peer review process?

19 0.93748581 1578 andrew gelman stats-2012-11-15-Outta control political incorrectness

20 0.93695748 1760 andrew gelman stats-2013-03-12-Misunderstanding the p-value