andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1409 knowledge-graph by maker-knowledge-mining

1409 andrew gelman stats-2012-07-08-Is linear regression unethical in that it gives more weight to cases that are far from the average?


meta info for this blog

Source: html

Introduction: I received the following note from someone who’d like to remain anonymous: I read your post on ethics and statistics, and the comments therein, with much interest. I did notice, however, that most of the dialogue was about ethical behavior of scientists. Herein I’d like to suggest a different take, one that focuses on the statistical methods of scientists. For example, fitting a line to a scatter plot of data using OLS [linear regression] gives more weight to outliers. If each data point represents a person we are weighting people differently. And surely the ethical implications are different if we use a least absolute deviation estimator. Recently I reviewed a paper where the authors claimed one advantage of non-parametric rank-based tests is their robustness to outliers. Again, maybe that outlier is the 10th person who dies from an otherwise beneficial medicine. Should we ignore him in assessing the effect of the medicine? I guess this gets me partly into loss f
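The correspondent's point about OLS weighting outliers more heavily than least absolute deviation (LAD) can be seen in a small simulation. This is a minimal sketch with made-up data, using a simple iteratively reweighted least squares loop as a stand-in for a proper LAD solver; the single inflated point plays the role of the "10th person".

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10.0)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 10)
y[-1] += 30.0  # one extreme outlier: the hypothetical 10th person

X = np.column_stack([np.ones_like(x), x])

# OLS minimizes squared error, so the outlier pulls the fitted slope hard
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# LAD (least absolute deviation) via iteratively reweighted least squares:
# a rough sketch, not a production solver. Each pass downweights points
# in proportion to their current absolute residual.
beta_lad = beta_ols.copy()
for _ in range(50):
    r = np.abs(y - X @ beta_lad)
    w = 1.0 / np.maximum(r, 1e-6)
    beta_lad = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

print("OLS slope:", beta_ols[1])  # pulled well above the true slope of 2
print("LAD slope:", beta_lad[1])  # stays near 2; the outlier is nearly ignored
```

Both fits "weight people", just differently: OLS lets the outlying person dominate the slope, LAD effectively discounts that person, which is exactly the ethical trade-off the note is raising.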


Summary: the most important sentences, as ranked by the tfidf model

sentIndex sentText sentNum sentScore

1 I did notice, however, that most of the dialogue was about ethical behavior of scientists. [sent-2, score-0.298]

2 And surely the ethical implications are different if we use a least absolute deviation estimator. [sent-6, score-0.227]

3 Again, maybe that outlier is the 10th person who dies from an otherwise beneficial medicine. [sent-8, score-0.228]

4 I guess this gets me partly into loss functions and how we evaluate models. [sent-10, score-0.359]

5 If I remember correctly you were not very appreciative of loss functions in one of your blog entries. [sent-11, score-0.452]

6 When we adopt a method we, along with it, adopt an ethical stance, whether we know it or not. [sent-14, score-0.561]

7 Ideally scientists ought to be aware of the stance they are taking and be able to offer justifications for it. [sent-15, score-0.217]

8 If the treatment effect is constant, then the issues discussed above don’t arise: there is one parameter being estimated, and the ethical thing to do is to estimate it as accurately as possible. [sent-17, score-0.391]

9 The problem becomes less salient — for the statistician. [sent-24, score-0.182]

10 When estimating CATEs the statistician is only responsible for within strata weighting choices. [sent-25, score-0.618]

11 As more strata are added, these choices become less consequential. [sent-26, score-0.298]

12 In the extreme, individuals within strata are identical on relevant covariates so within strata weights don’t matter. [sent-27, score-0.906]

13 The statistician reports CATEs and passes the buck to policy makers. [sent-28, score-0.373]

14 The problem may remain for the policy maker — and bounce back to the statistician. [sent-30, score-0.541]

15 In such situations policy makers will insist on an ATE estimate — that is the quantity of interest. [sent-32, score-0.353]

16 If so, how will the statistician estimate and make inferences about the ATE? [sent-33, score-0.195]

17 Alternatively, he may push back and say to the politician: “I give you CATEs, you compute ATEs (or provide me with your loss function so I can do it for you)”. [sent-35, score-0.279]
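The aggregation step in that exchange is simple arithmetic once the weights are agreed on: the ATE is a weighted average of the CATEs, with weights reflecting the stratum shares (or a policy maker's loss function). A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical stratum-level results: conditional average treatment
# effects (CATEs) and the number of people in each stratum.
cate = np.array([0.10, -0.05, 0.30])
n = np.array([500, 300, 200])

# Population-share weights turn CATEs into an ATE. Choosing these weights
# is exactly the value judgment being passed between statistician and
# policy maker.
w = n / n.sum()
ate = float(np.sum(w * cate))
print(ate)  # 0.10*0.5 + (-0.05)*0.3 + 0.30*0.2 = 0.095
```

Swapping in different weights (e.g. weighting the stratum with the negative effect more heavily) changes the headline ATE without changing any within-stratum estimate, which is why the choice cannot be purely statistical.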

18 My general response is that if this sort of thing is a concern, it would be good to formally model the decision problem and the costs and benefits of different options. [sent-36, score-0.26]

19 Most work I’ve seen in statistical decision analysis tends to have utility or loss functions chosen based on mathematical principles rather than applied considerations. [sent-39, score-0.597]

20 We have some examples of more applied decision analysis in chapter 22 of Bayesian Data Analysis. [sent-40, score-0.238]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('cates', 0.305), ('strata', 0.298), ('ethical', 0.227), ('loss', 0.206), ('adopt', 0.167), ('decision', 0.159), ('ate', 0.156), ('functions', 0.153), ('stance', 0.134), ('weights', 0.116), ('weighting', 0.114), ('policy', 0.11), ('statistician', 0.109), ('problem', 0.101), ('within', 0.097), ('remain', 0.094), ('appreciative', 0.093), ('ethically', 0.093), ('finely', 0.093), ('logistical', 0.093), ('therein', 0.087), ('maker', 0.087), ('estimate', 0.086), ('justifications', 0.083), ('locally', 0.083), ('ols', 0.081), ('dies', 0.081), ('salient', 0.081), ('scatter', 0.081), ('insist', 0.081), ('analysis', 0.079), ('treatment', 0.078), ('passes', 0.078), ('heterogeneous', 0.078), ('note', 0.077), ('buck', 0.076), ('aggregation', 0.076), ('bounce', 0.076), ('makers', 0.076), ('outlier', 0.076), ('back', 0.073), ('readings', 0.073), ('technological', 0.073), ('dialogue', 0.071), ('neutral', 0.071), ('beneficial', 0.071), ('alternatively', 0.071), ('rationale', 0.071), ('targeted', 0.07), ('politician', 0.07)]
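The similarity scores in the listings below come from comparing tf-idf vectors of posts. A minimal sketch of how such scores could be computed, with toy documents; the actual pipeline's tokenization and weighting scheme are unknown, so this only illustrates the general mechanism:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Term frequency times inverse document frequency, per document."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    return [
        {t: c * math.log(n / df[t]) for t, c in Counter(d.split()).items()}
        for d in docs
    ]

def cosine(u, v):
    """Cosine similarity of two sparse (dict) vectors."""
    dot = sum(x * v.get(t, 0.0) for t, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [
    "ols regression outlier weighting",
    "ols regression strata treatment",
    "mcmc sampler acceptance rate",
]
vecs = tfidf_vectors(docs)
# A document compared with itself scores 1.0 (cf. the "same-blog" rows),
# and documents sharing terms score higher than unrelated ones.
print(cosine(vecs[0], vecs[0]), cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```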

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 1409 andrew gelman stats-2012-07-08-Is linear regression unethical in that it gives more weight to cases that are far from the average?


2 0.18229194 553 andrew gelman stats-2011-02-03-is it possible to “overstratify” when assigning a treatment in a randomized control trial?

Introduction: Peter Bergman writes: is it possible to “overstratify” when assigning a treatment in a randomized control trial? I [Bergman] have a sample size of roughly 400 people, and several binary variables correlate strongly with the outcome of interest and would also define interesting subgroups for analysis. The problem is, stratifying over all of these (five or six) variables leaves me with strata that have only 1 person in them. I have done some background reading on whether there is a rule of thumb for the maximum number of variables to stratify. There does not seem to be much agreement (some say there should be between N/50-N/100 strata, others say as few as possible). In economics, the paper I looked to is here, which seems to summarize literature related to clinical trials. In short, my question is: is it bad to have several strata with 1 person in them? Should I group these people in with another stratum? P.S. In the paper I mention above, they also say it is important to inc

3 0.16118746 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

Introduction: Ken Rice writes: In the recent discussion on stopping rules I saw a comment that I wanted to chip in on, but thought it might get a bit lost, in the already long thread. Apologies in advance if I misinterpreted what you wrote, or am trying to tell you things you already know. The comment was: “In Bayesian decision making, there is a utility function and you choose the decision with highest expected utility. Making a decision based on statistical significance does not correspond to any utility function.” … which immediately suggests this little 2010 paper; A Decision-Theoretic Formulation of Fisher’s Approach to Testing, The American Statistician, 64(4) 345-349. It contains utilities that lead to decisions that very closely mimic classical Wald tests, and provides a rationale for why this utility is not totally unconnected from how some scientists think. Some (old) slides discussing it are here . A few notes, on things not in the paper: * I know you don’t like squared-

4 0.12129852 2305 andrew gelman stats-2014-04-25-Revised statistical standards for evidence (comments to Val Johnson’s comments on our comments on Val’s comments on p-values)

Introduction: As regular readers of this blog are aware, a few months ago Val Johnson published an article, “Revised standards for statistical evidence,” making a Bayesian argument that researchers and journals should use a p=0.005 publication threshold rather than the usual p=0.05. Christian Robert and I were unconvinced by Val’s reasoning and wrote a response , “Revised evidence for statistical standards,” in which we wrote: Johnson’s minimax prior is not intended to correspond to any distribution of effect sizes; rather, it represents a worst case scenario under some mathematical assumptions. Minimax and tradeoffs do well together, and it is hard for us to see how any worst case procedure can supply much guidance on how to balance between two different losses. . . . We would argue that the appropriate significance level depends on the scenario and that what worked well for agricultural experiments in the 1920s might not be so appropriate for many applications in modern biosciences . . .

5 0.10470576 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion

Introduction: I sent a copy of my paper (coauthored with Cosma Shalizi) on Philosophy and the practice of Bayesian statistics in the social sciences to Richard Berk , who wrote: I read your paper this morning. I think we are pretty much on the same page about all models being wrong. I like very much the way you handle this in the paper. Yes, Newton’s work is wrong, but surely useful. I also like your twist on Bayesian methods. Makes good sense to me. Perhaps most important, your paper raises some difficult issues I have been trying to think more carefully about. 1. If the goal of a model is to be useful, surely we need to explore that “useful” means. At the very least, usefulness will depend on use. So a model that is useful for forecasting may or may not be useful for causal inference. 2. Usefulness will be a matter of degree. So that for each use we will need one or more metrics to represent how useful the model is. In what looks at first to be simple example, if the use is forecasting,

6 0.10442755 1430 andrew gelman stats-2012-07-26-Some thoughts on survey weighting

7 0.10342783 960 andrew gelman stats-2011-10-15-The bias-variance tradeoff

8 0.10233386 784 andrew gelman stats-2011-07-01-Weighting and prediction in sample surveys

9 0.10077398 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

10 0.099549524 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

11 0.099331953 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

12 0.099073179 2286 andrew gelman stats-2014-04-08-Understanding Simpson’s paradox using a graph

13 0.097668245 1575 andrew gelman stats-2012-11-12-Thinking like a statistician (continuously) rather than like a civilian (discretely)

14 0.097619049 352 andrew gelman stats-2010-10-19-Analysis of survey data: Design based models vs. hierarchical modeling?

15 0.09747871 2351 andrew gelman stats-2014-05-28-Bayesian nonparametric weighted sampling inference

16 0.094799839 1981 andrew gelman stats-2013-08-14-The robust beauty of improper linear models in decision making

17 0.094294757 1883 andrew gelman stats-2013-06-04-Interrogating p-values

18 0.091945216 1941 andrew gelman stats-2013-07-16-Priors

19 0.091506824 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

20 0.089712575 758 andrew gelman stats-2011-06-11-Hey, good news! Your p-value just passed the 0.05 threshold!


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.215), (1, 0.072), (2, 0.031), (3, -0.047), (4, 0.003), (5, -0.009), (6, -0.008), (7, 0.015), (8, 0.032), (9, -0.004), (10, -0.03), (11, -0.035), (12, 0.011), (13, 0.008), (14, 0.008), (15, -0.002), (16, -0.018), (17, -0.001), (18, -0.009), (19, 0.033), (20, 0.018), (21, 0.011), (22, 0.026), (23, 0.019), (24, 0.019), (25, 0.044), (26, 0.005), (27, -0.003), (28, -0.006), (29, 0.015), (30, 0.012), (31, 0.053), (32, 0.005), (33, 0.022), (34, 0.001), (35, -0.043), (36, 0.026), (37, -0.016), (38, 0.021), (39, 0.022), (40, 0.003), (41, -0.019), (42, -0.071), (43, 0.006), (44, 0.031), (45, 0.004), (46, 0.035), (47, -0.056), (48, 0.021), (49, 0.017)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96318251 1409 andrew gelman stats-2012-07-08-Is linear regression unethical in that it gives more weight to cases that are far from the average?


2 0.81088853 553 andrew gelman stats-2011-02-03-is it possible to “overstratify” when assigning a treatment in a randomized control trial?


3 0.78798723 368 andrew gelman stats-2010-10-25-Is instrumental variables analysis particularly susceptible to Type M errors?

Introduction: Hendrik Juerges writes: I am an applied econometrician. The reason I am writing is that I am pondering a question for some time now and I am curious whether you have any views on it. One problem the practitioner of instrumental variables estimation faces is large standard errors even with very large samples. Part of the problem is of course that one estimates a ratio. Anyhow, more often than not, I and many other researchers I know end up with large point estimates and standard errors when trying IV on a problem. Sometimes some of us are lucky and get a statistically significant result. Those estimates that make it beyond the 2 standard error threshold are often ridiculously large (one famous example in my line of research being Lleras-Muney’s estimates of the 10% effect of one year of schooling on mortality). The standard defense here is that IV estimates the complier-specific causal effect (which is mathematically correct). But still, I find many of the IV results (including my

4 0.7861464 775 andrew gelman stats-2011-06-21-Fundamental difficulty of inference for a ratio when the denominator could be positive or negative

Introduction: Ratio estimates are common in statistics. In survey sampling, the ratio estimate is when you use y/x to estimate Y/X (using the notation in which x,y are totals of sample measurements and X,Y are population totals). In textbook sampling examples, the denominator X will be an all-positive variable, something that is easy to measure and is, ideally, close to proportional to Y. For example, X is last year’s sales and Y is this year’s sales, or X is the number of people in a cluster and Y is some count. Ratio estimation doesn’t work so well if X can be either positive or negative. More generally we can consider any estimate of a ratio, with no need for a survey sampling context. The problem with estimating Y/X is that the very interpretation of Y/X can change completely if the sign of X changes. Everything is ok for a point estimate: you get X.hat and Y.hat, you can take the ratio Y.hat/X.hat, no problem. But the inference falls apart if you have enough uncertainty in X.hat th

5 0.78371543 2180 andrew gelman stats-2014-01-21-Everything I need to know about Bayesian statistics, I learned in eight schools.

Introduction: This post is by Phil. I’m aware that there  are  some people who use a Bayesian approach largely because it allows them to provide a highly informative prior distribution based subjective judgment, but that is not the appeal of Bayesian methods for a lot of us practitioners. It’s disappointing and surprising, twenty years after my initial experiences, to still hear highly informed professional statisticians who think that what distinguishes Bayesian statistics from Frequentist statistics is “subjectivity” ( as seen in  a recent blog post and its comments ). My first encounter with Bayesian statistics was just over 20 years ago. I was a postdoc at Lawrence Berkeley National Laboratory, with a new PhD in theoretical atomic physics but working on various problems related to the geographical and statistical distribution of indoor radon (a naturally occurring radioactive gas that can be dangerous if present at high concentrations). One of the issues I ran into right at the start was th

6 0.77885246 248 andrew gelman stats-2010-09-01-Ratios where the numerator and denominator both change signs

7 0.77682793 1985 andrew gelman stats-2013-08-16-Learning about correlations using cross-sectional and over-time comparisons between and within countries

8 0.75911444 744 andrew gelman stats-2011-06-03-Statistical methods for healthcare regulation: rating, screening and surveillance

9 0.75888234 518 andrew gelman stats-2011-01-15-Regression discontinuity designs: looking for the keys under the lamppost?

10 0.75867933 960 andrew gelman stats-2011-10-15-The bias-variance tradeoff

11 0.75381237 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

12 0.74952716 1575 andrew gelman stats-2012-11-12-Thinking like a statistician (continuously) rather than like a civilian (discretely)

13 0.7475577 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards

14 0.74538291 1195 andrew gelman stats-2012-03-04-Multiple comparisons dispute in the tabloids

15 0.74348468 804 andrew gelman stats-2011-07-15-Static sensitivity analysis

16 0.7433424 1968 andrew gelman stats-2013-08-05-Evidence on the impact of sustained use of polynomial regression on causal inference (a claim that coal heating is reducing lifespan by 5 years for half a billion people)

17 0.74278486 212 andrew gelman stats-2010-08-17-Futures contracts, Granger causality, and my preference for estimation to testing

18 0.73992431 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

19 0.72677392 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

20 0.72401279 716 andrew gelman stats-2011-05-17-Is the internet causing half the rapes in Norway? I wanna see the scatterplot.


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.025), (6, 0.138), (15, 0.031), (16, 0.053), (21, 0.04), (24, 0.173), (27, 0.011), (36, 0.02), (42, 0.01), (53, 0.027), (79, 0.01), (81, 0.015), (86, 0.023), (89, 0.015), (97, 0.028), (99, 0.259)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.96349132 650 andrew gelman stats-2011-04-05-Monitor the efficiency of your Markov chain sampler using expected squared jumped distance!

Introduction: Marc Tanguay writes in with a specific question that has a very general answer. First, the question: I [Tanguay] am currently running a MCMC for which I have 3 parameters that are restricted to a specific space. 2 are bounded between 0 and 1 while the third is binary and updated by a Beta-Binomial. Since my priors are also bounded, I notice that, conditional on All the rest (which covers both data and other parameters), the density was not varying a lot within the space of the parameters. As a result, the acceptance rate is high, about 85%, and this despite the fact that all the parameter’s space is explore. Since in your book, the optimal acceptance rates prescribed are lower that 50% (in case of multiple parameters), do you think I should worry about getting 85%. Or is this normal given the restrictions on the parameters? First off: Yes, my guess is that you should be taking bigger jumps. 85% seems like too high an acceptance rate for Metropolis jumping. More generally, t

2 0.95273292 221 andrew gelman stats-2010-08-21-Busted!

Introduction: I’m just glad that universities don’t sanction professors for publishing false theorems. If the guy really is nailed by the feds for fraud, I hope they don’t throw him in prison. In general, prison time seems like a brutal, expensive, and inefficient way to punish people. I’d prefer if the government just took 95% of his salary for several years, made him do community service (cleaning equipment at the local sewage treatment plant, perhaps; a lab scientist should be good at this sort of thing, no?), etc. If restriction of this dude’s personal freedom is judged be part of the sentence, he could be given some sort of electronic tag that would send a message to the police if he were ever more than 3 miles from his home. But no need to bill the taxpayers for the cost of keeping him in prison.

3 0.94977248 1710 andrew gelman stats-2013-02-06-The new Stan 1.1.1, featuring Gaussian processes!

Introduction: We just released Stan 1.1.1 and RStan 1.1.1 As usual, you can find download and install instructions at: http://mc-stan.org/ This is a patch release and is fully backward compatible with Stan and RStan 1.1.0. The main thing you should notice is that the multivariate models should be much faster and all the bugs reported for 1.1.0 have been fixed. We’ve also added a bit more functionality. The substantial changes are listed in the following release notes. v1.1.1 (5 February 2012) ====================================================================== Bug Fixes ———————————- * fixed bug in comparison operators, which swapped operator< with operator<= and swapped operator> with operator>= semantics * auto-initialize all variables to prevent segfaults * atan2 gradient propagation fixed * fixed off-by-one in NUTS treedepth bound so NUTS goes at most to specified tree depth rather than specified depth + 1 * various compiler compatibility and minor consistency issues * f

same-blog 4 0.94858694 1409 andrew gelman stats-2012-07-08-Is linear regression unethical in that it gives more weight to cases that are far from the average?


5 0.94521314 2316 andrew gelman stats-2014-05-03-“The graph clearly shows that mammography adds virtually nothing to survival and if anything, decreases survival (and increases cost and provides unnecessary treatment)”

Introduction: Paul Alper writes: You recently posted on graphs and how to convey information.  I don’t believe you have ever posted anything on this dynamite randomized clinical trial of 90,000 (!!) 40-59 year-old women over a 25-year period (also !!). The graphs below are figures 2, 3 and 4 respectively, of http://www.bmj.com/content/348/bmj.g366 The control was physical exam only and the treatment was physical exam plus mammography. The graph clearly shows that mammography adds virtually nothing to survival and if anything, decreases survival (and increases cost and provides unnecessary treatment).  Note the superfluousness of the p-values.    There is an accompanying editorial in the BMJ http://www.bmj.com/content/348/bmj.g1403 which refers to “vested interests” which can override any statistics, no matter how striking: We agree with Miller and colleagues that “the rationale for screening by mammography be urgently reassessed by policy makers.” As time goes

6 0.94407403 618 andrew gelman stats-2011-03-18-Prior information . . . about the likelihood

7 0.93871778 1906 andrew gelman stats-2013-06-19-“Behind a cancer-treatment firm’s rosy survival claims”

8 0.92652386 1924 andrew gelman stats-2013-07-03-Kuhn, 1-f noise, and the fractal nature of scientific revolutions

9 0.91973257 819 andrew gelman stats-2011-07-24-Don’t idealize “risk aversion”

10 0.91820192 1287 andrew gelman stats-2012-04-28-Understanding simulations in terms of predictive inference?

11 0.91730154 1489 andrew gelman stats-2012-09-09-Commercial Bayesian inference software is popping up all over

12 0.91099644 1799 andrew gelman stats-2013-04-12-Stan 1.3.0 and RStan 1.3.0 Ready for Action

13 0.90908372 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well

14 0.90849823 2161 andrew gelman stats-2014-01-07-My recent debugging experience

15 0.90795064 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

16 0.90634704 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

17 0.90590262 2174 andrew gelman stats-2014-01-17-How to think about the statistical evidence when the statistical evidence can’t be conclusive?

18 0.90574789 758 andrew gelman stats-2011-06-11-Hey, good news! Your p-value just passed the 0.05 threshold!

19 0.90478063 1625 andrew gelman stats-2012-12-15-“I coach the jumpers here at Boise State . . .”

20 0.90235662 2086 andrew gelman stats-2013-11-03-How best to compare effects measured in two different time periods?