



In the previous article, you pointed out that the second chart also had a Sunday-Monday blip in the week before the Fall time change. This article gives p=0.011 for the Spring blip. What would be the p-value for the pre-Fall blip?

Were the data examined to investigate the incidence of other day-to-day blips during the course of the year?


Richard: no, as far as we know, they only looked at DST because that's the research topic. I suspect that if you run this same analysis for every possible cut-date, you will find a host of signals. The pre-Fall blip is something like p=0.04 if I recall, tantalizingly close to the magical 0.05.


"...on the other hand, we include other weeks of the year that are potentially not representative of the period immediately prior to Spring Forward."

I am not sure what you are trying to say here. I would contend the null hypothesis is that every week (or every pair of days) is the same as every other. Seasonality? The time changes are somewhat independent of season, as they have been moved forward and backward over the years at the whim of our government.

Do the areas which do not observe DST see no effect over the course of the year?

Has any testing been done to investigate whether these data are simply random? Given daily counts from several years, I would expect some of the day-to-day pairs to yield a low p-value.
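The commenter's concern can be sketched with a toy simulation (all numbers hypothetical, not from the study): generate pure Poisson noise with no DST effect at all, then test every candidate "cut week" against the rest of the year and count how many reach p < 0.05 by chance alone.

```python
# Toy multiple-comparisons simulation: no real effect anywhere,
# yet some cut-dates still look "significant".
import math
import numpy as np

rng = np.random.default_rng(0)
n_years, n_weeks, mean_count = 5, 52, 30   # assumed data dimensions

# counts[year, week] = admissions on one target day of that week,
# drawn from the same Poisson distribution everywhere (null is true)
counts = rng.poisson(mean_count, size=(n_years, n_weeks))

low_p = 0
for w in range(n_weeks):
    this_week = counts[:, w].astype(float)
    others = np.delete(counts, w, axis=1).ravel().astype(float)
    # Welch-style z statistic with a normal approximation
    se = math.sqrt(this_week.var(ddof=1) / this_week.size
                   + others.var(ddof=1) / others.size)
    z = (this_week.mean() - others.mean()) / se
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    if p < 0.05:
        low_p += 1

print(f"{low_p} of {n_weeks} candidate cut-dates reach p < 0.05 under pure noise")
```

With 52 comparisons at the 0.05 level, a couple of false "blips" per year are expected even when nothing is going on, which is why a single p=0.011 at a pre-chosen date reads differently from the best p-value found by scanning all dates.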


The p-values are supposed to demonstrate that the blip is not random. I have some questions about the methodology which I'll describe in the next post but the strategy they pursue seems ok to me. There is nothing in this article about states with no DST, other countries, other states with DST, federal hospitals in Michigan, etc.

I'm just pointing out the usual tradeoff when we "borrow strength" from larger samples. The prior week closest to the DST time shift is the most relevant week to use for comparison but it suffers from small sample size. When we add more weeks to build the "trend", we are necessarily adding weeks further away from the "event", and that introduces a different kind of noise. Does that make sense?
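The tradeoff described above can be illustrated with a small simulation (all parameters hypothetical): estimate the baseline for the week just before the event by averaging the k preceding weeks, when the counts drift over time. Averaging more weeks shrinks the noise but pulls in weeks that are less representative, so the bias grows.

```python
# Toy bias/variance tradeoff when "borrowing strength" from more weeks.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 2000
trend_per_week = 0.5   # assumed upward drift in counts per week
noise_sd = 3.0         # assumed week-to-week noise
true_baseline = 30.0   # true rate in the week just before the event

results = {}
for k in (1, 4, 12):   # number of prior weeks averaged into the baseline
    errors = np.empty(n_sims)
    for i in range(n_sims):
        weeks = np.arange(1, k + 1)   # j = weeks before the event
        obs = (true_baseline - weeks * trend_per_week
               + rng.normal(0.0, noise_sd, k))
        errors[i] = obs.mean() - true_baseline
    results[k] = (errors.mean(), errors.std())
    print(f"k={k:2d}: bias={errors.mean():+.2f}, sd={errors.std():.2f}")
```

In this sketch the standard deviation of the estimate falls roughly as 1/sqrt(k), while the bias from the trend grows with k, which is exactly the "different kind of noise" the comment describes.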


Kaiser Fung. Business analytics and data visualization expert. Author and Speaker.