



In the previous article, you pointed out that the second chart also had a Sunday-Monday blip in the week before the Fall time change. This article gives p=0.011 for the Spring blip. What would be the p-value for the pre-Fall blip?

Were the data examined to investigate the incidence of other day-to-day blips during the course of the year?


Richard: No, as far as we know, they only looked at DST because that's the research topic. I suspect that if you ran this same analysis for every possible cut-date, you would find a host of signals. The pre-Fall blip is something like p=0.04 if I recall, tantalizingly close to the magical 0.05.
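The "host of signals" intuition is easy to check with a toy simulation. The sketch below is purely hypothetical (not the paper's data): it draws one year of daily counts from a single Poisson-like distribution with no calendar effect at all, then tests every adjacent-day pair as if it were a candidate cut-date. Roughly 5% of the pairs reach p < 0.05 by chance alone. The rate of 100 events per day and the normal-approximation test are my assumptions for illustration.

```python
import math
import random

# Hypothetical illustration (not the paper's data): one year of daily
# counts with NO real calendar effect -- every day shares the same
# Poisson(100) rate, approximated here by a rounded normal draw.
random.seed(1)
n_days, lam = 365, 100
counts = [round(random.gauss(lam, math.sqrt(lam))) for _ in range(n_days)]

def pair_pvalue(x, y):
    """Two-sided p-value for H0: equal Poisson rates on the two days,
    using the standard normal approximation z = (x - y) / sqrt(x + y)."""
    z = (x - y) / math.sqrt(x + y)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Treat every adjacent-day pair as a candidate "cut date" and test it.
low = sum(pair_pvalue(counts[d], counts[d + 1]) < 0.05
          for d in range(n_days - 1))
print(low, "of", n_days - 1, "adjacent-day pairs reach p < 0.05")
```

On a typical run, a couple dozen "significant" day-to-day blips appear in pure noise, which is why a single p=0.011 at a pre-chosen date is more persuasive than the same p-value found by scanning the calendar.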


"...on the other hand, we include other weeks of the year that are potentially not representative of the period immediately prior to Spring Forward."

I am not sure what you are trying to say here. I would contend the null hypothesis is that every week (or every pair of days) is the same as every other. Seasonality? The time changes are somewhat independent of season as they are moved forward and backward in time by the whim of our government.

Do the areas which do not observe DST see no effect over the course of the year?

Has any testing been done to investigate whether or not these data are simply random? Given daily counts from several years, I would expect some of the day-to-day pairs to yield a low p-value.


The p-values are supposed to demonstrate that the blip is not random. I have some questions about the methodology, which I'll describe in the next post, but the strategy they pursue seems reasonable to me. There is nothing in this article about states with no DST, other countries, other states with DST, federal hospitals in Michigan, etc.

I'm just pointing out the usual tradeoff when we "borrow strength" from larger samples. The prior week closest to the DST time shift is the most relevant week to use for comparison but it suffers from small sample size. When we add more weeks to build the "trend", we are necessarily adding weeks further away from the "event", and that introduces a different kind of noise. Does that make sense?
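That tradeoff can be sketched numerically. In the hypothetical simulation below (my own toy setup, not the paper's method), the counts drift upward by `trend` per week toward the target week, and the baseline is the average of the k prior weeks. The variance of the baseline shrinks like lam/k, but the squared bias from the drift grows with k, so a moderate number of weeks minimizes the total error.

```python
import math
import random

# Hypothetical sketch of the tradeoff, under an assumed upward drift of
# `trend` counts per week (seasonality). The target week's true mean is
# `lam`; the j-th prior week's true mean is lam - trend * j. The baseline
# estimate is the average of the k prior weeks' observed counts.
random.seed(7)
lam, trend, sims = 100.0, 2.0, 4000

def mse(k):
    """Mean squared error of a k-week average as the baseline estimate."""
    total = 0.0
    for _ in range(sims):
        baseline = sum(random.gauss(lam - trend * j, math.sqrt(lam))
                       for j in range(1, k + 1)) / k
        total += (baseline - lam) ** 2
    return total / sims

for k in (1, 2, 3, 5, 8):
    print(f"k={k} weeks: MSE = {mse(k):.1f}")
```

With these assumed numbers, the one-week baseline is noisy, the eight-week baseline is badly biased by the drift, and something in between does best: exactly the "borrow strength" tension described above.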

