
Comments


Richard

In the previous article, you pointed out that the second chart also had a Sunday-Monday blip in the week before the Fall time change. This article gives p=0.011 for the Spring blip. What would be the p-value for the pre-Fall blip?

Were the data examined to investigate the incidence of other day-to-day blips during the course of the year?

Kaiser

Richard: no, as far as we know, they only looked at DST because that's the research topic. I suspect that if you ran this same analysis for every possible cut-date, you would find a host of signals. The pre-Fall blip is something like p=0.04 if I recall, tantalizingly close to the magical 0.05.
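That "host of signals" point is easy to demonstrate with a toy simulation (all numbers here are hypothetical, not from the study): generate a year of daily event counts with no real day-to-day effect at all, then test every adjacent pair of days for a "blip". Roughly 5% of the comparisons will come out significant at the 0.05 level purely by chance.

```python
import random
import math

random.seed(1)

# Hypothetical illustration: daily counts drawn from one Poisson
# distribution, so there is NO true blip anywhere in the year.
mean_daily = 100  # assumed average daily count, for illustration only

def poisson(lam):
    # Knuth's algorithm for a Poisson draw (standard library only)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

days = [poisson(mean_daily) for _ in range(365)]

def two_sided_p(a, b):
    # Normal approximation to the difference of two Poisson counts
    z = (a - b) / math.sqrt(a + b)
    return math.erfc(abs(z) / math.sqrt(2))

pvals = [two_sided_p(days[i], days[i + 1]) for i in range(364)]
hits = sum(p < 0.05 for p in pvals)
print(f"{hits} of 364 adjacent-day comparisons have p < 0.05")
```

With 364 comparisons you should expect around 18 spurious "significant" pairs even under a completely flat null, which is why a single uncorrected p=0.011 at a pre-chosen date is suggestive but not conclusive.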

Richard

"...on the other hand, we include other weeks of the year that are potentially not representative of the period immediately prior to Spring Forward."

I am not sure what you are trying to say here. I would contend the null hypothesis is that every week (or every pair of days) is the same as every other. Seasonality? The time changes are somewhat independent of season, as they have been moved forward and backward in time by the whim of our government.

Do the areas which do not observe DST see no effect over the course of the year?

Has any testing been done to investigate whether these data are simply random? Given daily counts from several years, I would expect some of the day-to-day pairs to yield a low p-value.

Kaiser

The p-values are supposed to demonstrate that the blip is not random. I have some questions about the methodology, which I'll describe in the next post, but the strategy they pursue seems okay to me. There is nothing in this article about states with no DST, other countries, other states with DST, federal hospitals in Michigan, etc.

I'm just pointing out the usual tradeoff when we "borrow strength" from larger samples. The prior week closest to the DST time shift is the most relevant week to use for comparison but it suffers from small sample size. When we add more weeks to build the "trend", we are necessarily adding weeks further away from the "event", and that introduces a different kind of noise. Does that make sense?
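The tradeoff can be made concrete with a small sketch (the drift and noise numbers are assumed for illustration, not taken from the study): if daily counts drift seasonally, then weeks further from the event sit at a systematically different level than the week just before it. Averaging more baseline weeks shrinks the noise but grows that bias.

```python
import random
import statistics

random.seed(7)

# Toy model of the "borrow strength" tradeoff: mean daily counts drift
# upward through early spring, so earlier weeks run systematically lower.
trend_per_week = 2.0   # assumed seasonal drift in mean daily count
noise_sd = 10.0        # assumed day-to-day noise

def week_mean(weeks_before_event):
    true_level = 100 - trend_per_week * weeks_before_event
    return statistics.mean(random.gauss(true_level, noise_sd) for _ in range(7))

results = {}
for k in (1, 4, 8):  # number of prior weeks averaged into the baseline
    baselines = [statistics.mean(week_mean(w) for w in range(1, k + 1))
                 for _ in range(2000)]
    results[k] = (statistics.mean(baselines), statistics.stdev(baselines))
    print(f"k={k}: mean baseline {results[k][0]:6.2f}, sd {results[k][1]:.2f}")
```

With one prior week the baseline is nearly unbiased but noisy; with eight weeks the standard deviation drops sharply, but the baseline is pulled several counts below the true pre-event level. That is exactly the "different kind of noise" the longer trend introduces.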


Marketing and advertising analytics expert. Author and Speaker. Currently at Vimeo and NYU. See my full bio.
