
Comments

Richard

In the previous article, you pointed out that the second chart also had a Sunday-Monday blip in the week before the Fall time change. This article gives p=0.011 for the Spring blip. What would be the p-value for the pre-Fall blip?

Were the data examined to investigate the incidence of other day-to-day blips during the course of the year?

Kaiser

Richard: no, as far as we know, they only looked at DST because that's the research topic. I suspect that if you ran this same analysis for every possible cut date, we would find a host of signals. The pre-Fall blip is something like p=0.04 if I recall, tantalizingly close to the magical 0.05.
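To see why, here is a minimal Python sketch (an illustration under assumed numbers, not the paper's method): simulate several years of daily admission counts with no real blip anywhere, then test every consecutive-day pair. Under a true null, roughly 5% of the pairs come out "significant" by chance.

```python
# Minimal sketch: multiple comparisons on pure noise.
# The number of years and the mean daily count are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_years, mean_count = 5, 30

# counts[year, day]: independent Poisson draws, so the null holds everywhere
counts = rng.poisson(mean_count, size=(n_years, 365))

# test each consecutive-day pair across years, as if hunting for a "blip"
low_p = sum(
    stats.ttest_rel(counts[:, d], counts[:, d + 1]).pvalue < 0.05
    for d in range(364)
)
print(f"{low_p} of 364 day-pairs reach p < 0.05 under a true null")
# expect roughly 0.05 * 364, i.e. about 18 chance "blips"
```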

Richard

"...on the other hand, we include other weeks of the year that are potentially not representative of the period immediately prior to Spring Forward."

I am not sure what you are trying to say here. I would contend that the null hypothesis is that every week (or every pair of days) is the same as every other. Seasonality? The time changes are somewhat independent of season, as they have been moved forward and backward in time at the whim of our government.

Do the areas which do not observe DST see no effect over the course of the year?

Has any testing been done to investigate whether these data are simply random? Given daily counts from several years, I would expect some of the day-to-day pairs to yield a low p-value.

Kaiser

The p-values are supposed to demonstrate that the blip is not random. I have some questions about the methodology, which I'll describe in the next post, but the strategy they pursue seems OK to me. There is nothing in this article about states with no DST, other countries, other states with DST, federal hospitals in Michigan, etc.

I'm just pointing out the usual tradeoff when we "borrow strength" from larger samples. The prior week closest to the DST time shift is the most relevant week to use for comparison, but it suffers from a small sample size. When we add more weeks to build the "trend", we are necessarily adding weeks farther away from the "event", and that introduces a different kind of noise. Does that make sense?
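As a hedged illustration of this tradeoff (all parameters below are assumptions, not values from the study): estimate the baseline for the week of interest as the average of the k prior weeks, under a mild seasonal trend. Adding weeks shrinks the sampling noise but adds bias from the drift.

```python
# Sketch of the bias-variance tradeoff in "borrowing strength":
# baseline = mean daily count over the k weeks before the event.
# The trend and mean are illustrative assumptions, not from the study.
import numpy as np

rng = np.random.default_rng(0)
trend_per_week = 0.5    # assumed seasonal drift in mean daily count
event_mean = 30.0       # true mean daily count in the week of interest

for k in (1, 2, 4, 8):
    # week j before the event has mean event_mean - j * trend_per_week
    means = event_mean - trend_per_week * np.arange(1, k + 1)
    sims = rng.poisson(np.tile(np.repeat(means, 7), (10_000, 1)))
    est = sims.mean(axis=1)            # baseline estimate in each simulation
    print(f"k={k}: bias={est.mean() - event_mean:+.2f}, sd={est.std():.2f}")
# bias grows with k while the sd shrinks: more data, but less relevant data
```

The one-week baseline is nearly unbiased but noisy; the eight-week baseline is precise but drifts away from the conditions of the pre-DST week, which is the tradeoff described above.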
