From Andrew Gelman's blog, I learned about a paper that makes the claim that daylight savings time could kill you. (Andrew links to this abstract, which is from a poster presentation at a meeting of the American College of Cardiology, and later published as a supplement in the ACC Journal; one of his readers found the published paper.) There is also a press release sponsored by the Journal with the fearmongering headline "Daylight saving impacts the timing of heart attacks". In case you don't get the message, there is a subhead, pushing the idea that "Setting clocks ahead 1 hour may accelerate cardiac events in some, a large study shows".
Given that heart attacks are all about "timing", this headline will be read as saying that daylight saving causes heart attacks. That is probably the intention of the publicist who wrote it. But the headline distorts the researchers' conclusion, which was stated in the poster as:
In the week following the seasonal time change, daylight savings time impacts the timing of presentations for acute myocardial infarction but does not influence the overall incidence of this disease.
First of all, the researchers clearly state that daylight savings time (DST) does not influence the overall incidence of AMI. That should have been the end of the story. Secondly, there is a world of difference between DST "accelerating cardiac events" and an observed increase in the number of "presentations for acute myocardial infarction". The researchers did not analyze data on heart attacks--the data they had were admissions of AMI patients undergoing PCI (percutaneous coronary intervention).
Thirdly, anyone looking at the accompanying chart (from the poster) should be asking a lot of questions about the conclusion:
The most plausible conclusion is the one the researchers stated in the poster: there is no effect, and nothing to see here. But you can't get a paper published or attract the press's attention with that conclusion! So you put a magnifying glass on the one blip you see on the Monday after Spring Forward (top chart). That is highly problematic because two weeks of data are not enough to establish whether a jump of that size from Sunday to Monday is abnormal. In fact, if you look at the Sunday-to-Monday jump on the left side of the bottom chart, you'll see that it is similar in scale... the only reason this increase draws no comment is that it does not occur after a DST time shift!
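To see why a single Sunday-to-Monday jump proves little, here is a minimal simulation sketch. The daily mean of 30 admissions and the 25 percent jump threshold are made-up numbers for illustration, not figures from the study; the point is only that under a boring constant-rate model, jumps of this kind happen all the time by chance.

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's algorithm for drawing a Poisson-distributed count (stdlib only)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

daily_mean = 30          # hypothetical average daily AMI admissions
n_weeks = 10_000
big_jumps = 0
for _ in range(n_weeks):
    week = [poisson(daily_mean) for _ in range(7)]   # Sun..Sat
    # count weeks where Monday exceeds Sunday by 25% or more
    if week[1] >= 1.25 * week[0]:
        big_jumps += 1

print(f"{big_jumps / n_weeks:.0%} of simulated weeks show a 25%+ Sun->Mon jump")
```

Even with no DST effect at all, a sizable fraction of simulated weeks show such a jump--which is why you need far more than two weeks of data before calling one blip abnormal.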
Later in this post, we'll examine how research methodology turns a blip into a supposedly important finding that deserves publicity.
Before getting into the methodological issues, one needs to ask the most basic question: did the researchers check the quality of the data, or did they take the data as is? In my experience, data surrounding DST time shifts are often inaccurate. Let me give an example from Web-server systems, which I'm familiar with. Some systems fail to switch time zones; an analyst or a customer notices the error hours, or a day, later, and the administrator corrects the oversight at that point. The administrator does not modify the erroneous past data sitting in logs on remote servers--because it's out of process, because it requires an expensive surgical procedure to isolate the wrong entries, and because, understandably, he/she only cares whether the application understands the current time well enough to serve current and future users. The IT department does not know that in the future, a data analyst will use the log data to investigate the effect of the DST change.
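A hypothetical sketch of this failure mode (the date and times are invented for illustration): a server whose clock misses the spring-forward change keeps stamping events in standard time, one hour behind correctly configured machines, so events near midnight can land on the wrong calendar day.

```python
from datetime import datetime, timedelta

# True event time: 12:30 a.m. local on the Monday after Spring Forward
# (March 10, 2014 is a made-up example date)
true_event = datetime(2014, 3, 10, 0, 30)

# A server stuck on standard time stamps the event one hour earlier...
stale_stamp = true_event - timedelta(hours=1)
print(stale_stamp)   # 2014-03-09 23:30:00 -- the event migrates to Sunday!

# When an admin later fixes the clock, the old log entries are not rewritten,
# so an analyst counting admissions by calendar day inherits the error.
```

An error of this kind shifts counts between exactly the adjacent days being compared in the study, which is why the data quality question matters so much here.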
The fact that this is a "large study" (large for this type of study) makes the problem even bigger. The data come from a registry that "encompasses all non-federal hospitals in the state of Michigan", which means that people from many different organizations handle the data. The more people involved, the more likely there will be mismatches. For example, any of the hospitals could have database servers that fail to switch time zones. These are some of the challenges of using Adapted and Merged data, which I previously outlined in the OCCAM framework for understanding Big Data. Also, if you are in Chicago, come hear my talk on Wednesday at Predictive Analytics World on OCCAM and Numbersense.
Another basic question concerns the practical implication of such a result (assuming we can believe it). Looking back at the top chart shown above, you see that the excess on Monday was balanced by deficits on Tuesday (mostly) and Thursday. Thus, if we believed the result, DST merely accelerates the heart attack by one to three days. In other words, you will still have a heart attack--just a few days earlier.
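The accounting behind this displacement reading can be sketched with toy numbers (hypothetical, not the study's): an excess on Monday offset by deficits on Tuesday and Thursday leaves the weekly total--and hence the overall incidence--unchanged.

```python
# Sun..Sat expected admissions under a flat baseline (made-up numbers)
baseline = [30, 30, 30, 30, 30, 30, 30]
# Observed week: excess on Monday, deficits on Tuesday and Thursday
observed = [30, 36, 26, 30, 28, 30, 30]

daily_diff = [o - b for o, b in zip(observed, baseline)]
weekly_excess = sum(daily_diff)
print(daily_diff)      # [0, 6, -4, 0, -2, 0, 0]
print(weekly_excess)   # 0 -- no change in overall incidence, only in timing
```

Whenever the weekly excess nets to zero, the only story left is one of timing, not of extra heart attacks.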
Wait a minute, you might challenge me: didn't I just assume that the excess people who suffered heart attacks on that Monday would otherwise have had heart attacks during the same week? That would be a great question! That's numbersense! It is a reasonable assumption because if you don't make it, you must make a different one: you now have to explain how DST simultaneously induces some heart attacks (to explain the excess on Monday) and prevents others (to explain the lack of an overall effect).
Moreover, this finding is purely based on a correlation in observational data. No one is arguing that the act of having a DST policy induces heart attacks. However, because we don't have a causal factor to work with, doctors have no action to take as a result of this analysis.
As this article is getting long, I will leave my comments on the methodology to the next post.