I am excited to chat with Professor David Spiegelhalter, who is no stranger to our UK audience or to our statistics colleagues. Perhaps his best-known contribution is the DIC criterion for model selection, introduced in a paper by him and his collaborators. He holds the impressive title of Winton Professor for the Public Understanding of Risk at the University of Cambridge (link). He also writes a blog called Understanding Uncertainty (link), and as the accompanying photos show, he is someone who knows how to enjoy life.
I mean, a statistician who appeared on Winter Wipeout (link to YouTube, with spectacular splashes at 15:12 and 16:10) - who'd have thought? Yes, Wipeout is that obstacle-course show held over a pool of water. He also made this rather more educational YouTube video (link).
KF: How did you pick up your impressive statistical reasoning skills?
Well, that's your label and not mine. I started off doing pure maths, but that got too hard, so I moved to mathematical statistics, but that got both too hard and too boring, and so for years now I have preferred getting involved in real problems that people are trying to handle using data.
But generally the data are messy, incomplete, and not as relevant as desired. While some technical insights are vital, I think any skill comes mainly from an apprenticeship of dealing with many problems, making many mistakes, trying to explain things, and far too much time spent critiquing studies.
KF: What is your pet peeve with published data interpretations?
That's easy to identify - it's a non-scientific approach to science reporting. I have a naive view that scientists should do investigations to answer a question, and they should be pleased whatever the answer. But it seems clear from many publications that some researchers set out to prove a point, and do everything they can to do so: in the worst instances they write an inflated abstract, the journal puts out a press release, and the media lap it up.
I feel the public get fed a diet of highly selected and biased studies (often, ironically, on diet) that have gone through so many filters that they become very unrepresentative of the bulk of research conducted. In my more cynical old-man moments, I would say that the very fact that a study is reported in the media is a reason to ignore it - almost certainly you would not have heard about it if the results had been different.
KF: That last point sounds counterintuitive. Let's take a diet example. The media has been telling us new research suggests that four or more cups of coffee each day is great for you. If the research result were null, surely it wouldn't get picked up by the media. Why would that be a bad thing?
Say the media tells us that four or more cups of coffee every day is great for you, and I judge that, if the study had shown no effect of coffee, it would not have been press-released and the media would not have picked it up. This probably means there is an unknown number of studies out there that showed the opposite of what I am being told by the media, but I am not hearing about them because they are not newsworthy enough. Therefore ignore the media. It also saves a lot of time.
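The selection effect David describes can be made concrete with a toy simulation (my sketch, not from the interview): suppose many studies measure an effect that is truly zero, but only the studies with striking positive results get press coverage. The filtered results are then systematically biased, even though the full body of research is not.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.0   # the real effect size is zero
N_STUDIES = 1000    # hypothetical number of studies conducted

# Each study's estimate is the true effect plus sampling noise.
studies = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_STUDIES)]

# The media filter: only strikingly positive results (z > 1.96,
# i.e. "significant" in the usual sense) get press-released.
reported = [est for est in studies if est > 1.96]

# The full literature averages near the truth (zero), but the
# reported subset is biased well above it.
print(f"mean of all studies:      {statistics.mean(studies):+.3f}")
print(f"mean of reported studies: {statistics.mean(reported):+.3f}")
print(f"fraction reported:        {len(reported) / N_STUDIES:.1%}")
```

Only a small fraction of studies survive the filter, and their average is pushed far from zero - exactly why hearing about a study through the media is, by itself, weak evidence.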
KF: That's rather sobering. Which source(s) do you turn to for reliable data analysis?
These would tend to be individuals and teams that I know and trust: Andrew Gelman (link to my interview) comes to mind, and there are other great scientists whose opinions I value. I also respect people who are trying to produce good odds for future events, without pushing for one side or another. A purely financial interest produces objectivity, and so sports-betting sites are good examples - it will be interesting to see how 538 develops.
KF: Thank you very much for sharing your insights.
David and Michael Blastland have just published a new book called "The Norm Chronicles", which I had a chance to preview. It's an idiosyncratic look at the risks of modern living.