


Thomas Colthurst

I'm not sure why you say that Cowen and Sumner don't include any alternative models. Sumner states that level targeting "should be the standard model" in his post, and Cowen explicitly lists four alternatives here: http://marginalrevolution.com/marginalrevolution/2011/10/is-lm-keynesianism-why-not-and-which-alternatives.html

I also think that requiring the alternative model to be *provably better* than the critiqued model is an impossibly high standard outside of pure mathematics. It is enough that the alternative be simply *more useful*. Sometimes this means preserving the useful parts of what is being criticized; sometimes it doesn't. The canonical example of the latter is Copernican astronomy, which initially gave *less* accurate predictions than the Ptolemaic model.

Finally, if there is anyone who does not suffer from the belief that models can be proved wrong and discarded, it is Tyler Cowen. The man is methodologically pluralistic to a fault; his prose often reads like someone translating Bayesian model averaging into English. His critiques of IS-LM are best understood (well, by statisticians, at least :) ) as demonstrating a low value of P(model | evidence), particularly in the context of understanding contemporary problems.
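[Editor's note: a minimal sketch of the "Bayesian model averaging" reading mentioned above, with made-up toy numbers purely for illustration. Computing P(model | evidence) over a discrete set of candidate models is just Bayes' rule: weight each model's likelihood of the evidence by its prior, then normalize.]

```python
def posterior_model_probs(likelihoods, priors):
    """Bayes' rule over a discrete set of competing models.

    likelihoods: P(evidence | model) for each model
    priors:      P(model) for each model
    Returns P(model | evidence), normalized to sum to 1.
    """
    joint = [l * p for l, p in zip(likelihoods, priors)]
    z = sum(joint)  # P(evidence), the normalizing constant
    return [j / z for j in joint]

# Toy numbers (assumed, not from the thread): two models with equal
# prior weight, where model A explains the evidence better than model B.
likelihoods = [0.30, 0.05]   # P(evidence | model A), P(evidence | model B)
priors = [0.50, 0.50]        # equal prior weight on each model
print(posterior_model_probs(likelihoods, priors))
```

On this reading, a critique like Cowen's amounts to arguing that the evidence term for IS-LM is small, which drags its posterior weight down without requiring that any single rival model be crowned in its place.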


Thomas: thanks for the comment. I did read the post on alternatives, which came after my post. However, I don't see a workable alternative there. As some commenters on Tyler's blog point out, it's a hotchpotch of ideas that do not add up to a model. At least that is what it sounds like to me as an outsider (not an economist).
How do you show the alternative is "more useful" when you can't show it's "provably better"? I'm not sure I understand the difference.
The trouble I have is that whatever alternative Tyler is suggesting, a different researcher can go and list the assumptions that don't make sense, the features that are omitted, the predictions that fail, etc. That's because every model suffers from these types of issues.


Suppose a model, if applied, will kill a kitten you're fond of.
Suppose you can raise valid and persuasive criticisms of this model but can't construct an alternative to it.
Why exactly is it the case that said criticisms can't be taken seriously?

A pretense of knowledge can do real harm. It's better to be ignorant than to have faulty knowledge.
