The third section of Chapter 2 of SuperFreakonomics relays the apparent success of a British bank analyst in predicting suspected terrorists using bank data. My overall reaction to this section can be read on Eric McNulty's blog here.
Eric is the editorial director of the International Institute for Analytics, which aims to connect practitioners and researchers in the field of business analytics. Tom Davenport, who has done some great work documenting the burgeoning field of business analytics, is its research lead. (See book 1 and book 2.)
I used Levitt and Dubner's example to illustrate the "secrets" of predictive modeling, which is widely used in businesses to perform tasks ranging from credit scoring to targeting marketing offers.
***
A few highlights, in the order they appear in the piece:
- Even if all suspected terrorists are Muslims, it does not follow that all Muslims are suspects.
- Predictive models identify correlations between features and the target behavior; they do not make a cause-and-effect assertion.
- Predictive algorithms have batting averages, and these averages reflect in part their tendency to swing at pitches.
- The apparently successful model has a hit rate of 0.167 among those identified as suspected terrorists while missing 99% of potential terrorists.
- Models do not replace sound judgment.
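The batting-average arithmetic above can be sketched with hypothetical counts, chosen only so the rates match the ones quoted (a 0.167 hit rate among the flagged, with 99% of potential terrorists missed):

```python
# Illustrative counts, not the actual figures from the bank's data.
flagged = 30             # customers the model flags as suspects
true_among_flagged = 5   # flagged customers who really are suspects
actual_suspects = 500    # all true suspects hidden in the customer base

precision = true_among_flagged / flagged       # "batting average" when it swings
recall = true_among_flagged / actual_suspects  # share of true suspects it catches

print(f"precision = {precision:.3f}")  # 0.167
print(f"recall    = {recall:.3f}")     # 0.010 -> 99% of suspects are missed
```

The two numbers can move independently: a model can look impressive on one while being nearly useless on the other, which is why a single "accuracy" figure is so misleading.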
***
To keep to a respectable length, I had to cut out several sections from the original draft. They appear below:
Gaming the algorithms
"Why should suicide bombers buy life insurance?" asked Levitt and Dubner in the chapter's title. Horsley discovered that people who own life insurance policies are less likely to be suspected terrorists than the average bank customer, all else being equal. Noting that life insurers do not cover claims in case of suicide, the authors speculated that future suicide bombers could proactively buy life insurance to confuse Horsley's predictive model.
Indeed, if they took such an action, the model would (mis-)classify these crafty criminals as regular customers. This is why the details of predictive algorithms should be treated as trade secrets.
Consumer advocates misfire when they exhort banks to open up the "black boxes" of credit scoring. When it became known that authorized-user status was a positive indicator of creditworthiness, eBay-style marketplaces sprang up in which people with high credit scores sold their desirable status to the highest bidders, thus distorting the entire system. Eventually, the modelers dropped this feature from the credit scoring algorithms.
Variable X
Levitt and Dubner refrained from revealing the best indicator of suspected terrorists "in the interest of national security". This "Variable X" has predictably led to rampant speculation. There are two reasons to stay calm while the commotion ebbs and flows.
As explained, Horsley's model is far from "accurate" if accuracy includes identifying most, if not all, of the suspected terrorists.
Moreover, a key feature of "X" is its specificity: as Levitt and Dubner pointed out, very few bank customers exhibit this behavior, few enough to narrow the suspicious list to 30 names out of 50 million. It is highly probable that any correlation between "X" and being a suspected terrorist is "spurious", to use the statistical jargon. All predictive models attempt to make generalizations from history, and any algorithm that targets an extremely specific trait risks not being general enough. When this happens, the model will be ineffective in predicting the future, even if it performs well on back-testing.
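This risk can be illustrated with a toy simulation (entirely made-up data; nothing here reflects Horsley's actual variables): when there are only a handful of positive cases, some unrelated trait will almost always look perfectly predictive in back-testing, yet fail on fresh data.

```python
import random

random.seed(0)
N_FEATURES = 200

def customer():
    # every trait is an independent coin flip, genuinely unrelated to the target
    return [random.random() < 0.5 for _ in range(N_FEATURES)]

# only a handful of known positives, as in the terrorist-detection setting
train_suspects = [customer() for _ in range(4)]

# a "Variable X" candidate: any trait that all training positives happen to share
spurious = [j for j in range(N_FEATURES)
            if all(s[j] for s in train_suspects)]
print(len(spurious), "traits look perfectly predictive in back-testing")

# but on fresh positives, each such trait holds only about half the time
fresh_suspects = [customer() for _ in range(1000)]
for j in spurious[:3]:
    rate = sum(s[j] for s in fresh_suspects) / len(fresh_suspects)
    print(f"trait {j}: holds for {rate:.0%} of new positives")
```

With 200 coin-flip traits and only 4 positives, roughly a dozen traits will match all the positives by pure chance; out of sample, their apparent predictive power evaporates.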
Back to basics
In the discussion thus far, I entertained the conceit that Horsley was attempting to predict "suspected terrorists". In building his model, he started with a list of suspects, rather than proven terrorists. This decision stemmed from the dearth of known terrorists, which, as I noted before, is what makes this problem hard. Levitt and Dubner justified this choice: "Granted, none of these men were proven terrorists; most of them would never be convicted of anything. But if they resembled a terrorist closely enough to get arrested, perhaps their banking habits could be mined." (Later, they went from talking about suspected terrorists to suicide bombers; surely not all terrorists blow themselves up.)
The analysis of batting averages clearly shows that almost all terrorist suspects are probably innocent. Thus, an algorithm tailored to look for suspected terrorists will yield suspects who are mostly innocent people falsely accused of extremely serious crimes. Horsley effectively modified the objective of his predictive model when he switched from predicting terrorists to predicting suspects.
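The base-rate effect behind this can be made concrete with hypothetical screening numbers (purely illustrative, not Horsley's figures): even an implausibly accurate screen flags overwhelmingly innocent people when true positives are rare.

```python
# Hypothetical screening numbers, chosen for illustration only.
population = 50_000_000
actual = 500                 # true targets hidden in the population
sensitivity = 0.99           # share of true targets the screen flags
false_positive_rate = 0.01   # share of innocents the screen flags

flagged_true = sensitivity * actual                         # 495
flagged_false = false_positive_rate * (population - actual) # ~500,000
share_innocent = flagged_false / (flagged_true + flagged_false)

print(f"{share_innocent:.1%} of flagged customers are innocent")  # 99.9%
```

Because innocents outnumber true targets 100,000 to 1, even a 1% false-positive rate swamps the handful of genuine hits.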
At the earliest stage of the development of predictive models, you must be crystal clear in defining the business problem and setting the relevant target(s) of prediction. Confusion on this most basic issue will almost certainly result in disappointing models that solve the wrong problem.