This is an article that should have been written a long time ago. It's great that Mark Pothier (Boston Globe) finally wrote it. (Another article that begs to be written: how much productivity Microsoft has destroyed with the new Office. I don't need to see another review of the iPad; how about a proper review of the new Office?)
The bottom line: most online security measures are time-wasters with no proven benefits, the equivalent of snake oil. So don't ask us to make up yet another password.
Here are some passages that we can approach with a statistical mindset:
While doctors can cite statistics showing smoking causes cancer, and road-safety engineers can produce miles of numbers supporting seat belt use, computer security professionals lack such compelling evidence to give their advice clout.
Great point about having data to support the decisions to roll out irksome security measures.
However, Mark totally underestimated the immense intellectual gyrations statisticians went through to prove that smoking causes cancer. I mentioned this briefly in Chapter 2 of Numbers Rule Your World -- even a giant of the field, Fisher, disputed the causal link, claiming that one day a gene would be found that caused us both to smoke and to get cancer! Much of Chapter 2 concerns the tremendous resources devoted to finding the source of one E. coli outbreak. None of this is simple stuff.
The connection between seat belt use and road safety is even more tenuous to establish. This is a practical situation where statisticians clearly cannot run a randomized experiment (which would force a randomly selected group of drivers to not wear seat belts), and in such cases, cause-and-effect relationships are notoriously hard to pin down.
Security professionals need to consider that user education costs everyone (in time), but benefits only the small percentage who are actually victimized, [Cormac Herley, a principal researcher at Microsoft] wrote.
This sentence is too simplistic. Yes, only a small percentage will be victimized. The key to understanding this situation is to realize that who gets victimized is not known beforehand! This is a classic case of "decision making under uncertainty". We have to take action before we know who among us may become victims.
This issue is analogous to the insurance problem discussed in Chapter 3. All insurance schemes -- when all is said and done -- result in subsidies, transfers of wealth from one group to another. However, many of us happily participate in these subsidy schemes. Why? It's because at the time we pay the premiums, we cannot predict whether we would be stricken by a horrific illness or a freak accident. In that sense, everyone has equal opportunity to claim the rainy-day funds. (I'm simplifying a bit here. There are moral hazard and adverse selection problems to deal with.)
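The subsidy argument can be made concrete with a tiny simulation. This is a minimal sketch with made-up numbers (the premium, claim probability, and payout are all hypothetical, chosen only for illustration):

```python
import random

random.seed(7)

N = 10_000        # policyholders in the pool (hypothetical)
PREMIUM = 1_000   # annual premium each pays (hypothetical)
P_CLAIM = 0.01    # chance any one person suffers the insured event
PAYOUT = 90_000   # payout to each claimant (hypothetical)

claims = [random.random() < P_CLAIM for _ in range(N)]
claimants = sum(claims)

# Ex post, the pool is a transfer of wealth: most pay in and collect
# nothing, while a few receive far more than they paid.
print(f"{claimants} of {N} policyholders collect; "
      f"the other {N - claimants} subsidize them")

# Ex ante, though, everyone faced the same expected payout, which is
# why we happily sign up before knowing who the unlucky ones will be.
expected_payout = P_CLAIM * PAYOUT
print(f"expected payout per person: ${expected_payout:.0f} "
      f"vs premium ${PREMIUM}")
```

Note that the expected payout ($900 here) sits below the premium ($1,000); the gap covers the insurer's costs, which is consistent with the point that the scheme is a subsidy machine rather than a fair bet for any individual.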
Banks and other investment companies often guarantee to reimburse customers if unauthorized withdrawals are made from their online accounts, so the customer does not pay a direct price. The banks face losses, but they are relatively modest -- the annual cost nationwide as a result of phishing attacks is $60 million, Herley estimated.
Herley thinks banks are willing to foot the bill because the total cost to the industry of phishing attacks is small on average. That is one reason, but something else, something statistical, is also happening here: the law of large numbers (see also here). It is very difficult to predict how much loss will be suffered by any individual customer; however, the total loss inflicted on all bank customers is very predictable because of the large sample size.
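A quick simulation shows the law of large numbers at work. The victimization rate and loss amounts below are hypothetical placeholders, not Herley's figures; the point is only the contrast between individual unpredictability and aggregate stability:

```python
import random

random.seed(42)

P_VICTIM = 0.0005        # hypothetical chance a customer is phished in a year
MEAN_LOSS = 800          # hypothetical average loss per victim, in dollars
N_CUSTOMERS = 1_000_000  # a large bank's customer base (illustrative)

def simulate_total_loss(n):
    """Total phishing loss across n customers for one simulated year."""
    total = 0.0
    for _ in range(n):
        if random.random() < P_VICTIM:
            # individual losses vary widely around the mean
            total += random.expovariate(1 / MEAN_LOSS)
    return total

# Any one customer's loss is almost always $0 and occasionally hundreds
# of dollars: unpredictable. But the average loss per customer across a
# large base barely moves from year to year.
for year in range(3):
    total = simulate_total_loss(N_CUSTOMERS)
    print(f"year {year}: avg loss per customer = ${total / N_CUSTOMERS:.2f}")
```

Each simulated year lands very close to the theoretical average of $0.40 per customer (0.0005 × $800), which is exactly why a bank can budget for phishing losses the way an insurer budgets for claims.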