I have been traveling quite a bit lately. Last week I went to Rome for a few days and spent time at the KDIR conference. Rome is one of my favorite destinations: apart from the architecture, the museums, and the restaurants, I also enjoy shopping there.
To my dismay, a gray cloud followed me around this entire trip - in the form of a misfiring fraud detection algorithm. On foreign trips, I always prefer to pay in cash, withdrawn with my ATM card, to avoid those ridiculous credit card surcharges. And I have used my Schwab card without any issues on many trips.
This time, however, my card was blocked after one withdrawal, which meant I had to call back to the States to unblock it. Then, the next day, it happened again. And this continued every day until the last day of the short four-day trip.
Needless to say, I was getting more and more irate as the trip progressed. It makes me wonder about fraud detection algorithms. Are they any good? If they are tuned to be very risk-averse, you can always prove to your boss that you have prevented a lot of fraud. The flip side is that you will also have caused a lot of hassle for your good customers. In technical terms, each time my card was blocked, the algorithm committed a false-positive error.
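To see why risk-averse tuning produces so much hassle, here is a back-of-envelope Bayes calculation. All the numbers are hypothetical, chosen only for illustration, not Schwab's actual rates: when fraud is rare, even a detector that is right most of the time ends up blocking mostly legitimate customers.

```python
# Illustrative only: made-up numbers, not any bank's actual rates.
fraud_rate = 0.001           # assume 0.1% of transactions are fraudulent
sensitivity = 0.95           # P(flag | fraud): true-positive rate
false_positive_rate = 0.05   # P(flag | legitimate transaction)

# Bayes' rule: P(fraud | flagged)
p_flag = sensitivity * fraud_rate + false_positive_rate * (1 - fraud_rate)
p_fraud_given_flag = sensitivity * fraud_rate / p_flag

print(f"Share of flagged transactions that are actually fraud: "
      f"{p_fraud_given_flag:.1%}")
```

With these assumed numbers, under 2% of blocked transactions are real fraud; the other 98% are good customers standing at ATMs, wondering what happened.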
The service reps didn't have a clue how these algorithms work. On the first day, I was told that it blocked me because I didn't give them warning that I'd be traveling. That made sense until it blocked me again the next day... after I had told the rep the exact days I'd be in Rome. The next rep explained that I was getting blocked because Rome is a high-fraud zone and I was using certain ATMs. That sounds reasonable, except if those were the reasons, then I might as well throw the card away. The experience got me thinking about the challenges of making a good fraud detection algorithm.
Clearly, when I am traveling, my habits don't match what is in my customer history. I'm going to be engaging in a series of transactions that might look suspicious - like taking more cash out than usual, taking cash from places and machines that I have never used, taking cash out multiple times a day (because there is a per-transaction limit on most ATMs), taking cash out from machines all over town, etc. How can a computer figure out if those transactions are legitimate?
When the algorithm gets it seriously wrong, it can be very annoying. On one of those days, I had put money down on a suit, an hour before the store's closing time. Because the problem could not be resolved in time, I had to go back the next day, which meant cutting other things out of the itinerary. If it had happened on the last day of the trip, it would have been a lot of trouble. I racked up probably $100 of international roaming charges for all the calls I had to make to unblock the card repeatedly. There were several moments when I had to stand on the street, phone in one hand, the other hand operating the ATM, testing the machine, pulling out cash, etc. Those moments felt very ironic, because the blocking of my card was supposed to make me feel secure.
As a statistician, I want to know the probability of falling victim to the kind of fraud Schwab's algorithm is trying to prevent, and the average cost of such fraud (bearing in mind that you can only take 250 euros per transaction). I suspect that the cost of the inconvenience, both tangible and intangible, may outweigh the potential benefit.
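As a rough sketch of the comparison I have in mind (the 250-euro per-transaction cap is from above; every other number is an assumption I made up, not Schwab's data):

```python
# Back-of-envelope comparison; only the 250-euro cap comes from the post,
# all other numbers are made-up assumptions for illustration.
p_compromise = 0.0005       # assumed chance the card is compromised on a trip
withdrawals_by_thief = 4    # assumed withdrawals before the fraud is caught
cap_eur = 250               # per-transaction ATM limit

expected_fraud_loss_eur = p_compromise * withdrawals_by_thief * cap_eur

roaming_cost_usd = 100      # tangible cost of the false positives this trip

print(f"Expected fraud loss per trip: about {expected_fraud_loss_eur:.2f} euros")
print(f"Tangible false-positive cost this trip: about {roaming_cost_usd} dollars")
```

With these assumed numbers, the expected loss prevented is well under one euro per trip, far below the tangible cost of the false positives alone; the real comparison would of course need the bank's actual fraud rates.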
I'd be interested to know whether a relatively smaller bank (in terms of customers at least) like Schwab has more difficulties with this. Presumably, your ROC curve gets better with more data (or better data analysis anyway) so the larger banks like BoA will be able to do a better job predicting fraud.
It's also worth considering that the probability of these types of fraud may be low precisely because of these kinds of protections. In a world where we accepted more false negatives, there would be an increased incentive to try and defraud these systems.
Also, banks may be in a race to the bottom (or top, depending on how you view it) not to be perceived by fraudsters as the easiest target. BoA doesn't want word to get around that it has the loosest fraud rules, or we get the same incentive effect as above.
I guess what I'm saying is that it's at least a possibility that this level of false positives is an equilibrium that you can't easily go below (because true positives increase in response).
Posted by: Jon Mellon | 10/30/2014 at 08:17 AM
I've wondered about these algorithms in another context: the push by my local supermarket chain toward "scan guns" that you, as a customer, wield as you fill your cart. I loved being able to walk up to a register, point the scanner at a barcode, and have it download everything I've bought to the register so I can pay and leave very quickly. Problem: it asked for a bag check. OK, they need to do some form of checking: the light flashes and a cashier has to come over, unload every single item, and scan each one. Next time, it asked for a bag check. I said something to a manager. Next time, it asked for a bag check. The manager gave me a new number (though of course it was still my customer information). Next time, it asked for a bag check. I called corporate. They had no ideas at all. Not kidding: they couldn't figure out what was happening - whether I was just on some weird run of luck or whether, as I suspected, some data field had been corrupted and defaulted me automatically to a bag check. It was simply beyond them to figure it out.
I would suspect the rejection flag was never reset in your case. That may require more intervention, such as a work ticket, than a phone call can easily generate or cause to happen overnight.
Thinking about it, years ago I had a problem with charges suddenly being denied by AMEX. No one could figure out why. I ended up talking to the executive offices and was told that the problem - this is kind of funny in retrospect - originated in how their balance-carrying credit cards worked with their old-fashioned pay-it-all-off cards. That is, if you took out one of their actual new credit cards, they hadn't correctly connected the approval mechanism for that card to the one used for the charges you made on your regular AMEX card. They were getting reports of charges being denied for customers like me, and the only solution they had for now was to delete the credit card account. Not kidding. Stuff like this happens. Builds break. Your build may work fine for your department but break what another department uses.
Posted by: jonathan | 10/30/2014 at 11:45 AM
These days cash withdrawals are less common and therefore probably considered more suspicious by fraud detection software. Given the roaming fees you incurred, it seems you may have been better off paying the credit card transaction surcharges.
Posted by: Josh | 10/30/2014 at 03:28 PM
Josh: That was exactly what irked me. Schwab has now offered to pay for the roaming charges so at least the monetary loss is taken care of. Cash is definitely still an important thing especially in a foreign country.
Jon: I don't think the generic concept of "big data" is useful at all. What kinds of data do big banks have more of? These banks need more "cases"; adding more non-fraudulent transactions doesn't improve the algorithms. One could argue that small banks may be targeted more by fraudsters, so they may even have better data. The other issue is the adversary. Historical data are not as useful as they seem because the adversary is constantly adapting and actively gaming the system.
Posted by: junkcharts | 10/31/2014 at 03:23 AM
Couldn't you use a credit card at the suit place? The $100 you paid in roaming is more than the credit card charge on FX conversion.
You were behaving according to your own hard-coded algorithm of not using a credit card.
Posted by: Nirav | 11/29/2014 at 03:04 AM