Tom Davenport is one of the leading voices on business analytics. He has a new piece titled "Why are most 'targeted' marketing offers so bad?" in which he expands on a question I raised in my HBR article. Tom's book Competing on Analytics is a classic, and he has a great appreciation for the business side of the data business.
In the new feature, Davenport classifies the marketing offers he gets into five types:

- retargeted offers
- well-meaning but poorly targeted offers
- offers that benefit the offerer rather than the potential consumer
- offers that are OK except for the context
- well-targeted offers that benefit you

He certainly speaks some truths.
On retargeted offers, he reminds marketers "for the most part, if we abandon a search or purchase, we intended to do so."
On well-meaning but poorly targeted offers (like sending men offers for women's clothing), he suspects that the retailers didn't try hard enough to mine their data.
***
I think some technical deficiencies are partly responsible for these issues.
Firstly, human behavior and preferences cannot and never will be reduced to a set of equations. Thus, every targeting algorithm has to balance false positives against false negatives. I have written about this a lot; start with Chapter 4 of Numbers Rule Your World or the Groupon and Target chapters in Numbersense.
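To make the tradeoff concrete, here is a minimal sketch with invented scores and labels (nothing here comes from a real targeting system): any model that scores customers forces a threshold choice, and sliding that threshold only exchanges one kind of error for the other.

```python
# Toy illustration of the false-positive / false-negative tradeoff.
# All scores and labels below are made up for demonstration.

def confusion_counts(scores, labels, threshold):
    """Count errors at a given cutoff: target everyone scoring >= threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Hypothetical model scores for ten people; label 1 = actually interested.
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.35, 0.3, 0.2, 0.1]
labels = [1,   1,   0,    1,   0,    1,   0,    0,   0,   0]

for t in (0.3, 0.5, 0.7):
    fp, fn = confusion_counts(scores, labels, t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

Lowering the threshold spams uninterested people (more false positives); raising it misses genuine prospects (more false negatives). No threshold makes both errors vanish, which is the point.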
Secondly, the existence of "retargeting" as a business is entirely due to a perversion of measurement, which I address in Chapters 1-2 of Numbersense. I also wrote about how online marketing is measured here. Briefly: the more you flood customers with impressions, the more likely one of your impressions falls close in time to a purchase event, and the more credit you get for "influence".
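A hedged sketch of this measurement problem, using a simple "last touch" crediting rule (a common attribution convention, and an assumption on my part that this is the rule in play): whichever impression lands closest in time before the purchase gets full credit, so a channel that sprays impressions all day wins by volume alone.

```python
# Illustrative last-touch attribution; all channels and timestamps invented.

def last_touch_credit(impressions, purchase_time):
    """impressions: list of (channel, time) pairs.
    Credit the channel whose impression is latest before the purchase."""
    prior = [(ch, t) for ch, t in impressions if t <= purchase_time]
    if not prior:
        return None
    return max(prior, key=lambda x: x[1])[0]

# One early, thoughtful ad vs. a retargeter flooding impressions afterward.
impressions = [("thoughtful_ad", 1)] + [("retargeter", t) for t in range(2, 10)]
print(last_touch_credit(impressions, purchase_time=10))
```

Under this rule the retargeter gets all the credit for "influence", regardless of whether any impression changed the customer's mind.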
Thirdly, the data are noisy, and few are investing any time in getting rid of bad data. Just think about it for a second. Say you are a guy. If your son lets his classmate use your iPad to buy something from a girls' clothing site just once, you are forever tagged as a girls' clothing buyer.
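Even a crude denoising rule would catch this. The sketch below (my own assumed rule, not anyone's production system) treats a category as a genuine interest only if it recurs; a single one-off event is discarded as probable noise.

```python
# Assumed noise filter: require a category to appear at least twice
# before adding it to the customer's interest profile. Log is invented.
from collections import Counter

def interest_profile(events, min_count=2):
    """events: list of purchase-category strings from one account's log."""
    counts = Counter(events)
    return {cat for cat, n in counts.items() if n >= min_count}

log = ["tech_books", "tech_books", "girls_clothing", "tech_books"]
print(interest_profile(log))  # the one-off girls_clothing tag is dropped
```

The threshold is a judgment call, and it deliberately accepts some uncertainty: a real but brand-new interest is missed until it recurs, in exchange for not being forever mis-tagged by a single stray event.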
The Big Data mindset's answer to this problem is to get even creepier: they want "all of your data". But if everything is tracked by hundreds or thousands of different entities, that doesn't work either, so the Big Data end game is a single all-knowing monopolist holding all of your data.
But this path is entirely a dead end. Here's something to ponder: today, the fact that you visited a particular website is equated with an expression of interest in that website. The data measure what you do, not why you do it.
The solution is humility: accept a level of uncertainty, enhance observed data with more direct, even qualitative, data, and remove noise, which is a way of managing the uncertainty.
I find that Amazon's recommendations are based very heavily on my recent purchases, even though I would still be interested in the type of material I bought several years ago. I assume, however, that Amazon finds its strategy to be the best overall, as they wouldn't miss something like this. Still, since I never buy anything from the recommendations, they could try another strategy for me; it doesn't seem difficult to implement.
Posted by: Ken | 08/14/2014 at 04:28 AM
Ken: It's the mentality that is hard to change. It's difficult to capture in a short post, but one of the tenets of the field is that removing noise is bad or a waste of time. There is an obsession with collecting "all of your data". In addition, the challenge is figuring out which pieces of data are noise. If you are just staring at web logs, it is very hard to know that it was my son's friend who browsed to that girls' clothing site!
Posted by: junkcharts | 08/14/2014 at 10:02 AM