I haven't read Kartik's book, but it looks like something I'd enjoy.
In this interview with The Verge, he cited the following experiment:
One problem with grading in college courses is that different TAs are more or less lenient. So they used an algorithm to normalize or modify the grade so that the level of leniency was consistent. Then, one group received minimal information about how the algorithm worked, a second group got some high-level data, and the third got all of the information on how the algorithm worked and the raw data and all the changes made. The result was that the level of trust in the third group was back down to the same level as the group that didn’t receive any information. So it goes to show that if you reveal that much information, it’s as if you reveal nothing.
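The interview doesn't describe the algorithm itself. One common way to equalize leniency is to rescale each TA's grades to a shared mean and spread; here is a minimal sketch in Python (the per-TA z-score method and the function below are my illustration, not necessarily what the study used):

```python
from statistics import mean, stdev

def normalize_grades(grades_by_ta, target_mean=75.0, target_sd=10.0):
    """Rescale each TA's grades to a common mean and spread.

    grades_by_ta: dict mapping TA name -> list of raw grades.
    Returns a dict with the same keys and adjusted grades.
    A lenient TA's inflated grades get pulled down; a harsh
    TA's grades get pulled up, so leniency washes out.
    """
    adjusted = {}
    for ta, grades in grades_by_ta.items():
        m, s = mean(grades), stdev(grades)
        # z-score each grade within its TA, then map onto the target scale
        adjusted[ta] = [target_mean + target_sd * (g - m) / s for g in grades]
    return adjusted

# Example: TA "A" grades leniently, TA "B" harshly; after
# normalization, both sets share the same mean and spread.
raw = {"A": [92, 95, 88, 90], "B": [70, 65, 72, 68]}
print(normalize_grades(raw))
```

Note that any such adjustment assumes each TA's students are comparable in ability; if one section genuinely has stronger students, the correction would penalize them.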
The interview is about transparency when it comes to algorithms. Kartik's recommendation is to provide "just enough" information but not "too much": the group that received full details about the algorithm trusted it less than the group that received a high-level explanation, and about as much as the group that was kept in the dark.
Kartik cites this experiment in the context of explaining to end-users how algorithms work.
The above commentary is equally valid when explaining algorithms to decision-makers, such as senior managers at an organization. Most people experienced at this quickly realize the futility of explaining the minute details of an algorithm. Decision-makers are satisfied if they gain an intuitive understanding of how the machine functions, and if they observe that the outputs "make sense."
***
One important issue that isn't addressed in the interview is gaming: more transparency means more fodder for those who want to game the system. I discuss this in the chapter about credit scoring in Numbers Rule Your World (link): we can now correct errors in our credit histories, but this will clearly introduce a bias, because people will dispute incorrect data that hurts their scores while leaving incorrect data that helps them.
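To see why one-sided corrections create bias, consider a toy simulation (the numbers are entirely made up): start with true scores, add symmetric recording errors, then let people dispute only the errors that hurt them.

```python
import random

random.seed(1)

# Toy simulation of one-sided error correction (illustrative only).
# True creditworthiness scores, plus symmetric recording errors.
true_scores = [random.gauss(650, 50) for _ in range(10_000)]
errors = [random.gauss(0, 20) for _ in true_scores]
recorded = [t + e for t, e in zip(true_scores, errors)]

# People dispute only errors that lowered their score (e < 0),
# so negative errors get corrected while positive errors remain.
corrected = [t if e < 0 else t + e for t, e in zip(true_scores, errors)]

avg = lambda xs: sum(xs) / len(xs)
print(f"true mean:      {avg(true_scores):.1f}")   # unbiased baseline
print(f"recorded mean:  {avg(recorded):.1f}")      # symmetric errors cancel out
print(f"corrected mean: {avg(corrected):.1f}")     # shifted upward
```

Even though the original errors average out to zero, correcting only the unfavorable ones shifts the recorded scores upward (by roughly eight points in this setup).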
[PS. After I wrote this, the controversy around the Boeing 737 MAX emerged as a counterpoint. The current theory about what caused the crashes (at least the Lion Air one) is that automation software misdiagnosed the problem. Apparently, Boeing decided not to inundate pilots with too much information and did not disclose the software's existence prior to the Lion Air crash.]