Finally, we have a serious report on the potential harms of AI that is recommended reading (link, paywall).
***
Here are five things about AI harms that I think everyone should know.
1. The press is typically focused on the wrong harm
Perhaps misdirected by industry heavyweights, the press frequently waxes apocalyptic about the potential for AI to "kill us all." But it is mundane harms, such as being robbed of $1,000 by an anonymous person across the ocean, that most of us are likely to suffer.
2. Don't forget the term "deepfake"
A few years ago, the press warned us about the harms of "deepfakes." These days, not so much. In fact, one has to dig pretty deep into the Financial Times article to find a mention of "deepfake." At least they used the term, which is a pleasant surprise. What the industry now calls "generative AI" is the same thing that was once labeled a "deepfake."
The name change shifts our attention from the harmful use cases to the beneficial ones. But a name change does not reduce the potential for AI abuse. Far from being ameliorated, the potential for harm is scaling up as quickly as the underlying technology is developing.
3. It's two sides of the same coin
It's wonderful to imagine a world in which we can deploy AI that gives us benefits without also inducing harms. But that's a mythical world that won't exist.
Take the simple example of 1-click shopping, which requires our credit card details to be stored. For sure, this makes buying much more convenient for online shoppers, and it delights retailers because it reduces the friction of buying. But it also exposes people to hacking, identity theft and scams.
If our account details weren't saved on the retailers' servers, they couldn't be stolen. But not saving that data also destroys the value of 1-click shopping.
4. Reducing AI harms through more AI is a dead end
I laugh every time someone claims that the solution to AI abuse is more AI. This idea requires an AI to accurately predict which outputs came from another AI.
An example would be technology that tells professors which students submitted essays written by ChatGPT. The AI harm reduction technology reads an essay and computes the probability that it was written by an AI rather than a human.
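To make that concrete, here is a minimal sketch of the kind of classifier such a detector relies on. It is an illustration only: the tiny placeholder corpus, the TF-IDF features and the scikit-learn pipeline are my own assumptions, not a description of any particular product.

```python
# Minimal sketch of a "was this essay written by an AI?" detector.
# The two-document training set is a placeholder; a real detector would be
# trained on large corpora of human-written and AI-written essays.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpora: label 0 = human-written, label 1 = AI-written.
essays = [
    "The study of history reveals patterns that repeat across eras ...",
    "In conclusion, the data suggest a multifaceted and nuanced picture ...",
]
labels = [0, 1]

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram frequencies as features
    LogisticRegression(),
)
detector.fit(essays, labels)

# The detector returns a probability that a new essay was written by an AI.
new_essay = "Throughout the centuries, civilizations have risen and fallen ..."
p_ai = detector.predict_proba([new_essay])[0][1]
print(f"Estimated probability the essay is AI-written: {p_ai:.2f}")
```

Note that the output is only a statistical guess about writing patterns; keep that in mind for what follows.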
But... we are looking at only one side of the coin. What's on the other side? Organizations such as newsrooms and banks are looking to use the same AI technology as the students, to write articles in place of human writers. The more successful those use cases become, the less predictive the AI harm reduction technology becomes.
Remember that the ultimate goal of AI is to equal or exceed human ability. Suppose we have an AI that is effectively equivalent to a human. How could another AI possibly predict which output came from the AI and which from a person?
5. AI developers can reduce AI harms today (but they don't)
Try subscribing to ChatGPT. You will find that you can't do so without disclosing your identity. They won't accept a Google Voice number; they want your real mobile number, which identifies you completely.
The main reasons why ChatGPT wants your personal data are commercial.
Whatever the reason, the point is that they know who we are. Now, consider the professor who wants to know whether someone used ChatGPT to write an essay.
You don't need any AI to make a prediction. ChatGPT knows for sure, because it has both the student's identity and a record of what it generated for that account. So if ChatGPT is serious about reducing this type of AI harm, it can absolutely do it. And this method keeps working even if the AI evolves to the point where its essays are indistinguishable from those of human writers.
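For illustration, here is a hypothetical sketch of that lookup-based approach. Nothing like this is known to be offered by OpenAI; the class, the hashing scheme and the exact-match logic are assumptions made purely to show that the check is a database lookup, not a prediction.

```python
# Hypothetical sketch of provenance-by-lookup rather than prediction.
# The service, storage and matching scheme below are illustrative assumptions.
from __future__ import annotations

import hashlib


class GenerationLog:
    """Stores a fingerprint of every generated output, keyed to the identified account."""

    def __init__(self) -> None:
        self._log: dict[str, str] = {}  # fingerprint -> account id

    @staticmethod
    def _fingerprint(text: str) -> str:
        # Normalize whitespace and case so trivial edits don't defeat the lookup.
        normalized = " ".join(text.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def record(self, account_id: str, generated_text: str) -> None:
        self._log[self._fingerprint(generated_text)] = account_id

    def check(self, submitted_text: str) -> str | None:
        # Exact-match lookup: no model, no probability, just a database hit.
        return self._log.get(self._fingerprint(submitted_text))


log = GenerationLog()
log.record("student-4711", "The fall of Rome was driven by ...")
print(log.check("The fall of Rome was driven by ..."))  # -> "student-4711"
print(log.check("An essay the service never produced"))  # -> None
```

A real service would need fuzzy matching to survive light editing of the essay, but the principle stands: identity plus a generation log replaces statistical guessing.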
Why won't they pursue this path? I'll let you all ponder that.