My friend, whom I hadn't seen in a decade, was having a hard time understanding why I had left Facebook. After all, not being on the platform had made getting together in person that much more difficult. What was wrong with me?
"The AI we need to fear isn't the Terminator. It is codified cultural bias."
Shockingly, that didn't clear things up. Before we start talking about extinction-level robotic events, we need to talk about your "feed".
Every social network has some sort of "feed", the ongoing list of items created by your social circle and visible to you: tweets from those you follow, photos from friends, job updates from coworkers, blurbs from blogs, and so on. When a site starts, be it Twitter, Instagram, or Facebook, this stream of events is presented in reverse-chronological order, with the newest item appearing on top.
While simple, this presentation became problematic as these sites grew. Each platform wanted to maximize how much a visitor interacted with the content it presented. But as the flow rate of each stream increased, "great" content, the items that would engage an audience and get them to interact, began disappearing beneath a flood of banality.
The social networks' response was to surface these pieces algorithmically: to keep them in the stream longer than they would persist in a chronologically organized timeline. Pieces of code scan everything available to you, prioritize what you should see, and display those items first.
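To make that concrete, here is a minimal sketch in Python. The `Post` fields, the interaction weights, and the decay function are all invented for illustration; as the next paragraph notes, the real ranking models are trade secrets.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: float  # seconds since the epoch
    likes: int = 0
    comments: int = 0
    shares: int = 0

def chronological_feed(posts):
    # The original presentation: newest item first, nothing else considered.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_score(post, now):
    # Toy scoring: weigh interactions, decay with age. These weights are
    # invented; real feed-ranking models are proprietary and far richer.
    interactions = post.likes + 2 * post.comments + 3 * post.shares
    age_hours = (now - post.timestamp) / 3600
    return interactions / (1 + age_hours)

def algorithmic_feed(posts, now):
    # The replacement: high-engagement items surface regardless of when
    # they were posted, so a "great" piece can sit on top for days.
    return sorted(posts, key=lambda p: engagement_score(p, now), reverse=True)
```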
In creating the default framing of the world around us, these algorithms have tremendous power. They are also trade secrets. This code, and the machine learning behind it, isn't published for review. It isn't possible to independently assess the level of charm or malice contained within. These algorithms could be every bit the obedient, well-intentioned digital retriever, fetching our online newspaper on command.
Unfortunately, history tells a different story. For example, Pokémon Go, because of an algorithmic quirk, hates black neighborhoods. Facebook, previously caught suppressing news of a certain slant in a filtering scandal, is getting worse. But so what if algorithms routinely demonstrate a white bias? It's only games and social networks, right? Wrong. When the same industry practice, bias included, is applied to criminal sentencing, we don't need Terminators; we've got algorithms afoot destroying lives.
How do examples like those above happen? Did the authors of the Google algorithm that tagged a photo of two black people as gorillas intend to offend? Was it Microsoft's mission to create Tay, a chatbot that spewed genocidal tweets? I'd assume not, at least not consciously. But the process used for everything from sentiment analysis to image recognition relies on patterns derived from machine learning. And machine learning has proven particularly adept at learning unintended lessons.
Machine learning starts with a seed set of data. This data is used to train the algorithms to recognize patterns. But if there is a problem with the data, things go awry: a lack of diversity in the training pool causes these pattern-matching processes to double down on cultural biases. Microsoft admitted "it wasn't prepared for what would happen when it exposed Tay to a wider audience." When the seed data comes almost entirely from twenty-something, single, white males, the result is algorithms prejudiced against the old, the married, the female, and the racially diverse.
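Here is a deliberately tiny Python sketch of that dynamic. The data and the "model" are invented and reduced to absurdity, but the failure mode is real: whatever dominates the training pool becomes the rule for everyone.

```python
from collections import Counter

# An invented, deliberately skewed training pool: 95% young, single,
# male users, mirroring the seed-data problem described above.
training_pool = (
    [{"age": 24, "gender": "m", "interest": "gaming"}] * 95
    + [{"age": 58, "gender": "f", "interest": "gardening"}] * 5
)

# A "model" stripped to its essence: predict whatever label dominates
# the pool. Real models are far more sophisticated, but the lesson
# learned is the same one the pool teaches.
majority_interest = Counter(
    r["interest"] for r in training_pool
).most_common(1)[0][0]

def predict_interest(user):
    return majority_interest

print(predict_interest({"age": 60, "gender": "f"}))  # -> "gaming"; the 5% are erased
```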
I'm not the only one increasingly bothered by this. Recently the New America institute hosted an event with Charles Stross, the author of a number of science fiction books. Asked what about future AI scared him, Stross stated quite firmly that he is much more concerned about algorithms that reinforce, even exacerbate, social injustice than he is about a robotic uprising.
That brings us back to Facebook. The thing that sets it apart from other services is the heavy-handedness with which it pushes the algorithmic feed. Where elsewhere I can flip a setting or click a dialog and have the chronological view back, Facebook reverts to the algorithmic version over and over and over. If I am against ceding cultural artifacts to a closed system, then I shouldn't tithe to it. If I'm creeped out by Facebook monetizing "echo-chambers" built from friends' and family's content, then I shouldn't contribute the mortar. Finally, if I distrust the algorithm's ability to serve society's best interests, I am obliged not to be there.
For years, the begrudging answer to why anybody should use it was that "everybody is already on it". That will remain true until it is not.
- Update 2016-09-14: The PBS Idea Channel on Why Facebook News Can't Escape Bias
- Update 2016-12-06: Software Glitches so Bad, It Gets the Wrong People Arrested
- Update 2017-08-25: Machines Taught by Photos Learn a Sexist View of Women