The problem with YouTube is what goes on beneath the surface. People go to the site looking for some video or another, and down inside YouTube, the machine learning has learned, mechanistically, to keep the people there, watching more videos. Up come the suggestions—glurge, hoaxes, conspiracies, racism—whatever it takes to prevent a user from going away.
The YouTube officials operate like YouTube does. They know they have built an addictive brain-poisoning machine, but if they said so, users might avoid it. If they changed the incentive structures that make the algorithm poison people’s brains, users might also go away, because they would no longer be addicted to the product. So they can’t talk about what they’re doing.
What do they say, then? The New York Times did another story about another person who was susceptible to YouTube poisoning. Probably YouTube was not the only factor—the details of this person’s pre-YouTube innocence included his hanging out on 4chan—but he did follow the basic pattern of being fed, and consuming, ever more right-wing content in ever greater quantities, which YouTube’s “engagement”-based metrics defined as success for the platform. Eventually he was watching plain racist trash, cranked out by people who had discovered that the internal structures of YouTube enabled and encouraged them to make money by promoting bigoted politics on a mass scale.
Here is what Kevin Roose of the New York Times wrote that YouTube had told him about this now-familiar process:
In interviews, YouTube officials denied that the recommendation algorithm steered users to more extreme content. The company’s internal testing, they said, has found just the opposite — that users who watch one extreme video are, on average, recommended videos that reflect more moderate viewpoints. The officials declined to share this data, or give any specific examples of users who were shown more moderate videos after watching more extreme videos.
This couldn’t be true, in any normal sense of truth intelligible to humans. YouTube’s recommendation algorithm does steer users to more extreme content. It is an extensively documented fact about YouTube that the algorithm does this. It is as if McDonald’s, criticized for its contribution to the obesity epidemic, were to put out a statement saying that it doesn’t serve people french fries.
But the claim about internal testing did not say what YouTube seemed to want people to believe it said. It may be the case that if a user watches one extreme video, that viewer is then, on average, shown a less extreme video. That statistic, though, describes where the recommendations point once a user has already reached the extreme end; it says nothing about the path that carried the user there. It would not be a test of whether YouTube steers someone who is watching a less extreme video toward a more extreme one. A cynical reader might think that it would, in fact, be a test of the opposite question from the one YouTube was supposed to be answering.
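To see why, here is a minimal sketch with invented numbers: a made-up 0-to-10 extremeness score and a made-up table of recommendations, none of it drawn from YouTube. In it, the average recommendation after an extreme video is indeed more moderate, just as the officials described, and yet the average recommendation after a moderate video is more extreme.

```python
from statistics import mean

# Toy numbers only: a made-up 0-to-10 "extremeness" score for videos, and a
# made-up table of what gets recommended after each watch. None of this is
# YouTube data; it only shows that the statistic YouTube cited can be true
# while the steering the Times documented still happens.
recommendations_after = {
    9: [6, 7, 8],  # after an extreme video, recommendations average 7.0
    3: [4, 5, 6],  # after a moderate video, recommendations average 5.0
}

for watched, recs in recommendations_after.items():
    direction = "more moderate" if mean(recs) < watched else "more extreme"
    print(f"watched extremeness {watched}: average recommendation "
          f"{mean(recs):.1f} ({direction})")
```

Both statements hold at once; the cited average simply conditions on the wrong end of the funnel.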
And from inside the black box, it could be exactly the same process. Roose described how YouTube, concerned that users would get bored seeing only the kind of videos they had come to the site for, began trying to “maximize users’ engagement over time by predicting which recommendations would expand their tastes.” It made the system start trying to redirect viewers’ attention by drawing connections between different kinds of videos that tended to appeal to the same audience.
What if those connections were drawn with complete indifference to which videos were more extreme and which were less extreme? If the algorithm viewed the audiences as interchangeable, it could send traffic either way. Stipulate, for these purposes, that Jordan Peterson’s audience is not the same as a neo-Nazi audience. If left alone, the members of the one audience would watch Jordan Peterson videos until they got bored, and the members of the other audience would watch neo-Nazi videos until they got bored.
But if YouTube’s black box discovered that the two audiences would watch each other’s videos—inspired by a resentment of “political correctness” or by a taste for grand unified theories of blame, or by whatever, because the algorithm would have no conception of reasons, only results—it would start cross-recommending them. The fact that the flow could go both ways wouldn’t matter. One person shows up to watch a neo-Nazi video and ends up getting told to clean his bedroom. One thousand people show up to watch a Jordan Peterson lecture and end up getting fed neo-Nazi videos. Formally, just as YouTube says, that process wouldn’t favor more extreme videos over less extreme videos. Functionally, it would steer a thousand times as many people to the neo-Nazi side.
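The arithmetic behind that asymmetry is easy to sketch. The numbers below are assumptions invented for illustration, not anything drawn from YouTube’s systems; the point is only that a rule treating both directions identically still moves a thousand times as many people toward the extreme side when the moderate audience is a thousand times larger.

```python
# Back-of-the-envelope arithmetic with invented numbers: two audiences, one a
# thousand times larger than the other, cross-recommended at the same rate in
# both directions. The rule is formally symmetric; the traffic is not.
moderate_viewers = 1_000_000  # assumed size of the larger, milder audience
extreme_viewers = 1_000       # assumed size of the smaller, extreme audience
cross_rate = 0.01             # assumed chance of being nudged across, either way

pushed_toward_extreme = moderate_viewers * cross_rate
pushed_toward_moderate = extreme_viewers * cross_rate

print(f"nudged toward the extreme side:  {pushed_toward_extreme:,.0f}")
print(f"nudged toward the moderate side: {pushed_toward_moderate:,.0f}")
print(f"ratio: {pushed_toward_extreme / pushed_toward_moderate:,.0f} to 1")
```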