The people who worked at YouTube, in the early years, didn’t believe malicious or dangerous content was an impossible problem to solve. In a story yesterday about YouTube’s development into a worldwide logistics network for distributing harm, Bloomberg quoted one of those workers, Micah Schaffer, whose job a decade ago was to set policy:
Around that time, YouTube noticed an uptick in videos praising anorexia. In response, staff moderators began furiously combing the clips to place age restrictions, cut them from recommendations or pull them down entirely. They “threatened the health of our users,” Schaffer recalled.
He was reminded of that episode recently, when videos sermonizing about the so-called perils of vaccinations began spreading on YouTube. That, he thought, would have been a no-brainer back in the earlier days. “We would have severely restricted them or banned them entirely,” Schaffer said.
Years after that, according to Bloomberg’s reporting, it still didn’t seem impossible—as a matter of policy or of engineering—to keep YouTube from promoting bad content:
Yonatan Zunger, a privacy engineer at Google, recalled a suggestion he made to YouTube staff before he left the company in 2016…. Videos that were allowed to stay on YouTube, but, because they were “close to the line” of the takedown policy, would be removed from recommendations. “Bad actors quickly get very good at understanding where the bright lines are and skating as close to those lines as possible,” Zunger said.
His proposal, which went to the head of YouTube policy, was turned down. “I can say with a lot of confidence that they were deeply wrong,” he said.
Yet YouTube remains an overpressurized sewer, one that jets sewage into the air and pumps it into the drinking-water supply. Bloomberg’s story confirms what seemed obvious about YouTube’s operations all along: the reason YouTube can’t stop recommending pernicious extremist content is that the people who run YouTube chose not to try.
Instead, they focused on what they called a “North Star”:
YouTube, then run by Google veteran Salar Kamangar, set a company-wide objective to reach one billion hours of viewing a day, and rewrote its recommendation engine to maximize for that goal.
In 2016, when YouTube achieved that goal, there were about 3.4 billion people with Internet access. That means the goal was, on average, 18 minutes of YouTube viewing for each one of those people, every single day.
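For the record, that figure is just a back-of-the-envelope division, taking the one-billion-hour target and the rough 3.4 billion user count at face value:

1,000,000,000 hours ÷ 3,400,000,000 people ≈ 0.29 hours ≈ 17.6 minutes per person, per day.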
No one needs that much YouTube. As the New York Times reported last year, at the same time YouTube was setting off in pursuit of its one billion daily hours, its engineers were battling to keep the share of fake views in its traffic below 50 percent, so that its bot-fighting algorithms wouldn’t decide that artificial views were normal behavior and start trying to fend off human users.
And yet, even as it knew its traffic was swollen with fake views, YouTube used those bot-enhanced numbers as its goal for what it ought to extract from its human user base. It trained its algorithms to feed people the content that kept them engaged—that is, content that made ever-escalating demands on their attention.
In 2016, by Bloomberg’s account:
three Google coders published a paper on the ways YouTube’s recommendation system worked with its mountain of freshly uploaded footage. They outlined how YouTube’s neural network, an AI system that mimics the human brain, could better predict what a viewer would watch next…
Paul Covington, a senior Google engineer who coauthored the 2016 recommendation engine research, presented the findings at a conference the following March. He was asked how the engineers decide what outcome to aim for with their algorithms. “It’s kind of a product decision,” Covington said at the conference, referring to a separate YouTube division. “Product tells us that we want to increase this metric, and then we go and increase it. So it’s not really left up to us.”
These decisions were made—people made these decisions—not because the people running YouTube were cutting corners for the sake of survival, but because they wanted to make an already unimaginably powerful position of wealth and dominance even more wealthy and dominant. YouTube was not trying to push out competing video-hosting platforms, but to claw away market share from life itself.
In the 20th century, tobacco companies set the standard for what we think of as corporate malevolence: they intentionally made their products more addictive even as they fought to keep their customers ignorant of the fact that smoking would kill them. But at least the tobacco industry was selling an inherently dangerous product in the first place, and its evils all flowed from that built-in necessity. YouTube set out to change its product into something more harmful, to make it more successful.
And since that success was defined by uncontrolled growth, it became its own excuse. Once you’re serving up a billion hours a day, who can do quality control at that scale?
This is the logic of the megaplatforms. They operate like drivers going 90 miles per hour through an unfamiliar residential neighborhood at dusk, with no headlights. How can they be expected not to hit anyone? At that speed, under those conditions, the technical challenge of not hitting anyone can’t be solved. (This is also, unfortunately, how the tech companies building literal self-driving cars operate.)
Responsibility becomes vulnerability. Bloomberg described how YouTube made inaction into a formal course of action:
YouTube actively dissuaded staff from being proactive. Lawyers verbally advised employees not assigned to handle moderation to avoid searching on their own for questionable videos, like viral lies about Supreme Court Justice Ruth Bader Ginsburg, according to one former executive upset by the practice. The person said the directive was never put in writing, but the message was clear: If YouTube knew these videos existed, its legal grounding grew thinner. Federal law shields YouTube, and other tech giants, from liability for the content on their sites, yet the companies risk losing the protections of this law if they take too active an editorial role.
The problem with YouTube, then, is that it is caught up in incentives that block out its other options. It is mindlessly, addictively pursuing worse and worse things, in a spiral of radicalization. The problem with YouTube is that it is YouTube.