Today, Vanity Fair published a report from the writer Simon van Zuylen-Wood, who was permitted to observe and interview the people who are supposedly setting the rules about what content is allowed on Facebook. The story brought out two key facts about Facebook (not revelations; all Facebook news is a constant process of revealing these things, one way or another): first, these people don't really know what they're doing, and second, they're not really doing it.
At the center of the story was a former federal prosecutor named Monika Bickert, whose job is to try to establish Facebook's limits on what people can or can't post. Bickert is not, or was not, a technology person but a policy person, whom van Zuylen-Wood described as having, in turn, "hired a high-school teacher, a rape crisis counselor, a West Point counterterrorism expert, a Defense Department researcher." All these various forms of knowledge and expertise were brought together in the name of trying to solve Facebook's content problems.
The story illustrated the process by looking at one specific problem: whether it's permissible for users to write "men are scum." The Facebook content group asked itself: what rules should govern this problem?
If you remove dehumanizing attacks against gender, you may block speech designed to draw attention to a social movement like #MeToo. If you allow dehumanizing attacks against gender, well, you’re allowing dehumanizing attacks against gender. And if you do that, how do you defend other “protected” groups from similar attacks?
The word for what all these arguments deal with is “context,” which is, as it happens, the thing that Facebook is built to obliterate and replace with Facebook. Grasping for ways to explain a perfectly sensible distinction, the group considered treating disparagement of women differently from disparagement of men:
Bickert foresees another hurdle. “My instinct is not to treat the genders differently,” she tells me. “We live in a world where we now acknowledge there are many genders, not just men and women. I suspect the attacks you see are disproportionately against those genders and women, but not men.” If you create a policy based on that logic, though, “you end up in this space where it’s like, ‘Our hate-speech policy applies to everybody—except for men.’ ” Imagine how that would play.
Here was a group of people, gathered for their different perspectives and ideas, being driven in real time to think the single way Silicon Valley thinks, which is not the way human beings think. The tech industry insists, over and over, on using “reason” in the sense of pure abstract thought, with no reference to actual facts about the world. When you remove the social and historical context from bigotry, it becomes impossible to get a handle on bigotry.
But even this flight into neutral abstraction is flagrantly rigged: "Imagine how that would play." As the social context in which women might justifiably call men scum is ruled out, a new social context is being brought in: the context in which right-wingers would hypothetically attack Facebook for unfairness.
Having lost their grip on conventional moral judgment, the Facebook people need something to replace it with. Bickert suggested to Vanity Fair that something equivalent to morality would emerge from Facebook's business model as it operates in the market:
“People will say, ‘Oh, your business interests are not aligned with the safety interests of the community.’ I completely disagree with that,” she says. Not only does hate speech turn others off, but the people who post it may not be ideal moneymakers for the company. “Those people are not likely to click on an ad for shoes, you know, in the middle of their hate. The person who is looking at puppy videos is a lot more likely.”
If puppies were the cure for racism, things would have gotten a lot better in this country long ago. Human history, near and far, is one long series of disproofs of the claim that bigots don't spend money. More pertinently for Facebook, though, machines don't believe it either.
Specifically, Facebook’s machines don’t believe it. Time and again—as recently as last week—people testing Facebook’s advertising systems have found that the company’s algorithms, built to track human behavior and make money off it, identify hate as a useful market niche. Terms like “Jew hater” and “white genocide conspiracy theory” were, to the Facebook system, useful markers for clusters of users who share interests.
The Los Angeles Times explained how it came to do the latest round of reporting on Facebook’s racist market segmentation:
The Times was tipped off by a Los Angeles musician who asked to remain anonymous for fear of retaliation from hate groups.
Earlier this year, he tried to promote a concert featuring his hardcore punk group and a black metal band on Facebook. When he typed “black metal” into Facebook’s ad portal, he said he was disturbed to discover that the company suggested he also pay to target users interested in “National Socialist black metal” — a potential audience numbering in the hundreds of thousands.
While the humans sat in a conference room, letting a reporter watch them deliberate about how to fairly and neutrally take action against online abuse, the systems that matter, the systems that generate revenue for Facebook, were suggesting that the best way to enhance an ad campaign was by reaching out to Nazis. Opening up the meetings was meaningless. If Facebook wants to be publicly accountable for doing less harm in the world, the only way to do it is by opening up the black boxes where it makes its money.