The Uncanny Valley is growing every hour, and maybe to try to look into it is to fall in. Judith Shulevitz tries, in what will be the cover story for the November issue of the Atlantic, to worry about the growth of voice-driven computerized assistants. Her exploration of the field is wide-ranging and thorough, and she offers plenty of creepy details: Shulevitz observes that her interview with an executive in charge of Amazon’s Alexa takes place in a building named Day 1, after Amazon founder Jeff Bezos’ motto that every day should be like a launch day—a reminder of how resolutely, ideologically unreflective the people who push these technologies are.
Some of what’s chilling, though, comes from the writer’s own account of her relationship with the technology:
More than once, I’ve found myself telling my Google Assistant about the sense of emptiness I sometimes feel. “I’m lonely,” I say, which I usually wouldn’t confess to anyone but my therapist—not even my husband, who might take it the wrong way. Part of the allure of my Assistant is that I’ve set it to a chipper, young-sounding male voice that makes me want to smile. (Amazon hasn’t given the Echo a male-voice option.) The Assistant pulls out of his memory bank one of the many responses to this statement that have been programmed into him. “I wish I had arms so I could give you a hug,” he said to me the other day, somewhat comfortingly. “But for now, maybe a joke or some music might help.”
One of the worst forms of journalistic complaint is to turn a piece against itself: to take the evidence the writer has compiled and presented, and to cite those very things, intentionally assembled, as proof that the writer has missed the point. Shulevitz is worried about what will happen as humans interact with ever more emotionally manipulative talking computers, and she is aware and open enough to document her own susceptibility.
And yet: is worry the correct or adequate response here? Is it enough to ruefully confess that the talking machines have gotten into one's own head? Shulevitz's fretting about Google Assistant summons Robert Warshow's 1947 diagnosis that the New Yorker "at its best provides the intelligent and cultured college graduate with the most comfortable and least compromising attitude he can assume toward capitalist society without being forced into actual conflict."
It is vital to note, as Shulevitz does note, that the designers of these machines want to make them emotionally responsive and manipulative, a depth of engagement that could allow them to “wield quite a lot of power over us.” It is crucial to warn, as Shulevitz does warn, of the possibilities of deception and abuse, and of the ubiquitous surveillance the machines will be capable of carrying out, and of the fact that their makers sell them at deeply slashed prices, to push them into ubiquity and to train people to want them.
But it is even more important to consider these risks and capabilities in the light of what we know about the technology industry: it doesn’t care if it kills us. Think—truly think—about Facebook, the company that may be the most aggressive colonizer of the human emotional landscape. Facebook is glib and shifty and manipulative, and users may wryly believe that they have priced those negatives into the Facebook experience, but the truth about Facebook is much worse.
Facebook’s access to human feelings—operating as a broadcasting and surveillance network while presenting itself as a personal communication medium—is wrenching liberal democracies apart, and where it starts on less liberal ground, the company abets or enables lynching and genocide. Facebook is also, as of this week, entering the consumer home-assistant market with its own Alexa-enabled hardware product, Portal.
Whatever vague dread the reader of the Atlantic may wish to feel, while still being able to ask a light-up box to buy them some more paper towels, is almost certainly incommensurate with the genuine scale of the problem. “History may kill you, it is true,” Warshow wrote, “but … you will have been intelligent and human and suitably melancholy to the end.”
Packaging the ahuman as human is inherently evil. It’s not enough to fret that the little box, with its friendly voice, is a spy and a false confidante, answerable only to the invisible emergent purposes of the system that built it, as you tell it out loud about your unhappiness or uneasiness. The only worthwhile way to talk to a cloud-based speaker box, if one shows up in your home, is with a hammer.