The “uncanny valley” is a term coined by Japanese roboticist Masahiro Mori in 1970 to describe a strange fact: as robots become more human-like, we relate to them better, but only up to a point. Past that point, affinity collapses. The “uncanny valley” is that dip.
The trouble starts as robots approach true human mimicry: when they look and move almost, but not exactly, like a real human, real humans react with a deep and violent sense of revulsion.
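Mori sketched this relationship as a curve of affinity against human likeness. A toy model of that shape (the formula and every number in it are illustrative inventions, not Mori's data) might look like:

```python
import math

def affinity(likeness: float) -> float:
    """Toy uncanny-valley curve: affinity rises with human likeness,
    then dips sharply just short of full human mimicry.
    `likeness` runs from 0.0 (clearly mechanical) to 1.0 (indistinguishable)."""
    rising = likeness  # baseline: the more human-like, the more relatable
    # A narrow Gaussian "valley" centered just shy of full likeness:
    valley = 0.9 * math.exp(-((likeness - 0.87) ** 2) / (2 * 0.04 ** 2))
    return rising - valley

# The curve climbs steadily, collapses near likeness ~0.87 (almost-but-not-
# quite human), and recovers as likeness reaches 1.0.
samples = {x: round(affinity(x), 2) for x in (0.2, 0.5, 0.87, 1.0)}
```

The center and width of the dip are arbitrary here; the only point the sketch makes is that the reaction is non-monotonic, which is Mori's whole argument.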
This is evolution at work. Biologically, revulsion is a subset of disgust, one of our most fundamental emotions and the by-product of evolution’s early need to keep an organism from eating foods that could harm it. Since survival is at stake, disgust functions less like a normal emotion and more like a phobia: a nearly unshakable, hard-wired reaction.
Psychologist Paul Ekman found that disgust, alongside anger, surprise, fear, joy, and sadness, is one of the six universally recognized emotions. And the depth of this emotion (meaning its incredibly long and critically important evolutionary history) is why, as marriage researchers have since found, once one partner starts feeling disgust for the other, the result is almost always divorce.
Why? Because once disgust shows up, the brain of the disgust-feeler starts processing the other person (i.e., the disgust trigger) as a toxin. Not only does this bring on an unshakable sense of revulsion (a “get me the hell away from this toxic thing” response), it dehumanizes the other person, making it much harder for the disgust-feeler to feel empathy. Both spell doom for relationships.
Now, disgust comes in three flavors. Pathogenic disgust arises when we encounter infectious microorganisms; moral disgust pertains to social transgressions like lying, cheating, stealing, rape, and murder; and sexual disgust emerges from our desire to avoid procreating with “biologically costly mates.” And it is pathogenic and sexual disgust together that create the uncanny valley.
To protect us from biologically costly mates, the brain’s pattern-recognition system has a hair-trigger mechanism for spotting signs of low fertility and ill health. Something that acts almost human, but not quite, reads, to that system, as illness.
And this is exactly what goes wrong with robots. When the brain detects human-like features, that is, when we recognize a member of our own species, we tend to pay more attention. But when those features don’t quite add up to human, we read the mismatch as a sign of disease, meaning the close-but-no-cigar robot registers as both a costly mate and a toxic substance, and our reaction is deep disgust.
[Photo: Repliee Q2, taken at Index Osaka. The model for Repliee Q2 is probably the same as for Repliee Q1expo: Ayako Fujii, an NHK announcer.]
But the uncanny valley is only the first step in what will soon be a much more peculiar process, one that will fundamentally reshape our consciousness. To explore this process, I want to introduce a downstream extension of this principle; call it the uncanniest valley.
The idea here is complicated, but it starts with the very simple fact that every species knows (and I’m using this word to describe both cognitive and genetic awareness) its own species the best. This knowledge base is what philosopher Thomas Nagel explored in his classic paper on consciousness, “What Is It Like to Be a Bat?” In it, Nagel argues that you can’t ever really understand the consciousness of another species (that is, what it’s like to be a bat), because each species’ perceptual systems are hyper-tuned and hyper-sensitive to its own sensory inputs and experiences. In other words, in the same way that “game recognizes game” (to borrow a phrase from LL Cool J), species recognize species.
And this brings us to Ellie, the world’s first robo-shrink. Funded by DARPA and developed by researchers at USC’s Institute for Creative Technologies, Ellie is an early-iteration computer-simulated psychologist, a complicated bit of software designed to identify signals of depression and other mental health problems through an assortment of real-time sensors (she was developed to help treat PTSD in soldiers and, hopefully, to decrease the incredibly high rate of military suicides).
At a technological level, Ellie combines a video camera to track facial expressions, a Microsoft Kinect movement sensor to track gestures and jerks, and a microphone to capture inflection and tone. At a psychological level, Ellie evolved from the suspicion that our twitches and twerks and tones reveal much more about our inner state than our words; thus Ellie tracks sixty different “features,” everything from voice pitch to eye gaze to head tilt. As Albert Rizzo, a USC psychologist and one of the leads on the project, told NPR: “[P]eople are in a constant state of impression management. They’ve got their true self and the self that they want to project to the world. And we know that the body displays things that sometimes people try to keep contained.”
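Combining signals like these into a single read on a patient can be sketched, very loosely, as a weighted aggregation of per-channel features. Every feature name, weight, and threshold below is hypothetical, invented for illustration; nothing here describes the actual Ellie system.

```python
# Hypothetical sketch of multimodal feature fusion, loosely in the spirit of
# a system like Ellie: each sensor channel yields normalized features (0-1),
# and a weighted sum produces a single distress score. All names, weights,
# and the threshold are assumptions made up for this example.

FEATURE_WEIGHTS = {
    "gaze_aversion": 0.25,     # from the video camera's face tracking
    "head_tilt_down": 0.15,    # ditto
    "gesture_slump": 0.20,     # from the Kinect's skeletal tracking
    "voice_pitch_flat": 0.25,  # from the microphone's prosody analysis
    "long_pauses": 0.15,       # ditto
}

def distress_score(features: dict[str, float]) -> float:
    """Weighted sum of normalized features; missing channels count as 0."""
    return sum(w * features.get(name, 0.0) for name, w in FEATURE_WEIGHTS.items())

def flag_for_review(features: dict[str, float], threshold: float = 0.6) -> bool:
    """True if the combined nonverbal signal crosses an (arbitrary) cutoff."""
    return distress_score(features) >= threshold
```

The design point the sketch captures is the one Rizzo describes: no single channel is decisive, but the body leaks signal across many channels at once, and the aggregate is what talks.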
More recently, a new study found that patients are much more willing to open up to a robot shrink than to a human one. Here’s how Neuroscience News explained it: “The mere belief that participants were interacting with only a computer made them more open and honest, researchers found, even when the virtual human asked personal questions such as, ‘What’s something you feel guilty about?’ or ‘Tell me about an event, or something that you wish you could erase from your memory.’ In addition, video analysis of the study subjects’ facial expressions showed that they were also more likely to show more intense signs of sadness — perhaps the most vulnerable of expressions — when they thought only pixels were present.”
The reason for this success is pretty straightforward. Robots don’t judge. Humans do.
But this development also tells us a few things about our near future. First, while most people are now aware that robots are going to take a ton of jobs in the next twenty years, the jobs most people consider vulnerable are of the blue-collar variety. Ellie is one reason to disabuse yourself of this notion.
As a result of this coming replacement, two major issues are soon to arise. The first is economic. There are about 607,000 social workers in America, 93,000 practicing psychologists, and roughly 50,000 psychiatrists. But with Ellie 2.0 in the pipeline, perhaps not for long. (It’s also worth noting that these professions generate about $3.5 billion in annual income, which, assuming robo-therapy is much, much cheaper than human therapy, will also vanish from the economy.)
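A back-of-the-envelope tally of the figures just cited (all taken from this article, not independently verified):

```python
# Workforce figures as cited above; the income figure is the article's
# rough $3.5B estimate for these professions combined.
workers = {
    "social workers": 607_000,
    "psychologists": 93_000,
    "psychiatrists": 50_000,
}

total_practitioners = sum(workers.values())  # 750,000 people
annual_income_at_risk = 3.5e9                # dollars, per the article
```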
But the second issue is philosophical, and this is where the uncanniest valley comes back into the picture. Now, for sure, this particular valley is still hypothetical, and thus based on a few assumptions. So let’s drill down a bit.
The first assumption is that social workers, psychologists, and psychiatrists constitute a deep knowledge base, arguably one of our greatest repositories of “about human” information.
Second, we can also assume that Ellie is going to get better and better over time. This is no great stretch, since all the technologies that combine to make robo-psychologists possible are, as was well documented in Abundance, accelerating on exponential growth curves. It means that sooner or later, in a psychological version of the tricorder, we’re going to have an AI that knows us as well as we know ourselves.
Third, and also as a result of this technological acceleration, we can assume there will soon come a time when an AI can train up a robo-therapist better than a human can. Again, no great stretch, because all we’re really talking about is access to a huge database of psychological data combined with ultra-accurate pattern recognition, two already-possible developments.
But here’s the thing: when you add all this up, you start to realize that sooner or later robots will know us better than we know ourselves. In Nagel’s terms, we will no longer be the species that understands our species the best. This is the uncanniest valley.
And just as the uncanny valley produces disgust, I’m betting that the uncanniest valley produces a nearly unstoppable fear reaction, a brand-new kind of mortal terror, the downstream result of what happens when the self loses its evolutionarily unparalleled understanding of itself.
Perhaps this will be temporary. It’s not hard to imagine that our journey through this valley will prove fortunate. For certain, the better we know ourselves, and it doesn’t really matter where that knowledge comes from, the better we can care for and optimize ourselves.
Yet I think the fear response produced by this uncanniest valley will work much like disgust in relationships; that is, it will be extremely hard to shake.
But even if I’m wrong, one thing is for certain: we’re heading toward an inflection point almost without equal, the point in time when we lose a lot more of ourselves, literally, to technology, and one more reason that life in the 21st century is about to get a lot more Blade Runner.
More human than human? You betcha. Stay tuned.
[Photo credits: Robert Couse-Baker/Flickr, Wikipedia, Steve Jurvetson/Flickr]