Facial Recognition Software and You
Yes, the future of Black Mirror is here
By Richard Whittaker, Fri., March 10, 2017
Who hasn't yelled at their phone, or wept on their laptop? Like every other part of our lives, our interactions with technology have an emotional component. Yet your computer doesn't know if you're happy or sad, and it can't alter its user interface to respond to tears or laughter.
The important word there is "yet." As a design researcher and founder of Change Sciences, Pamela Pavliscak works with firms innovating in affective computing, sensor development, and AI. However, she sees a gap in their approaches. "I'm one of these people that loves the complex, messy, weird world of human beings, and how we relate to technology is very emotional. It's not rational at all, yet all these companies I'm working with, and all the designs that we're engaged with, are focused on the rational."
The underlying issue is how smart people interact with their dumb machines, and that's been a clear concern since ELIZA, the original chatbot developed at MIT in 1964. Pavliscak said, "People knew that it was fake, yet they developed an attachment because they were having a conversation, and conversation yields an emotional bond."
To this point, the emphasis when it comes to emotion in computing has been on emulation and manipulation, not comprehension. When emotion is factored into design, the end game is stimulating moods in the user, whether it's how an iPhone lies in the hand, or the quick-fix ecstatic rage of social media. However, Pavliscak said, "What we're fast coming up against is, that's not going to cut it in this new world where technology is embedded in every moment of our day-to-day existence."
Creating a fake human has been a long-established way to make people feel at ease with their machines: After all, HAL 9000 may have been a murderous supercomputer, but his gentle tones made his cold-blooded killing seem almost pleasant. Pavliscak said, "We all share this vision of the future that is that white, pristine, Airbnb-style room where everything is automatically happening for us, and I just wonder: Where's the life in that, where's the emotion in that, and where's all the stuff that's contributing to our emotional well-being? I don't think that's a conversation many of us are ready to have."
For Pavliscak, it comes down to a very simple understanding: "Emotion isn't a destination. Emotion is context." The technology is in its infancy (and, she warned, "Spoiler alert: It doesn't work very well") but there are engineers and psychologists developing facial recognition software that gauges mood, and wearable tech that reads more into skin temperature and heart rate than just biometrics. She sees a desperate need to add the creative arts to the R&D mix, since "when we think where we are with our understanding of emotion, so much of it comes out of literature and the arts, and that's something that isn't a voice in the development of our technology."
The next step is implementation, where there is both great potential and great risk. There are already interesting developments in therapy, for example for people suffering from PTSD or with a diagnosis on the autism spectrum. With the right technology, she said, "Friends and family can identify what's going on, or they can self-identify emotions."
Yet there is also the shadow of the darkest timeline represented by 2002's Minority Report, in which commercials recognize and target individuals. That's not so fantastical, since the bulk of the existing patents on emotion-linked facial recognition are held by advertising agencies. Pavliscak said, "Imagine your refrigerator knowing that you're deeply depressed, and offering you ice cream. Or the AI knows that you're stressed out from work, so it holds off advertising sleep meds for a couple of hours so you can get your work done, because your boss has keyed in. You can spin out into some very dark tales."