That’s when I had this “aha” moment. All the nuances of my emotions were being lost in cyberspace! But what if technology and our devices could understand us in the same way that people do?
This has driven my mission to build Emotion AI, and now Human Perception AI. As our interactions with technology become more relational and personal, artificial intelligence needs to understand and interact with us the way other humans do. AI needs not only IQ but also EQ: emotional intelligence. At Affectiva, we believe that giving machines the ability to understand all things human has the potential to improve the way we work, live, and interact with one another.
Affectiva develops software that understands people’s emotional and cognitive states by analyzing their facial expressions. Can you explain the science behind this? What is it possible to know about someone based on their facial expressions? What can’t you learn from facial expressions alone?
Affectiva’s Human Perception AI is built on deep learning, computer vision, speech science, and massive amounts of real-world data. Specifically, we use an optical sensor such as a smartphone camera or webcam to identify a human face in real time. Then, we use computer-vision-based deep learning networks to map the texture and movements of the face to a number of facial expressions, such as a smile, a smirk, or a brow furrow. These facial expressions are like building blocks that combine in many different ways to communicate a wide range of emotional and cognitive states, from joy and surprise to confusion and attention.
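To make this pipeline concrete, here is a minimal sketch in Python. It assumes OpenCV for the face-detection step; the `classify_expressions` and `infer_state` functions are hypothetical placeholders standing in for the deep networks described above, not Affectiva's actual models.

```python
import cv2

# Load OpenCV's bundled Haar cascade for frontal-face detection
# (a classical detector standing in for a production face tracker).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_expressions(face_crop):
    """Hypothetical stand-in for a deep network that scores the facial
    building blocks (smile, smirk, brow furrow) from pixel data."""
    return {"smile": 0.8, "smirk": 0.1, "brow_furrow": 0.05}

def infer_state(expressions):
    """Toy rule combining expression scores into a coarse state; a real
    system would learn this mapping from large amounts of data."""
    if expressions["smile"] > 0.5:
        return "joy"
    if expressions["brow_furrow"] > 0.5:
        return "confusion"
    return "neutral"

capture = cv2.VideoCapture(0)  # the "optical sensor": a default webcam
ok, frame = capture.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5
    ):
        face = gray[y:y + h, x:x + w]  # crop the detected face region
        print(infer_state(classify_expressions(face)))
capture.release()
```

The structure mirrors the description above: detect a face in the camera frame, score its expression building blocks, then combine those scores into an emotional or cognitive state.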