Machine Learning (ML) and Artificial Intelligence (AI) have transformed how enterprises gain insight into and solve complex problems. “Emotion AI,” or E-AI, is getting more attention in the fast-evolving field of AI due to the near-ubiquity of cameras in smartphones, digital assistants like Siri and Alexa, and developments in face tracking, voice recognition, and natural language processing (NLP). But does E-AI know how you feel? Maybe. Global research firm Gartner predicts that by 2022, 10% of personal devices will have emotion AI capabilities, either on-device or via cloud services, up from less than 1% in 2018.

Where Emotion Meets Code

Narrowly defined “soft” AI solutions, like a Netflix recommendation based on shows you’ve watched, accomplish a simple objective. “Hard” AI solutions solve more complex tasks. For example, the US Census Bureau uses hard AI and ML models with hundreds of variables to accurately categorize, quantify, and predict housing activity across 20,000 jurisdictions each month.

Beyond the mere ability to derive a solution from a set of predefined variables, E-AI takes machine learning further by programming the computer to interpret and react to a consumer’s emotional state. Ad agencies use E-AI to gauge public enthusiasm for a product or service. Call centers use it to infer a caller’s mood and adjust call handling accordingly. Medical researchers are using E-AI to help people with autism ascertain another person’s emotional state.
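
To make the call-center scenario concrete, here is a minimal sketch, assuming an upstream E-AI model has already produced an emotion label and a confidence score for the caller. Only the branching on that output is shown; the labels, threshold, and queue names are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class EmotionEstimate:
    label: str         # e.g. "angry", "frustrated", "confused", "neutral"
    confidence: float  # 0.0 to 1.0, produced by an upstream E-AI model (assumed)

def route_call(estimate: EmotionEstimate) -> str:
    """Pick a call-handling strategy from an inferred caller mood (illustrative rules only)."""
    if estimate.confidence < 0.6:
        return "default_queue"       # low confidence: do not act on the guess
    if estimate.label in {"angry", "frustrated"}:
        return "senior_agent_queue"  # escalate visibly upset callers
    if estimate.label == "confused":
        return "guided_support_flow"
    return "default_queue"

print(route_call(EmotionEstimate(label="angry", confidence=0.82)))  # senior_agent_queue
```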

AI experts are building ground-breaking E-AI solutions by programming computers to:

  • Recognize human micro-expressions. Humans have difficulty detecting micro-expressions, involuntary facial displays of genuine emotion, because they last only a fraction of a second. Computers, however, can be trained to readily identify micro-expressions and thus discern a person’s sentiment and intentions.
  • Gauge a person’s voice inflections, vocal tones, and patterns. Research has shown that the human ear is far more capable of detecting emotion than the eye, and that the human voice can communicate as many as 24 distinct feelings. Researchers are already training algorithms to understand how the voice broadcasts these emotions. Their goal is to help people with dementia, autism, and other emotion-processing disabilities understand what speakers convey through their vocal inflections and chosen words. A minimal sketch of this kind of voice-based classifier follows this list.
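
As a rough sketch of the second idea, the example below trains an ordinary classifier on a handful of acoustic features (mean pitch, energy, speaking rate) to predict an emotion label. The features, sample values, and labels are invented for illustration; a real system would extract features from recorded audio and train on far more data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical acoustic features per utterance: [mean_pitch_hz, energy, speaking_rate_wps]
X = np.array([
    [220.0, 0.80, 3.5],   # agitated: loud and fast
    [210.0, 0.75, 3.8],
    [140.0, 0.30, 2.0],   # calm: quiet and slow
    [150.0, 0.35, 2.2],
    [180.0, 0.55, 2.9],   # neutral
    [175.0, 0.50, 2.7],
])
y = ["angry", "angry", "calm", "calm", "neutral", "neutral"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Predict the emotion of a new (made-up) utterance
print(model.predict([[200.0, 0.70, 3.3]]))  # likely "angry" given the toy data
```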

How Organizations Can Use E-AI

Companies and organizations can leverage E-AI solutions for a variety of business needs. Beyond simply understanding a customer’s current emotions to deliver targeted advertisements, companies can use E-AI to refine a product’s design based on users’ emotional feedback. For example, auto manufacturers are exploring how E-AI can improve safety by gauging a driver’s level of alertness and recognizing when a driver is becoming agitated or distracted. Other applications include fraud detection using voice and facial expression analysis, schools using E-AI to support autistic children in learning, and retail stores using E-AI to understand customers’ general mood and shopping experience.
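
As a deliberately simplified illustration of the driver-monitoring idea, the sketch below assumes an upstream vision model reports, frame by frame, whether the driver’s eyes are closed, and flags possible drowsiness when the share of closed-eye frames over a sliding window exceeds a threshold (a PERCLOS-style heuristic). The window size and threshold are illustrative assumptions, not production values.

```python
from collections import deque

class DrowsinessMonitor:
    """Flags possible drowsiness when the share of closed-eye frames
    in a sliding window exceeds a threshold (PERCLOS-style heuristic)."""

    def __init__(self, window_frames: int = 900, threshold: float = 0.3):
        # ~30 seconds at 30 fps; 30% closed-eye frames (both values are assumptions)
        self.window = deque(maxlen=window_frames)
        self.threshold = threshold

    def update(self, eyes_closed: bool) -> bool:
        """Feed one frame's eye state; return True if the driver appears drowsy."""
        self.window.append(1 if eyes_closed else 0)
        if len(self.window) < self.window.maxlen:
            return False  # not enough history yet
        return sum(self.window) / len(self.window) > self.threshold

monitor = DrowsinessMonitor(window_frames=10, threshold=0.3)  # tiny window for the demo
frames = [False, False, True, True, False, True, True, False, True, True]
print(any(monitor.update(f) for f in frames))  # True: 6 of 10 frames had closed eyes
```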

Hurdles to Adoption of E-AI

We can all agree that our emotions are what makes each of us “human”; they are what has, until now, separated us from computers and robots. However, as with any relatively new advancement in computing, the idea of using an algorithm to detect specific human emotions, and then training that algorithm to select an appropriate response, comes with significant risks and ethical concerns. Before implementing an E-AI program, companies must address several hurdles.

  • E-AI is only as good as the input it has received. As the old maxim states, “Garbage in, garbage out.” Without accurate, representative data, even the most sensitive and highly adept emotional AI program will fail. For instance, a training set dominated by white male faces will directly undermine the program’s ability to perform the same task for a Hispanic woman.
  • There’s an inherent risk of racial bias. E-AI solutions require a steady, heterogeneous input of faces (or voices) to learn and improve their response accuracy, and when ethnic minorities are underrepresented in that input, accuracy for those groups suffers. Some studies have shown that E-AI solutions have difficulty simply determining a customer’s gender when assessing darker skin tones. Another study showed that Microsoft’s facial recognition (“Face”) API interpreted the emotions of Black basketball players as angrier than those of their white counterparts. A simple per-group accuracy audit, sketched after this list, is one way to surface such gaps before deployment.
  • Lack of context can lead to misinterpretation of words and phrases. Beyond the difficulty of reading facial cues, E-AI also needs to understand the cultural and situational context in which micro-expressions and feelings occur.
  • Human communication is nuanced. E-AI can completely misinterpret a speaker’s tone while focusing on the literal words used. E-AI solutions often misread words and phrases whose meanings shift subtly across cultures. Sarcasm, irony, and regional accents and dialects demand a level of sophistication seen only in dystopian books and TV shows like HBO’s Westworld or the 1982 film Blade Runner.
  • Privacy and security concerns. There are significant challenges regarding users’ privacy and their ability to “opt in” before having their interactions measured by E-AI. Beyond alerting each consumer that their interaction is being recorded, monitored, and analyzed by a computer, companies must handle these conversations in a way that prevents any personal data from being leaked to an unauthorized party.
  • Other possible nefarious uses. E-AI could be used to infer a person’s truthfulness, sexual identity, or level of intoxication, any of which could open a company up to widespread lawsuits and unwanted negative press.
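
One concrete way to confront the bias risk raised in the first two hurdles is to audit a model’s accuracy separately for each demographic group before deployment. The sketch below assumes you already have true labels, model predictions, and a group tag for each evaluation example; all of the data shown are invented for illustration.

```python
import pandas as pd

# Invented evaluation results: demographic group, true label, and model prediction per example
results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "true_label": ["happy", "angry", "neutral", "happy", "happy", "angry", "neutral", "angry"],
    "predicted":  ["happy", "angry", "neutral", "happy", "neutral", "happy", "neutral", "angry"],
})

results["correct"] = results["true_label"] == results["predicted"]

# Accuracy per group: a large gap is a red flag that one group is underrepresented in training
per_group = results.groupby("group")["correct"].mean()
print(per_group)
print("Accuracy gap:", per_group.max() - per_group.min())
```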

E-AI may be here to stay. But before you embrace its revolutionary potential, consider E-AI’s current limitations and the potential risks of its misuse. Ask us how Reveal can help you manage risk with a solid AI strategy and governance program.