Can Emotion AI Do More Harm than Good?
The film M3GAN features a marvel of AI: a Model 3 Generative Android doll, "M3GAN", programmed to be a child's greatest companion. With the ability to listen to and interpret emotions, the doll was designed to play the role of friend and protector to the child. But without proper controls or protocols built into the prototype, the robot optimized its protective function so far that it went on a killing spree against anyone who caused the child discomfort.
The story is essentially an AI cautionary tale depicting how robots can override their code and do the unexpected. M3GAN took her protective function to the extreme, and in the process of performing her responsibilities she ended up doing more harm than good in the child's life. At its climax, the movie suggests there's no replacement for human emotion and connection, especially between a parent figure and a child.
This raises the question: if AI could interpret human emotions, would it do more harm than good?

Emotion AI is a field of computer science that helps machines gain an understanding of human emotions. Using text, audio, video, or a combination of these inputs, AI can detect and decipher emotions. More recently, healthcare organizations have applied emotional processing to AI. By analyzing voice patterns, eye movements, and facial expressions, machines can reveal insights into how humans feel. In theory, if machines gain that level of understanding, they can serve us better in diagnostics.
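To make the idea of machines "detecting" emotion concrete, here is a minimal, illustrative sketch of text-based emotion detection. Real emotion AI systems use trained models over audio, video, and text; the keyword lexicon and emotion labels below are invented for illustration only.

```python
# Toy emotion detector: counts keyword hits per emotion category.
# A real system would use a trained classifier, not a hand-made lexicon.

EMOTION_LEXICON = {
    "joy": {"happy", "glad", "delighted", "great"},
    "sadness": {"sad", "unhappy", "down", "miserable"},
    "anger": {"angry", "furious", "annoyed", "irate"},
}

def detect_emotion(text: str) -> str:
    """Return the emotion whose keywords appear most often, or 'neutral'."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = {
        emotion: sum(1 for w in words if w in keywords)
        for emotion, keywords in EMOTION_LEXICON.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(detect_emotion("I am so happy and delighted today!"))  # joy
print(detect_emotion("The weather report said rain."))       # neutral
```

Even this toy version hints at the hard part: most language carries no obvious emotional keywords at all, which is why production systems combine text with voice and facial signals.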
The next level of AI can not only detect human emotion but also respond to it accordingly. Understanding people's emotional states and deciphering how they feel is a hard task even for caregivers. The sooner machines can read a person's state and respond as competently as caregivers do, the better our digital healthcare services can be. This opens many opportunities to treat patients in a personalized and empathetic way.
Although empathy machines sound intimidating, they help fill the gaps left by human limitations in integrated patient care. The question is: can technology benefit a patient's life beyond what a doctor can?
Here are some examples of how emotion AI is already in practice:

– Twill (formerly Happify) is an Intelligent Healing platform in mental health care that uses AI to learn about a person's health needs and guide them to the right care, shortening the gap between need and care. Its health chatbot provides personalized care and support essential to one's health and well-being.
– LUCID uses cutting-edge emotion AI to improve health and wellness with the power of music. Its AI recommendation system leverages biometrics and self-assessed data to interpret an individual's emotional state and suggest therapeutic music. Its music therapy is intended to reduce stress and enhance mood.
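The examples above can be sketched in miniature. The following is a hypothetical mood-based recommender, loosely inspired by the biometrics-plus-self-report approach described for LUCID; the thresholds, mood labels, and playlist names are all invented for illustration, not taken from any real product.

```python
# Hypothetical sketch: combine one biometric signal (heart rate) with a
# 1-10 self-assessed stress score to infer a coarse mood, then map the
# mood to a playlist. All numbers and labels here are assumptions.

def infer_mood(heart_rate_bpm: float, self_reported_stress: int) -> str:
    """Infer a coarse mood from heart rate and self-reported stress (1-10)."""
    if self_reported_stress >= 7 or heart_rate_bpm > 100:
        return "stressed"
    if self_reported_stress <= 3 and heart_rate_bpm < 70:
        return "calm"
    return "neutral"

PLAYLISTS = {
    "stressed": "slow-tempo ambient",
    "calm": "uplifting acoustic",
    "neutral": "listener's recent favourites",
}

def recommend(heart_rate_bpm: float, stress: int) -> str:
    return PLAYLISTS[infer_mood(heart_rate_bpm, stress)]

print(recommend(heart_rate_bpm=110, stress=8))  # slow-tempo ambient
```

A real system would learn these mappings from data rather than hard-coding thresholds, but the basic pipeline — sense, infer a state, respond — is the same.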
Areas of caution

No matter how efficiently AI systems perform, the cautionary point is that human emotions are complex, with many strings attached. Emotions are also variable and volatile: how people react across different scenarios and age groups is highly subjective. Involving AI with emotions can raise a lot of alarms.
Accurate recognition and insight generation are challenging for emotion AI. Building an accurate recognition model is inherently difficult and controversial, and the potential for misuse is enormous. There's also a gut reaction (stemming from Hollywood movies) that if machines understand emotion, they could gain sentience and potentially manipulate our emotions.
Rather than asking whether AI can be used for emotional analysis, organizations must decide whether it should be. Within narrow confines and situations, emotion AI can be useful, for example making interactive voice response (IVR) systems more human-like in a limited context. Beyond that, it often makes more sense to pass an irate customer to a human agent than to a machine.
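The escalation rule just described — keep emotion AI in a narrow routing role and hand angry callers to a person — can be sketched in a few lines. The emotion labels and confidence threshold below are assumptions made for this example, not part of any particular IVR product.

```python
# Illustrative IVR escalation rule: if the caller appears irate with
# sufficient confidence, route to a human agent; otherwise keep the
# call in automated self-service. Labels and threshold are assumed.

ANGER_THRESHOLD = 0.6  # assumed confidence cut-off for escalation

def route_call(detected_emotion: str, confidence: float) -> str:
    """Return 'human_agent' for a confidently irate caller, else stay in IVR."""
    if detected_emotion == "anger" and confidence >= ANGER_THRESHOLD:
        return "human_agent"
    return "ivr_self_service"

print(route_call("anger", 0.9))    # human_agent
print(route_call("neutral", 0.9))  # ivr_self_service
```

Note that the AI's only job here is to decide when to step aside, which is exactly the kind of limited, low-stakes role the paragraph above argues for.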
Alvina Clara, Content Writer, emQube