SYNOPSIS
e-Motion examines the way humans and non-humans read and interpret emotional expressions. The work acknowledges the difficulty of translating ‘feelings’ into words. We explore the complexity of emotion recognition by comparing human and computer vision, reducing the subject’s emotional input to a facial expression seen through a digital screen. Once a session is complete, we compare the accuracy of human and computer classification by asking participants to identify their own recorded expressions.

When we see someone smiling, does it necessarily mean that this person is ‘Happy’? Our need to conceptualize and translate facial expressions into language is part of a natural learning process by which we attempt to understand the world. This process is often reductive and biased. The work also examines the impact of how we are seen by others and how this, in turn, changes our behavioral responses. When we are told that we seem tired, angry or sad, and we don't identify as such, how does it make us feel?

Technologies that we design often reflect our own worldviews. The AI system used in this project is trained to recognize facial expressions as one of seven human-defined primary emotions. Such ocular-centric systems are built to estimate aspects of an individual’s identity or state of mind based on external appearances. This design brings to mind pseudo-scientific physiognomic practices, which are notorious for their discriminatory nature and which surface too often in AI-based computer vision algorithms. The use of both AI analysis and human analysis of facial expressions reminds us that the technology is far from maturing beyond its maker, and that both humans and machines still have much to learn.
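For readers curious about the general shape of such a system, below is a minimal illustrative sketch in Python. It is not the installation's actual code (the project text does not specify the model or pipeline); it only shows the typical pattern these seven-emotion classifiers follow: detect a face in a video frame, crop it, and map the crop to one of seven labels. The `model` object and its `predict` method are hypothetical stand-ins for a trained expression classifier.

```python
# Illustrative sketch only: the installation's real model and labels are not
# described in this text. Shows the common pattern of seven-class facial
# expression recognition from a camera frame.
import cv2
import numpy as np

# The seven "primary" emotion labels commonly used by FER-style systems.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def detect_face(frame_bgr):
    """Return the largest detected face crop (grayscale), or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return gray[y:y + h, x:x + w]

def classify_expression(face_crop, model):
    """Resize the crop and ask a (hypothetical) trained model for class scores."""
    inp = cv2.resize(face_crop, (48, 48)).astype(np.float32) / 255.0
    scores = model.predict(inp[np.newaxis, ..., np.newaxis])  # shape (1, 7)
    return EMOTIONS[int(np.argmax(scores))]
```

Whatever the implementation, the design choice the work critiques is the same: the system can only answer with one of seven predefined words, however ambiguous the face in front of it.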
Created in collaboration with Avital Meshi
IMAGES