
Reflections on: machine-in-the-middle

Gabriella Warren-Smith speaks to the makers behind machine-in-the-middle (2021), a video artwork and experiment by Rod Dickinson and Nathan Semertzidis exploring the future of emotion tracking and biosensing.


A mathematical vision of behaviour

GWS: As you state in your introduction to Decoding Humans, machine_in_the_middle is an experiment “based on a mathematical vision of behaviour”. It translates brain data into emotional categories, with the understanding that the data received combines to reveal the inner feelings of the subject.

Could you discuss your approach to quantifying Steve’s mood and emotion? Can humans really be read as computers, or, as Sharon phrased it, “as a machine”?

RD/NS: machine_in_the_middle sets out to question the idea that human emotions can be mathematically formulated. It creates a speculative scenario that viscerally visualises the potential consequences of asking a machine to direct our behaviour based on its interpretation of what we are feeling.

The project is informed by the long history of attempts to quantify emotions, stretching back to Darwin’s “The Expression of the Emotions in Man and Animals”, published in 1872, and extending to Paul Ekman’s more recent work and his highly contested belief that emotions have a universal form and expression.

In each of these cases there is an attempt to render emotions visible and numerically translatable so that they can then be integrated with machinic processes. One could argue that attempting to do this via biosensing (reading the EEG signals from the brain), rather than relying on the analysis of facial expression, is the next step in pushing back the frontier of that research, and the logical conclusion of our increasing intimacy with machines (from mobile ubiquitous computing to wearables that interface with us ever more closely).

This area of development, often called “affective computing”, is a growing field within computer science research. It studies and develops technologies that can recognise, interpret, process and simulate human emotion.

machine_in_the_middle aims to be a provocation in this milieu. It proposes that we invite the machines in, but simultaneously warns of the possible consequences of doing this.

GWS: The history of emotions and their interpretation through facial expression alone is fascinating. More recent research by writers such as Lisa Feldman Barrett indicates that individually, culturally and psychologically we all communicate (or conceal) our emotions differently, often only showing the feelings we want people to think we are experiencing. machine-in-the-middle strips this control away! You’ve taken that a step further in using biosensing to try to decode what is going on beneath the skin. How did you go about translating the physiological data into emotions?

RD/NS: In practical terms we correlated two facets of Steve Davis’ EEG / brain waves: how aroused (or energetic) he was, and how pleasant the feeling he was experiencing was (called valence).

This allowed us to create a two-dimensional grid with four emotional states, one in each quadrant. We called these happy, relaxed, stressed and sad. Our machine learning classifier was trained on a dataset produced by Dr Yucel Cimtay at Loughborough University that used similar labelling. Nevertheless, it might be reasonable to argue that the emotional states we classified (and indeed all computational emotional states) are a kind of parallel to the states we experience at a human, visceral level, much in the way that we understand a social media ‘friend’ or ‘like’ as a parallel to actually liking something or knowing someone.

[Figure: the two-dimensional valence/arousal grid, with one of the four emotional states in each quadrant]

Rather than directly accessing the nuanced human experience of emotion, these classifications are formulated by extracting information from the underlying brain activity. This is a process called operationalization, where something not directly measurable is inferred by measuring something more accessible yet closely related. 
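To make the quadrant mapping above concrete, here is a minimal sketch (in Python) of how a valence/arousal estimate might be turned into one of the four labels. The [-1, 1] ranges, the zero thresholds and the function name are illustrative assumptions for this post, not the classifier actually trained on Dr Cimtay’s dataset.

# Minimal sketch of the valence/arousal quadrant labelling described above.
# Assumptions: both scores are normalised to [-1, 1] and the quadrant
# boundaries sit at zero. This is not the artists' trained classifier.

def quadrant_label(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) estimate to one of the four emotional states."""
    if arousal >= 0:
        return "happy" if valence >= 0 else "stressed"  # high-energy states
    return "relaxed" if valence >= 0 else "sad"         # low-energy states

# Example: a pleasant, low-energy reading would be labelled "relaxed".
print(quadrant_label(valence=0.4, arousal=-0.6))

In this toy scheme, positive valence with high arousal lands in the “happy” quadrant, while negative valence with low arousal lands in “sad”; the real classifier infers these states from EEG features rather than from hand-set thresholds.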

The relationship between a classified emotion and the experience of that emotion depends on how well we understand how emotions are made. The neuroscience of emotion is currently not well understood, and classifications made from biodata should be taken with a degree of skepticism. Nonetheless, as we begin to understand the brain and ourselves better, this is likely to change.


A symbiotic organism

GWS: Nathan raised an interesting point about the industry that machine-in-the-middle calls into question, highlighting a near-future era in which symbiosis between human and machine will be achieved.

Is decoding emotions the next integral step that will strengthen the cognitive and biological relationship between humans and their devices? What sort of technologies do you think will emerge when this innovation is achieved?

RD/NS: Mutualistic symbiosis in nature typically describes relationships between organisms in which their interactions produce a benefit for both parties. Given that we increasingly perceive machines as having agency, due to ongoing advances in artificial intelligence, we are beginning to see instances of mutualistic symbiosis between humans and machines. For example, we benefit from digital assistants in that they help us access information, organise our schedules, and assume the role of personal DJs, making our lives easier and more enjoyable. In return we continually develop, manufacture, distribute, and improve them, ultimately facilitating their evolutionary and reproductive processes.

GWS: So it’s a kind of two-way street where, as the technology develops and integrates into everyday life, humans increasingly rely on and need the service that the device offers. And the machine in turn feeds on the input from the human to become wiser and better at what it’s programmed to do. How will emotions help machines?

RD/NS: Empathy enables many of the social skills that have driven our success as a species. Giving machines the ability to decode and communicate emotion works toward integrating them into our social activities. There are instances in which machines have been given emotionally expressive characteristics, such as facial expressions and exaggerated movements, to help communicate their internal states to humans. Similarly, there are examples of cooperative robots that have been designed to work better alongside human co-workers by sensing emotional biodata. This is particularly useful in warning the robot when its human counterpart feels unsafe around it, triggering it to stop whatever it is doing.
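As a rough illustration of the kind of safety behaviour described above, the sketch below pauses a cooperative robot whenever a biodata-derived stress estimate for its human co-worker crosses a threshold. The DemoCobot class, the 0.7 threshold and the made-up readings are hypothetical placeholders, not a real cobot API or a real EEG pipeline.

# Hypothetical sketch: pause a cooperative robot when the human co-worker's
# estimated stress rises above a threshold, and resume when it falls again.

STRESS_THRESHOLD = 0.7  # assumed stress score in [0, 1] at which the human feels unsafe

class DemoCobot:
    """Stand-in for a cooperative robot with pause/resume controls."""
    def __init__(self) -> None:
        self.running = True

    def pause(self) -> None:
        self.running = False

    def resume(self) -> None:
        self.running = True

def safety_step(robot: DemoCobot, stress_estimate: float) -> None:
    """Stop the robot while the operator's estimated stress is high."""
    if stress_estimate >= STRESS_THRESHOLD:
        robot.pause()   # stop whatever the robot is currently doing
    else:
        robot.resume()  # carry on once the operator relaxes

# Example with a short, made-up sequence of stress readings.
cobot = DemoCobot()
for reading in [0.2, 0.5, 0.85, 0.6]:
    safety_step(cobot, reading)
    print(f"stress={reading:.2f} -> running={cobot.running}")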

However, perhaps the greatest impact of allowing machines to decode emotion will come where the boundaries between the human and the machinic agent blur into a cohesive whole. Emotion can be thought of as a feedback mechanism facilitating cohesion between an organism's physiological, cognitive and behavioural processes. This allows the different parts that make up an organism to come together under a unified state.

With the decoding of human emotion, a similar process could take place between humans and machines, having them act as a singular agency. We are beginning to see early examples of this, such as biosensing wearable robotics which react to their users' affect. Similarly, Nathan’s research is currently focused on developing brain interfaces that synchronize neural activity in groups of people, working towards a form of collective consciousness where artificial intelligence serves as the connective tissue linking the minds of the collective.


Through the eyes of a machine

GWS: In the opening of your video artwork, we watch the face of Steve Davis forced into an uncomfortable grimace. As his decoded emotions are mapped to his facial expression, we are met with the blunt reality that this artwork brings into question: will the technological translation of our feelings change our private emotional experience of the world?

Will this change how we feel about ourselves, and even make us question our own interpretation of events?

RD/NS: One of the less obvious aspects of the quantification of behaviour is the way in which our privacy is shrinking and we become more exposed, more visible. As our behaviour is numerically encoded, we become increasingly machine readable. Our actions and behaviour are increasingly captured and encapsulated in forms that are accessible to software. The consequence of this is that our private experience of the world is fast disappearing. 

This is happening in a number of ways. Perhaps most obviously, we choose to express our feelings (or at least analogues of them, via likes, emojis and so on) more often and to more people. Perhaps less obviously, those expressions are then used to create a theory of behaviour that is entirely computational: a computer science approach to all human acts, sociality, ideas exchange and even altruism. Alex Pentland in “Social Physics” calls this "a mathematical explanation of why society reacts as it does".

For example, the University of Cambridge Psychometrics Centre has developed a psychological profiling tool ironically called “Apply Magic Sauce” that creates a personality profile based on a social media profile. It detects gender, age, political orientation, curiosity, extraversion and so on. (Sarah Selby, who also worked on machine_in_the_middle, used this in her 2019 artwork “Raised By Google”.)

Even without biosensing and the monitoring of emotional states we are more exposed than ever. machine_in_the_middle speculates on what that frontier might look like when it is pushed back even further.

GWS: Yes I completely agree. It’s where the foundations of The Downloadable Brain programme originated. Having built the previous programmes upon conversations around the attention economy and surveillance cultures, I was fascinated by what this landscape would look like as our everyday technologies became biological, picking up on the inner world of emotions and private thought. How do you think our cognitive relationship with our devices and the network will change?


RD/NS: In the future these technologies may dissolve many concepts we take for granted, such as self, identity, and autonomy. Our experience of reality may be changed drastically, in ways we are currently unable to imagine. If the focus of brain interfacing technology is set towards connecting people on a neural level, one possible development is that we may begin to see a rapid increase in skepticism toward individualism, replaced by growth toward a more collectivist or universalist zeitgeist.

However, while these technologies hold many opportunities for empowerment, especially in challenging the rigidity of individuality, identity, and ability, they also possess the potential for darker applications. Among the research community, many have warned that the entanglement of psychology and computer science has led to the “calculability of human subjectivity”: quantizing the individual into information through which they can be digitally categorized, a process that is the main driver of what Shoshana Zuboff calls “Surveillance Capitalism”.

Similarly, others have argued that what we consider “valid” human emotions may become limited to what is quantifiable and machine interpretable, with nuance being conditioned out of our spectrum of emotional experience. We stand at the crossroads of potential futures, and needless to say, we must tread carefully.

If you would like to find out more about machine-in-the-middle, you can hear the artists present their work in the event recording below. Decoding Humans was a multidisciplinary event exploring the emotion-tracking future of humans through the launch of two new artworks, co-hosted by STEM and emotions expert Dr Sharon Tettegah.
