Artificial Intelligence + Physiological Data = Emotionally Intelligent Experiences

Scaling personalisation on a physiological level.

Sophie Larsmon
Any One Thing

--

In my previous post, I explored a few different ways that Artificial Intelligence (AI) could enhance Experience Makers’ work from both commercial and creative standpoints. In this post, I’m going to dig down a little further into one of the areas I personally find the most interesting: the capturing and analysis of physiological data and the use of AI to extract emotional responses to content.

Inspired by Peter Salovey and John D. Mayer’s 1990 article ‘Emotional Intelligence’, I am defining Emotional Intelligence as the ability to:

Monitor, recognise and understand the feelings and emotions of others, and use this information to guide our design thinking.

This brings new depth to the idea of personalised experiences.

I don’t know about you, but I want to make work that creates shivers down spines, produces goosebumps, a racing heart — I want to open up the floodgates to our emotional centres. I ultimately want the experiences I design (and co-create with those participating in those experiences) to make us feel alive. At the epicentre of these desires, is emotion, and to be able to do my job properly, I need Emotional Intelligence in buckets.

Luckily for me, emotion triggers a constellation of changes in our body’s physiology, and each emotion’s specific constellation of changes creates its own signature. The measurement (and statistical analysis) of these physiological changes is known as biometrics. While engagement and anxiety may look similar on some biometrics, other biometrics can easily distinguish the two emotional states. The more biometrics you measure, the more detailed those signatures become, which makes it possible to distinguish more subtle emotional differences, like separating fear from anxiety and anger from frustration.

OpenBCI’s EmotiBit captures 16 high-quality data streams from which it’s possible to derive dozens of biometrics that combine to create detailed biometric signatures reflective of moment-by-moment changes in emotional state.
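
To make “biometric signatures” a little more concrete, here’s a minimal sketch of my own (not EmotiBit’s tooling) showing how a couple of simple biometrics might be derived from raw streams like a pulse (PPG) signal and a skin-conductance (EDA) signal; the array names and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ppg: np.ndarray, fs: float) -> float:
    """Estimate average heart rate (beats per minute) from a raw pulse (PPG) window."""
    # Each peak in the PPG waveform roughly corresponds to one heartbeat;
    # the `distance` argument stops peaks closer than ~0.4 s being double-counted.
    peaks, _ = find_peaks(ppg, distance=max(1, int(0.4 * fs)))
    duration_min = len(ppg) / fs / 60.0
    return len(peaks) / duration_min

def mean_eda(eda: np.ndarray) -> float:
    """Mean skin-conductance level for the window, a rough proxy for arousal."""
    return float(np.mean(eda))

# A 'signature' for one moment in time is simply a vector of such features, e.g.:
# signature = [heart_rate_bpm(ppg_window, fs=25.0), mean_eda(eda_window)]
```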

Devices like the EmotiBit and other wearables can capture A LOT of data. Too much data for a human to sift through efficiently. The data also contains more noise due to irregular body movements. This is where progress in AI, specifically Machine Learning (ML), can help us. ML models can trawl through the massive amounts of data to help inform our understanding of an individual’s emotional baseline, and of when they’re in a ‘heightened state’, by identifying data trends over time. By adding time-stamped labels to the data from within the EmotiBit visualiser, or by adding one’s own labels/signals in the source code, data scientists can apply the latest ML algorithms to extract emotional states from the data. The EmotiBit team’s current research suggests at least 6–10 emotions can be extracted with up to 80% reliability.
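
To give a flavour of that ML step, here’s a hedged sketch: assuming the labelled windows have been exported to a CSV of biometric features plus an emotion label (the file and column names below are my own illustrative assumptions, not a real EmotiBit export schema), an off-the-shelf classifier can be trained to predict the label.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# One row per time window: biometric features plus the emotion label that was
# added (e.g. via time-stamped labels in the visualiser) during the session.
df = pd.read_csv("labelled_biometric_windows.csv")   # hypothetical export

X = df.drop(columns=["emotion"])   # e.g. heart_rate, eda_mean, temperature, ...
y = df["emotion"]                  # e.g. "calm", "fear", "anger", "frustration"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)

# Per-emotion precision and recall give a feel for how reliably each state
# can be recovered from the biometric signatures.
print(classification_report(y_test, model.predict(X_test)))
```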

As mentioned in another post I wrote about wearables, however, there are challenges associated with capturing accurate data within asynchronous live experiences, where people experience different narrative beats at different points. Even though there are more and more advances every day in the types of data-collecting wearables available (including skin patches, tattoos and even biosensitive inks), tracking many individual users’ exact physical journeys around a space and matching them with their physiological data (and thus their emotional responses) is currently tricky. Not impossible. But tricky.

An easier environment in which to control conditions, and therefore capture accurate biometric data, is one where people are relatively static while consuming content, e.g. playing video games, reading a book, listening to music or watching TV or film. I personally know someone beginning to experiment in this field. The brilliant Brice Lemke, whom I met whilst co-running the Odyssey Works Masterclass in Portugal last summer, is a Portland-based data scientist trained in both physics and philosophy.

A participant undergoing Lemke’s Experience Passport at The New Frame showcase in New York

Alongside his day job and being a dad to 5 kids (!), he’s recently completed an Experience Design Certificate Program created by Abraham Burickson and Ayden LeRoux. Brice’s final project, a beta version of the ‘Experience Passport’, was showcased at The New Frame in New York last November. Using dry OpenBCI EEG caps, facial-expression tracking via open-source video analysis libraries and models, and an EmotiBit, he’s experimenting with capturing high-quality emotional, physiological and movement data.
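
One practical step in a multi-sensor rig like this is getting the separately recorded streams onto a shared timeline before any modelling happens. Below is a rough sketch of how that alignment could look in Python; the file names and columns are assumptions for illustration, not Brice’s actual pipeline.

```python
import pandas as pd

# Each device exports its own time-stamped file at its own sampling rate.
eeg  = pd.read_csv("eeg_features.csv")       # e.g. timestamp, alpha_power, ...
face = pd.read_csv("face_expressions.csv")   # e.g. timestamp, smile, brow_furrow, ...
emo  = pd.read_csv("emotibit_features.csv")  # e.g. timestamp, heart_rate, eda, ...

# merge_asof matches each row to the nearest sample in the other stream,
# which tolerates the devices' different (and slightly drifting) sampling clocks.
eeg, face, emo = (df.sort_values("timestamp") for df in (eeg, face, emo))

combined = pd.merge_asof(eeg, face, on="timestamp", direction="nearest")
combined = pd.merge_asof(combined, emo, on="timestamp", direction="nearest")

combined.to_csv("aligned_session.csv", index=False)
```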

He’s interested in how we can use this data to train some machine learning models which can then interpret it and anticipate reactions to other experiences. He told me:

“Streaming services and other content providers already analyze which shows you’re watching, for how long, at what time of day etc., but they’re not yet digging down into what we’re emotionally engaging with and responding to. I thought it’d be pretty cool to do that.”

EmotiBit successfully raised funds via Kickstarter (top 2% of all time)! Here’s an intro to the data it captures.

This is the first step towards personalisation on a physiological level. Companies like the wonderful Odyssey Works could potentially use this to scale their personalised experiences for one. They use empathy as a springboard for creative practice, which takes hours of in-depth research, including watching a participant’s favourite films, listening to their favourite music, reading their favourite books and potentially even playing their favourite video games during the research period. This is time-consuming stuff! ML could help the ‘Odyssey Engineers’ extract emotional states from the biometric data captured while the participant watches said movies, listens to said songs, reads said books or plays said games. The ‘engineers’ could then zoom in on the pieces of content that most resonate with the participant, and guide their human-centric design process more time-efficiently. Equipped with this physiological knowledge, experience engineers can then go about making new work littered with emotionally intelligent moments that they know will resonate with their participant.
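
As a sketch of how that ‘zooming in’ could work in practice (assuming a previously trained emotion classifier like the one sketched earlier, plus illustrative file and column names), you could score each biometric window from the research sessions and rank the content by how often a target emotion appears.

```python
import joblib
import pandas as pd

# Classifier trained as in the earlier sketch, saved to disk (hypothetical file).
model = joblib.load("emotion_classifier.joblib")

# One row per time window, tagged with whichever film/song/book/game was being
# consumed at the time. Column names are illustrative assumptions.
windows = pd.read_csv("research_session_windows.csv")
features = windows.drop(columns=["content_title"])

windows["predicted_emotion"] = model.predict(features)

# Share of windows per piece of content in which a target emotion (here 'joy')
# was detected: a crude 'resonance' ranking to guide the design process.
resonance = (
    windows.assign(hit=windows["predicted_emotion"].eq("joy"))
           .groupby("content_title")["hit"]
           .mean()
           .sort_values(ascending=False)
)
print(resonance.head(10))
```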

It’s something I’m keen to try out myself. I know there’s fear out there regarding AI and how it’s going to alter human creativity forever. I hear that, and I get that. In this case, though, AI is merely a tool in the experience maker’s toolkit, and if it can be used to help provide more emotionally intelligent work for more people, I think it’s something to be celebrated and utilised.

To quote the fabulous Frida Kahlo:

“At the end of the day, we can endure much more than we think we can.”

Perhaps this is the dawn of a new era: the era of “Generative-EI”. Human experience designers will still design the magical moments that matter; equipped with data and AI, they’ll have the time, energy and resources to make more of them.

This blog is part of a series I’m writing on what’s hot in both the software and experiential worlds. Follow me to get my latest discoveries straight to your inbox.

You can also find me on social channels @sophielarsmon and Any One Thing @any_one_thing

--

Sophie Larsmon
Any One Thing

Creative Producer & Director of Live Experiences, fascinated by how emerging technologies can foster human creativity