Not another blog post about Generative AI

Artificial Intelligence has a lot more to offer the Immersive Experience sector than just the ‘gimmicks’ of Generative AI tools.

Sophie Larsmon
Any One Thing

--

Unless you’ve been living under a rock, you’ve no doubt heard LOTS about Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) recently (and if you’re still confused about the different terms, check out this handy video). The likes of ChatGPT, Google’s Bard, Gen-1 by Runway and a host of Generative-AI tools have been the meat of many recent Medium posts (including some of my own). These tools have filled the column-inches and bolstered the page-views of specialist and generalist publications and websites alike. The coverage, however, generally focuses on the creative potential of, and the fears surrounding, ‘Generative AI’.

Generative AI is a subfield of machine learning that involves training computers to generate new outputs based on the data they have been trained on. Unlike traditional AI and ML systems that are designed to recognise patterns and make predictions, generative AI creates new content in the form of images, text, audio, and more.

A range of examples of content made by various Generative-AI tools. Much has been written lately about the creative potential, and the perceived threats, of technological advances in this field

There’s undoubtedly lots of potential here for creating content in immersive experiences — again I’ve started to think about this potential elsewhere — but what I want to explore in this post is other ways AI can be utilised creatively in the Immersive Experience sector.

What we can do with AI:

By harnessing both the power of data and advances in AI, I believe we can create a more predictive, personalised, joined-up and participatory model of making and promoting work. I hope it goes without saying that everything below would depend on users consenting to their data being used to make their engagement with arts and entertainment more tailored to them. There needs to be transparency, a clear value exchange, and trust that this sort of data isn’t used outside of their wishes.

Always bearing that in mind, below is a list of potential uses of AI systems within the Immersive Experience industry where AI could do the heavy lifting. This is by no means an exhaustive list and I am obviously not a data scientist. If you have other ideas, think I’m wrong or would like to flesh out these ideas more, please do reach out by leaving a response to this piece.

  • Systems of engagement that reimagine how we interact with booking systems to make them more efficient and accessible.

I see many potential applications here. One could be presenting a booking page designed to be as appealing and user-friendly as possible for each individual audience member (aka participant), based on their digital consumer behaviour across different sorts of e-commerce sites;

Another is creating a cross-marketing / booking platform for players in the immersive experience field which promotes experiences based on previous bookings and preferences. Currently there are listing websites for immersive events like everythingimmersive.com and booking sites like fromtheboxoffice.com (which has an immersive experiences page), but neither currently uses AI to dynamically suggest content you might like;
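To make that idea concrete, here’s a minimal sketch of how such a platform might suggest experiences, using simple user-to-user similarity over past bookings. The names, shows and booking data are all hypothetical, and a real system would use a proper recommendation library, but the core collaborative-filtering idea fits in a few lines:

```python
from math import sqrt

# Hypothetical booking history: 1 = booked, 0 = not booked.
bookings = {
    "alice": {"Souvenir": 1, "Bridgerton": 1, "The Burnt City": 0},
    "ben":   {"Souvenir": 1, "Bridgerton": 1, "The Burnt City": 1},
    "cara":  {"Souvenir": 0, "Bridgerton": 1, "The Burnt City": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' booking vectors."""
    dot = sum(u[s] * v[s] for s in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(user, data):
    """Suggest shows the most similar other user booked that `user` hasn't."""
    others = {name: vec for name, vec in data.items() if name != user}
    nearest = max(others, key=lambda n: cosine(data[user], others[n]))
    return [s for s, v in data[nearest].items() if v and not data[user][s]]

print(recommend("alice", bookings))  # → ['The Burnt City']
```

Real platforms layer far more onto this (content features, recency, diversity of suggestions), but user similarity of this sort is the basic engine behind “people who booked this also booked…” recommendations.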

Another is advancing the booking systems for shows like our production Souvenir. In this instance, someone could only book the show if it was playing in their local area (fostering the hyper-local trend within the experience economy and restricting supply to increase demand, which I wrote about in a previous post). Enforcing this was labour-intensive for our producer, but AI could automate it;
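The eligibility check itself is straightforward to automate. Here’s a sketch (with hypothetical venue coordinates and radius; not how the Souvenir booking actually worked) of the kind of geo-fencing rule a booking system could apply before offering tickets, freeing the producer from checking by hand:

```python
from math import radians, sin, cos, asin, sqrt

VENUE = (51.5074, -0.1278)  # hypothetical venue location (central London)
MAX_KM = 25                 # only locals within this radius may book

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def may_book(customer_location):
    """True if the customer is close enough to the venue to book."""
    return haversine_km(VENUE, customer_location) <= MAX_KM

print(may_book((51.55, -0.10)))  # nearby → True
print(may_book((53.48, -2.24)))  # Manchester → False
```

The rule itself is simple; where ML would earn its keep is in layering smarter decisions on top, such as predicting local demand to decide where the show should travel next.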

Another use is for experiences with an element of pre-event merchandise on offer (think character costumes for Secret Cinema events, for example). There are now AI tools that let online retail customers try digital clothes on a physically accurate avatar of themselves. Why not adopt this AI-driven technique within the experience market? From what I can see it’s relatively simple, but it elevates the customer experience by transporting them to the fictional world before they walk through the event’s door. This fun tool could be offered free of charge before people buy a ticket to the actual experience. By using AI, producers can blend production and promotional activities to engage potential audiences more fully pre-event. If they can picture themselves there, fully garbed in a fantastical outfit, they might be more willing to press the ‘Buy Ticket’ button.

New avatar technology allows you to try on clothes to see how they’d realistically look on you. Why not costumes? [Image by Reactive Reality]
  • Systems of records that can bring together comprehensive data in a secure way to reveal unprecedented insights about people’s tastes and dislikes.

By using data from the likes of Google Reviews / TripAdvisor / OpenTable or from photos of food and beverages (F+B) that are posted on people’s social media, ML could help experience makers customise their F+B offering. Cocktails and menus could be curated based on these insights in such a way that resonates personally for different users. Perhaps there’s potential here to incorporate people’s music tastes into their experiences too? Dice syncs a user’s music library when you sign up to their app. AI is a curator’s best friend.

Curated cocktails could be constructed by the likes of Fire + Fly, the leading food + beverage company in the immersive hospitality industry [Author’s image taken at Secret Cinema’s ‘Bridgerton’]
  • Systems of intelligence that can utilise our individual data to inform personalised narratives / experiences.

The potential here is huge and could be used in a range of ways. From scrutinising data shared through social media posts before an experience, to analysing physiological data that maps emotional responses (whether historic responses gathered beforehand or live responses during the event), to tracking how participants interact with props, there is a massive range of data that can personalise a participant’s experience. Once you’ve chosen the dataset you want to capture and worked out how you’re going to capture it, ML systems could segment these datasets for you, extract features and produce decisions.
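That segmentation step can be sketched with a tiny k-means clustering over extracted features. The participant data below is invented, and real work would use a library like scikit-learn, but it shows the ‘segment, extract features, decide’ loop in miniature:

```python
# Hypothetical per-participant features extracted from sensor logs:
# (mean heart rate in bpm, prop interactions per minute).
participants = [
    (62, 0.5), (65, 0.4), (60, 0.7),    # calmer, low-interaction
    (95, 3.1), (98, 2.8), (102, 3.4),   # excited, high-interaction
]

def kmeans(points, k, iters=10):
    """Very small k-means: segment participants into k groups."""
    centroids = list(points[:k])  # naive initialisation for the sketch
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
            groups[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return groups

segments = kmeans(participants, k=2)
print([len(g) for g in segments])  # → [3, 3]
```

Each segment could then trigger a different design decision, e.g. routing the high-arousal group toward the action-heavy wing of the set.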

One dataset of particular interest to me: The Galea Software Suite enables you to access a full range of data from the body taken via sensors within Varjo XR-3 headsets.

It’s worth noting that the personalisation created with these decisions could come from either a top-down approach (production driven) or a bottom-up approach (participant driven).

Machine learning systems can be predictive, meaning the system can use historic data to predict what will happen in the future. Company Managers could use this information to make an educated guess as to which narrative path a participant is going to choose, and ensure there’s an actor in that part of the set to create a bespoke one-on-one moment. They could get a sense of whether a group would get more out of a quest-driven mission or an emotionally-driven scene and again deploy actors, props, stage managers etc more efficiently. In these instances, ML systems are enabling producers to move their resources around to where they’re most likely to improve the experience.
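Here’s a minimal sketch of that predictive step, a k-nearest-neighbour vote over historic group data. Every feature and label below is hypothetical; the point is simply that a handful of past observations can already yield a usable guess for where to send an actor:

```python
# Hypothetical historic data: (group size, avg seconds dwelling at puzzle
# props), labelled with the narrative path each past group chose.
history = [
    ((2, 45), "quest"), ((3, 50), "quest"), ((2, 60), "quest"),
    ((5, 10), "emotional"), ((6, 8), "emotional"), ((4, 12), "emotional"),
]

def predict_path(features, history, k=3):
    """Predict a group's likely path via a k-nearest-neighbour vote."""
    by_distance = sorted(
        history,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(features, item[0])),
    )
    votes = [label for _, label in by_distance[:k]]
    return max(set(votes), key=votes.count)

# A small group lingering over puzzles looks like a quest-driven group.
print(predict_path((2, 55), history))  # → quest
```

With a prediction like this in hand, a company manager can pre-position an actor on the quest path before the group even turns the corner.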

The function of a machine learning system can also be prescriptive, meaning the system can use historic data to make suggestions about what action to take in the present. These systems could help guide participants toward optimal outcomes.

Machine learning systems can also be descriptive, meaning that the system uses data to explain what did happen. This could potentially be helpful in episodic experiences where participants come back for more content within the same universe. This historic data could fuel Generative AI systems to create personalised content for next time.

Ah, so we’re back to Generative AI! Well, look, all of these AI systems are interlinked. The important thing, to my mind, is that we will need to design systems of collaboration: novel forms of machine-human co-creation that help ensure the human touch isn’t erased from these experiences. We will need a collection of tools that empower performers, creators and participants to turn these various data-derived insights and generated content into meaningful work. This isn’t about artist replacement; this is about creative enhancement. Machine learning systems can segment data, extract features and produce decisions, but it is down to us, humans, to do the final, crucial part of the process: providing context. This is a data-informed approach rather than a data-driven one, an approach I heard Dr Hannah Fry speak about in a talk at the Big Data & AI expo last week.

Back in 2014, MIT professors Erik Brynjolfsson and Andrew McAfee argued in The Second Machine Age:

Today’s tech wave will inspire a new style of work in which tech takes care of routine tasks so that people can concentrate on what mortals do best: generating creative ideas and actions in a data-rich world.

That new style of work is here, and thanks to ubiquitous computing we have the ability to become incredibly rich in all sorts of data. By collaborating with brilliant data scientists, we can create the algorithms and ML tools that let the tech take care of the routine tasks while we crack on with the creative ideas and actions.

I’ll be featuring the work of one of these brilliant data scientists, Brice Lemke, in my next post. He is creating an invention powered by Machine Learning called The Experience Passport, something that I believe will use artificial intelligence to enhance human emotional intelligence in experience design.

Further Reading if you’re interested in this area:

Artificial Intelligence and the Future of Work by Thomas W. Malone, Daniela Rus and Robert Laubacher

The Second Machine Age by Erik Brynjolfsson and Andrew McAfee

Hello World. How to be Human in the Age of the Machine by Hannah Fry

Chapter 4, ‘Artificial intelligence/machine learning solutions for mobile and wearable devices’ by Zhenxing Xu, PhD; Bin Yu, PhD; and Fei Wang, PhD, in Digital Health: Mobile and Wearable Devices for Participatory Health Applications, edited by Shabbir Syed-Abdul, MD, MSc, PhD; Xinxin Zhu, MD, PhD, FAMIA; and Luis Fernandez-Luque, PhD

With thanks to Leah Kurta and Professor Jonathan Freeman from i2 media research ltd. for sharing their psychologically derived IMPACT model which I’ve used to shape some of my thinking within this post.

This blog is part of a series I’m writing on what’s hot in both the software and experiential worlds. Follow me to get my latest discoveries straight to your inbox.

You can also find me on social channels @sophielarsmon and Any One Thing @any_one_thing

--

Sophie Larsmon
Any One Thing

Creative Producer & Director of Live Experiences, fascinated by how emerging technologies can foster human creativity