The Creative Convergence of Artificial Intelligence and Virtual Production in the making of Transmedia Storytelling

What’s the potential when these three hot topics collide?

Sophie Larsmon
Any One Thing

--

The last few weeks have been crammed with inspirational training and networking. I was lucky enough to be accepted onto both the Creative Convergence Training Programme facilitated by Liminal Stage Productions (commissioned by ScreenSkills using funding from Arts Council England) and Innovate UK KTN’s Ignite Lab x BridgeAI series, which exists to ignite innovation, empower growth and transform businesses through the power of AI. Being part of these programmes, as well as trips to Cambridge Tech Week, the Media Production & Technology Show and the inaugural Immerse UK Awards, has been incredibly thought-provoking — I could write 10 blogs on what I’ve seen, heard and learnt (perhaps I will!).

Peer support network: The ‘Creative Convergence’ training event was designed to bring together professionals from screen, theatre, opera, live events and immersive to raise understanding of how creative technologies and processes are converging in the creative industries.

There were very different crowds at these events — artistes, scientists, entrepreneurs, designers, investors, luvvies, developers, directors — but interestingly they were often talking about the same things. Whether in a seminar, participating in an interactive workshop or chatting over sandwiches, I was never far away from a conversation about one of three topics: Artificial Intelligence, Virtual Production or Transmedia Storytelling. I’ve learnt a lot about all these areas over the last few weeks, and am well aware I’ve got a lot more to learn about all three. What slightly surprised me, however, was the lack of joined-up conversation about how these areas might intersect. This blog, therefore, is an attempt to outline some of the potential of their convergence.

There are of course multiple forms of Transmedia Storytelling. For the purposes of this blog, I’m focusing on forms which at least partly incorporate content made within the VP workflow, which traditionally means film or TV. Where film or TV sits within a transmedia universe, it is often the ‘primary content’ of that universe. VP is, however, beginning to be used in live, In Real Life (IRL) events too, which I will touch on below.

If you would like a definition of Artificial Intelligence (AI), Virtual Production (VP) &/or Transmedia Storytelling (TS), check this out. Otherwise, let’s jump straight in…

Designing complex, multifaceted narratives for multiple platforms can be a challenging, time-consuming task. Simon Wilkinson and Myra Appannah from BRiGHTBLACK both shared examples of their Transmedia Storytelling work at the Creative Convergence training. It was inspiring, but also emphasised what a big feat creating this sort of work is. Wilkinson’s Whilst the Rest Were Sleeping comprised 17 VR installations, an Augmented Reality app for Android and iOS, a feature-length film with live electronic music and around 30 websites which formed an online trail!

Trailer for Circa 69’s ‘Whilst The Rest Were Sleeping’

In this interconnected, epic world of storytelling, AI can emerge as a powerful ally, offering a range of benefits to the creation of these universes.

Streamlining Content Creation

AI algorithms could be used to generate dialogue, character interactions and plot developments, as well as to enhance world-building. Procedural Generation is already widely used in video games to create large amounts of content automatically. With NLP and generative models like GPT-4, AI could provide a starting point for multiple types of content creation. This would give creators a robust foundation for their transmedia universe and allow them to focus on refining and expanding their overall story worlds.
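To make that procedural generation point concrete, here’s a minimal sketch in Python. The location and detail lists are entirely invented, but the principle is real: the same seed deterministically produces the same content, so every platform in a transmedia universe can describe an identical corner of the story world without anyone hand-authoring it.

```python
import random

def generate_location(seed: int) -> str:
    """Deterministically generate a simple world-building snippet from a seed."""
    rng = random.Random(seed)  # same seed -> same location on every platform
    adjective = rng.choice(["abandoned", "floodlit", "overgrown", "subterranean"])
    place = rng.choice(["observatory", "tram depot", "archive", "greenhouse"])
    detail = rng.choice(["humming servers", "wilted maps", "cracked lenses"])
    return f"{adjective.capitalize()} {place}, littered with {detail}."

# A game, an AR app and a website can all call this with the same seed
# and render the same place in the story world.
print(generate_location(42))
```

In a real pipeline the seeded randomness would feed a generative model rather than fixed word lists, but the reuse-by-seed idea carries over.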

Accelerating Production Timelines

ML could analyse vast amounts of data to derive insights that might drive the creative process. For example, it could analyse audience reception data to predict how certain storylines or visual elements might be received, informing the director’s creative decisions during the VP process, ultimately speeding up production.

Enhancing Pre-Visualisation in Pre-Production

The emphasis on pre-production is key within the VP workflow. The continuous iteration cycle it promotes opens up the opportunity for new efficiencies throughout the production, allowing makers to see what they have and what their show will be like much earlier in the process.

AI could also enhance pre-vis efficiencies further, by creating detailed 3D models and environments based on simple sketches or descriptions. This capability could provide a more accurate representation of the finished scenes during planning stages.

Visual Coherence & Economies of Scale

In Convergence Culture, Henry Jenkins describes transmedia storytelling as a “process where integral elements of a fiction get dispersed systematically across multiple delivery channels for the purpose of creating a unified and coordinated entertainment experience.”

If assets built by the Virtual Art Department (VAD) can be re-used across different platforms, you not only have visual coherence, you can also amortise the production cost across the different media. The economies of scale in asset building can generate considerable cost savings across the overall project.
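As a back-of-an-envelope illustration of that amortisation (all figures here are hypothetical), consider one shared VAD asset library serving three platforms:

```python
# Hypothetical figures: one shared asset library reused across three platforms.
asset_build_cost = 300_000.0          # one-off cost of building the VAD assets
platforms = ["film", "video game", "AR experience"]

# Built separately, each platform would bear the full cost:
separate_total = asset_build_cost * len(platforms)

# Shared, the cost is amortised across all platforms:
shared_per_platform = asset_build_cost / len(platforms)
saving = separate_total - asset_build_cost

print(f"Per-platform share: {shared_per_platform:,.0f}")
print(f"Saving vs. building separately: {saving:,.0f}")
```

The arithmetic is trivial, but it shows why the savings scale with the number of platforms that reuse the same assets.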

Having made major creative decisions about the digital elements earlier in pre-vis, lead creatives can put AI algorithms to work in generating associated content for different platforms. Based on the digital assets, new storylines, more fleshed-out secondary characters, and narratives from different characters’ points of view can be generated. This content could be placed in video games or Virtual Reality (VR) and Augmented Reality (AR) experiences. It will be visually cohesive with the primary content and won’t require huge amounts of time or energy from the main creative team, who can focus on getting the primary content made. This associated content could be released ahead of the film or TV programme’s launch to help build buzz.

Automating large parts of Post-Production

A lot of the work traditionally done in post-production is moved to pre-production and production in VP, but that doesn’t mean there isn’t any! Additional visual effects — 3D modelling, image processing, colour grading, compositing etc. — are commonly still required. AI can already automate many of these post-production aspects in ‘traditional film’, including editing. When combined with the real-time capabilities of VP, it could dramatically boost the speed of manual work in post, saving time and resources. Its ability to scan entire scenes and classify objects could further speed up the VFX process.

Personalising User Experiences

Using LED screens and real-time technologies in front of a live physical audience brings with it huge potential in TS. As transmedia stories encourage user participation, AI could play a vital role in tailoring these live experiences to performers and audiences. Imagine if you could create a space that changed colour based on the audience’s mood?! We’re not far away from being able to do such things. ML algorithms can analyse audience behaviours in real-time. This data could drive the real-time design of virtual environments, narratives and character interactions. This level of personalisation would not only enhance user engagement, but it would also foster deeper connections with the narrative world.
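As a toy illustration of that mood-driven space (the sentiment scores and colour choices are entirely invented), an aggregate audience mood score could drive the hue of a virtual environment in real time:

```python
def mood_to_colour(scores: list[float]) -> tuple[int, int, int]:
    """Map aggregate audience mood (-1.0 subdued .. +1.0 upbeat) to an RGB colour.

    Negative moods shade towards cool blue, positive towards warm amber.
    """
    if not scores:
        return (128, 128, 128)  # neutral grey when no data has arrived yet
    mood = sum(scores) / len(scores)   # aggregate mood in [-1, 1]
    t = (mood + 1.0) / 2.0             # normalise to [0, 1]
    cool, warm = (40, 80, 200), (255, 180, 40)
    return tuple(round(c + (w - c) * t) for c, w in zip(cool, warm))

# Frame by frame, a real-time engine could feed this colour to the LED volume.
print(mood_to_colour([0.9, 0.7, 0.8]))   # upbeat audience, warm tones
print(mood_to_colour([-0.8, -0.6]))      # subdued audience, cool tones
```

In practice the scores would come from an ML model analysing audience behaviour (sound, movement, expressions); the mapping from mood to design choices is where the creative team stays in control.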

Using NLP and ML, AI can already enable characters to ‘understand’ and respond to user input in a realistic and meaningful way, offering a level of engagement that extends beyond pre-scripted interactions. I’ve seen this in action in Saint Jude by Swamp Motel where I was able to interact with a character powered by charisma.ai’s conversation platform.

I’m keen to understand if this could extend beyond a single platform, i.e. could characters remember a conversation they had with you in an IRL experience, and then refer back to it in a video game or AR environment?
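I don’t know of a platform that does this across media today, but the plumbing is conceptually simple. Here’s a hypothetical Python sketch of a character memory shared between an IRL experience and a game; the names and storage design are my own invention, not Charisma.ai’s API:

```python
from dataclasses import dataclass, field

@dataclass
class CharacterMemory:
    """A hypothetical shared memory store for one character across platforms.

    Each platform (IRL show, video game, AR app) writes what an audience
    member said or did; later platforms read it back and refer to it.
    """
    facts: dict[str, list[str]] = field(default_factory=dict)

    def remember(self, user_id: str, fact: str) -> None:
        self.facts.setdefault(user_id, []).append(fact)

    def recall(self, user_id: str) -> list[str]:
        return self.facts.get(user_id, [])

memory = CharacterMemory()
# During the IRL experience:
memory.remember("audience_member_17", "mentioned a fear of the dark")
# Days later, inside the video game, the same character can call back to it:
for fact in memory.recall("audience_member_17"):
    print(f"Character: 'Last time we met, you {fact}.'")
```

The hard parts in reality would be identity (knowing it’s the same person across platforms), consent and data protection, rather than the storage itself.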

Facilitating Collaboration

As I’ve seen over the last few weeks, TS often involves a diverse range of creators, from writers and artists to game designers and developers. VP’s workforce also often comes from diverse creative fields, and a common working language can be tricky to establish early on. AI could facilitate collaboration among these individuals by serving as a creative partner, providing suggestions and contributing ideas to the brainstorming process. By acting as a facilitator, AI might inspire new creative directions and open up unexpected narrative possibilities, creating a bridge between different skill sets and backgrounds.

I’m sure there are more ways these areas could converge. I haven’t even mentioned the Metaverse here for example (check out this interesting blog by Target3D on this topic). The list above is just the beginning; I’d love to hear your thoughts on other potential convergences too. Please leave a response to this article or reach out to me via socials @sophielarsmon

Monty Barlow at the opening of Cambridge Tech Week 2023

It is down to us to figure out how best to use today’s tools to channel the increasing thirst for participation, but I think we’re on the brink of a revolution. It is in this convergence that we will create an entirely new dimension of creative possibilities. We can’t go into this blind — this convergence will also bring challenges, particularly in terms of the ethical use of AI and the need for significant training in technical skills and investment in infrastructure. But if we can get these things right, the evolution of these technologies will undoubtedly redefine the landscape of media, arts and entertainment. This new age of hyper-connectivity, fuelled by AI, Virtual Production and Transmedia Storytelling, has the potential to open doors to uncharted territories of creativity and engagement. Let’s step in, participate, co-create, and reshape the future of storytelling.

--

Sophie Larsmon
Any One Thing

Creative Producer & Director of Live Experiences, fascinated by how emerging technologies can foster human creativity