
AI's role as a tool in production – to assist, rather than replace human creativity – has been widely discussed here at shots, but rarely have we seen this principle brought to life so successfully in a TVC as in Original Source's latest spot, Nature Hits Different.

Created by Fold7 and helmed by Private Island director Chris Boyle, the pioneering television campaign subtly blends live action production with AI-generated footage and traditional VFX to transform one tired man’s morning shower into an exhilarating sensory experience.  

We caught up with Boyle to discuss the pros and cons of this innovative creative process and how its success might shape the future of commercial film.

Original Source – Nature Hits Different


How did the concept for this project evolve? When did using AI come into the discussion?

It was on the table from the start. Dom and Kier—the creatives—were all about that manic, high-energy, AI-driven aesthetic, so it was baked into the concept early on. Lucky for us, we’ve been tinkering with generative imagery for a while, both in personal projects like Meme Myself and AI and in commercials (like our recent KFC Double Down spots). 

AI didn’t do it all—we did, with a team of artists on set and behind the scenes using a blend of cutting-edge and old-school methods.

Initially, someone floated the idea of using conventional VFX to imitate AI, but we flipped it: we’d generate the actual images using AI, then tidy up with more traditional post-production. During the pitching phase last year, we showed some proof-of-concept pieces that got everyone excited, and from that point on, it was just a matter of making it happen.

Is this the first TVC aired in the UK that uses AI-generated visuals in this way?

As far as we know, yes—it’s among the first (maybe the first) to put AI visuals in a mainstream UK TV campaign so front and centre. We’ve certainly seen other experiments, especially online, but nothing so high-profile on national telly. That said, we’re quick to clarify that AI didn’t do it all—we did, with a team of artists on set and behind the scenes, using a blend of cutting-edge and old-school methods. It’s that collision of different techniques we love the most, and we think that’s where this ad really breaks new ground.

Can you walk us through the workflow of blending live action and AI visuals? Is it a similar process to combining live action and VFX?

In many ways, it felt like a familiar VFX workflow. At PI, we styleframe and animatic extensively before any production, but in this instance, our styleframes were literally taken all the way to final. On set, we shot the live-action footage of our actor on sets as well as a load of references for both facial expressions and wardrobe. 

There was a butt load (technical term) of clean-up – both where the generation had gone a little wonky and connecting the generative footage back up to the in-camera elements...

Then, in post, we added our plucky hero into the styleframes and generated footage, seamlessly linking each world together. This was understandably done in close consultation with the creatives and client so we could hit the right marks for framing, performance and environment.

Then, finally, there was a butt load (technical term) of clean-up – both where the generation had gone a little wonky and connecting the generative footage back up to the in-camera elements in the shower and office. So, like any post-heavy job, there is a lot of fiddling and pixel-fudging, but hopefully the result makes it worth it!

What kind of prompts and techniques did you use to get AI to produce the imagery you were looking for?

We relied on a grab bag of tools and approaches. Since we had our main character on camera, we had plenty of specific references we could feed into the AI so it’d keep his look consistent. For the overall styleframes, we have a robust setup at Private Island that can merge 3D references (for framing) with a chosen AI model so the results match our planned shots. 

 Ultimately, it’s all about a delicate balance: giving the tools enough steer to dream big, but not so much it goes off the rails!

Prompt-wise, dare I say it’s more of an art than a science – descriptive, sensory-laden phrases typically yield the best mix of lush, surreal landscapes and kinetic energy. We also did a lot of trial and error. Prompts only get you so far; we usually ended up Photoshopping or massaging outputs to hit the exact look we needed. Ultimately, it’s all about a delicate balance: giving the tools enough steer to dream big, but not so much it goes off the rails!

What were the benefits and downfalls of using AI in this scenario? 

The biggest plus is we made something genuinely new – something regular VFX might not have achieved. I suppose PI has always pushed to do more on budgets – we set up with AFX and C4D almost ten years ago, when the norm was Maya and Flame, as we thought that would allow us to build weirder and more wonderful worlds – and I’d absolutely put generative stuff in that same bucket. Having said that… I should absolutely shout out Rufus Blackwell, who did some masterful Flame and generation work on this project!

 Traditional VFX isn’t going anywhere; instead, you’ll see AI woven into more and more stages of the process. 

However, as we know, AI can be finicky. You gain quick, mind-blowing visuals but may lose some measure of precision—so you spend loads of time wrangling the outputs, re-prompting, and smoothing out weird anomalies. And yes, you’re constantly watching for updates to the software mid-project, which can be both exciting (cool new features!) and terrifying (a mid-stream pivot!). Still, we think the final spot’s uniqueness more than justifies the extra legwork. 

Given your experience and success with this spot, do you think AI could supersede traditional VFX in advertising over the next few years?

It’ll definitely transform the industry, but I wouldn’t say “supersede” is the right word. Traditional VFX isn’t going anywhere; instead, you’ll see AI woven into more and more stages of the process. Low-stakes projects or social content might rely heavily on AI generation, while big-budget commercials still need the meticulous polish that hands-on artists bring. Our experience on Meme Myself and AI showed us that while AI can spit out incredible raw material, it takes real people to mould that into something compelling. Balancing machine efficiency and human creativity feels like the direction this industry is headed.

Creating with AI alone, outside of a traditional VFX pipeline, is a tough beast to wrangle.

And that balance is important because it’s not only how you make the work but how you allow collaboration and integrate feedback from agency and client. Creating with AI alone, outside of a traditional VFX pipeline, is a tough beast to wrangle. It’s taken us years of self-initiated practice, trial, error and experimentation to work out a production-friendly method. Big thanks to Fold7 and their client for trusting the process and treading new ground with us.

Did the issue of intellectual property and copyright infringement ever come up when you decided to use AI-generated visuals? If so, how did you navigate this?

That was definitely a top concern from day one. The legalities of AI art are still new terrain. In general, we’ve got a two-pronged approach: first, keep everyone informed about the tech we’re using and how it’s generating images; second, steer the AI clear of any muddy waters right out of the gate. That means no direct references to famous paintings, copyrighted characters, or anything we don’t own the rights to.

We kept an eye on evolving guidelines as we’re keen to be part of the wider conversation in the industry about AI ethics...

In this instance, because we were shooting with the hero and the basis of the styleframes was abstract nature, it was a pretty simple process. Throughout the project, we kept an eye on evolving guidelines, as we’re keen to be part of the wider industry conversation about AI ethics, and we aim to navigate this minefield by being cautious and informed!
