
A$AP Rocky recently dropped a madcap music video for his track Tailor Swif. There’s a rabbit bathing in a public bathroom sink, a dolphin playing in a puddle, and a latte toilet. 

These are just a few of the blink-and-you’ll-miss-it moments in a promo packed with surreal imagery. When a colleague asked me if AI was used to create some of the visual effects, I winced - and I have a background in VFX. 

We’re at a pivotal point in the AI era where it’s becoming increasingly difficult to discern what’s AI-generated and what’s not. As generative AI tools like Midjourney and DALL-E continue to evolve, they’re producing photorealistic images with fewer telltale signs of the uncanny valley (human hands with seven fingers, anyone?), while things like food and landscapes look ever more delicious and beautiful.

A$AP Rocky – Tailor Swif

AI visuals are becoming so advanced that distinguishing them from real, handcrafted content is a growing challenge. But there are also less controversial AI-powered tools emerging that are greatly improving efficiency across creative and production workflows.

For those in production, this shift signals both an opportunity and a caution. On one hand, some generative AI tools can blur the lines between human artistry and machine-made work, posing new questions about authenticity. 

On the other hand, AI tools are expanding creative production possibilities, offering new ways to generate content more quickly and cost-effectively. Most of these tools are actually the product of machine learning (ML): the process of teaching a machine to perform a specific task and deliver accurate results by identifying patterns in data.
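To make the "identifying patterns" idea concrete, here's a minimal, hypothetical sketch (toy data invented for illustration, not taken from any production tool): a perceptron, one of the simplest ML models, learning a pattern purely from labelled examples rather than from hand-written rules.

```python
# Minimal ML sketch: a perceptron learns an AND-like pattern
# ("label 1 only when both inputs are high") from examples alone.
# All data and settings here are toy values for illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict, compare with the true label, nudge the weights.
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - pred
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Four example points; only (1, 1) belongs to the "pattern" class.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
```

The point isn't the maths; it's that nobody wrote a rule saying "both inputs must be high" - the model found that pattern itself, which is the same principle (at vastly larger scale) behind the tools discussed here.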

Above: Demo of DALL-E 3, a text-to-image model developed by OpenAI to generate digital images from descriptions known as 'prompts'.
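For the curious, a "prompt" in practice is just text packaged into an API request. Here's a rough sketch of how a request to a DALL-E-style image API is typically shaped - the field names follow OpenAI's published image-generation API, but the prompt and values are invented, and this sketch never actually calls the service.

```python
import json

# Hypothetical prompt; "model", "prompt", "n" and "size" follow the
# request format OpenAI documents for image generation. No API call
# is made here - this only shows the shape of the request body.
def build_image_request(prompt, model="dall-e-3", size="1024x1024"):
    return {"model": model, "prompt": prompt, "n": 1, "size": size}

request = build_image_request("a dolphin playing in a puddle, photorealistic")
payload = json.dumps(request)  # the JSON body that would be POSTed
```

Everything the model knows about the desired image travels in that one `prompt` string, which is why prompt wording has become a craft in itself.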

Epic Games recently introduced MetaHuman, an AI-powered tool that can transform real people into virtual humans within minutes. These digital doubles can be seamlessly integrated into 3D environments and used in backdrops during virtual production shoots. 

This opens up vast possibilities for creating scenes packed with virtual extras — imagine a stadium full of animated spectators or busy city streets without the need for hiring on-set extras. As tools like this continue to evolve, they enable faster, more efficient creation of virtual environments, allowing studios to deliver highly dynamic content with fewer resources.

While AI tools like MetaHuman offer increased efficiency, generative AI tools such as Stable Diffusion and Midjourney are starting to play a more direct role in VFX workflows too. These tools can help artists generate concept art, storyboards, or even create entire backgrounds in a fraction of the time it would traditionally take. However, for many VFX artists, this raises questions about the future of their craft. Will AI eventually replace the need for human artists or will it remain a tool that augments the creative process? 

Above: Introduction to MetaHuman, Epic Games' AI-powered tool that can transform real people into virtual humans within minutes.

My outlook is that AI and ML will continue to create opportunities for increased efficiency and refocusing creative effort, while augmenting rather than replacing human capabilities. By letting ML handle the tedious tasks and AI support the creative process, production teams can work smarter, not harder, and focus on the more important stuff! 

My advice to those in adland is to embrace this technology – understand how AI and ML can enhance your work and capitalise on it. The future of creative production lies in the synergy between human ingenuity and AI assistance. 

We're already seeing how AI is augmenting human creativity in VFX. Autodesk Flame, for example, now includes AI-enhanced features such as human face segmentation driven by machine learning. This automates the process of tracking, identifying, and isolating facial segments like eyes, nose, lips, laugh lines, and skin, enabling VFX artists to make faster adjustments. 

Digital Domain contributed nearly 1,000 VFX shots for She-Hulk: Attorney at Law, using AI and ML for facial animation, face-swapping, and cloth simulation, helping to produce more photorealistic and accurate results while reducing the complexities of creating these shots for the VFX artists.

Above: Trailer for Disney+'s She-Hulk: Attorney at Law

So, back to my colleague’s question: was AI used in A$AP Rocky’s promo? I’m not sure – it looks like a series of well-executed VFX to me (and knowing the director duo, that’s likely the case). They might have used AI-powered tools to assist in production, but does that really matter? 

As we look ahead to 2025 and beyond, answering questions like this will become even harder. We’re at a point where AI and human creativity are dancing so closely, it’s hard to tell who’s leading (unless you’re watching that terrifying new Toys ‘R’ Us ad). But as long as we keep getting dolphins in puddles and latte toilets, I’ll keep watching.

Above: Toys 'R' Us ad made entirely using OpenAI's Sora