Boom and bust cycles are a natural part of technological development.

During booms, entrepreneurs and enthusiasts make wild claims about how a radical new technology will reshape how we live and work. Sometimes, they are correct, even beyond their inflated expectations.

Busts are an equally natural response that grounds us in what is possible now. Capital shifts into other sectors as the shine wears off, and public zeal turns its focus to the next hot new thing.

Blockchains and virtual reality are recent splashy concepts that have gone through boom-bust undulations but have yet to impact all our lives meaningfully. The realities of what is possible today drag us back to Earth and limit products using these technologies to niche appeal.

Generative AI is shaping up to be different: the rate of progress keeps accelerating, and we are still looking for the crest of its current boom cycle.

OpenAI demonstrated how fast things can improve in AI with its recently announced text-to-video AI model, Sora. Sora produces unique and original video content with only a line of text from a human as a guide. The results are astonishing and, I believe, good enough for some advertising purposes. 

Don't believe me? Judge the sample clips for yourself.

Before OpenAI unleashed generative AI, it had been a long time since a new technology gave me a sense of absolute wonder. A year later, OpenAI has done it again.

When I first used ChatGPT and the image generation tool DALL-E last year, it was easy to see the potential near-term impact on advertising — mainly around ad creative creation. I knew it could take some time to work the kinks out, but the future was clear: advertisers would use AI to generate advertising copy and imagery.

Since most of my day-to-day work focuses on video advertising, those tools had little effect on the creatives flowing through the pipes I deal with most closely. But OpenAI has now achieved what I thought we might see by the next decade, if we were lucky.

With generative AI developing rapidly, I want to review how advertising platforms use it today. I'll share what I found below and offer an opinion on how generative AI could shape advertising next, video included.

Google and Meta have each launched generative AI features integrated directly into existing ad creation workflows. It is no shock to see these offerings from these two companies, given that they are the two best-resourced businesses in advertising technology and each possesses its own homegrown AI model.

The tools offered by each company are similar, but let's review what each product does and how they deliver real value for advertisers today.

Google Generative AI Ad Tools

In November last year, Google announced the inclusion of new generative AI tools in Google Ads as part of its Performance Max offering. Performance Max (PMax) is a Google Ads tool that has used machine learning since 2021 to automatically target inventory across the Google portfolio in pursuit of the outcomes an advertiser specifies.

The big knock on Performance Max is its black-box nature, where an advertiser tells Google the outcomes it wants to achieve, and Performance Max tries to do that. The advertiser has little control over where or how an ad is delivered, and advertisers must trust the super-secret PMax magic to do its thing. 

For Performance Max to achieve the advertiser's goals, it needs a large set of ad creatives to test variations across all Google inventory. Creating copy and imagery is very time-consuming, which is precisely where generative AI can help. 

Google has integrated generative AI directly into the ad creation workflow to generate ad copy and original images.

These tools empower marketers to create dozens or hundreds of creative variations in different formats for placement across Google properties. Generative AI drastically reduces the overhead required to produce so many variations.

Having many variations for a creative is extremely useful for a tool like Performance Max, which optimizes campaigns in real time to achieve specific objectives. Change a word here, try a new image there, test, implement, repeat.

Generative AI gives Performance Max more ammunition at the creative level to auto-optimize until it achieves a specific goal. 
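Google doesn't publish how PMax allocates spend across variations, but the general idea resembles a multi-armed bandit. Below is a minimal sketch, assuming a simple epsilon-greedy policy and simulated click feedback; every name and number here is illustrative, not Google's actual system.

```python
import random

# Illustrative only: PMax's real allocation logic is not public.
# An epsilon-greedy bandit captures the "test, implement, repeat" loop
# that an auto-optimizer runs over creative variations.

EPSILON = 0.1  # fraction of impressions reserved for exploration

class CreativeVariant:
    def __init__(self, headline: str):
        self.headline = headline
        self.impressions = 0
        self.clicks = 0

    @property
    def ctr(self) -> float:
        return self.clicks / self.impressions if self.impressions else 0.0

def simulated_true_ctr(variant: CreativeVariant) -> float:
    # Stand-in for how real users would respond to each headline.
    return 0.01 + 0.001 * (hash(variant.headline) % 10)

def choose_variant(variants: list[CreativeVariant]) -> CreativeVariant:
    # Occasionally explore a random variant; otherwise exploit the best CTR.
    if random.random() < EPSILON:
        return random.choice(variants)
    return max(variants, key=lambda v: v.ctr)

def run_campaign(variants: list[CreativeVariant], impressions: int = 100_000):
    for _ in range(impressions):
        v = choose_variant(variants)
        v.impressions += 1
        if random.random() < simulated_true_ctr(v):
            v.clicks += 1

variants = [CreativeVariant(f"Headline variant {i}") for i in range(20)]
run_campaign(variants)
best = max(variants, key=lambda v: v.ctr)
print(f"Winner: {best.headline!r} at {best.ctr:.2%} observed CTR")
```

The more variations generative AI can feed into a loop like this, the more chances the optimizer has to find a winner, which is exactly the value proposition.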

Meta Generative AI Ad Tools

Not to be outdone by Google, Meta also recently introduced generative AI tools to make producing and optimizing creatives easier. However, rather than conjuring new images from whole cloth, Meta's tools keep the AI's creative freedom on a shorter leash than Google's.

Meta offers three generative features for advertisers. From the announcement:

  • Background Generation: Creates multiple backgrounds to complement the advertiser's product images, allowing advertisers to tailor their creative assets for different audiences.
  • Image Expansion: Seamlessly adjusts creative assets to fit different aspect ratios across multiple surfaces, like Feed or Reels, allowing advertisers to spend less time and resources on repurposing creative assets.
  • Text Variations: Generates multiple versions of ad texts based on advertiser's original copy, highlighting the selling points of their products/services and giving them multiple text options to better reach their audience.

While Meta stops short of producing entirely new images, these three features lessen the burden of producing variations of creatives. Advertisers can use these variations to optimize campaigns.
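To make the Text Variations idea concrete, here is a minimal sketch using the OpenAI Python client. The prompt wording and model choice are mine, not Meta's; their feature runs on in-house models, but any capable LLM can do a version of this.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_copy_variations(original_copy: str, n: int = 5) -> list[str]:
    """Ask an LLM for n rewrites of ad copy that keep the selling points."""
    response = client.chat.completions.create(
        model="gpt-4",  # model choice is illustrative
        messages=[
            {
                "role": "system",
                "content": "You rewrite ad copy. Preserve the product's "
                           "selling points while varying tone and phrasing.",
            },
            {
                "role": "user",
                "content": f"Write {n} variations, one per line:\n{original_copy}",
            },
        ],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line for line in lines if line.strip()]

for variant in generate_copy_variations(
    "Stay dry all season with our waterproof trail jacket."
):
    print(variant)
```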

Meta also has an auto-optimizing algorithmic buying tool like Performance Max called Advantage+. Meta doesn't specifically tie these new generative AI features to Advantage+, but there are passing mentions of generative AI in the Advantage+ help center.

In any case, Meta is surely pursuing the same strategy as Google: create more creative variations so the optimizer has more variables to play with when tuning campaigns.

Google and Meta have immediate reasons to introduce generative AI into their advertising tools.

  1. It creates meaningful value by helping advertisers generate more opportunities for optimization, which is the name of the game in performance advertising.
  2. It demonstrates to investors that the companies are incorporating the most hyped technology of the decade into their products using their in-house AI models.
  3. It widens the gap over competitors, since building generative AI features may be cost-prohibitive for smaller players.

Will we see similar offerings for independent DSPs buying the open web? Maybe, but those DSPs would be stuck licensing generative AI capabilities from other companies, driving up the cost of offering such a feature. Owning AI models gives Meta and Google another competitive advantage over smaller ad tech companies.

Other demand-side platforms will have to come up with a response, or at least integrate elegantly with third-party generative AI creative services, if these kinds of ad creation workflows catch on.

Meta and Google's generative AI features only apply to social and display inventory, which are mediums dominated by the two companies. But given the astronomical leap Sora introduced, should we start thinking about the possibilities with video?

Generative AI Video Creatives

Generating alternative text and image variations for display ads is easy to grasp, given that we've had over a year to wrap our minds around generative AI for text and images. However, thinking about generative AI for video will take some imagination.

Sora accepts text prompts from a user and outputs unique, original video clips. Under the hood, GPUs churn through countless calculations to render what the model predicts a human would expect that string of text to look like in motion.
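Sora has no public API as I write this, so the following is a purely hypothetical sketch of what a text-to-video workflow might look like once such models are productized. Every class and method name below is invented, and the render is stubbed out so the sketch is self-contained.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch only: Sora has no public API at the time of writing.
# All names below are invented; the render is faked so the code runs.

@dataclass
class RenderJob:
    prompt: str
    submitted_at: float

    def status(self) -> str:
        # Pretend every render takes three seconds.
        return "done" if time.time() - self.submitted_at > 3 else "rendering"

    def video_url(self) -> str:
        return f"https://example.com/renders/{abs(hash(self.prompt))}.mp4"

class HypotheticalVideoClient:
    def submit(self, prompt: str, duration: int, aspect_ratio: str) -> RenderJob:
        # A real service would queue the prompt on a GPU cluster here.
        return RenderJob(prompt=prompt, submitted_at=time.time())

def generate_video_ad(prompt: str, duration_seconds: int = 15) -> str:
    """Submit a prompt, poll until the render finishes, return the clip URL."""
    job = HypotheticalVideoClient().submit(prompt, duration_seconds, "16:9")
    while job.status() == "rendering":
        time.sleep(1)
    return job.video_url()

print(generate_video_ad("A vintage convertible cruising a coastal highway at sunset"))
```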

If you've made it this far, you may be wondering what a marketer could do with such a tool. Let's use the meat computers in our heads to imagine some possibilities.

Advertisers would likely adopt the technology in phases, with each incremental phase taking over an increasingly large chunk of creative production. 

Phase 1: Use generative AI for rapid storyboarding, producing mock-ups to pitch ideas early in development or when crafting a campaign and media buy. 

Phase 2: Create basic shots and stitch them together with human-produced clips. Replace licensed stock footage with generated clips.

Phase 3: Create an entire 15 to 60-second video ad creative using a text prompt. 

Will someone soon be able to type "captivating yet pensive advertisement that sells my entire inventory of automated soap dispensers" to generate an entire creative? Maybe not, but at the rate we are moving, do not be so surprised if we get there sooner rather than later.

Before we are at that point, there may be some smaller opportunities where generative AI can step in:

  • Car manufacturers could experiment with generated scenery that changes with the season or the user's location, like a car cruising a beach in winter and snowy mountains in summer (see the sketch after this list).
  • Swap an ad's setting to the city the user lives in so the marketing message resonates more closely.
  • Augment products using first-party data, like changing a basketball player's shoes to the pair a user added to their cart but never purchased.
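Here is a rough sketch of that first idea: turning user context into a scene prompt before handing it to a text-to-video model. The context fields, season logic, and prompt template are all my own illustration.

```python
from datetime import date

# Illustrative only: maps user context to a scene prompt for a
# text-to-video model. All field names and templates are invented.

# Counter-seasonal escapism: beaches in winter, snowy passes in summer.
SCENERY = {
    "winter": "driving along a sunny beach coastline",
    "summer": "winding through snowy mountain passes",
}

def season_for(d: date) -> str:
    if d.month in (12, 1, 2):
        return "winter"
    if d.month in (6, 7, 8):
        return "summer"
    return "shoulder"

def build_scene_prompt(product: str, user_city: str, today: date) -> str:
    scene = SCENERY.get(season_for(today), "on an open highway at golden hour")
    return (f"A {product} {scene}, passing a road sign for {user_city}, "
            f"cinematic lighting, 15-second advertisement")

print(build_scene_prompt("luxury sedan", "Denver", date(2024, 2, 20)))
```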

Generative AI tools will ultimately help advertisers produce many variations of a creative, just as they already do for text and images.

Companies like MNTN, tvScientific, and Vibe want to make connected TV buying as performant and easy as buying on Meta and Google. Introducing features that produce more video creative variations would only give these companies more opportunity to test those variations for maximum return on CTV ad spend.

Google has already tested the waters with features that auto-generate rudimentary video creatives by stitching together stock music, logos, and images. Google does this to have video creatives available to distribute to YouTube and expand the inventory options available to the PMax algorithm. However, PMax users are less than thrilled with the low-quality results.

Using AI to create truly generative video assets rather than cobbled-together monstrosities will likely be a future use case for Google's Gemini AI. But given the company's high-profile parade of AI blunders, it's unclear if achieving anything like Sora's results is in the cards anytime soon.

Crafting many variations of video creatives is not a new concept. Some companies already practice a simpler version of it through dynamic creative optimization (DCO).

Jivox would be happy to show you case studies that tout capabilities to adapt video ads based on user data, weather, and location. Flashtalking touts similar capabilities in their marketing materials. 

But these solutions do not generate original video on the fly; they swap text, images, or pre-made clips to build dynamic creatives tailored to individuals. That is impressive in its own right, but it is still a precursor to what I'm discussing: generating new video material via AI.
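For contrast, here is a rough sketch of that DCO-style asset swapping. Every asset in it already exists; the code only chooses among them. The field names are illustrative, not Jivox's or Flashtalking's actual schema.

```python
from dataclasses import dataclass

# Illustrative DCO-style assembly: select pre-made assets per user
# rather than generating new footage. Field names are invented.

@dataclass
class UserContext:
    city: str
    weather: str            # e.g. "rain" or "sun"
    cart_item: str | None   # item abandoned in the cart, if any

CLIP_LIBRARY = {
    "rain": "assets/clips/product_in_rain.mp4",
    "sun": "assets/clips/product_in_sun.mp4",
}

def assemble_creative(user: UserContext) -> dict:
    # Swapping, not generating: every path here points at existing footage.
    return {
        "video": CLIP_LIBRARY.get(user.weather, "assets/clips/default.mp4"),
        "overlay_text": f"Now available in {user.city}",
        "end_card": (f"Still thinking about the {user.cart_item}?"
                     if user.cart_item else "Shop the collection"),
    }

print(assemble_creative(UserContext(city="Austin", weather="rain",
                                    cart_item="trail jacket")))
```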

Some advertisers love these existing capabilities, even though publishers utilizing server-side ad insertion (SSAI) may not. Flooding video pipelines with many different creative variations creates a set of challenges for SSAI publishers (a toy model after this list illustrates the backlog problem):

  • A publisher's SSAI service must transcode each new creative independently, creating a delay between when a creative starts bidding and when it is eligible to win.
  • Supply-side platforms can struggle to identify unique individual creatives, depending on their methodologies.
  • Transcoding resources can back up when many different variations arrive at once.
    • User-level creative variations can pile up in the transcode queue, hurting a publisher's ability to process creatives promptly and even affecting other, non-dynamic campaigns.
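Here is the toy model promised above, with made-up numbers. Once unique creatives arrive faster than the SSAI service can transcode them, the backlog grows without bound, and every creative stuck in the queue is demand the publisher cannot yet serve.

```python
# Toy queue model with made-up numbers: arrivals that outpace
# transcoding capacity produce an ever-growing backlog.

ARRIVALS_PER_MINUTE = 12    # new unique creatives hitting the publisher
TRANSCODES_PER_MINUTE = 8   # the SSAI service's transcoding capacity

def backlog_after(minutes: int) -> int:
    backlog = 0
    for _ in range(minutes):
        backlog += ARRIVALS_PER_MINUTE
        backlog -= min(backlog, TRANSCODES_PER_MINUTE)
    return backlog

for t in (10, 30, 60):
    print(f"After {t} min: {backlog_after(t)} creatives waiting to transcode")
# After 10 min: 40 waiting; after 60 min: 240, and still climbing.
```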

These same issues would carry over or even compound in a future world where advertisers utilize generative AI in video ad creation to produce many unique creatives.

Despite potential issues, generative AI opens the door for genuinely personalized creatives. The use cases could span from futuristic to dystopian, depending on your point of view.

Imagine clothing brands generating avatars of their customers and injecting them into video ads wearing the brand's clothing instead of impossibly good-looking models. Airlines could generate scenery from the destinations you've been pricing tickets to, complete with generated actors doing the activities you enjoy.

We may be far from social acceptance of these use cases, given that simple retargeting ads draw privacy scrutiny, but with valid consent, users may appreciate a truly personalized experience.

Generative AI ushers in a new era of creativity across all arts and industries, and advertising is no exception. The technology can utterly transform how marketers craft messaging and tell stories.

AI is moving at a breakneck pace, and what we may think of only as a farfetched possibility today could be our reality tomorrow.
