Are your PPC ads still authentic in the age of AI creative?

PPC platforms are asset-hungry. What began as simple text ads and keyword bidding has evolved into an AI-driven ecosystem.
Tools inside Google Ads can now remove backgrounds, generate lifestyle scenes, and even create synthetic humans in minutes. But just because the technology allows it doesn’t mean every brand should use it.
That shift forces PPC advertisers to confront difficult questions:

Are you willing to trade efficiency for authenticity?
How far up the stack should your brand let AI operate?
If clients knew exactly where and how you were using AI, would they trust you, or would they question you?

A brand integrity hierarchy offers a way to navigate those decisions — a four-level framework that helps determine how much AI manipulation your brand, industry, and audience can tolerate.
Why PPC needs its own AI ethics framework
Generic AI ethics guidelines don’t account for the operational realities of paid search. PPC isn’t a brand storytelling channel. It’s a high-volume, high-velocity system that demands constant image production across dozens of audiences, formats, and placements.
You must generate fresh lifestyle imagery at a pace traditional creative workflows can’t sustain.
At the same time, Google and Bing enforce strict policies around accurate product representation, especially in Merchant Center, where even minor visual inaccuracies can trigger disapprovals or account risk.
Layer on top of that the platform pressure. Google Ads added Nano Banana Pro, turning Asset Studio into an AI co-creation environment. Performance Max actively pushes you toward AI-generated backgrounds, variations, and lifestyle images to improve performance. Demand Gen and Merchant Center also now have capabilities to change product images at scale.
Most brands can’t afford the photoshoots required to keep up with this demand, yet the volume of image placements across channels makes AI-assisted production unavoidable if you want to compete.
This combination of policy risk, creative pressure, and platform-promoted tools is unique to PPC — which is exactly why the industry needs its own AI ethics framework.
Dig deeper: What’s next for PPC: AI, visual creative and new ad surfaces
Level 1 – The core (zero risk): The absolute truth
Definition: The product and the human exactly as they exist in reality.
Permitted activities:

Upscaling resolution.
Cropping for fit.
Color correction.
Non-generative background cleanup (removing dust, adjusting lighting).

PPC context: This level is fully compliant with Google and Microsoft’s “accurate representation” policies. Merchant Center explicitly permits technical edits that don’t alter the product itself. This is the safest zone for regulated industries such as finance, healthcare, legal services, and brands with strict authenticity standards.
Client talk-track: “We’re using AI to make your reality look its best on every screen size. We aren’t changing what the product is, only how it’s displayed.”
Risk assessment: Zero brand risk. Zero policy risk. Maximum consumer trust.
I think about Level 1 the same way I think about working with a graphic designer in Photoshop. You’re not changing the product, the setting, or the truth — you’re simply cleaning up what already exists.
This level is about technical refinement, not creative invention. It’s the equivalent of adjusting lighting, removing dust, fixing a crooked crop, or correcting color balance. Nothing about the image becomes “untrue.” You’re enhancing reality, not altering it.
Level 2 – The inner ring (low risk): Contextual narrative
Definition: AI-generated environment, not AI-generated product.
Permitted activities:

Generative backgrounds (e.g., placing a watch on a mountain backdrop).
Removing visual distractions (e.g., power lines, litter, unrelated objects).
Seasonal or thematic settings (e.g., holiday scenes, office environments).
Generic commodity generation (e.g., coffee beans, grain, raw materials, not branded products).

Google Ads context: Performance Max’s AI background generation is designed for this level. Google allows contextual enhancement as long as the product remains unchanged. This approach is useful for scaling creative variations without expensive location shoots or studio rentals.
Risks:

Cultural mismatch. AI-generated settings may not reflect the target audience’s reality.
Unrealistic or off-brand environments.
Requires human review for brand consistency.

Client talk-track: “We’re using AI to build a world for your product to live in. The product the customer receives is identical to the one in the ad.”
Risk assessment: Low brand risk. Low policy risk. Maintains consumer trust if executed thoughtfully.
Level 2 sits in an odd psychological space. The manipulations themselves are still low-risk. You’re creating scenes, composites, or enhanced environments the same way a graphic designer would in Photoshop.
Brands have been doing this manually for decades. But the moment AI performs the same task, something shifts. To customers, and even to some advertisers, the exact same edit can feel more artificial simply because an algorithm did it instead of a human.
That perception gap matters.
Even when the output is identical, AI-assisted scene creation can trigger a sense of “this looks fake” that traditional Photoshop work never did. It’s irrational, but it’s real and worth acknowledging at this second tier. The actual risk is still low, but the emotional risk is higher than Level 1.
Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now
Level 3 – The outer ring (high risk): Subject augmentation
Definition: Altering the “hero” — the product or the person.
Activities:

Beautification filters on models.
Slimming or reshaping human subjects.
Altering food textures to appear more appealing.
Removing “imperfections” from products.
Making products appear more premium than they are.

PPC industry context: The platforms prohibit misleading or manipulated product imagery. Merchant Center disapprovals often occur at this level. High sensitivity exists in beauty, apparel, food, and health categories, where consumer expectations are tied directly to visual accuracy.
Recent consumer trust studies show that users feel deceived when they discover product images have been significantly altered. This is not only a policy concern; it’s also a brand reputation issue.
Half of U.S. adults (51%) believe AI-generated and edited content needs better labeling, CNET reports. One in five (21%) believe AI content should be prohibited on social media with no exceptions.
Risks:

High PR risk (e.g., press call-out moments).
High policy risk (e.g., disapprovals, account suspension).
High consumer trust risk (e.g., returns, negative reviews).

Client talk-track: “This is where we risk the ‘press call-out.’ If we remove a model’s birthmark or make a burger look like a 3D render, we aren’t optimizing — we’re fabricating.”
Risk assessment: High brand risk. High policy risk. Potential for long-term damage to consumer trust.
Level 3 moves into territory where the image no longer reflects the real person or product. And yes, brands have been doing this in Photoshop for years, and they’ve been called out for it just as long. There’s precedent, and there’s backlash.
What changes at Level 3 is scale. AI lets you make edits instantly, repeatedly, and across entire product catalogs or campaigns. The ethical risk isn’t new, but the volume and speed at which AI enables these distortions make the consequences far bigger. A single questionable Photoshop edit is one thing. Hundreds of AI-altered images pushed across every channel is something else entirely.
This is where the risk stops being theoretical and starts becoming reputational — and where paid search teams need a clearly defined stance.
Level 4 – The edge (critical risk): Full fabrication
Definition: Synthetic humans, synthetic products, or fully AI-generated scenes.
Activities:

AI-generated models.
Virtual influencers.
Products that don’t exist.
Entirely fabricated lifestyle scenes with no real-world basis.

PPC context: Synthetic humans are allowed in some formats with proper disclosure, but Merchant Center prohibits listing products that don’t exist. There is a high risk of disapproval for “inaccurate representation.” This level may be acceptable for creative testing or conceptual campaigns, but it’s dangerous as a primary brand identity.
Legal precedents regarding copyright protection for non-human-authored creative works remain murky. Using fully synthetic assets may cause challenges if ownership disputes arise or if synthetic models are mistaken for real individuals without proper disclosure.
Risks:

Maximum brand risk.
Maximum policy risk.
Maximum consumer trust risk.
Potential long-term damage to “trust equity.”

Client talk-track: “This is for high-speed testing or fringe creative. If we use this for our main brand identity, we must be prepared for the ‘inauthentic’ label.”
Risk assessment: Critical brand risk. Critical policy risk. Use with extreme caution and full disclosure.
Level 4 is where AI stops enhancing reality and starts inventing it. The image becomes a construction. While I haven’t personally worked with brands operating at this tier, it’s absolutely where the industry could be headed, and it deserves serious consideration.
Fully fabricated imagery can mislead customers, violate platform policies, and erode trust at scale. When AI creates people, products, or environments from scratch, the line between creative expression and consumer deception becomes razor-thin. The reputational fallout from getting this wrong is far greater than anything in Levels 1 through 3.
This is the highest-risk tier because it asks a fundamental question: Are you still advertising your product or an AI-generated fiction of it?
Brand alignment: Defining your North Star
Not every brand should operate at the same level of the brand integrity scale. Your acceptable AI usage depends on four factors.
1. Define your non-negotiables
Every brand must choose its acceptable level(s) on the scale and document it in a brand AI manifesto for PPC.
Examples:

Dove (authenticity-driven beauty brand): Level 1 only.
Tech-forward DTC brand: Levels 2-3 acceptable with clear disclosure.
Ecommerce aggregator: Levels 1-2 for product listings, Level 3 for lifestyle content.

Action: Create a PPC brand AI manifesto in collaboration with creative, legal, and executive leadership.
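A manifesto like this is easier to enforce when it lives as data rather than a slide deck. The sketch below encodes the four-level hierarchy and the example brand positions above as a simple lookup; the profile names and structure are illustrative assumptions, not a real platform API.

```python
from enum import IntEnum

class IntegrityLevel(IntEnum):
    """The four levels of the brand integrity hierarchy."""
    CORE = 1        # technical refinement only (crop, color, upscale)
    INNER_RING = 2  # AI-generated environment, real product
    OUTER_RING = 3  # altering the hero product or person
    EDGE = 4        # fully synthetic humans, products, or scenes

# Hypothetical manifesto entries mirroring the examples above.
MANIFESTO = {
    "authenticity_driven_beauty": {IntegrityLevel.CORE},
    "tech_forward_dtc": {IntegrityLevel.INNER_RING, IntegrityLevel.OUTER_RING},
    "ecommerce_aggregator_listings": {IntegrityLevel.CORE, IntegrityLevel.INNER_RING},
}

def is_permitted(brand_profile: str, level: IntegrityLevel) -> bool:
    """Return True if the documented manifesto allows this level of AI manipulation."""
    return level in MANIFESTO.get(brand_profile, set())
```

Anyone on the team — media buyer, designer, or legal — can then answer "is this edit allowed for this brand?" with one lookup instead of a Slack thread.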
2. The press test vs. the policy test
Two critical questions should guide every AI decision:

Policy test: “Will the platform approve this?”
Press test: “Would we be proud if The Verge covered this?”

The press test is the real guardrail. Google’s policies change. Public perception is permanent.
3. Human-in-the-loop protocol
Every AI-assisted asset must be checked for:

Material deception: Does this misrepresent the product or service?
Identity erasure: Does this erase diversity or cultural authenticity?
Cultural hallucinations: Does this AI-generated scene reflect reality or stereotype?
Product accuracy: Does the ad show what the customer will actually receive?

Automated AI generation should never bypass human review, especially in regulated verticals.
4. Align with your customer base
Different audiences have different tolerances for AI manipulation:

Gen Z: Values “perfectly imperfect” authenticity. Responds negatively to over-polished imagery.
B2B: Prioritizes clarity and utility. AI-generated backgrounds are acceptable. Synthetic humans less so.
Retail: Authenticity directly impacts conversion rates. Product accuracy is non-negotiable.

Dig deeper: Why creative, not bidding, is limiting PPC performance
Operationalizing the brand integrity circle inside PPC ads
Creative workflow
Implement a pre-flight checklist for AI-generated assets:

Identify the level: Core, inner ring, outer ring, or edge.
Apply the press test: Would we defend this publicly?
Check for bias: Does this asset represent your audience accurately?
Verify product accuracy: Is this what the customer will receive?
Document disclosure: If synthetic humans are used, is this disclosed?

Media workflow 
Safe placements for AI-generated assets

Performance Max (with contextual backgrounds).
Demand Gen (lifestyle scenes).
YouTube thumbnails (conceptual creative).

Unsafe placements

Merchant Center product images (Level 1 only).
Regulated verticals (finance, healthcare, legal).
Sensitive categories (beauty, weight loss, medical devices).
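The safe/unsafe placement split above amounts to a maximum tolerable level per placement. A small sketch makes the rule enforceable; the placement names and level caps are illustrative assumptions, not platform API values, and unknown placements default to the strictest gate.

```python
# Hypothetical mapping from ad placement to the highest integrity level
# it can tolerate, following the safe/unsafe lists above.
MAX_LEVEL_BY_PLACEMENT = {
    "merchant_center_product_image": 1,  # Level 1 only
    "performance_max_background": 2,     # contextual backgrounds
    "demand_gen_lifestyle": 2,           # AI lifestyle scenes
    "youtube_thumbnail_concept": 3,      # conceptual creative
}

def placement_allows(placement: str, asset_level: int) -> bool:
    """Unknown placements fall back to the strictest cap (Level 1)."""
    return asset_level <= MAX_LEVEL_BY_PLACEMENT.get(placement, 1)
```

This keeps a Level 2 background composite out of Merchant Center automatically, even when it's perfectly acceptable in Performance Max.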

Legal workflow
Legal teams should:

Review synthetic human usage for disclosure compliance.
Validate product accuracy claims.
Approve the brand AI manifesto.
Maintain documentation for regulatory audits.

Industry standards and emerging frameworks, such as the Coalition for Content Provenance and Authenticity (C2PA), are establishing transparency protocols for AI-generated media. Monitor these developments and align your practices accordingly.
What the PPC community thinks
Some PPC professionals are already experimenting with the tools discussed in this framework.
Ameet Khabra, owner of Hop Skip Media, tested Nano Banana when it first appeared inside the Google Ads interface. She found the tool useful for ideation and quick edits, but noted that strong results often required highly specific prompts.
That level of prompt detail may be realistic for experienced advertisers, but it’s less likely for many SMBs experimenting with AI-generated assets.

“I think it’s a great tool to use for ideation and potentially quick edits,” Khabra said. “But I would still have a graphic designer creating the final product.”

Even when AI imagery is available, some advertisers remain skeptical of how it appears to audiences.
Julie Friedman Bacchini, owner of Neptune Moon, says AI-generated images often look noticeably artificial.

“I don’t like AI images because they look like AI and that’s off-putting to me,” Bacchini said. “It can be hard to avoid. Even when you’re trying to use stock photos, there are so many AI images on those sites too.”

To understand how people outside the industry view these changes, I also polled the community on Threads.

The sentiment was strikingly consistent: while the industry focuses on efficiency, the public is increasingly wary of fantasy versus reality.
One commenter wrote:

“False advertising. That seems like a pretty big concern. As a consumer, I actually would like to see the real thing I’d be buying.”

Another described the issue more bluntly:

“Bait and switch. Fantasy versus reality. Falsehood versus the truth.”


Master the spectrum, don’t avoid it
AI isn’t inherently deceptive. Nor is it inherently transparent. It’s a tool. Like all tools, its ethical impact depends on how it’s used. As PPC experts with access to these technologies and advisory roles with brands, we need a clear point of view to guide these decisions.
The brand integrity scale outlined above provides a structured approach to AI use in PPC, helping you navigate the tension between automation and authenticity. By defining your brand’s position on this spectrum today, you ensure tomorrow’s campaigns are remembered for their resonance.
Adopt ethical AI standards — define your brand AI manifesto, implement the press test, and ensure every AI-generated asset passes human review before it reaches your audience. Your brand’s integrity depends on it.