Brands Are Designing People Now. And the Seams Are Showing.

The Pixel Rot series examines the $6B industry where synthetic faces sign brand deals, real creators get cloned, and design tells break the spell.

She is standing in front of the Ferris wheel. Her hair is pink and tousled, catching the Indio light. The fringe on her suede outfit moves in the wind. In her hand is a branded cupholder from the Rhode skin x 818 Tequila collab.

It's a photo that looks like 50 others from that weekend. Lighting controlled, composition clean, outfit choices signaling cultural fluency. The prop proves presence.

Her name is Aitana Lopez. She has 392,000 followers and earns up to $10,000 a month in brand deals. But she has no use for the tequila or the lip balm.

That’s because Aitana Lopez is not real.

 
 
 
 
 
[Instagram embed: A post shared by Aitana Lopez (@fit_aitana)]

Aitana Lopez is one of the most-followed virtual influencers in the world, created by The Clueless, a Barcelona-based creative agency. She has never been anywhere. She is, as her Instagram bio describes her, a "digital soul."

The Visual Grammar

Scroll past Aitana and you find Alex Laine, designed to appeal to female football fans, photographed "attending" matches at the Emirates Stadium and taking reformer Pilates classes in north London.

Laine is a collaboration between The Clueless and a company called Pixel.

 
 
 
 
 
[Instagram embed: A post shared by Alex Laine (@alex.laine_)]

She self-describes as "north London through and through." Her Blank Street coffee cup is always in frame.

Scroll further and there are Mia and Ana Zelu, sisters built by Zelu House, posting photos from "travels" and posing at sports games.

There is Granny Spills, an older woman dressed almost exclusively in pink, self-described as "spilling tea and designer receipts since 1950," with 2 million followers and a Coachella grid that includes photos alongside Kardashian-Jenner family members.

Each feed has its own aesthetic. But the construction underneath is identical.

 
 
 
 
 
[Instagram embed: A post shared by Aitana Lopez (@fit_aitana)]

The hair moves in one direction. Real hair in festival wind catches, separates, and reverses. AI-generated strands hold a single consistent arc, uniform enough to read as composite, not caught.

The lighting flatters the geometry. Shadows are present and polite. Outdoor light never crosses the nose at an inconvenient angle. The sun, apparently, always cooperates.

The skin is smooth at a resolution that does not exist. Pores are equidistant. Skin that implies physical activity does not show the compression or pull that bodies actually produce. A grip, a reach, a turn should leave marks. None of them appear.

The eyes engage the lens with complete precision. Real people in posed photos still catch mid-thought, mid-blink, an angle that missed. AI-generated faces arrive at the camera fully formed, every time.

The captions describe rather than accompany. Real creators write around their images, and the seam between image and language shows. AI-generated captions align with the image so completely that they read as briefs.

The Branded Prop Trick

Aitana's Rhode cup holder is the part that sells.

A fake hand holding a real product works on the scroll the same way a real hand would. The brain sees product and person and forms the link before it asks whether either is real. At feed speed, the association lands.

[Image: A hand holds the rhode x 818 Tequila Coachella cupholder. The prop that does the selling. Coachella, 2026 | Source: Aitana Lopez, Instagram]

Zoom in and the hand fails. Where fingers meet object, AI generation loses the logic of grip and weight. Fingertips do not compress against the surface. The object looks like it is placed in the frame rather than held in a hand.

There is no tension at the point where the skin meets the cup.

This is where the image breaks its own spell.

The Awards Layer

The AI Personality of the Year Awards has drawn more than 2,000 entrants competing for a $90,000 prize fund. Organizers describe it as "the Oscars for AI influencers." Aitana Lopez is one of its official ambassadors.

Trophies, categories, prize money, red carpet language: brands are building an entire layer of legitimacy for entities that do not exist. The awards validate the category, not the work.

This is not new. Tony the Tiger is a product of brand mascot design, not performance.

His value is awareness and recognition. The AI Personality of the Year Awards runs the same logic: build cultural scaffolding around the product category until the category feels like a community, and the community feels like proof.

When the AI Wears Your Face

Some AI influencers are built from someone else's work.

In January 2025, TikTok creator Gracie Nielson found an account on her own feed that had lifted her videos and grafted a stranger's AI-generated face onto her body.

Same composition. Same clothing. Same home. Same angles she had developed and repeated across months of posts.

A 404 Media investigation documented how the operation runs. An operator finds a real creator with a useful body type and hair color. Downloads their video using an online tool. Runs it through a face-swap application. Publishes it under an AI persona, with a link to a paid subscription page.

One account built this way had 94,000 followers. The source footage came from a woman with 2,000 Instagram followers who had no knowledge her work was being used.

A synthetic influencer like Aitana is a designed character in a designed world — a mascot with a feed. The parasitic model takes a real person's framing decisions, their location scouting, their years of learning what their face does in light, and replaces only the face.

The person who built the original gets no credit for any of it. That gap is the AI ethics design problem the industry has not solved.

The Tells: What to Look For

Artificial intelligence generates these images from prompts, not from experience. The tells are the gap between the two.

Hair physics. Does it catch, lift, and separate, or does it hold one direction across the full frame? Generated hair tends toward the single arc.

The contact point. Where a hand holds an object, where a body sits in a chair: AI generation fails at compression and weight. The contact looks placed rather than landed.

Skin texture at motion. Still portraits are now generated with high plausibility. A turned neck, a reached arm, a gripped object: those are where uniformity breaks through. Look for smoothness in a position that should show pull.

Caption register. Real creators write after the image is done, and their captions drift from it. AI-generated captions stay on brief. The language is complete, aligned, and on-message in a way that reads less like a person posting and more like a product description.

Background light logic. AI-generated outdoor scenes frequently place a correctly lit subject in front of a background that does not match the same light source. Check whether the depth of the scene is physically consistent.

Eye engagement. AI-generated faces tend to arrive at the camera fully and exactly. Real people, even in posed shots, have a frame where the engagement is slightly off. That precision is a form of the uncanny valley: the face is too correct to read as caught.

The Real Cost

Influencer guides are now publishing tips on how to make yourself look "more AI" in your content.

Read that again. Real creators are being told to edit toward a synthetic benchmark to compete with AI-generated content that costs a server fee to produce.

What this produces is a feedback loop with a fixed outcome.

AI-generated visual standards (skin without texture, hair without physics, lighting without the sun's actual behavior) migrate into audience expectation.

Real creators edit harder to close the gap. The tools to close it cost money and time. The AI operators pay a server bill.

The AI aesthetic has little to do with beauty. It optimizes for consistency, and consistency at scale is what the algorithm rewards.

What the algorithm penalizes is the part that was always the point. The wrong angle. The bad light. The moment that worked anyway.

Those are the moments a viewer recognizes another person was there. Strip them out, and the image ceases to be a photograph. It becomes a product description with a face.

The Defense Is Literacy

Visual literacy means reading an image for what it is, not just what it depicts.

That means checking the contact point, clocking the hair physics, noticing when the caption is too clean.

It means slowing down on the scroll long enough to ask a question: was this made by a person, or assembled by a prompt?

The Design Awards process is one answer to that question.

It draws a line around work that originated with a person making specific decisions, under real constraints, for a purpose they could explain. That line needs to be visible, and people who can see it need to draw it.

There is still work being made that way. The tells go both directions.

Pixel Rot is a column about how AI design fails, and what those failures expose. Submit work made by humans to the Design Awards.

 

