According to The Verge, in late October 2024, a fan account making AI edits of Ariana Grande was deactivated after sparking a furious online argument about data centers draining resources in cities like Memphis. The incident highlights a growing trend where “stan” accounts on X are using AI-generated content of celebrities, like Grande and boxer Jake Paul, to deliberately provoke outrage and farm engagement. This is financially incentivized by X’s program that pays verified users for engagement from other verified accounts. OpenAI’s Sora video generator, with its “Cameo” feature, has dramatically escalated this by letting users offer their likeness for others to manipulate, leading to billions of views for viral, often offensive, deepfakes. Despite celebrities from Grande to Grimes calling the practice “terrifying” and demanding regulation, the economic incentive to create ragebait is overpowering ethical concerns within fan ecosystems.
The Ragebait Economy
Here’s the thing about stan Twitter: it’s a media ecosystem built on intense passion. And where there’s passion, there’s easy money in provoking anger. As one 25-year-old fan account runner named Brandon told The Verge, going against the community’s general anti-AI stance is “a very quick way to get money.” X’s creator payout model, which rewards verified users for engagement from other verified users, has essentially monetized discourse. So, posting an AI deepfake of Ariana Grande isn’t just a creepy tribute; it’s a business strategy. You get the anti-AI crowd screaming in the quotes, the pro-troll crowd liking and retweeting, and the algorithm showering you with visibility. It’s cynical, but it works. The account that sparked the Memphis water argument? It was almost certainly playing this game. And when the entire point is to be a villain for profit, why would you care about a celebrity’s consent or the real-world resource cost of generating those images?
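To make the incentive concrete, here’s a toy model of how an engagement-based payout scheme like X’s turns ragebait into the rational strategy. The per-engagement rate and the engagement counts below are invented for illustration; X does not publish its exact formula, so treat this as a sketch of the incentive, not its implementation.

```python
# Toy model of an engagement-based creator payout, like the one X runs.
# All numbers are hypothetical: X does not publish its formula, so this
# illustrates the incentive structure, not the actual payout math.

HYPOTHETICAL_RATE_PER_VERIFIED_ENGAGEMENT = 0.002  # dollars, invented

def estimated_payout(verified_engagements: int) -> float:
    """Payout grows linearly with engagement from verified accounts."""
    return verified_engagements * HYPOTHETICAL_RATE_PER_VERIFIED_ENGAGEMENT

# A respectful fan edit vs. a rage-baiting AI deepfake, with made-up
# engagement counts. Note that a furious quote-tweet pays exactly the
# same as an admiring one.
ordinary_post = estimated_payout(verified_engagements=2_000)
ragebait_post = estimated_payout(verified_engagements=150_000)

print(f"ordinary fan post: ${ordinary_post:.2f}")  # $4.00
print(f"viral ragebait:    ${ragebait_post:.2f}")  # $300.00
```

The dollar figures are beside the point; what matters is that nothing in the formula distinguishes outrage from enthusiasm, so the cheapest route to verified engagement wins.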
Celebrities Are Losing Control
But the celebrities themselves are caught in a horrible bind. Grimes encouraged AI voice clones, then said it felt “really weird and really uncomfortable.” Jake Paul, an OpenAI investor, championed Sora’s Cameos and watched AI videos of him rack up a billion views in a week, many of them relying on homophobic stereotypes. He tried to capitalize on it, while others like IShowSpeed threatened lawsuits. It’s a mess. The scary part, as video producer Jeremy Carrasco points out, is that “once you open that door to being okay with people deepfaking you… all of a sudden your likeness has just gotten fucked.” You can’t pull it back. This fear is so pervasive it’s causing panic even when no AI is involved, as when actress Paget Brewster mistakenly accused a fan of posting an AI-faked photo of her napping. The trust is completely eroded. And when even early advocates like Grimes are calling for “international treaties,” you know the genie is way, way out of the bottle.
Why This Feels Different
Look, fandoms have always had weird, boundary-pushing elements. Erotic fanart, intense fanfiction: celebrities have dealt with all of it for decades. But AI deepfakes feel like a qualitative leap. It’s not a drawing or a story; it’s a synthetic media clone that can be used to make you say or do anything, often for a cheap, offensive joke. As the fan Mariah noted, it replaces hand-drawn fanart with something generated without care. And the viral examples are almost uniformly nasty: homophobic tropes, non-consensual “coming out” videos, scam endorsements. When Shark Tank’s Mark Cuban offered up his own Sora Cameo, security experts were shocked, because AI-generated Shark Tank scams are already rampant. The “fan” engagement is now indistinguishable from weaponized harassment. So, is it any wonder the backlash is so visceral?
A Toxic Feedback Loop
So where does this end? Probably nowhere good. The platforms (looking at you, X) have built a financial engine that rewards toxicity. The tools (looking at you, OpenAI’s Sora) have made creation trivial and removal impossible. And the fans are stuck in the middle, some genuinely horrified, others seeing a goldmine in the chaos. The report shows that even influencers who promoted their own deepfakes, like iJustine, quietly stopped promoting them. Why? Because they realized the real danger isn’t being deepfaked with consent; it’s their entire body of work being questioned as fake, or their fans being scammed. We’re creating an internet where nothing can be trusted, and the people who built it are getting paid by the click while it happens. The Ariana Grande fan wars are just the visible symptom. The disease is a system that has decided our collective rage is the most valuable commodity of all.
