According to PCWorld, a Reuters investigation has uncovered internal Meta documents showing the company estimates 10% of its 2024 revenue—approximately $16 billion—comes from known or suspected scammers advertising on Facebook, Instagram, and WhatsApp. The report reveals that Meta’s own analysis found its platforms play a central role in online scams, with about one-third of successful U.S. scams involving Meta services. Internal policies reportedly protect high-spending fraudulent advertisers: “high value accounts” can accumulate up to 500 user reports before Meta takes action, while ordinary accounts can face enforcement after as few as eight. Meta’s automated systems actually charge suspected scammers higher advertising prices rather than banning them, and employee teams are barred from actions that would reduce company revenue by more than 0.15%. Company representative Andy Stone called the figures “rough and overly-inclusive” but didn’t provide updated numbers.
The business of looking the other way
Here’s the thing that really gets me about this story: it’s not that scammers exist on social media platforms. We all know that. It’s that Meta has apparently built an entire business model around monetizing criminal activity while pretending to fight it. The internal documents show they’re not just passively allowing this to happen—they’re actively optimizing their systems to extract maximum value from scammers while doing the absolute minimum required to maintain plausible deniability.
Think about that automated auction system that charges scammers more instead of banning them. That’s not just negligence—that’s a calculated business decision. They’re essentially running a protection racket where criminals pay a premium for the privilege of operating on their platforms. And the employee teams being told they can’t reduce revenue by more than 0.15%? That’s a precise ceiling on how much revenue Meta is willing to sacrifice for user safety. Spoiler alert: it’s not much.
The regulatory calculus
What’s even more damning is that Meta appears to have done the math on regulatory consequences and decided the profits are worth the risk. The documents show they expect any fines to come in under $1 billion, a small fraction of the roughly $16 billion a year they’re making from these scams. So basically, they’ve calculated that crime pays, at least for them.
And let’s be real—when your platform is so scam-friendly that internal reports literally say “it is easier to advertise scams on Meta platforms than Google,” you’ve got more than a moderation problem. You’ve got a fundamental business ethics problem. The fact that illegal online casino ads kept running for over six months after being internally flagged as the “Scammiest Scammer” tells you everything about their priorities.
Where does this leave users?
So what happens now? Well, Meta says they’re working to reduce the share of revenue coming from scam ads from 10.1% to 7.3% by 2025. But let’s be honest—that’s still billions of dollars from criminal activity. And when your starting position is “we’re okay with earning billions from scammers, just maybe not quite as many billions,” you’ve already lost the moral argument.
The really scary part? This is just the advertising scammers—the ones paying Meta directly. This doesn’t even touch the separate issue of impersonation scams and other fraud happening organically on their platforms. When you combine both problems, you have to wonder if any platform this massive can ever be effectively moderated, especially when there’s so much financial incentive not to.
At the end of the day, this report confirms what many of us have suspected: when user safety conflicts with revenue, we know which one wins. And until regulators start treating this as the systemic business practice it appears to be rather than as a string of isolated moderation failures, don’t expect anything to fundamentally change.
