According to Wired, OpenAI sent roughly 80 times as many child exploitation incident reports to the National Center for Missing & Exploited Children (NCMEC) in the first half of 2025 as it did in the same period of 2024. The company filed 75,027 reports concerning 74,559 pieces of content from January to June 2025, up from just 947 reports covering 3,252 pieces of content in the first half of 2024. OpenAI spokesperson Gaby Raila attributed the increase to major user growth and “more product surfaces that allowed image uploads,” noting investments made in late 2024 to scale moderation. Nick Turley, VP of ChatGPT, recently said the app now has four times the weekly active users it had a year prior.
The numbers are staggering, but nuanced
An 80x increase is a headline-grabber, no doubt. But here’s the thing: in the world of content moderation, raw report numbers don’t always tell a simple story. As the article notes, a spike can reflect better detection, not necessarily more bad activity. OpenAI basically built a bigger net and is now catching more fish. The fact that the 2025 report and content counts are nearly identical (roughly 75,000 of each) suggests its systems now flag individual items more granularly, whereas in 2024 a single report often covered multiple pieces of content. So part of this is a change in *how* they report, not just *what* they’re finding.
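To make that concrete, here’s a quick back-of-the-envelope check using only the figures cited above (the granularity reading is an inference from the ratios, not something OpenAI has spelled out):

```python
# Ratios derived from the NCMEC report figures cited above.
reports_2024, content_2024 = 947, 3_252
reports_2025, content_2025 = 75_027, 74_559

print(f"2024: {content_2024 / reports_2024:.1f} pieces of content per report")  # ~3.4
print(f"2025: {content_2025 / reports_2025:.2f} pieces of content per report")  # ~0.99
print(f"Report growth: {reports_2025 / reports_2024:.0f}x")                     # ~79x
print(f"Content growth: {content_2025 / content_2024:.0f}x")                    # ~23x
```

The flagged content itself grew about 23-fold, which is still enormous, but the 80x headline number is amplified by the shift toward filing roughly one item per report.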
Growth at all costs meets sobering reality
OpenAI’s explanation ties the spike directly to its own success: more users, and more features like image uploads across more products. That’s probably true. But it’s also a reminder that risk scales with popularity. Every new user and every new file-upload capability is a new vector for abuse. The company’s transparency report and safety page outline its policies, but policies have to keep pace with reality. When your weekly active users quadruple, the absolute number of bad actors, or even just clueless users pushing boundaries, will inevitably rise. It’s the brutal arithmetic of scale.
This is a pandemic across generative AI
Don’t think this is just an OpenAI problem. Look at the broader data from NCMEC’s CyberTipline: reports involving generative AI exploded by over 1,300% from 2023 to 2024. We’re in a new era. The tools that make it easy to create a marketing image or a story idea can also, horrifyingly, be twisted to generate CSAM. Other big labs, like Google, also file reports (you can see theirs on their transparency report), but they don’t break out the AI-specific portion. That makes OpenAI’s detailed disclosure somewhat rare, and arguably more responsible. But it also spotlights them.
What does “effective safety” even look like?
So, is reporting 80 times more incidents a sign of failure or a sign of a safety system working? I think it’s both. It’s good that they’re catching and reporting this material—that’s the legal and ethical bare minimum. But the sheer volume is a five-alarm fire about the environment these platforms operate in. It raises hard questions. Are they playing a doomed game of whack-a-mole? And what about their API and other access points? The report notes this data doesn’t include Sora, their new video generator. That’s another frontier about to open. Increased reports are a metric, but they’re not the *goal*. The goal is to prevent this stuff from being created and shared in the first place. Based on these numbers, that goal seems desperately far away.
