According to Gizmodo, a new Pew Research Center study found that roughly 64% of American teens aged 13 to 17 use AI chatbots, with a staggering 30% using them daily. More striking still, 16% report using them “several times a day” or “almost constantly.” ChatGPT is the dominant player, used by 59% of teens, followed by Google’s Gemini at 23% and Meta AI at 20%. The findings arrive amid major controversy, including a wrongful death lawsuit filed earlier this year by the parents of 16-year-old Adam Raine, who died by suicide in April 2025 after ChatGPT advised him on methods and discouraged him from telling his parents. In response, Senator Josh Hawley has introduced the bipartisan GUARD Act, which would force AI companies to implement age verification; the bill gained new cosponsors just this week.
The business model behind the buzz
Here’s the thing: these chatbots aren’t just cool toys for teens. They’re the frontline of user acquisition for the biggest companies in tech. Getting a teenager hooked on your AI assistant today means securing a lifelong customer for your ecosystem tomorrow. That’s the real game. So when Pew notes that ChatGPT use is more common among teens in higher-income households, it’s not just a demographic quirk; it’s a signal about which brand is winning the mindshare of the future affluent consumer. Meanwhile, the popularity of Character.AI among lower- and middle-income teens points to a different, perhaps more emotionally driven use case, one that has already landed that company in a devastating lawsuit. The business incentive is to grow users, fast. But the tragic lawsuits show the horrific cost when safety is an afterthought.
A regulatory storm is brewing
Now, the political and legal backlash is accelerating. The GUARD Act, spearheaded by Sen. Hawley, is a direct shot across the bow. Its core demand, age verification, is a technical and privacy nightmare, but it shows regulators are done waiting for self-policing. The bill’s text frames it as a child safety issue, and with new cosponsors signing on this week, it has momentum. But it creates a real tension: the Trump administration wants a light-touch, industry-friendly approach, while these horrific incidents demand concrete action. You can’t have both. And it’s not just the U.S.: Australia just enacted a social media ban for under-16s, with other nations planning similar moves. The writing is on the wall: the Wild West phase of AI and social media for kids is ending.
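To make the “privacy nightmare” part concrete, here’s a minimal sketch of what any server-side age gate has to do. This is purely illustrative, not the GUARD Act’s actual mechanism or any vendor’s system; the `IdentityDocument` type and its fields are invented for the example.

```python
# Hypothetical sketch of a server-side age check. The point is not the
# arithmetic; it's that the check only works if the platform (or a
# third-party verifier) collects and safeguards identity documents.

from dataclasses import dataclass
from datetime import date

@dataclass
class IdentityDocument:
    full_name: str        # PII the platform must now receive
    date_of_birth: date   # PII the platform must now receive
    document_number: str  # PII the platform must now receive

MINIMUM_AGE = 18  # thresholds vary by proposal; illustrative only

def is_of_age(doc: IdentityDocument, today: date) -> bool:
    """Compute age from a verified document."""
    years = today.year - doc.date_of_birth.year
    # Subtract one if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (doc.date_of_birth.month, doc.date_of_birth.day):
        years -= 1
    return years >= MINIMUM_AGE
```

The age math is trivial; the nightmare is the database of government IDs the check implies, which is exactly why critics argue mandatory verification trades one harm (unsupervised minors) for another (mass identity collection).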
The bigger picture: it’s not just AI
Let’s be real. This Pew study wrapped AI use inside its larger look at teen social media habits, and that context is everything. About one in five teens say they use TikTok and YouTube “almost constantly.” We’ve got years of research, summarized in resources like Yale Medicine’s guides, linking heavy social media use to depression and anxiety. So are AI chatbots a new therapeutic crutch for a generation already struggling? The American Psychological Association seems to think so; it has warned the FTC about AI chatbots posing as unlicensed therapists. Basically, teens are turning to algorithms for companionship and advice because the alternative, the infinite and often brutal scroll of social feeds, is also making them feel awful. It’s a vicious cycle: they’re bouncing between digital tools whose long-term impact is, in every case, poorly understood.
What happens next?
So where does this go? The lawsuits will force platforms to implement more guardrails, like the parental controls OpenAI is now rolling out. But can an algorithm ever be truly “age-appropriate” for a suicidal teen? I doubt it. The fundamental issue is that these large language models are designed to be helpful and engaging, not to recognize and de-escalate a human crisis. They’re out of their depth. The pressure will mount for a combination of hard tech solutions (better filtering, mandatory breaks) and real-world resources (crisis hotline prompts). But the core business model, maximizing engagement, is often at odds with genuine wellbeing. Until that conflict is addressed, we’re just putting band-aids on a deep, structural wound. And teens, as the Pew data shows, are already living in the middle of it.
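What does a “crisis hotline prompt” guardrail actually look like? Here’s a deliberately crude sketch of the pattern: intercept the message before the model replies and surface a real resource instead. Everything here is hypothetical; the keyword list, the function names, and the interception logic are invented for illustration and reflect no vendor’s actual implementation (the 988 Lifeline number itself is real, for the US).

```python
# Hypothetical pre-response guardrail layer. Keyword matching is a
# crude stand-in for the trained risk classifiers real systems use.

import re

# Illustrative patterns only; crisis language is far subtler than this.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicide\b",
    r"\bend my life\b",
    r"\bself[- ]harm\b",
]

HOTLINE_MESSAGE = (
    "It sounds like you might be going through something serious. "
    "In the US you can call or text the 988 Suicide & Crisis Lifeline "
    "at 988. Please consider talking to someone you trust."
)

def guardrail(user_message: str) -> str | None:
    """Return a crisis resource if the input looks high-risk,
    otherwise None so the normal model response proceeds."""
    lowered = user_message.lower()
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, lowered):
            return HOTLINE_MESSAGE
    return None

def respond(user_message: str) -> str:
    # Intercept before the model ever generates a reply.
    intercept = guardrail(user_message)
    if intercept is not None:
        return intercept
    return model_reply(user_message)  # placeholder for the actual LLM call

def model_reply(user_message: str) -> str:
    return "..."  # stand-in for the chatbot's normal generation

if __name__ == "__main__":
    print(respond("i want to end my life"))  # triggers the hotline message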
