According to New Atlas, the journal Nature has named its ten most influential people in science for 2025, highlighting figures from wildly different fields. The list includes Mengran Du, who discovered a thriving animal ecosystem six miles deep in the ocean, and Susan Monarez, who was fired as CDC director for refusing to compromise scientific standards. It also features Achal Agrawal, who exposed systemic research misconduct in India, and Tony Tyson, who after 30 years saw his Vera Rubin Observatory take its first images. Major breakthroughs include Sarah Tabrizi’s gene therapy that slowed Huntington’s disease progression by 75%, and Luciano Moreira’s “Wolbito” mosquito program in Brazil that cut dengue cases by nearly 90%. The tech world was represented by Liang Wenfeng, whose Chinese startup DeepSeek released the powerful, open-source R1 AI model, challenging US giants.
The human stories behind the headlines
Look, these lists can sometimes feel like a pat on the back for the usual suspects. But this one? It’s fascinating because it’s not just about the biggest discoveries, but about the kind of work that defines science right now. It’s about integrity under pressure, like Monarez getting fired or Agrawal risking his career to clean up Indian research. And it’s about sheer, stubborn persistence: Tony Tyson working on a telescope for over three decades, or Precious Matsoso singing Beatles songs to get a global pandemic treaty across the line. That’s the real story here. Science isn’t just data; it’s a deeply human endeavor, full of politics and personality, and sometimes you just need to sing a little.
The AI disruption that wasn’t supposed to happen
Here’s the thing about the DeepSeek story: it breaks the narrative. The dominant idea has been that building frontier AI requires near-infinite capital and a closed, proprietary approach, like OpenAI’s. Then along comes Liang Wenfeng, a former hedge-fund guy who quietly hoarded 10,000 Nvidia GPUs before export controls tightened. He builds DeepSeek and drops the R1 model, a reasoning-focused LLM, as open source. Trained for a fraction of the cost, and released with full technical transparency? That’s a direct challenge to the entire Silicon Valley playbook. It forced a shift. Suddenly, other companies felt pressure to open up their own models. And it changed the perception of China’s AI scene from “fast follower” to genuine innovator. It proves that in a capital-intensive field, strategic foresight, like buying GPUs early, can be as valuable as billions in venture funding.
Breakthroughs you can actually measure
Some of these entries are about hope, but others are already showing staggering results. Sarah Tabrizi’s work on Huntington’s is monumental. A 75% reduction in disease progression from a gene therapy? For a fatal, hereditary condition with zero disease-modifying treatments? That’s the kind of result that redefines a field. Similarly, Luciano Moreira’s “Wolbito” project isn’t just a lab experiment. It’s a factory in Curitiba pumping out 80 million mosquito eggs a week, aiming for five billion releases a year. And the data backs it up: a nearly 90% drop in dengue cases in early-deployment cities. That’s public health impact at a scale you rarely see. These aren’t incremental gains; they’re paradigm shifts in how we think about treating genetic diseases and controlling epidemics.
A mixed bag of progress
So what does this list tell us about 2025? Basically, that science is pushing boundaries in every direction—down into the hadal trenches, out into the cosmos with the Vera Rubin Observatory, and into the very code of our biology and technology. But it’s also a reminder of the constant friction. For every triumph like the pandemic treaty, there’s a story of a scientist being fired for upholding standards. For every open-source AI victory, there’s a systemic research integrity scandal. The takeaway isn’t that science is winning or losing. It’s that it’s a messy, human, and incredibly resilient fight for understanding, and these ten people are right in the thick of it.
