An AI Researcher Says AGI Is A Fantasy. Here’s Why.

According to TheRegister.com, AI researcher Tim Dettmers of the Allen Institute and Carnegie Mellon University argues that Artificial General Intelligence (AGI) is a “fantasy” due to fundamental hardware limitations. He defines AGI as an intelligence capable of all human tasks, including physical work, but contends current processors are insufficient and scaling is about to hit a wall. Dettmers predicts we have only “maybe one, maybe two more years” of scaling left before further GPU improvements become “physically infeasible,” noting that GPU performance per cost maxed out around 2018. He points out that while Nvidia’s Blackwell GPUs offer 2.5x the performance of Hopper, they require twice the die area and 1.7x the power, and that rack-scale optimizations like the GB200 NVL72 will only buy time until maybe 2026 or 2027. Despite this, he believes the current massive investment in AI infrastructure is justified for inference, but warns that the U.S. focus on winning an AGI arms race is short-sighted compared to China’s pragmatic application-focused approach.

The hard reality of hardware

Here’s the thing that a lot of the philosophical AGI chatter misses: this stuff has to actually run on something. And Dettmers is basically saying the silicon party is almost over. We’ve been living off clever tricks like lower-precision data types (BF16, FP8, FP4), which give the illusion of massive leaps: halve the precision and you double the throughput. But the raw computational grunt? It’s not keeping up.
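To see why that’s bookkeeping rather than a real leap, the arithmetic is trivial. Here’s a minimal sketch; the 1 PFLOPS baseline is a made-up round number, not any GPU’s spec, and only the ratios matter:

```python
# Back-of-envelope: why halving precision "doubles" throughput.
# The 1 PFLOPS baseline is an illustrative round number, not a spec
# for any particular GPU; only the ratios matter here.

BYTES_PER_VALUE = {"FP32": 4, "BF16": 2, "FP8": 1, "FP4": 0.5}

def nominal_throughput(flops_at_fp32: float) -> dict:
    """Values processed per second, assuming compute scales inversely with bit width."""
    return {
        fmt: flops_at_fp32 * (BYTES_PER_VALUE["FP32"] / width)
        for fmt, width in BYTES_PER_VALUE.items()
    }

for fmt, rate in nominal_throughput(1e15).items():
    print(f"{fmt:>4}: {rate:.1e} values/s")
# Each halving of the format doubles the headline number. The transistors
# aren't faster; the numbers are just narrower.
```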

Look at the numbers. From Ampere to Hopper, a 3x performance bump needed 1.7x more power. From Hopper to Blackwell, 2.5x more performance needs twice the silicon real estate and 1.7x the power again. That’s not sustainable scaling; that’s throwing more stuff at the problem. It’s like trying to make a car go faster by just building a bigger, thirstier engine, not by inventing a better engine. We’re running out of road for that strategy.
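If you want to separate genuine efficiency from brute force, just divide the numbers out. A quick sketch using only the multipliers quoted above (the Ampere-to-Hopper die-area change isn’t cited, so it’s held at 1x):

```python
# The generational multipliers quoted above, turned into efficiency ratios.
# The area figure for Ampere -> Hopper isn't cited, so it's left at 1x.

gens = {
    "Ampere -> Hopper":    {"perf": 3.0, "power": 1.7, "area": 1.0},
    "Hopper -> Blackwell": {"perf": 2.5, "power": 1.7, "area": 2.0},
}

for step, r in gens.items():
    print(f"{step}: {r['perf'] / r['power']:.2f}x perf/W, "
          f"{r['perf'] / r['area']:.2f}x perf/area")
# Hopper -> Blackwell: ~1.47x perf per watt and ~1.25x perf per unit of die
# area. Most of the 2.5x headline comes from spending more silicon and more
# watts, not from getting more work out of each joule.
```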

The rack-scale band-aid

So what’s the plan? The industry’s current answer is to go wider, not smarter. Nvidia’s GB200 NVL72 is the poster child: stitch 72 GPUs together in a rack and call it a single system. And sure, you get a 30x inference boost. That’s nothing to sneeze at for running today’s models. But Dettmers calls it what it is: a temporary fix. “Slightly better rack-level hardware optimizations” might get us to 2026 or 2027.
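To put “wider, not smarter” in numbers, here’s a toy calculation. Only the 72-GPU count comes from the article; the parallel-efficiency figure is an assumption for illustration, not a GB200 NVL72 measurement:

```python
# Toy arithmetic for "go wider, not smarter". The 72-GPU count is from the
# article; the parallel-efficiency figure is an assumption for illustration,
# not a GB200 NVL72 measurement.

def rack_speedup(gpus: int, parallel_efficiency: float) -> float:
    """Effective speedup over one GPU once work is sharded across the rack."""
    return gpus * parallel_efficiency

# assume ~60% efficiency after interconnect and scheduling overhead
print(f"{rack_speedup(72, 0.6):.0f}x one GPU")
# ~43x by addition, not by a better transistor. Once the rack is full,
# the only move left is another rack.
```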

After that? The physics gets really, really hard. That’s a critical insight for anyone investing in, or betting a business on, continuous exponential growth in AI capability. The hardware trajectory is pointing towards a plateau. And if model improvements can’t keep coming without that hardware crutch, we could be left with a lot of very expensive, very power-hungry hardware that’s a liability, not an asset.

The physical world problem

This is where Dettmers’ argument gets really interesting. He defines AGI as needing to do *all* human tasks, including physical ones. That means robotics. And that’s a whole other can of worms. The digital world is neat and tidy; you can simulate and generate data. The physical world is messy, unpredictable, and insanely expensive to collect data for.

Think about it. How do you get the trillions of data points needed to train a robot to, say, repair a complex machine in a factory? You can’t just scrape the internet. You’d have to collect that data in the real world, which is slow, costly, and incredibly complex. The scaling challenges for hardware meet the scaling challenges for data collection, and the whole proposition starts to look… well, fantastical.
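A rough back-of-envelope makes the gap concrete. Every number below is an assumption picked to show the order of magnitude, not a figure from Dettmers:

```python
# Back-of-envelope on physical-world data collection. Every number below is
# an assumption chosen to show the order of magnitude, not a cited figure.

TARGET_SAMPLES = 1e12                 # "trillions of data points"
DEMOS_PER_ROBOT_PER_HOUR = 60         # assume one labeled demonstration per minute
FLEET_SIZE = 10_000                   # assume a very large robot fleet
HOURS_PER_YEAR = 24 * 365

fleet_rate = DEMOS_PER_ROBOT_PER_HOUR * FLEET_SIZE * HOURS_PER_YEAR
print(f"~{TARGET_SAMPLES / fleet_rate:.0f} years of round-the-clock collection")
# Roughly 190 years. Even a 10,000-robot fleet logging a demonstration every
# minute, nonstop, needs centuries to match the data volume that web scraping
# handed language models essentially for free.
```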

Pragmatism vs. the arms race

So why does the AGI narrative persist? Dettmers nails it: it’s a “compelling narrative.” It drives investment, headlines, and a sense of cosmic purpose. But he contrasts this with a more pragmatic path, one he attributes to China: focus on applying the AI we *already have* to boost productivity and do useful work.

It’s a stark choice. One path is a high-stakes, possibly quixotic race for a digital god-in-a-box. The other is the gradual, maybe less sexy, integration of AI as a powerful tool. Dettmers isn’t saying to stop AI research. He’s saying the billions pouring in are good for inference and narrow applications. But betting the farm on AGI? He thinks that’s a fantasy built on hardware that’s about to hit its limits. And you have to wonder, if the scaling really does slow in the next two years, how many grand plans will need a serious reality check.
