According to Business Insider, Boris Cherny, the engineer behind Anthropic’s Claude Code, said on a podcast published Monday that vibe coding only works for “throwaway code and prototypes, code that’s not in the critical path.” He said he uses it often but not for everything, emphasizing the need for maintainable, thoughtful code. Anthropic CEO Dario Amodei said in October that Claude writes 90% of the company’s code, and Google CEO Sundar Pichai noted last month that AI is writing over 30% of new code at Google, up from 25% in October 2024. Pichai also claimed vibe coding is making coding more enjoyable for non-technical people. Despite the growth, Cherny cautioned that models are still “not great at coding” and that there is massive room for improvement.
The Prototype Problem
Here’s the thing: Cherny is drawing a line that a lot of experienced developers feel instinctively but haven’t articulated. Vibe coding—throwing a natural language prompt at an AI and running with the output—is fantastic for speed. Need a quick script to parse a file? A basic UI mockup? A function you’ll use once and discard? It’s a game-changer. It turns hours of boilerplate work into minutes. But that’s the key word: boilerplate. The AI is brilliant at pattern matching and generating the *shape* of code it’s seen before. The problem is, critical production systems aren’t just about shape. They’re about nuance, edge cases, security, and long-term maintainability. An AI doesn’t understand the business logic buried in your legacy systems. It can’t foresee how a “clever” bit of code might become a debugging nightmare six months from now. So when Cherny says he uses it for prototypes, he’s basically admitting it’s a brilliant sketch artist, not an architect.
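For context, here is the kind of throwaway code that discussion describes: a quick one-off parser you might vibe-code, run once, and discard. The format and field names are invented for illustration:

```python
# A disposable script: parse a simple key=value config, skipping blanks
# and '#' comments. Perfect vibe-coding territory — no edge-case audit,
# no long-term maintenance, no critical path.

def parse_kv(text: str) -> dict[str, str]:
    """Parse lines like 'name = value' into a dict."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        out[key.strip()] = value.strip()
    return out

sample = """
# demo input
host = example.com
port = 8080
"""
print(parse_kv(sample))  # {'host': 'example.com', 'port': '8080'}
```

Nothing here touches security, business logic, or legacy nuance, which is exactly why handing it to an AI is low-risk.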
The Pair Programmer Model
So what does he do for the important stuff? He pairs. He starts by asking the AI for a *plan*, not code. Then he iterates in small steps, asking it to improve or clean up. This is a fundamentally different workflow. It puts the human in the architect’s seat, using the AI as an ultra-fast, knowledgeable, but slightly error-prone junior engineer. You’re directing the process, not outsourcing it. And crucially, Cherny says that for parts where he has strong technical opinions, he still writes the code by hand. That’s the ultimate tell. The AI can’t yet replicate deep, hard-won expertise about system design or performance. It can suggest, but it can’t *know*. This hybrid model—human for strategy and deep craft, AI for execution and iteration—is probably the future for serious software engineering. It’s less about replacing the coder and more about augmenting their flow, turning them into a super-powered director.
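That workflow can be sketched roughly in code. The `ask_model` function below is a stub standing in for whatever AI assistant you use; the prompts and the hand-written helper are invented for illustration:

```python
# A sketch of the plan-first pairing workflow: plan, then small iterated
# steps, with opinionated parts written by hand. ask_model is a stub.

def ask_model(prompt: str) -> str:
    """Stand-in for a real AI coding assistant call."""
    return f"[model response to: {prompt!r}]"

# Step 1: ask for a plan, not code.
plan = ask_model("Outline a plan to add retry logic to our HTTP client. No code yet.")

# Step 2: iterate in small, reviewable steps.
draft = ask_model("Implement the first step of the plan: a backoff helper.")
cleaned = ask_model("Simplify that helper and add type hints.")

# Step 3: where you have strong technical opinions, write it yourself.
def backoff_seconds(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Hand-written exponential backoff: deliberate, reviewed, owned."""
    return min(cap, base * (2 ** attempt))

print(backoff_seconds(0), backoff_seconds(4))  # 0.5 8.0
```

The point is the division of labor: the model produces volume fast, while the human keeps ownership of the decisions that are expensive to get wrong.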
The Hype Versus The Handoff
Now, contrast that with the hype from leaders like Sundar Pichai and Andrew Ng. They’re celebrating how people can build things “barely looking at the code” and how it’s making coding “approachable” for non-technical folks. And look, that’s true and genuinely exciting! It democratizes creation. But there’s a massive gap between building a simple app and maintaining a large, secure codebase that powers, say, a financial system or a piece of critical industrial infrastructure. Pichai himself admitted he’s “not working on large codebases where you really have to get it right, the security has to be there.” That’s the whole point! The leaders praising the vibe are often not the ones who would trust it with mission-critical systems. It’s a bit like praising how easy it is to build a shed with a new power tool, while the people building skyscrapers are using that same tool for specific tasks but still relying on their engineering blueprints and certified materials.
The Worst It’s Ever Gonna Be
Maybe the most insightful thing Cherny said is that the models are “not great at coding,” but “this is the worst it’s ever going to be.” That’s the real takeaway. The tools are immature, but improving at a dizzying rate. The danger isn’t the AI itself; it’s the human tendency to overestimate its current capabilities and hand over too much, too soon. The limits Cherny describes—maintainability, security, deep technical insight—are the frontiers. Can AI eventually understand those? Probably. But we’re not there yet. So for now, the smart approach is Cherny’s: embrace the speed for prototyping, use it as a pair programmer for serious work, and never stop applying your own brain to the hard parts. The vibe is powerful, but it’s not yet wisdom.
