According to IEEE Spectrum, risk analyst Robert N. Charette has been documenting massive software failures for over 20 years. In a seminal 2005 article, he argued that these failures are mostly predictable and avoidable, yet organizations ignore the urgent need to prevent them. Two decades and several trillion wasted dollars later, he sees the same mistakes: unrealistic budgets, skipped testing, and blind faith in vendor promises. The catastrophic failure of the Canadian government’s Phoenix payroll system nine years ago, which caused prolonged financial distress for tens of thousands of employees, is a prime example. Charette points to a key difference: unlike medical device makers, IT project managers face no professional licensing and are rarely held legally liable for debacles.
The Accountability Gap
Here’s the thing that Charette nails: software is treated as this magical, ephemeral thing, not as critical infrastructure. He makes the perfect comparison: “Software is as significant as electricity. We would never put up with electricity going out every other day, but we sure as hell have no problem having AWS go down.” And he’s right. We just shrug. But why? It basically comes down to consequences, or the lack thereof.
In medical devices, failure can mean death, and there’s a whole legal framework—tort law, strict FDA regulations, recalls—that holds manufacturers’ feet to the fire. You can see the seriousness in the FDA’s medical device recall database. Now think about that failed payroll system or a crashed airline booking platform. The chaos is immense, but the path to holding anyone personally or professionally accountable is murky. No license to lose. No real legal liability. So the cycle continues.
Same Mistakes, Different Decade
And what a depressing cycle it is. Charette lists the classics that never get old: declaring your project “unique,” underestimating complexity, and skimping on testing. But now we have new flavors of the same old dysfunction. Organizations jump on DevOps or AI coding copilots as silver bullets, without the training or cultural change needed to make them work. It’s like watching someone drop a race car engine into a golf cart and act shocked when it doesn’t improve their commute.
The human cost gets glossed over, treated as a secondary bug report. The Phoenix system disaster isn’t about lines of code; it’s about people not being able to pay rent or buy groceries for years. But because the impact is dispersed and emotional, not a single, catastrophic physical event, it doesn’t trigger the same response. If you want to feel the collective frustration, just read through the comments on a Hacker News thread about project failures—it’s all there.
A Matter of Culture and Rigor
So, is there a fix? Charette’s sigh in the article says it all. Change requires a fundamental shift: seeing software as a professional engineering discipline, not just a cost center. It demands rigor from the ground up, whether you’re coding a pacemaker or an enterprise resource planning system. In industrial and manufacturing contexts, where software controls physical processes, that rigor is non-negotiable. Reliability isn’t a feature; it’s the product. The software might be king, but it needs a throne that won’t collapse.
Ultimately, we’ll keep blowing trillions until the pain of failure outweighs the perceived cost of prevention. Maybe it’ll take a software catastrophe with a body count, like a major infrastructure collapse, to force that change. Or maybe we’ll just keep shrugging. Charette has been watching this for decades, and I’m not betting on a sudden wave of wisdom. If you need more convincing of the scale, watch his talk on the economics of software failure. The numbers are staggering, but our willingness to ignore them might be even more so.
