defective thinking

Software sucks for many reasons, all of which run deep, are entangled with one another, and are expensive to fix.

This isn’t a new problem. Not even remotely. Moreover, the tools to improve the state of play have been around in various forms for at least three decades: Carnegie Mellon University, among others, has been trying to treat software development more like an engineering discipline since the 80s.

It’s a problem that’s getting worse with time, too: we’re increasingly dependent on complex systems underpinned by software, and increasingly unable to circumvent those systems when they do fail. Case in point: Toyota, as recently as 2010, was using outdated software development processes while simultaneously building cars ever more dependent on reliable software. It’s clear that this approach to software design hasn’t worked out for them, and while their problems are now public knowledge, I don’t believe any other manufacturer would be any better off.

The problem is this: it’s not sexy to fix software issues before they arise, no matter how critical the software is. Further, the “invisible hand” of the market tends toward minimising costs, and designing systems for safety is a costly exercise with no obvious (to management) payoff.

Instead, it’s better to pretend problems don’t exist (many failure mode analysis tools, for example, treat software as having no failure modes at all, while making no such assumption about the hardware platforms on which the software runs) and to spend time hand-wringing when, inevitably, the house of cards collapses.

Open source software is argued by many to be the panacea for this problem. It isn’t, and the many recently uncovered issues with OpenSSL demonstrate this conclusively.

Nothing short of holding software development to the same rigour applied to any other engineering field will consistently improve the state of play.
