Archive for the ‘security’ Category

screenos vulnerable

Sunday, December 20th, 2015

During a recent internal code review, Juniper discovered unauthorized code in ScreenOS that could allow a knowledgeable attacker to gain administrative access to NetScreen® devices and to decrypt VPN connections. Once we identified these vulnerabilities, we launched an investigation into the matter, and worked to develop and issue patched releases for the latest versions of ScreenOS.

Juniper are to be commended for taking code reviews seriously enough to find major security vulnerabilities. This is still a major concern, though: the choice of words, specifically “unauthorized code,” suggests this was no unintentional defect.

software security

Sunday, October 18th, 2015

On Volkswagen (again):

The explanation was at least mildly plausible, initially, though, because a modern high-end car is staggeringly complex. It requires something like a hundred million lines of code, about two hundred and fifty […] times the number of lines in the Space Shuttle. No one could know every line of that software, making it theoretically possible that engineers could have sneaked in the emissions-defeating protocol without Volkswagen’s upper management knowing. Microsoft engineers did something like that decades ago, when they slipped a flight-simulator game into the shipping version of Excel 97.

But on Wednesday, Spiegel issued a report, based on one of the many investigations taking place at Volkswagen and around the world, saying that at least thirty managers were involved in the cheating. This squares with Barton’s skepticism, not to mention common sense. Volkswagen engineers didn’t smuggle in software that allows you to play Tetris on in-car G.P.S. screens. They wrote code that fundamentally changed how the company’s diesel cars worked. The altered software affected engine emissions, mileage, cost, and power—all things that auto executives care about. In other words, while it’s technically possible to install such software, it’s hard to imagine that it could have gone unnoticed. Modern automobile engines are made by teams that design, build, test, and tune everything to produce the desired effect. Companies have been building these engines for more than a hundred years, refining a process that leaves no room for mysteries or magic outcomes. When a car produces more power, there is a reason; when a car produces fewer emissions, there is a reason. And when, at Volkswagen, its diesel engine produced forty times more nitrogen oxide when it wasn’t being tested than when it was, many people inside would have known why.

Here’s another thought, if Volkswagen’s executive team truly were completely oblivious to what was going on: perhaps senior management at Volkswagen have a reputation for really not liking bad news. In that case, the level of sophistication and coordination required to implement such software starts to make a little sense, although what doesn’t make sense to me in that context is the idea that executive management could be so disconnected from the organisation they run.

(And if they are that disconnected – then that raises other, very significant, concerns in itself.)

In a powerful book about the disintegration, immediately after launch, of the Challenger space shuttle, which killed seven astronauts in January of 1986, the sociologist Diane Vaughan described a phenomenon inside engineering organisations that she called the “normalisation of deviance.” In such cultures, she argued, there can be a tendency to slowly and progressively create rationales that justify ever-riskier behaviours. Starting in 1983, the Challenger shuttle had been through nine successful launches, in progressively lower ambient temperatures, across the years. Each time the launch team got away with a lower-temperature launch, Vaughan argued, engineers noted the deviance, then decided it wasn’t sufficiently different from what they had done before to constitute a problem. They effectively declared the mildly abnormal normal, making deviant behaviour acceptable, right up until the moment when, after the shuttle launched on a particularly cold Florida morning in 1986, its O-rings failed catastrophically and the ship broke apart.

If the same pattern proves to have played out at Volkswagen, then the scandal may well have begun with a few lines of engine-tuning software. Perhaps it started with tweaks that optimised some aspect of diesel performance and then evolved over time: detect this, change that, optimise something else. At every step, the software changes might have seemed to be a slight “improvement” on what came before, but at no one step would it necessarily have felt like a vast, emissions-fixing conspiracy by Volkswagen engineers, or been identified by Volkswagen executives. Instead, it would have slowly and insidiously led to the development of the defeat device and its inclusion in cars that were sold to consumers.

Except that software development in and around complicated systems doesn’t work that way.

Sure, incremental development is definitely a known — sensible, even — approach, but writing code that successfully and reliably identifies particular road conditions isn’t just “a few lines of […] software”: it requires consideration, planning, design, and testing. That work could only have begun once it was clear that the engine profile for drivability and the engine profile for the environmental tests were so far apart, and it isn’t something that even a small handful of rogue engineers could reliably knock together.

Certainly, multiple engine maps are plausible: many vehicles have them. Selecting a timing and fuel delivery map based on particular conditions — transmission gear selected, throttle setting, engine RPM, for example — is quite common.
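
To illustrate how unremarkable that is, here’s a minimal sketch of condition-based map selection. The map names, thresholds, and inputs below are invented for illustration, not taken from any real ECU.

    # Hypothetical sketch: selecting an engine map from operating
    # conditions. Names and thresholds are invented; a real ECU works
    # from calibrated lookup tables, not a handful of if-statements.

    from dataclasses import dataclass

    @dataclass
    class EngineState:
        rpm: int          # engine speed, revolutions per minute
        throttle: float   # throttle position, 0.0 (closed) to 1.0 (wide open)
        gear: int         # selected transmission gear

    def select_map(state: EngineState) -> str:
        """Pick a timing/fuel map name from the current conditions."""
        if state.throttle > 0.8 or state.rpm > 5000:
            return "performance"   # hard acceleration or high revs
        if state.gear >= 5 and state.throttle < 0.3:
            return "economy"       # light-throttle cruising in a high gear
        return "normal"

    print(select_map(EngineState(rpm=2100, throttle=0.2, gear=6)))  # economy

Note that every branch keys off ordinary driving conditions, and every threshold would have a documented engineering justification, which is exactly what makes a defeat device different in kind.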

However, to even consider heading down that path for the far more complicated task of detecting a rolling road — particularly when the only reason to do so is to cheat — surely would have caused at least one team member a sleepless night or two.
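
For a sense of why it’s more than “a few lines”, consider what even a toy dynamometer detector has to reason about. The sketch below is purely hypothetical (the signals and thresholds are invented), though reports on the case suggest real defeat devices drew on inputs such as steering angle, speed traces, and test duration.

    # Purely hypothetical sketch of rolling-road detection: flag a
    # sustained period where the drive wheels are turning but the
    # steering wheel never moves. Signals and thresholds are invented.

    from collections import deque

    class DynoDetector:
        def __init__(self, window: int = 600):
            # e.g. 600 samples at 10 Hz = one minute of history
            self.samples = deque(maxlen=window)

        def update(self, wheel_speed_kmh: float, steering_angle_deg: float) -> bool:
            self.samples.append((wheel_speed_kmh, steering_angle_deg))
            if len(self.samples) < self.samples.maxlen:
                return False  # not enough history to decide yet
            moving = all(speed > 5.0 for speed, _ in self.samples)
            steering_dead = all(abs(angle) < 1.0 for _, angle in self.samples)
            # Nobody drives for a full minute without touching the wheel.
            return moving and steering_dead

    detector = DynoDetector(window=5)  # tiny window, just for the demo
    for _ in range(5):
        verdict = detector.update(wheel_speed_kmh=50.0, steering_angle_deg=0.2)
    print(verdict)  # True: sustained speed with a frozen steering wheel

Even this toy version needs sampling rates chosen, thresholds tuned, validation against real drive data, and protection against false positives that would wreck drivability on a long straight road. That is design and test work, and it leaves a paper trail.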

In summary, it’s still implausible to me that the executive management team at Volkswagen were completely oblivious to what was going on.

all software sucks

Wednesday, July 22nd, 2015

The only thing that surprises me in this article is that the attack took until mid-2015 to happen:

[…] The attack […] can compromise those Uconnect computers—an optional upgrade feature that doesn’t come standard in the Chrysler vehicles—through their cellular Internet connection to tamper with dashboard functions and track their GPS coordinates.

For 2014 Jeep Cherokees in particular, [the attack extends] to the vehicle’s CAN bus, the network that controls functions like steering, brakes, and transmission.

Attacks like this become a safety-critical issue — rather than just an annoyance — due to two factors.

First, all software sucks, with consumers failing to demand the kind of qualities in software that they now routinely expect in hardware (mostly, to be fair, because consumers don’t know to ask — or how to ask).

Worse, most engineering reliability analyses don’t know how to model software reliability, so software gets neatly left out of the hardware failure modes analysis — meaning it’s then easy to slip into the incorrect assumption that the software can’t fail, let alone take the hardware out with it.

Engineers and other technical types might not make that assumption, but they’re not the ones generally making the decisions on what and where to cut the budget. Which brings us to the second factor: businesses are attuned to looking for maximum profit at minimum cost.

That means, in turn, that obvious safety considerations, such as not having any form of physical link between safety-critical systems and online entertainment systems, are forgone because it’s an easy way to cut costs. One set of connectivity is cheaper than two, particularly when the safety-critical systems already have sensors and the like that the entertainment systems can then utilise.
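
If the buses must be bridged at all, the obvious mitigation is a gateway that forwards only an explicit whitelist of messages from the entertainment side. Here’s a minimal hypothetical sketch; the CAN IDs, frame shape, and policy are all invented for illustration.

    # Hypothetical sketch of a whitelist-based gateway between an
    # infotainment bus and a safety-critical CAN bus. The IDs and
    # policy are invented; a real gateway would be a hardened,
    # independently verified hardware component.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CanFrame:
        can_id: int
        data: bytes

    # Only these message IDs may cross onto the vehicle bus, e.g.
    # steering-wheel volume buttons. Everything else is dropped.
    INBOUND_WHITELIST = {0x2F0, 0x2F1}

    def may_forward(frame: CanFrame) -> bool:
        """True if the frame is allowed onto the safety-critical bus."""
        if frame.can_id not in INBOUND_WHITELIST:
            return False               # default deny
        return len(frame.data) <= 8    # plus basic frame sanity

    print(may_forward(CanFrame(0x2F0, b"\x01")))      # True: whitelisted
    print(may_forward(CanFrame(0x1A0, b"\x00" * 8)))  # False: not on the list

Default-deny is the important design choice there: the gateway enumerates what’s allowed, rather than trying to enumerate attacks.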

defective thinking

Wednesday, July 15th, 2015

Software sucks for many reasons, all of which run deep, are entangled with one another, and are expensive to fix.

This isn’t a new problem. Not even remotely. Moreover, the tools to improve the state of play have been around in various forms for at least three decades: Carnegie Mellon University, among others, has been trying to treat software development more like an engineering discipline since the 80s.

It’s a problem that’s getting worse with time, too: we’re increasingly dependent on complex systems underpinned by software, and increasingly unable to circumvent these systems when they do fail. Case in point: Toyota, as recently as 2010, were using outdated software development processes while simultaneously building cars ever more dependent on reliable software. It’s clear that their approach to software design hasn’t worked out for them — and while their problems are now public knowledge, I don’t believe any other manufacturer would be any better off.

The problem is this: it’s not sexy to fix software issues before they arise, no matter how critical the software is. Further, the “invisible hand” of the market tends toward minimising costs, and designing systems for safety is a costly exercise with no payoff that is obvious to management.

Instead, it’s easier to pretend problems don’t exist — for example, many failure modes analysis tools treat software as having no failure modes, while making no such assumption about the hardware platforms on which the software runs — and then spend time hand-wringing when, inevitably, the house of cards collapses.

Open source software is argued by many to be the panacea for this problem. It isn’t, and the many recently uncovered issues with OpenSSL (Heartbleed chief among them) demonstrate this conclusively.

Nothing short of holding software development to the same rigour applied to any other engineering field will consistently improve the state of play.

on content blocking

Wednesday, June 24th, 2015

Australia now has an Internet filter.

Moreover, it’s one which gives the courts the right to determine the method and scope of a block, and denies anyone other than an ISP the right or ability to challenge it.

The net effect: if a court orders a block based on, say, an IP address, then any innocent websites that happen to be collocated with the target become collateral damage, and those impacted websites have no recourse of their own; only an ISP can challenge the order.
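
To see why an IP-based block over-reaches, remember that shared hosting routinely puts many unrelated sites behind one address. A quick sketch of the check (the hostnames below are placeholders and won’t actually resolve):

    # Hypothetical illustration: many unrelated hostnames can resolve
    # to one IP address, so blocking that address blocks all of them.
    # The hostnames below are placeholders and won't resolve.

    import socket
    from collections import defaultdict

    hostnames = ["target-site.example", "innocent-blog.example",
                 "small-business.example"]

    by_ip = defaultdict(list)
    for name in hostnames:
        try:
            by_ip[socket.gethostbyname(name)].append(name)
        except socket.gaierror:
            pass  # placeholder names; real ones would resolve

    for ip, names in by_ip.items():
        if len(names) > 1:
            print(f"blocking {ip} would also take down {names[1:]}")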

We’ve been here before, and apparently learned nothing from it.

Remember this when it comes time to vote again. Remember, too, that both the Labor opposition and Coalition government waved it through in this form.

internet censorship

Sunday, February 2nd, 2014

The Australian Government is once again pushing legislation to censor the internet. And the sky is up, the grass is green and there’s nothing new under the sun.

This time, Canberra is angling to appoint a new e-safety commissioner and create new legislation in a supposed crusade against online bullying. To that end, the Government is proposing new powers for the rapid takedown of offensive material published on social media networks.

It should barely need saying again: you can expect to be about as successful at reliably censoring the Internet as you would be at reliably censoring individuals’ thoughts.

This ain’t 1984.

ransomware

Friday, January 17th, 2014

Pharmacists have become the latest targets of sophisticated computer hacks known as ransomware attacks, which lock up PCs until victims pay up.

Once the hackers plant the virus, the files on a computer become encrypted and unable to be accessed.

I doubt very much that there’s any targeting going on; rather, I suspect the victims have simply suffered bad luck and poor security practices.

It’s worth pointing out, though, that proper off-site and off-line backups are a critical component of any business’s disaster recovery plan.
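
As a sketch of what that means in practice (the paths and naming below are invented), an off-line backup step only needs to archive, checksum, and verify before the media is detached and taken off-site:

    # Minimal sketch of an off-line backup step: archive a directory,
    # record a checksum, and verify the archive reads back. Paths are
    # invented; the key point is that the destination is removable
    # media that gets detached and stored off-site afterwards.

    import hashlib
    import tarfile
    from datetime import date
    from pathlib import Path

    SOURCE = Path("/srv/business-data")   # hypothetical data directory
    DEST = Path("/mnt/offline-backup")    # e.g. a removable drive

    def backup() -> Path:
        archive = DEST / f"backup-{date.today():%Y%m%d}.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(SOURCE, arcname=SOURCE.name)
        # A recorded checksum lets a later restore test detect corruption.
        digest = hashlib.sha256(archive.read_bytes()).hexdigest()
        Path(f"{archive}.sha256").write_text(digest)
        with tarfile.open(archive) as tar:  # cheap structural read-back test
            tar.getmembers()
        return archive

    print(backup())  # then detach the drive: ransomware can't encrypt
                     # what isn't mounted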