devil in defeat device details

November 24th, 2015

Bob Lutz affirms what I and many other technical types have been saying:

Ferdinand Piëch, the immensely powerful former chief of Volkswagen’s supervisory board, is more than likely the root cause of the VW diesel-emissions scandal. Whether he specifically asked for, tacitly approved, or was even aware of the company’s use of software to deliberately fudge EPA emissions testing is immaterial.


It’s what I call a reign of terror and a culture where performance was driven by fear and intimidation. He just says, “You will sell diesels in the U.S., and you will not fail. Do it, or I’ll find somebody who will.” The guy was absolutely brutal.

I imagine that at some point, the VW engineering team said to Piëch, “We don’t know how to pass the emissions test with the hardware we have.” The reply, in that culture, most likely was, “You will pass! I demand it! Or I’ll find someone who can do it!”

In these situations, your choice was immediate dismissal or find a way to pass the test and pay the consequences later. Human nature being what it is—if it’s lose your job today for sure or lose your job maybe a year from now, we always pick maybe a year from now.

Adding to Volkswagen’s woes is the discovery of a second, completely separate range of impacted vehicles, powered by the three-litre V6 turbo-diesel, mere weeks after the company made fairly explicit denials of any such possibility.

Audi USA has gone on the record to clarify details regarding the emissions testing issue with the Volkswagen Group’s 3.0-litre V6 TDI engine and how the company is progressing with the US authorities.

At the beginning of November, the United States’ Environmental Protection Agency (EPA) issued a second notice of violation against the Volkswagen Group, claiming that certain cars powered by the company’s 3.0-litre turbo-diesel V6 engines were fitted with a defeat device, which allowed them to illegally pass American emissions testing for NOx (oxides of nitrogen).

The EPA alleged that the engine’s control software was able to detect an emissions test and enter a “temperature conditioning” mode that limited the output of NOx. Once the test concluded, the engine reverted to its regular configuration, emitting up to nine times the permitted levels of NOx.
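
For a sense of how little machinery such a mode switch needs, here is a minimal sketch of one way test detection could work, assuming the software compares the car’s recent speed history against a known drive-cycle trace. Everything below (the trace values, thresholds, and names) is invented for illustration; the actual mechanism has not been published.

    # Purely illustrative: the real ECU code has not been published, and this
    # reference trace is an invented placeholder, not actual FTP-75 data.

    # First ten seconds of a hypothetical test cycle: speed (km/h) per second.
    REFERENCE_TRACE_KMH = [0, 0, 0, 3, 6, 9, 12, 14, 16, 18]

    def matches_test_cycle(recent_speeds_kmh, reference=REFERENCE_TRACE_KMH,
                           tolerance_kmh=2.0):
        """True if recent speed history tracks the reference trace closely
        enough to look like a standardised emissions test."""
        if len(recent_speeds_kmh) < len(reference):
            return False
        window = recent_speeds_kmh[-len(reference):]
        return all(abs(s - r) <= tolerance_kmh
                   for s, r in zip(window, reference))

    def select_emissions_mode(recent_speeds_kmh):
        # "temperature conditioning" is the mode named in the EPA's notice;
        # how the real software entered it is not public.
        if matches_test_cycle(recent_speeds_kmh):
            return "temperature_conditioning"  # low-NOx calibration
        return "normal"                        # regular, NOx-heavier calibration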

Corporate cultural issues being what they are, I think it’s a sure bet that there’ll be more “surprises” like this before all is said and done, and I wouldn’t be too surprised to see issues beyond emissions appear as well.

concentrated technology

October 18th, 2015

If you were to launch a new mail server right now, many networks would simply refuse to speak to you. The problem: reputation.

Email today is dominated by a handful of major services. […] It’s become increasingly unusual for individuals or businesses to host their own mail, to the point that new servers are viewed with suspicion.

And so it goes for most technology discussions: decision-making power ends up concentrated in the hands of a small handful of organisations, often to the point where those who would like to broaden their horizons are unable to do so.

The experience with email merely demonstrates that it’s not just an issue of standards compliance.

software security

October 18th, 2015

On Volkswagen (again):

The explanation was at least mildly plausible, initially, though, because a modern high-end car is staggeringly complex. It requires something like a hundred million lines of code, about two hundred and fifty […] times the number of lines in the Space Shuttle. No one could know every line of that software, making it theoretically possible that engineers could have sneaked in the emissions-defeating protocol without Volkswagen’s upper management knowing. Microsoft engineers did something like that decades ago, when they slipped a flight-simulator game into the shipping version of Excel 97.

But on Wednesday, Spiegel issued a report, based on one of the many investigations taking place at Volkswagen and around the world, saying that at least thirty managers were involved in the cheating. This squares with Barton’s skepticism, not to mention common sense. Volkswagen engineers didn’t smuggle in software that allows you to play Tetris on in-car G.P.S. screens. They wrote code that fundamentally changed how the company’s diesel cars worked. The altered software affected engine emissions, mileage, cost, and power—all things that auto executives care about. In other words, while it’s technically possible to install such software, it’s hard to imagine that it could have gone unnoticed. Modern automobile engines are made by teams that design, build, test, and tune everything to produce the desired effect. Companies have been building these engines for more than a hundred years, refining a process that leaves no room for mysteries or magic outcomes. When a car produces more power, there is a reason; when a car produces fewer emissions, there is a reason. And when, at Volkswagen, its diesel engine produced forty times more nitrogen oxide when it wasn’t being tested than when it was, many people inside would have known why.

Here’s another thought, if Volkswagen’s executive team really were completely oblivious to what was going on: perhaps senior management at Volkswagen have a reputation for not wanting to hear bad news. In that case, the level of sophistication and coordination required to produce such software starts to make a little sense, although what doesn’t make sense to me, in that context, is the idea that executive management could be so disconnected from the organisation they run.

(And if they are that disconnected, that raises other, very significant, concerns in itself.)

In a powerful book about the disintegration, immediately after launch, of the Challenger space shuttle, which killed seven astronauts in January of 1986, the sociologist Diane Vaughan described a phenomenon inside engineering organisations that she called the “normalisation of deviance.” In such cultures, she argued, there can be a tendency to slowly and progressively create rationales that justify ever-riskier behaviours. Starting in 1983, the Challenger shuttle had been through nine successful launches, in progressively lower ambient temperatures, across the years. Each time the launch team got away with a lower-temperature launch, Vaughan argued, engineers noted the deviance, then decided it wasn’t sufficiently different from what they had done before to constitute a problem. They effectively declared the mildly abnormal normal, making deviant behaviour acceptable, right up until the moment when, after the shuttle launched on a particularly cold Florida morning in 1986, its O-rings failed catastrophically and the ship broke apart.

If the same pattern proves to have played out at Volkswagen, then the scandal may well have begun with a few lines of engine-tuning software. Perhaps it started with tweaks that optimised some aspect of diesel performance and then evolved over time: detect this, change that, optimise something else. At every step, the software changes might have seemed to be a slight “improvement” on what came before, but at no one step would it necessarily have felt like a vast, emissions-fixing conspiracy by Volkswagen engineers, or been identified by Volkswagen executives. Instead, it would have slowly and insidiously led to the development of the defeat device and its inclusion in cars that were sold to consumers.

Except that software development in and around complicated systems doesn’t work that way.

Sure, incremental development is definitely a known, even sensible, approach, but writing code that successfully and reliably identifies particular road conditions isn’t just “a few lines of […] software”: it requires consideration, planning, design, and testing. Such code would only have been written once it was clear that the engine profile for drivability and the engine profile for the environmental tests were irreconcilably far apart, and it isn’t something that even a small handful of rogue engineers could reliably knock together.

Certainly, multiple engine maps are plausible: many vehicles have them. Selecting a timing and fuel-delivery map based on particular conditions (transmission gear selected, throttle setting, engine RPM, for example) is quite common, as the sketch below illustrates.
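
To make that concrete, here is a minimal sketch of unremarkable multi-map selection; the map contents, thresholds, and names are invented for illustration:

    # A sketch of ordinary, legitimate multi-map selection; all values invented.
    ENGINE_MAPS = {
        "economy":     {"injection_advance_deg": 2.0, "fuel_scale": 0.90},
        "normal":      {"injection_advance_deg": 4.0, "fuel_scale": 1.00},
        "performance": {"injection_advance_deg": 6.0, "fuel_scale": 1.15},
    }

    def select_engine_map(gear, throttle_pct, rpm):
        """Choose a timing/fuelling map from ordinary driving inputs."""
        if throttle_pct > 80 or rpm > 3500:
            return ENGINE_MAPS["performance"]  # driver wants power
        if gear >= 5 and throttle_pct < 30:
            return ENGINE_MAPS["economy"]      # cruising: favour consumption
        return ENGINE_MAPS["normal"]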

However, to even consider heading down that path for the far more complicated case of detecting a rolling road, particularly when the only reason to do so is to cheat, surely would have caused at least one team member a sleepless night or two.

In summary, it’s still implausible to me that the executive management team at Volkswagen were completely oblivious to what was going on.

a further thought on volkswagen

October 10th, 2015


corporate crap, aka a couple of software engineers

October 9th, 2015

Volkswagen, and their misleading and deceptive behaviour, are all over the news at the moment. Diesel engine tuning is a continuum, with variables like particulates, fuel consumption, power output, and oxides of nitrogen (NOx) in play; in short, minimising particulates and fuel consumption while maintaining reasonable power output comes at the cost of increased NOx production.

In large diesels, such as those found in trucks, buses, and so on, this is dealt with by injecting a urea solution into the exhaust, where a catalyst reduces the NOx to nitrogen and water. This approach adds weight and complexity, along with the requirement to replenish the urea solution regularly, so it isn’t often found on small diesels.

People, though, don’t tolerate smoky diesels, particularly in small passenger vehicles. The addition of a diesel particulate filter theoretically addresses some of this, but it’s better to minimise the particulates in the first place; the added bonus of doing so is lower fuel consumption and more usable torque and power. NOx aside, low fuel consumption, low soot, and higher power output are exactly what customers want.

Which raises the question of how small-diesel manufacturers have been passing the extremely strict emissions standards, which care more about NOx than particulates, given that NOx is a main ingredient in photochemical smog. As it now turns out, at least one manufacturer, Volkswagen, has been passing the tests through duplicity: running a “fuel-rich,” probably particulate-heavy, NOx-light engine profile when under test, and a leaner, particulate-light, NOx-heavy engine profile for normal driving.

Give a one-dimensional metric to an engineer, and they’ll find a way to ‘optimise’ that metric. Measure ticket closure rates in a support environment, and tickets will be closed as quickly as possible — probably quicker than would result in happy customers.

However, I don’t for a moment believe that “a couple of software engineers” would have gone to these lengths unbidden; for starters, the test regime for such engine profiles would require significant coordination among many different people within the organisation, all the way up to — at the very least — a program manager. Dyno runs; tests to ensure the software can figure out the difference between a rolling road and a real one; the engine mappings themselves: all of this takes time, effort, expense, and coordination. If it were as simple as “a couple of software engineers,” the development costs associated with new cars wouldn’t be as large as they are in the first place.
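
Even a crude sketch of the rolling-road detection problem makes the coordination cost visible: every input signal has to be plumbed through from another subsystem, and every threshold validated against real dyno runs. (A sketch only: the real detection logic hasn’t been published in full, though reports point to signals such as steering input, and all values here are guesses.)

    # Illustrative guesswork only: the signals and thresholds are invented.
    def on_rolling_road(driven_wheel_kmh, undriven_wheel_kmh,
                        steering_angle_deg, lateral_accel_g):
        """Heuristic guess at detecting a two-wheel dynamometer."""
        wheels_disagree = driven_wheel_kmh > 10 and undriven_wheel_kmh < 1
        no_steering = abs(steering_angle_deg) < 0.5  # test cycles don't steer
        no_cornering = abs(lateral_accel_g) < 0.02   # and don't corner
        return wheels_disagree and no_steering and no_cornering

Getting a heuristic like this to never trigger on a real road, across every market and driving style, is a test campaign in its own right, not an afternoon’s work.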

Further, why would “a couple of software engineers” be so concerned about passing some environmental tests, unless they had been directed to be?

My guess, based on having seen similar behaviours elsewhere (even including the tossing of “a couple of software engineers” under the bus when caught), can be summarised like so:

  1. Diesel engine is developed.
  2. Diesel engine fails internal benchmarks against the relevant environmental standards.
  3. “A couple of software engineers” are tasked with fine-tuning the engine run profile to meet the environmental standards.
  4. Engine now passes benchmarks, but fails to provide the driving performance that would be preferred.
  5. “A couple of software engineers” are now tasked with finding a solution to this problem, with the inference that they still need to be able to pass the environmental standards.

Given this trade-off, the use of two separate engine maps, tuned for each use case — and additional code to determine which one to use — is almost the only logical outcome.
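
In sketch form, that end state is almost trivially small once the mapping and detection work has been done elsewhere (values invented for illustration):

    # The "two maps plus a selector" outcome, sketched; map values are invented.
    ROAD_MAP = {"egr_rate": 0.15, "timing_advance_deg": 6.0}  # drivable, NOx-heavy
    TEST_MAP = {"egr_rate": 0.45, "timing_advance_deg": 2.0}  # NOx-light, sluggish

    def active_map(dyno_detected):
        # dyno_detected would come from detection logic like the earlier sketch
        return TEST_MAP if dyno_detected else ROAD_MAP

Which is rather the point: the final selector is trivial, but everything feeding it isn’t.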

So yes, it probably was “a couple of software engineers” who wrote the code. It almost certainly couldn’t have been done without fairly sophisticated coordination across the whole product team.

operations as an endangered species

August 12th, 2015

Back in 2007, I changed roles, moving away from UNIX-style systems administration and toward network engineering. This was for a multitude of reasons, not least of which was that it better matched my interests. A consideration, however, was that virtualisation was well on its way to changing the world of systems administration, and with it, it was clear that the market for system administration generalists was going to become more, rather than less, limited with time. Essentially, my prediction was that as virtualisation took hold, one system administration generalist could be expected to support an order of magnitude more applications; while this isn’t a bad thing in itself, it certainly limited the career scope of working in related spaces. My guess was that working in networking would be a little safer: irrespective of what happened to compute, there’d likely always be a need for managed network connectivity (more on that later; suffice to say I no longer believe this either).

Roll on 2015. If anything, that forecast has proven conservative; while Amazon’s EC2 started making itself felt in around 2008, I and many others failed to anticipate that shared infrastructure compute would become the centrepiece of businesses’ deployment strategies quite so soon. At the time, I would have suggested that relying on EC2 and its ilk for mainstream business needs was, somehow, inappropriate governance.

Late 2008 saw the global financial crisis take hold. I don’t think it’s an accident that business interest in gaining efficiencies through X-as-a-service offerings really gathered pace in its wake; managing business services in-house can be expensive, and short of overtly outsourcing all of a business’ IT needs, as-a-service offerings allow a reasonable compromise between a tailored solution and a low-cost, one-size-fits-all one. The GFC caused many businesses to search for efficiencies, and moving to fully virtualised platforms was a logical way forward.

With X-as-a-service has come the disappearance of highly skilled, highly focused specialists from ordinary businesses: they are now hired by, and work for, the businesses delivering the service. The ranks of generalists have also diminished somewhat; the sense that there’s a trend here is unmistakable.

Information Technology in all its forms is becoming more like a utility, insofar as everyone has to have a basic understanding of how to operate the aspects relevant to them, while a very few specialists actually touch the moving parts. This isn’t a short-lived trend, and the average CxO *wants* it to continue; they don’t want to hire people into generic IT operations roles. Such roles have been derided as janitorial for quite some time; the disappearance of operations generalists is the net result.

Nor is this a bad thing. Information Technology, almost more than any other career path, is defined by disruption; it’s driven disruption in every industry it’s touched, and with that disruption come changes in the very nature of IT itself.

Back to networking. In 2007 I believed that networking would be relatively immune to all of this; after all, no matter where the compute is housed, people need access to it. There’s an underlying assumption here: that the nature of networking wouldn’t change along with compute and storage. That assumption is, naturally, wrong. Network-as-a-service, simplified and automated to the point where a skilled network generalist is no longer required within the business itself, is clearly in the near-term future for many businesses.

None of this is to say that generalist operations people are now irrelevant: there are still times when a generalist is needed. What is changing, however, is the need for multitudes of such staff, hired by and working within a business that treats IT as a supporting function. A small number of generalists, possibly working across multiple businesses, in much the same way that a business might have maintenance staff come in once a week, or an electrician on call for light-duty ad-hoc work, is the more likely outcome.

customer satisfaction

August 7th, 2015

I recently had an experience involving very poor customer service. This isn’t at all surprising (less-than-brilliant customer service is increasingly the norm, as good customer service can be rather expensive to deliver with no directly measurable economic benefit), but the nature of this particular interaction got me thinking about what customer satisfaction actually consists of.

The industry, organisation, and individuals involved aren’t, for the purposes of this story, relevant; the fact that I’m personally familiar with many of the parties is much more so, but only to the extent that it allowed me to take a step back and think through what the problem was.
