How the Heartbleed bug reveals a flaw in online security

The exact contents sent back varied between systems and over time. But as well as user passwords and other private data, they could include something called the private master key.

With access to this key, an “attacker” can electronically impersonate the organization that rightfully owns it, and unscramble all the private messages sent to that organization — including old ones, if the attacker has kept the previously unreadable scrambled versions.

Criminals could, for instance, steal the key of a major bank and then electronically impersonate it. It’s a potential field day for spies, too.

Discovery and consequences
The buggy code was incorporated into a June 2012 release of OpenSSL that was widely adopted, and there it stayed until it was discovered, virtually simultaneously, by Google’s security team and Codenomicon, an Internet security company.

Before informing the public, they informed the OpenSSL developers, who fixed the bug by adding the missing checks.

At this moment, there is no evidence that anybody has maliciously exploited the bug, but system administrators have acted both to prevent exploitation and to reduce the consequences if it has already occurred.

The fix is simple. The task of getting it deployed to the millions of systems using OpenSSL is not.

System administrators across the world have been furiously installing the fix on millions of computers. They’re also scrambling to generate new master keys.

For most end users, the biggest nuisance will come when administrators request password changes.

Most users have multiple Internet accounts; many of these will be affected by the Heartbleed bug, and their administrators will ask users to change their passwords in case they have been stolen.

In addition, many embedded computers in devices such as home network routers may be vulnerable, and updating these is a time-consuming manual task.

Even if there hasn’t been any malicious exploitation of the bug, the costs of people’s time will likely run into the hundreds of millions of dollars.

A tiny mistake but a major headache
Contrary to a variety of conspiracy theories, the simplest and most likely explanation for the bug is an accidental mistake. Seggelmann denies doing anything deliberately wrong.

Mistakes of the type that caused Heartbleed have led to security problems since the 1970s. OpenSSL is written in a programming language called C, which also dates from the early 1970s. C is renowned for its speed and flexibility, but the trade-off is that it places all responsibility on programmers to avoid making precisely this kind of mistake.
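The sketch below illustrates the class of mistake in C. It is not OpenSSL’s actual heartbeat code; the function names, buffer sizes and lengths are invented for illustration. The unsafe version trusts a length claimed by the other end of the connection, while the safe version adds the single bounds check whose absence was, in essence, the Heartbleed bug.

```c
/*
 * A minimal sketch of the bug class behind Heartbleed -- not OpenSSL's
 * actual code; names and sizes are invented for illustration. The unsafe
 * handler trusts a length field supplied by the peer rather than the
 * number of bytes actually received, so the copy can read past the end
 * of the request and leak whatever happens to sit in adjacent memory.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Echo back 'claimed_len' bytes of the payload, as a heartbeat reply would. */
unsigned char *build_reply_unsafe(const unsigned char *payload,
                                  size_t claimed_len)
{
    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;
    /* BUG: nothing checks that claimed_len is no larger than the data
     * actually received, so this memcpy can read far beyond 'payload'. */
    memcpy(reply, payload, claimed_len);
    return reply;
}

/* The fix is the single missing bounds check: ignore requests whose
 * claimed length exceeds what was really sent. */
unsigned char *build_reply_safe(const unsigned char *payload,
                                size_t claimed_len, size_t received_len)
{
    if (claimed_len > received_len)
        return NULL;
    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;
    memcpy(reply, payload, claimed_len);
    return reply;
}

int main(void)
{
    unsigned char request[16] = "hello";

    /* An honest request: the claimed length matches what was sent. */
    unsigned char *ok = build_reply_safe(request, 5, sizeof request);
    if (ok != NULL) {
        printf("reply: %.5s\n", (const char *)ok);
        free(ok);
    }

    /* A malicious request claiming 64 KB is rejected by the safe version;
     * build_reply_unsafe() would happily over-read and leak memory. */
    unsigned char *bad = build_reply_safe(request, 65535, sizeof request);
    printf("oversized request %s\n", bad == NULL ? "rejected" : "answered");
    free(bad);

    return 0;
}
```

The OpenSSL fix amounts to much the same thing: comparing the claimed payload length against the length of the record that actually arrived, and silently discarding the request if the two disagree.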

There are currently two broad streams of thought in the technical community about how to reduce the likelihood of such mistakes:

  1. Use technical measures, such as alternative programming languages, that make this type of error less likely.
  2. Tighten up the process for making changes to OpenSSL, so that they are subject to much more extensive expert scrutiny before incorporation.

Dealing with risk
My view is that while both of these points have merit, underlying both is a more basic problem: the Heartbleed bug represents a massive failure of risk analysis.

It’s hard to be too critical of those who volunteer to build such a useful tool, but OpenSSL’s design prioritizes performance over security, a trade-off that probably no longer makes sense.

But the bigger failure in risk analysis lies with the organizations that use OpenSSL and other software like it. The development team, language choices and development process of the OpenSSL project are laid bare, in public, for anyone who cares to find out.

The consequences of a serious security flaw in the project are equally obvious. But a huge array of businesses, including very large IT businesses that depend on OpenSSL and have the resources to act, took no steps in advance to mitigate the losses.

They could have chosen to fund a replacement using more secure technologies, or to fund better auditing and testing of OpenSSL so that bugs like this one were caught before deployment.

They didn’t do either, so they — and now we — wear the consequences, which likely far exceed the costs of mitigation.

And while you shake your head at the IT geeks, I leave you with a question — how are you identifying and managing the risks that your own organization faces?

Robert Merkel is Lecturer in Software Engineering at Monash University. This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).