Meltdown and Spectre – what should organisations be doing to protect people’s personal data?

By Nigel Houlden, Head of Technology Policy

IT security

This week Google’s Project Zero team published details of serious security flaws, Meltdown and Spectre, which affect almost every modern computer and could allow hackers to steal sensitive personal data. The three connected vulnerabilities have been found in processors designed by Intel, AMD and ARM. The full technical details can be found in Project Zero’s blog post, and the researchers have published papers under the names Meltdown and Spectre that give further detail.

In essence, the vulnerabilities provide ways that an attacker could extract information from privileged memory locations that should be inaccessible and secure. The potential attacks are only limited by what is being stored in the privileged memory locations – depending on the specific circumstances an attacker could gain access to encryption keys, passwords for any service being run on the machine, or session cookies for active sessions within a browser. One variant of the attacks could allow for an administrative user in a guest virtual machine to read the host server’s kernel memory. This could include the memory assigned to other guest virtual machines.

The implications for data controllers are clear. If these vulnerabilities are exploited on a system that is processing personal data, then that personal data could be compromised. Alternatively, an attacker could steal credentials or encryption keys that would allow them to access personal data stored elsewhere.

It is important to note that no live attacks appear to have been carried out using these vulnerabilities yet, but malware writers and hackers will be hard at work determining how best to exploit them, and probing systems to see which are vulnerable.

We therefore strongly recommend that organisations determine which of their systems are vulnerable, and test and apply the patches as a matter of urgency. Failure to patch known vulnerabilities is a factor that the ICO takes into account when determining whether a breach of the seventh principle of the Data Protection Act is serious enough to warrant a civil monetary penalty. And, under the General Data Protection Regulation taking effect from May 25 this year, there may be some circumstances where organisations could be held liable for a breach of security that relates to measures, such as patches, that should have been taken previously.
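Identifying vulnerable systems can be partly automated. On Linux, for example, recent kernels report the mitigation status of each known CPU vulnerability as files under `/sys/devices/system/cpu/vulnerabilities/`. A minimal sketch of an inventory check follows; note that the exact file names and status strings depend on the kernel version, and other operating systems need their own checks:

```python
# Minimal sketch: query the Linux kernel's CPU-vulnerability status report.
# Recent kernels expose one file per known vulnerability under
# /sys/devices/system/cpu/vulnerabilities/ (e.g. "meltdown", "spectre_v1").
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status(vuln_dir: Path = VULN_DIR) -> dict:
    """Map each reported vulnerability name to the kernel's status string."""
    if not vuln_dir.is_dir():  # non-Linux host, or a kernel predating the report
        return {}
    return {f.name: f.read_text().strip() for f in sorted(vuln_dir.iterdir())}

if __name__ == "__main__":
    report = mitigation_status()
    if not report:
        print("No kernel vulnerability report found - check this host manually.")
    for name, state in report.items():
        # "Not affected" and "Mitigation: ..." are good; "Vulnerable" needs patching.
        flag = "OK " if state.startswith(("Not affected", "Mitigation")) else "FIX"
        print(f"[{flag}] {name}: {state}")
```

Run across an estate, a report like this gives a quick view of which hosts still need the patches applied.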

Cloud service providers will have to consider carefully whether they will be considered a data controller for any virtual machines running on vulnerable systems. Organisations that use cloud providers should obtain assurances from the provider that these vulnerabilities have been patched.

To patch or not to patch?

Some organisations may have difficult choices to make in relation to the vulnerabilities. For some there may be a performance drop when the patch to resolve the Meltdown vulnerability is applied. Estimates of the impact have varied wildly, but initial benchmarking indicates that certain workloads may suffer noticeable performance hits.

In addition, some organisations may find that their antivirus solution is not compatible with the patches issued by Microsoft – Microsoft has published a support notice on the topic. Ultimately, organisations will have to make their own choices on whether to patch, but if they choose not to, we would expect significant mitigations to be in place and well understood.

Privacy by design

Hackers should not be getting into core systems in the first place. Privacy by design should be built into every part of your information processing, from the hardware and software to the procedures, guidelines, standards and policies that your organisation has – or should have.

Taking care of the basics will help protect your organisation from potential attacks, and therefore potential loss of data; these measures are simply part of due diligence. Having an effective layered security system will help to mitigate any attack – your systems can’t be exploited if a hacker can’t get through the front door!

Systems should be protected at every step. You should be looking at your data flows and understanding how your data moves across and beyond your organisation, both electronically and in the ‘real’ world. You should also be evaluating the financial and reputational impact on you of a data breach or data loss. Data should be secure at rest as well as in transit – even if a hacker gets hold of the data, they shouldn’t be able to read it.

A well-designed system will ensure that the network infrastructure is protected, incorporating firewalls, access control lists and VLANs, as well as non-technological preventative measures such as CCTV, fences and security personnel where needed.

Access to data should follow the principle of least privilege. Not knowing who in your organisation has access to what, or who is responsible for it, can leave a massive hole in your security.
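Least privilege is easiest to enforce and audit when permissions are explicit and access is denied by default. A minimal sketch follows; the role and permission names are hypothetical examples, not a recommendation of any particular product or scheme:

```python
# Minimal sketch of deny-by-default, least-privilege access control.
# Role and permission names below are hypothetical examples.
ROLE_PERMISSIONS = {
    "support_agent": {"read:tickets"},
    "dba":           {"read:tickets", "read:customer_db", "write:customer_db"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly holds the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

An unknown role, or a permission nobody thought to grant, is refused rather than silently allowed – and because every grant is written down, answering “who has access to what?” becomes a simple lookup rather than an investigation.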

And remember – security isn’t just an IT issue. For good security to work you need senior management to buy-in and support it, and you need to enforce your policies and procedures. Just because at some point in the past someone somewhere wrote a security policy doesn’t mean you’re protected. You need to make sure that staff have read and understood the requirements of that policy, and that the consequences of failing to follow it are understood and enforced.

The more layered your approach, the less likely it is that a vulnerability like Meltdown or Spectre can be exploited.

Nigel Houlden is the Head of Technology Policy. His department develops policy positions and products in relation to information technology and information security issues. Nigel is the ICO’s lead technological advisor and key liaison with the ICO’s technology stakeholders.

6 Responses to Meltdown and Spectre – what should organisations be doing to protect people’s personal data?

  1. Karl Fontanari says:

    Very good article and response to this news

  2. Brandfire says:

    “Hackers should not be getting into core systems in the first place”… Hackers WILL be getting into core systems according to the ex-Director of the FBI Robert S. Mueller III.

    Exploits that have been quietly shared with hundreds (possibly thousands) of individuals over the last seven months, one of which can be realised in a few lines of JavaScript, with no obvious IoCs (Indicators of Compromise), which give access to almost all the data on the planet and at the present time aren’t fully 100% patched, and ultimately won’t be patchable across millions of devices which will be languishing throughout supply chains for at least the next decade? As we speak, hard drives in Russia must be brimming with encryption keys and passwords!

    This was an opportunity to move off the rather glib advice of “you must patch your systems / install firewalls, VLANs and security guards so you will be safe and nothing bad will happen”.

    The takeaways from this, surely, should be separation and resilience. Separation of Data, Systems, Processes, People – and resilience for when the worst happens… because it will.

    Remember that whilst your cybersecurity budget might be £100,000+ this year, your adversaries can easily afford to spend $250,000+ for a single zero-day exploit.

  3. Ravinder Jamgotre says:

    Organisations should institutionalise patch and vulnerability management as a managed process, and invest in IT to remove legacy applications hindering patch cycles. There is no prize for 2nd place in this digital world we live and work in.

  4. Nigel Cox says:


    “We therefore strongly recommend that organisations determine which of their systems are vulnerable, and test and apply the patches as a matter of urgency.”

    Is there any plan to re-evaluate this advice taking into account the difficulties and confusion in the industry around the availability, stability and practicality of applying the firmware/microcode patches?


  5. Pete Austin says:

    Intel’s patches, as was predictable given the complexity of the vulnerabilities here, had serious problems and were withdrawn. So would you like to reconsider this advice?

    “We therefore strongly recommend that organisations determine which of their systems are vulnerable, and test and apply the patches as a matter of urgency. Failure to patch known vulnerabilities is a factor that the ICO takes into account”

    Will the ICO also state that it takes into account security risks caused by over-swift patching – especially in the case of extremely complex vulnerabilities? Because if you appear to penalise the failure to patch known vulnerabilities, vs the creation of new vulnerabilities by over-rapid patching, this encourages companies to jump the gun and make bad decisions.

  6. amit says:

    nice solution for security of data but many services are also available with Blockchain technology and smart contracts along with healthureum
