SECURITY SHELF

Meltdown and Spectre: What now? 

Bryan Pollitt, Vice-President, Professional Services at Toronto-based security firm ISA, explained it well:

“These vulnerabilities are different than most we see, because they are tied to hardware and not to an application or operating system. Hardware vulnerabilities are far rarer. The Meltdown and Spectre vulnerabilities that were discovered by a team of independent researchers including Google’s Project Zero are likely to be the worst processor bugs ever discovered.

The first of these vulnerabilities has been dubbed ‘Meltdown’ because it essentially melts the security boundaries normally enforced by hardware. It takes advantage of a feature on almost all modern processors called ‘speculative execution’ or ‘out-of-order execution’ which allows the processor to execute instructions in a non-sequential manner so that the CPU spends less time idle. It leverages a race condition between instruction execution and privilege checking in order to read memory mapped data that it should not be able to.

The second of these vulnerabilities is called ‘Spectre’ which has been described by researchers as a whole class of potential vulnerabilities in modern processors. Spectre focuses on ‘branch prediction’, which is a part of speculative execution. Unlike the Meltdown vulnerability, Spectre does not rely on a specific feature of the processor memory management and protection system. It is a more generalized idea that has so far been demonstrated to work against user level programs.”
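To make the branch-prediction point concrete, the pattern below is modelled on the illustrative example published by the Spectre researchers. It shows only the kind of bounds-checked array access a variant-1 attack targets; the names (array1, array2, victim_function) and the sizes are placeholders taken from that example, and the branch-training, cache-flushing, and timing steps a real exploit needs are omitted.

#include <stddef.h>
#include <stdint.h>

/* Names and layout follow the illustrative example in the researchers'
 * Spectre paper; the sizes are placeholders, not from any real program. */
unsigned int array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 512];   /* probe array: one well-spaced slot per possible byte value */

/* The bounds check below is correct as written.  During speculative
 * execution, however, a mistrained branch predictor can run the body
 * with an attacker-chosen out-of-bounds x before the check resolves.
 * The out-of-bounds byte then selects which slot of array2 is pulled
 * into the cache, and a later timing measurement (not shown) reveals it. */
void victim_function(size_t x)
{
    if (x < array1_size) {
        volatile uint8_t tmp = array2[array1[x] * 512];
        (void)tmp;
    }
}

Code this innocuous, sitting in a kernel, driver, or browser, can become a disclosure gadget, which is why the researchers describe Spectre as a class of vulnerabilities rather than a single bug.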

While most Meltdown and Spectre discussions have focused on the fact that almost every Intel processor produced since 1995 appears to be affected, some AMD and ARM processors are vulnerable as well. Intel appears fully engaged and cooperating with mitigation efforts, but the company has been criticized for its initial response, in which it failed to clearly accept responsibility, claiming “Recent reports that these exploits are caused by a ‘bug’ or a ‘flaw’ and are unique to Intel products are incorrect. Based on the analysis to date, many types of computing devices — with many different vendors’ processors and operating systems — are susceptible to these exploits.”

Major operating system developers, including Microsoft, Apple, and various Linux distributions, have released patches to mitigate the vulnerabilities; additional patches will likely be required, especially as new Spectre-related exploits are discovered. Estimates of the performance impact vary from negligible to upwards of thirty per cent, depending on workload.
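On Linux systems running a kernel new enough to expose the /sys/devices/system/cpu/vulnerabilities interface (added alongside the Meltdown patches), a quick check of mitigation status looks roughly like the sketch below; older kernels and other operating systems report status differently, so the absence of these files is not by itself an answer.

#include <stdio.h>

/* Status files exposed by Linux kernels that carry the Meltdown/Spectre
 * patches; systems without the interface simply will not have them. */
static const char *vulns[] = { "meltdown", "spectre_v1", "spectre_v2" };

int main(void)
{
    char path[128], line[256];

    for (int i = 0; i < 3; i++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/vulnerabilities/%s", vulns[i]);
        FILE *f = fopen(path, "r");
        if (!f) {
            printf("%-10s: no status file (kernel may predate the interface)\n", vulns[i]);
            continue;
        }
        if (fgets(line, sizeof(line), f))
            printf("%-10s: %s", vulns[i], line);   /* line already ends in a newline */
        fclose(f);
    }
    return 0;
}

On a patched Intel machine the meltdown entry typically reports a line such as “Mitigation: PTI”, while “Vulnerable” indicates the kernel recognizes the issue but no fix is active.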

Local code execution is required to exploit these vulnerabilities, which provides some risk mitigation. However, JavaScript has been identified as a potential delivery mechanism, making it theoretically possible to mount both attacks from inside a browser. Malware could also leverage the vulnerabilities to bypass security controls, but, like other malware, it must first be delivered to the target system. At this time there is no evidence that these vulnerabilities would facilitate the introduction of malware onto a device in the first place.

Servers, especially in cloud environments, are an area of much greater concern, including the potential for Spectre to facilitate crossing virtual machine boundaries. Hypervisors, as well as containers that share the kernel such as Docker, LXC, or OpenVZ, are all potentially affected. In response, major cloud vendors rushed to patch their systems.

From an end-user and IT operations perspective, the only practical response to Meltdown and Spectre is to apply security-relevant patches as soon as possible. However, individuals and organizations that process highly sensitive information in shared environments should closely monitor updates and adjust risk assessments accordingly.

Hypervisors, and cloud computing in general, have matured to the point that most organizations are comfortable that they provide trustworthy isolation. But the fact that these hardware vulnerabilities existed for more than two decades, and that both were discovered at about the same time by multiple independent researchers, suggests that reconsidering the appropriateness of shared computing resources is warranted.

One good strategy may be to group applications based on security and integrity requirements. For example, mixing public and non-public information on the same physical server may be more risky than previously thought. In more extreme cases, such as systems that could be subject to state-sponsored attacks, the use of dedicated hardware may be more appropriate. In general, system architects should consider that the level of isolation existing on shared hardware may be lower than anticipated.

Have a security question you’d like answered in a future column? Eric would love to hear from you.
