New Whitepaper on Meltdown and Spectre fixes for HPC


A new white paper from Ellexus looks at how you can mitigate the threats posed by the Spectre and Meltdown exploits.

Can you afford to lose a third of your compute real estate? If not, you need to pre-empt the impact of Meltdown and Spectre.

Meltdown and Spectre are quickly becoming household names, and not just in the HPC space. These severe design flaws in Intel microprocessors could allow sensitive data to be stolen, and the fixes are likely to be bad news for I/O-intensive applications such as those often used in HPC.

Ellexus Ltd, the I/O profiling company, has released a white paper: How the Meltdown and Spectre bugs work and what you can do to prevent a performance plummet.

Why is the Meltdown fix worse for HPC applications?

The changes being imposed on the Linux kernel (the KAISER patch, now known in the kernel as page-table isolation, or KPTI) to separate user and kernel space more securely add overhead to every context switch. This is having a measurable impact on the performance of shared file systems and I/O-intensive applications, and the performance penalty can reach 10-30%.
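To see where the penalty comes from, note that the extra page-table switch is paid on every kernel entry and exit, so syscall-bound code is hit hardest. The following is a minimal, illustrative sketch (not from the white paper) that times a tight loop of one-byte write() calls to /dev/null; running the same binary on kernels booted with and without page-table isolation (for example via the pti= boot parameter, where the kernel supports it) gives a rough sense of the per-syscall cost a workload will pay.

```c
/* syscall_cost.c - rough per-syscall cost probe (illustrative sketch).
 * Each write() forces a user->kernel->user transition; with KAISER/KPTI
 * enabled, every such transition also pays for the page-table switch.
 * Build: cc -O2 -o syscall_cost syscall_cost.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define ITERATIONS 1000000L

int main(void)
{
    int fd = open("/dev/null", O_WRONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char byte = 0;
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (long i = 0; i < ITERATIONS; i++) {
        /* One tiny syscall per iteration: the work is almost entirely
         * kernel entry/exit, so the KPTI overhead dominates the timing. */
        if (write(fd, &byte, 1) != 1) {
            perror("write");
            return 1;
        }
    }

    clock_gettime(CLOCK_MONOTONIC, &end);
    double elapsed_ns = (end.tv_sec - start.tv_sec) * 1e9 +
                        (end.tv_nsec - start.tv_nsec);
    printf("%.1f ns per write() syscall\n", elapsed_ns / ITERATIONS);

    close(fd);
    return 0;
}
```

The more of a workload's time is spent in small calls like this, the closer it will sit to the top of that 10-30% range.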

Systems that were previously only just coping with I/O-heavy workloads could now be in real trouble. It is very easy for applications sharing datasets to overload the file system and prevent other applications from working, but bad I/O can also hurt each program in isolation, even before the patches for these attacks make that worse.

Profile application I/O to rescue lost performance

You don’t have to put up with poor performance in order to improve security, however. The most obvious way to mitigate performance losses is to profile I/O and identify ways to optimise applications’ I/O performance.

By using the Ellexus tool suites, Breeze and Mistral, to analyse workflows, it is possible to identify changes that will help to eliminate bad I/O and regain the performance lost to these security patches.

Ellexus’ tools locate bottlenecks and applications with bad I/O on large distributed systems, cloud infrastructure and supercomputer clusters. Once applications with bad I/O patterns have been located, our tools indicate the potential performance gains and offer pointers on how to achieve them. Often the optimisation is as simple as changing an environment variable, changing a single line in a script or changing a simple I/O call to read more than one byte at a time.
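To make that last example concrete, the sketch below (a hypothetical illustration, not Ellexus code) contrasts the byte-at-a-time pattern with a buffered read. The first variant performs one kernel crossing per byte, each now carrying the KAISER overhead; the second performs one per megabyte.

```c
/* read_buffered.c - illustrative fix for byte-at-a-time I/O.
 * Usage: ./read_buffered <file>
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Bad pattern: one syscall (and one kernel crossing) per byte read. */
static long count_bytes_slow(int fd)
{
    char c;
    long total = 0;
    while (read(fd, &c, 1) == 1)
        total++;
    return total;
}

/* Better pattern: one syscall per 1 MiB, orders of magnitude fewer crossings. */
static long count_bytes_buffered(int fd)
{
    static char buf[1 << 20];
    long total = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;
    return total;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    long slow = count_bytes_slow(fd);
    lseek(fd, 0, SEEK_SET);          /* rewind, then re-read efficiently */
    long fast = count_bytes_buffered(fd);

    printf("byte-at-a-time: %ld bytes, buffered: %ld bytes\n", slow, fast);
    close(fd);
    return 0;
}
```

Running a workload under strace -c is a quick way to see how many of these tiny reads it actually issues.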

In some cases, the candidates for optimisation will be obvious – a workflow that clearly stresses the file system every time it is run, for example, or one that runs for significantly longer than a typical task.

In others, it may be necessary to perform an initial high-level analysis of each job. Follow these three steps to optimise application I/O and mitigate the impact of the KAISER patch:

  1. Profile all your applications with Mistral to look for the worst I/O patterns. Mistral, our I/O profiling tool, is lightweight enough to run at scale. In this case Mistral would be set up to record relatively detailed information on the type of I/O that workflows perform over time, looking for factors such as how many metadata operations are performed, how many small I/O operations are issued, and so on.
  2. Deal with the worst applications, delving into detail with Breeze. Once the candidate workflows have been identified, they can be analysed in detail with Breeze. As a first step, the Breeze trace can be run through our Healthcheck tool, which identifies common issues such as an application with a high ratio of file opens to writes, or a badly configured $PATH that causes the file system to be trawled every time a workflow uses “grep” (see the sketch after this list).
  3. Put in place longer-term I/O quality assurance. Implement the Ellexus tools across your systems to get the most from the compute and storage and to prevent problems from recurring.
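As an aside on the $PATH example in step 2, the hypothetical sketch below mimics the lookup a shell performs when a script invokes a command by name: one existence check per $PATH entry until the executable is found. On a shared file system each probe is a metadata operation, so a long $PATH with slow network mounts listed early turns every “grep” into a small metadata storm.

```c
/* path_probe.c - count how many directories are probed to resolve a command,
 * mimicking a shell's $PATH lookup. Hypothetical sketch for illustration.
 * Usage: ./path_probe grep
 */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <command>\n", argv[0]);
        return 1;
    }

    const char *path = getenv("PATH");
    if (!path) {
        fprintf(stderr, "PATH is not set\n");
        return 1;
    }

    char *copy = strdup(path);
    int probes = 0;

    /* Walk $PATH the way a shell does: one access() check per directory
     * until the command is found. Each probe is a metadata operation, and
     * on a shared file system every one of them hits the storage servers. */
    for (char *dir = strtok(copy, ":"); dir; dir = strtok(NULL, ":")) {
        char candidate[PATH_MAX];
        snprintf(candidate, sizeof candidate, "%s/%s", dir, argv[1]);
        probes++;
        if (access(candidate, X_OK) == 0) {
            printf("%s found after %d probe(s): %s\n",
                   argv[1], probes, candidate);
            free(copy);
            return 0;
        }
    }

    printf("%s not found after %d probe(s)\n", argv[1], probes);
    free(copy);
    return 1;
}
```

Reordering $PATH so that local directories come first, or simply shortening it, removes those probes without touching the application itself.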

By following these simple steps and our best-practice guidance, it is easy to find and fix the biggest issues quickly, giving you more time to optimise for the best possible performance.

Download the white paper
