Interview: Whamcloud Wins FastForward Contract for Exascale R&D


Today Whamcloud announced that the company has been awarded the Storage and I/O Research & Development subcontract for the Department of Energy’s FastForward program. FastForward is set up to initiate partnerships with multiple companies to accelerate the R&D of critical technologies needed for extreme scale computing. To learn more, I caught up with Eric Barton, Whamcloud’s CTO.

insideHPC: Many DOE applications place extreme requirements on computations, data movement, and reliability. What aspects will Whamcloud focus on in this contract?

Eric Barton: All of the above. We’re researching a completely new I/O stack suitable for Exascale.

At the top of the stack we’re building an object-oriented storage API based on HDF5 to support high-level data models, their properties and relationships. This will use non-blocking initiation and completion-notification APIs to ensure application developers can overlap compute and I/O naturally and efficiently. The API will also allow distributed updates to be grouped into atomic transactions to ensure that application data and metadata stored in the Exascale storage system remain self-consistent in the face of all possible failures.
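
To make that calling pattern concrete, here is a minimal sketch in C. The xio_* names and stub implementations are hypothetical stand-ins invented for illustration, not the actual FastForward or HDF5 interfaces; they only model the three ideas Barton describes: non-blocking initiation, completion notification through an event queue, and grouping of updates into an atomic transaction.

```c
/* Illustrative sketch only: the xio_* types and functions are invented,
 * not a real API.  They model non-blocking initiation, completion
 * notification via an event queue, and atomic transaction grouping. */

#include <stdio.h>
#include <string.h>

typedef struct { int pending; } xio_event_queue_t;   /* completion events  */
typedef struct { int id; }      xio_transaction_t;   /* atomic update group */

/* Stub implementations so the sketch compiles and runs stand-alone. */
static void xio_tx_begin(xio_transaction_t *tx, int id) { tx->id = id; }

static void xio_write_async(xio_transaction_t *tx, const char *obj,
                            const void *buf, size_t len,
                            xio_event_queue_t *eq)
{
    (void)tx; (void)buf;
    printf("initiated async write of %zu bytes to %s\n", len, obj);
    eq->pending++;                        /* completion reported later */
}

static void xio_tx_commit(xio_transaction_t *tx)
{
    printf("committed transaction %d\n", tx->id);
}

static void xio_eq_wait(xio_event_queue_t *eq)
{
    while (eq->pending) { eq->pending--; }   /* mock: everything completes */
}

int main(void)
{
    xio_event_queue_t eq = {0};
    xio_transaction_t tx;
    char data[4096];
    memset(data, 0xab, sizeof data);

    /* Group related updates so they become visible atomically. */
    xio_tx_begin(&tx, 1);
    xio_write_async(&tx, "mesh/pressure", data, sizeof data, &eq);
    xio_write_async(&tx, "mesh/velocity", data, sizeof data, &eq);

    /* The application keeps computing while the writes are in flight... */

    xio_eq_wait(&eq);      /* ...and only blocks when it needs the results */
    xio_tx_commit(&tx);    /* either all grouped updates land or none do   */
    return 0;
}
```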

In the middle of the I/O stack, we’re prototyping a Burst Buffer using persistent solid-state storage accessed with OS-bypass technology and a data layout optimizer based on PLFS. This part of the stack, running on dedicated I/O nodes of the Exascale machine, will handle the impedance mismatch between the smooth streaming I/O required for efficient disk utilization and the bursty, fragmented and misaligned I/O that Exascale applications will produce.
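
The log-plus-index approach behind PLFS gives a feel for how such a buffer absorbs that mismatch. The sketch below is a deliberately simplified, in-memory illustration with invented bb_* names, not the project’s actual design: fragments are appended to a log together with an index recording where each one belongs, and a drain step later replays them to the backing file as ordinary sequential writes.

```c
/* Simplified burst-buffer sketch: absorb small, fragmented, misaligned
 * writes into an append-only log plus an index (the PLFS-style layout
 * idea), then drain the log to backing storage between bursts.
 * The names and in-memory "devices" are illustrative assumptions. */

#include <stdio.h>
#include <string.h>

#define LOG_CAP     (1 << 20)
#define MAX_EXTENTS 1024

struct extent { size_t file_off, len, log_off; };

static unsigned char burst_log[LOG_CAP];     /* stands in for node-local flash */
static struct extent index_[MAX_EXTENTS];
static size_t log_used, n_extents;

/* Absorb one fragment: append the data, remember where it belongs. */
static void bb_write(size_t file_off, const void *buf, size_t len)
{
    memcpy(burst_log + log_used, buf, len);
    index_[n_extents++] = (struct extent){ file_off, len, log_used };
    log_used += len;
}

/* Drain: replay the log into the target file; a real drainer would
 * coalesce and reorder extents into large, well-aligned streaming I/O. */
static void bb_drain(FILE *target)
{
    for (size_t i = 0; i < n_extents; i++) {
        fseek(target, (long)index_[i].file_off, SEEK_SET);
        fwrite(burst_log + index_[i].log_off, 1, index_[i].len, target);
    }
    log_used = n_extents = 0;
}

int main(void)
{
    FILE *target = tmpfile();
    if (!target)
        return 1;

    char frag[100];
    memset(frag, 'x', sizeof frag);

    /* Bursty, misaligned fragments from many ranks land in the log... */
    bb_write(31,   frag, sizeof frag);
    bb_write(7777, frag, sizeof frag);
    bb_write(512,  frag, sizeof frag);

    /* ...and are flushed to the disk tier between bursts. */
    bb_drain(target);
    fclose(target);
    return 0;
}
```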

At the bottom of the stack we’re designing a new scalable I/O API to replace POSIX for distributed applications. Called DAOS, for Distributed Application Object Storage, this API will support asynchronous transactional I/O within scalable object collections. It will provide the functionality, performance, scalability and fault tolerance foundational to the whole Exascale I/O stack.
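
One way to picture transactional object I/O of this kind is epoch-based versioning, where readers only ever see the latest globally committed version of an object, so a failure mid-update can never expose a half-written state. The daos_sketch_* names and the single in-memory object below are assumptions made for illustration, not the DAOS design itself.

```c
/* Hedged sketch of epoch-based versioning: each update is tagged with
 * an epoch, and readers see only the highest *committed* epoch, so an
 * interrupted update never becomes visible.  Names are invented. */

#include <stdio.h>
#include <string.h>

#define NVERS  8
#define OBJ_SZ 64

struct versioned_obj {
    unsigned long epoch[NVERS];        /* epoch that produced each version */
    char          data[NVERS][OBJ_SZ];
    int           nvers;
    unsigned long committed;           /* highest globally committed epoch */
};

/* Record an update under an (as yet uncommitted) epoch. */
static void daos_sketch_update(struct versioned_obj *o, unsigned long epoch,
                               const char *buf)
{
    o->epoch[o->nvers] = epoch;
    strncpy(o->data[o->nvers], buf, OBJ_SZ - 1);
    o->nvers++;
}

/* Commit makes every update tagged with this epoch durable and visible. */
static void daos_sketch_commit(struct versioned_obj *o, unsigned long epoch)
{
    o->committed = epoch;
}

/* Readers see the latest version whose epoch is committed, nothing newer. */
static const char *daos_sketch_read(const struct versioned_obj *o)
{
    const char *latest = "";
    for (int i = 0; i < o->nvers; i++)
        if (o->epoch[i] <= o->committed)
            latest = o->data[i];
    return latest;
}

int main(void)
{
    struct versioned_obj obj = {0};

    daos_sketch_update(&obj, 1, "checkpoint-1");
    daos_sketch_commit(&obj, 1);

    daos_sketch_update(&obj, 2, "checkpoint-2 (in flight)");
    /* epoch 2 never commits, e.g. the writer failed part-way */

    printf("reader sees: %s\n", daos_sketch_read(&obj));  /* checkpoint-1 */
    return 0;
}
```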

insideHPC: Does Whamcloud have a group devoted to R&D at this time?

Eric Barton: Effectively yes – depending on the emphasis you place on the ‘R’ versus the ‘D’. Right now we’re mostly geared towards development, but the FastForward project enables us to grow our research efforts.

insideHPC: The release mentions the use of Flash storage, something that I don’t hear about much in conversations about Lustre. How will the two come together in this effort?

Eric Barton: The major innovation in the FastForward project that relies on solid-state storage is the Burst Buffer. But we won’t be using it like disk at all: we can’t afford the overheads imposed by system calls and legacy storage protocols on the path to storage if we’re to match the message rates possible from the compute cluster network. This is essential to support the fragmented and misaligned I/O that application programmers need.

insideHPC: The development of Exascale software has been described as a monumental task that will take years and potentially billions of dollars. As a nation, are we doing enough in this area with programs like FastForward, or is this just the beginning?

Eric Barton: This is just the beginning. We know we need to address the whole I/O stack, and this project will let us prove out our current ideas on the first steps towards Exascale I/O and teach us some valuable lessons. The follow-on work can then start in earnest, both to productize the prototypes we develop and to determine the next areas for research. It’s going to be a long haul and we’ll need to ramp up the effort as we go.

insideHPC: Can you give us any recent example technologies developed for extreme scale that have benefitted HPC for the rest of us?

Eric Barton: One thing I’ve learnt since I first started to develop parallel applications in the mid-80s is that it’s a whole lot easier to scale down than it is to scale up. Practically all HPC technologies, particularly in networking and software, have had their baptism of fire at the top end. Lustre is actually a clear example: the DOE funded the original work to create Lustre for their leading computing facilities, and now Lustre is a widely used HPC storage technology found at over 60% of the TOP100 supercomputing sites.