Industry Experts Discuss Accelerating Science with Storage Systems Research

In this special guest feature, Ken Strandberg describes the highlights of a panel discussion on high-performance storage at SC15.

When four industry experts with many years of storage research and development experience came together to address questions from moderator Brad Settlemyer of Los Alamos National Laboratory, they had significant things to say about the future needs and direction of high-performance storage research. The panel included Gary Grider, Los Alamos National Laboratory (LANL) HPC Division Director; Rob Ross, Argonne National Laboratory (ANL) Senior Scientist; Quincey Koziol, HDF Group Lead Architect; and Eric Barton, Intel High Performance Data Division (Intel® HPDD) Lead Architect.

Much of the discussion focused on identifying the most important workflows: will checkpoint/restart continue to dominate I/O demands, will hard-to-analyze scientific datasets take over, or will some new science workflows emerge? Identifying these workflows tells the community where to focus storage research. Settlemyer asked which of these the panelists considered most important for researchers to address going forward.

Grider has spent time studying the workflows Los Alamos experiences, and he has concluded that their number is limited.

“We’ve found five or six. And there probably isn’t 100 of them. There’s probably more like dozens.” That sounds like good news for innovating the storage systems of the future. As Grider pointed out later, “knowing how many workflows is information we didn’t have before.” It gives researchers a clearer picture of where to go next to serve the science.

Barton, however, stated there are no most important workflows. He pointed out that we need to focus not on the storage system but on the scientists. If scientists are writing bad code, it’s “because you’re presenting them with this intractable thing called the file system. You’re expecting them to adapt to you. And actually it should [be] the other way around.” We need to find solutions that let scientists think about their data and how to persist and instantiate it. According to Barton, for the future development of storage systems, researchers need to be thinking in terms of “how not to foist the meager capabilities of the software we develop onto our users.”

How are we going to do that?

Abstraction and Hardware Are Key

Researchers need to think about ways to deliver the underlying file system in a more abstract manner: don’t give scientists and application developers the rules of storage; give them the tools to manage and move data. Ross has been trying to get the application developers at Argonne to adopt NetCDF, HDF, or MPI-IO, not because he wants them to stop using POSIX, but because “there’s so much more information that they can tell us through those things. And that makes the problem a lot easier.” It also gives researchers more options for the tools they can provide to scientists. Giving app developers better abstractions, Ross pointed out, provides a greater level of flexibility, “so PLFS can be hidden underneath or we can shove an object store underneath.” Barton agreed. With higher levels of abstraction, there’s an opportunity to attach and gather more information as the data moves around the system. “That’s going to be fairly key. So, it’s lifting the level of abstraction and making it easy for the apps developers to do that.”
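To make the contrast concrete, here is a minimal sketch of the kind of self-describing write Ross is advocating, using the HDF5 C API (the file name, dataset path, and units attribute are illustrative, not from the panel). Unlike a raw POSIX write(), the library sees a typed, named, multidimensional dataset, which is exactly the extra information a storage system can exploit:

```c
/* Minimal HDF5 write; a hedged sketch, names are illustrative.
 * Compile with the HDF5 wrapper, e.g.: h5cc example.c -o example */
#include <hdf5.h>

int main(void)
{
    hsize_t dims[2] = {64, 64};
    double  data[64][64];

    /* Fill a toy 2-D field. */
    for (int i = 0; i < 64; i++)
        for (int j = 0; j < 64; j++)
            data[i][j] = i + 0.01 * j;

    /* Everything below is typed and named, so the I/O stack
     * knows what the bytes mean, not just how many there are. */
    hid_t file  = H5Fcreate("field.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(2, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "/temperature", H5T_NATIVE_DOUBLE, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    /* Self-describing metadata travels with the data. */
    hid_t atype  = H5Tcopy(H5T_C_S1);
    H5Tset_size(atype, 7);                     /* strlen("kelvin") + 1 */
    hid_t aspace = H5Screate(H5S_SCALAR);
    hid_t attr   = H5Acreate2(dset, "units", atype, aspace,
                              H5P_DEFAULT, H5P_DEFAULT);
    H5Awrite(attr, atype, "kelvin");

    H5Aclose(attr); H5Sclose(aspace); H5Tclose(atype);
    H5Dclose(dset); H5Sclose(space); H5Fclose(file);
    return 0;
}
```

Because the dataset is named, typed, and annotated, a lower layer is free to place it on PLFS, an object store, or anything else without the application changing.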

Barton pointed out how key the hardware will be. Going forward, data will be increasingly fragmented, smaller, and harder to deal with. We’re already seeing that across many industries, where Lustre* has had to adapt from a file system originally designed to serve up massive data sets to one that moves thousands of small data sets. “Saying there’s going to be a difference between data and metadata, that’s just not where it’s at,” he emphasized. To help manage and move this complex range of data, the industry will keep delivering hardware that is more and more capable of dealing with fragmented, distributed, and complex data models, according to Barton.

One of the advancing hardware capabilities is the fabric, Barton added. Intel is integrating the fabric, Intel Omni-Path Architecture, into the processor. And new ultra-low-latency, ultra-fine-grained storage technologies on the horizon will require researchers to rethink how software is implemented. “There is no room for any sort of bloat on top of the hardware,” according to Barton. “For applications to realize the benefits of these technologies, we have to really think hard about the software stack.”

Welcoming a Sea Change

These are big ideas being discussed at the right time because, according to all the panelists, interesting things are happening in storage research that point to a major sea change. Grider pointed out that “for the first time in about 15 years, people are talking about rewriting codes from scratch to go to new programming models.” That means opportunities to adopt new methods and codes, to leverage the new hardware capabilities in software, and to introduce higher levels of abstraction into storage systems. “I think there’s an opportunity for the next six or eight years here that we haven’t had for the last decade,” stated Grider.

With these opportunities upon us, one key question, which Settlemyer raised, is when to deploy software into production: is it better to launch sooner, or to wait until new ideas can be worked into the software? Everyone agreed it’s a hard trade-off to control. With Lustre, for example, Barton stated it was best to get it out there, “which is actually a curse and a blessing. The blessing was, it really gave us a lot of direct experience with dealing with real workflows and real issues, and the curse was that, of course, as soon as you put something into production it nails your feet to the floor.”

Koziol agreed. “Trying to move research into production software is quite a challenge in this environment.” NASA, NOAA, and other customers of his deploy immediately, which fixes in place the developments his team has been working on. So, while his team is motivated by the HPC community to come up with new ideas, new technologies, and new research, once something is deployed, it’s hard to change. NASA doesn’t “want to change the satellite while it’s up in the sky,” he stated. But his people are finding some flexibility through file and software versioning and other methods “to keep old software working for infinite amounts of time and still get the data back.” And Barton pointed out that at Intel, “we’re trying to hold back from being deployed into production to give us some room to try out a bunch of different things.” In research and development, it seems, balance is becoming more important: innovating while serving the science that needs to get done.
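One common pattern behind that kind of longevity (a generic sketch; the magic string, version numbers, and layouts here are hypothetical, not the HDF Group’s actual on-disk format) is to stamp every file with a format version and keep a reader for every layout the software has ever written:

```c
/* Generic format-versioning sketch in C; all names are illustrative. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

struct header {
    char     magic[4];   /* identifies the file format, e.g. "SIM1"    */
    uint32_t version;    /* bumped on every incompatible layout change */
};

/* One reader per historical layout; old readers are never deleted,
 * so files written years ago still "get the data back". */
static int read_v1(FILE *fp) { (void)fp; /* parse the original layout */ return 1; }
static int read_v2(FILE *fp) { (void)fp; /* parse the chunked layout  */ return 2; }

static int read_dataset(FILE *fp)
{
    struct header h;
    if (fread(&h, sizeof h, 1, fp) != 1)  return -1;
    if (memcmp(h.magic, "SIM1", 4) != 0)  return -1;

    switch (h.version) {
    case 1:  return read_v1(fp);
    case 2:  return read_v2(fp);
    default: return -1;  /* file is newer than this reader */
    }
}

int main(void)
{
    /* Round trip: write a version-1 header, then dispatch on it. */
    FILE *fp = fopen("demo.dat", "wb+");
    if (!fp) return 1;
    struct header h = { {'S','I','M','1'}, 1 };
    fwrite(&h, sizeof h, 1, fp);
    rewind(fp);
    printf("dispatched to reader for version %d\n", read_dataset(fp));
    fclose(fp);
    return 0;
}
```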

As someone who works more on the funding side than the development side, Grider had a different perspective. “I think you’ve got to have a real open mind about it. Sometimes you develop something and it doesn’t pop up until two generations later, and just the ideas make it and not the code itself. We can’t think of immediate gratification or we’ll never ever get anything new that way. That’s how I view it. A small percentage makes it as a good idea; and some small percentage gets deployed right away. But hopefully all the ideas get preserved and reused.”

Additionally, as Settlemyer noted, storage is gaining significant interest outside of HPC, and everyone agreed that tools and technologies being developed for the cloud and other storage areas could be adapted for HPC.

Suggestions for the Next Generation of Researchers

Moderator Settlemyer asked what these storage experts would suggest that new, younger researchers, just out of school and beginning their careers, should be looking at as they become part of next-generation storage research. Ross suggested it’s “an important time to be thinking about different consistency models, eventual consistency, and causality. [The] sort of things that are central to understanding state over time in highly distributed systems.”
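For readers new to those ideas, here is a toy version-vector sketch, a textbook causality-tracking structure offered purely as illustration (nothing here was proposed by the panel). Each replica keeps a counter per writer; comparing vectors tells you whether one update happened before another or the two are concurrent and must be reconciled:

```c
/* Toy version vector for N writers; illustrative only. */
#include <stdio.h>

#define N 3  /* number of writers/replicas (hypothetical) */

enum order { BEFORE, AFTER, EQUAL, CONCURRENT };

/* a happened before b when every component of a is <= b
 * and at least one is strictly smaller. */
static enum order compare(const int a[N], const int b[N])
{
    int a_le_b = 1, b_le_a = 1;
    for (int i = 0; i < N; i++) {
        if (a[i] > b[i]) a_le_b = 0;
        if (b[i] > a[i]) b_le_a = 0;
    }
    if (a_le_b && b_le_a) return EQUAL;
    if (a_le_b)           return BEFORE;
    if (b_le_a)           return AFTER;
    return CONCURRENT;    /* neither dominates: needs reconciliation */
}

/* Merging takes the element-wise max, as in eventually consistent stores. */
static void merge(int dst[N], const int src[N])
{
    for (int i = 0; i < N; i++)
        if (src[i] > dst[i]) dst[i] = src[i];
}

int main(void)
{
    int x[N] = {2, 0, 0};   /* writer 0 updated twice        */
    int y[N] = {1, 1, 0};   /* writer 1 updated a stale copy */

    if (compare(x, y) == CONCURRENT) {
        merge(x, y);        /* x becomes {2, 1, 0} */
        printf("concurrent updates merged: {%d, %d, %d}\n", x[0], x[1], x[2]);
    }
    return 0;
}
```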

Both Barton and Koziol agreed that it’s important for researchers to understand more than what they were trained in; they need to be able to think from the science down to the hardware. Barton expressed it as researchers needing to be “tall and thin. As in having an understanding of all the layers of the stack.”

To develop new research teams, Koziol gives team members a wider perspective. “One of the first things I do is take the more science oriented guys and shove them through data structures and algorithms books. And I take all the CS guys and shove them through science books. They need the breadth at all levels of the stack.” While every field has specialists, the broader a specialist’s knowledge, the better the researcher. And nobody works alone, Ross pointed out. “It’s a collaborative thing at this point, and if you’re going to work in the science field, you’re going to have to work with a team of people. If you’re really the storage guy, you’re going to be working with some middleware teams. You might be working with resource management groups.” These recommendations make clear that new researchers are far from done learning; they’ve really only just begun.

The panel had a lot more to say about the future of storage research. You can view the entire panel discussion from SC15 here.
