In this video from the DDN User Group at SC18, Glenn Lockwood from NERSC presents: Making Sense of Performance in the Era of Burst Buffers.
Two years ago I observed that there were two major camps in burst buffer implementations: one more tightly integrated with the compute side of the platform, relying on explicit allocation and use, and another more closely integrated with the storage subsystem, acting as a transparent I/O accelerator. Shortly after I made that observation, though, Oak Ridge and Lawrence Livermore announced their GPU-based leadership systems, Summit and Sierra, which would feature an altogether new burst buffer design built around on-node nonvolatile memory. This CORAL announcement, combined with the deployment of production, large-scale burst buffers at NERSC, Los Alamos, and KAUST, has led me to re-think my taxonomy of burst buffers. Specifically, it is important to distinguish burst buffers by their hardware architectures and by their software usage modes; different burst buffer architectures can provide the same usage modalities to users, and different modalities can be supported by the same architecture.
See more talks from the DDN User Group at SC18