Memory Driven Computing in the Spotlight at MSST Conference Next Week

The MSST Mass Storage Conference in Silicon Valley is just a few days away, and the agenda is packed with High Performance Computing topics. In one of the invited talks, Kimberly Keeton from Hewlett Packard Enterprise will speak on Memory-Driven Computing. We caught up with Kimberly to learn more.

insideHPC: Can you please define Memory-Driven Computing and tell us about the benefits of this approach?

Kimberly Keeton is a Distinguished Technologist at Hewlett Packard Labs

Kimberly Keeton: Memory-Driven Computing (https://www.labs.hpe.com/the-machine/) is a future system architecture being developed by HPE that brings together fast persistent memory, a fast memory fabric, task-specific processing, and a new software stack to address today’s data growth and analysis challenges. The memory hierarchy will include a capacity tier provided by a large pool of shared persistent memory that’s directly accessible over a memory-semantic (i.e., load/store) fabric, as well as a performance tier provided by local private memory. To software, this will look like memory-speed persistence and direct access to the capacity tier, without the need to mediate requests through another node.

Put more simply: in a Memory-Driven Computing world, memory is large, it’s persistent, and it’s shared through a memory-semantic fabric. The combination of these characteristics provides the opportunity to rethink the entire software stack, and it brings many benefits. The fact that memory is persistent means that traditional overheads from slow storage (e.g., data copies and serialization) can be eliminated. As a more specific example, large HPC applications could massively reduce their checkpointing overheads, allowing the applications to focus on the task at hand, rather than spending lots of time anticipating a future need for recovery. The fact that memory is shared means that data sets no longer need to be partitioned (as in traditional clustered environments), and that memory can be used for fast communication. The fact that memory is large means that large working sets can be maintained as in-memory data structures, and it’s possible to rethink traditional tradeoffs between memory space and computation time.
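To make the serialization point concrete, here is a minimal sketch of updating persistent state in place with ordinary stores. A memory-mapped file is used as a stand-in for load/store-accessible persistent memory (a real Memory-Driven Computing system would map fabric-attached NVM directly); the file name and layout are invented for the example.

```python
import mmap
import os
import struct

# Illustrative stand-in: a memory-mapped file plays the role of
# load/store-accessible persistent memory.
PATH = "counter.pmem"
SIZE = 4096

# Create and size the backing file once.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    pm = mmap.mmap(f.fileno(), SIZE)
    # Update persistent state in place with plain stores --
    # no per-update serialization or write() system call.
    for i in range(100):
        pm[0:8] = struct.pack("<Q", i + 1)
    pm.flush()  # ensure the stores reach durable media
    value, = struct.unpack("<Q", pm[0:8])
    pm.close()

print(value)  # 100
os.remove(PATH)
```

The same in-place discipline is what lets an HPC application treat its live data structures as the checkpoint, rather than periodically copying and serializing them out to a storage tier.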

insideHPC: How will Memory-Driven Computing affect the Programming Model?

Kimberly Keeton: Much as today, we expect to continue to have different ways to interact with persistent data, including file systems, key-value stores, and databases. In addition, Memory-Driven Computing enables programming models that allow applications to use byte-addressable persistent data structures directly. To make application programming easier, we assume that libraries will provide support for managing persistence operations, atomic operations and cache coherence between compute nodes.
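As a rough illustration of what such a library might do under the hood, the sketch below orders stores so that a crash never exposes a half-written record: the payload is made durable before a valid flag is set. The layout and names are invented, and a memory-mapped file (with `flush()` acting as the persist barrier) again stands in for byte-addressable persistent memory.

```python
import mmap
import os

# Hypothetical sketch of library-managed failure atomicity for a
# byte-addressable persistent record. Names and layout are invented.
PATH = "record.pmem"
with open(PATH, "wb") as f:
    f.truncate(4096)

with open(PATH, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)

    def persist_record(payload: bytes) -> None:
        # Step 1: clear the valid flag, then write the payload.
        pm[0] = 0
        pm.flush()                      # persist barrier
        pm[8:8 + len(payload)] = payload
        pm.flush()
        # Step 2: only after the payload is durable, set the flag.
        pm[0] = 1
        pm.flush()

    persist_record(b"hello, persistent world")
    valid = pm[0] == 1
    data = bytes(pm[8:8 + 23]) if valid else None
    pm.close()

print(valid, data)
os.remove(PATH)
```

A recovery path would simply ignore any record whose flag is unset; hiding this flush-and-order discipline behind a library API is what keeps application programming manageable.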

insideHPC: How far away are we from seeing this Memory-Driven Computing model employed in Science and Research?

Kimberly Keeton: Memory-Driven Computing is at the center of HPE’s drive toward exascale computing. The requirements of our future exascale customers, such as the US Department of Energy, are daunting. These customers demand exascale performance (one quintillion, or 10^18, floating point calculations per second) within a single system, consuming less than 30 megawatts of power, in the 2022/23 timeframe. This requires a deep rethink of architectural strategy, especially related to the efficient movement of data within these massive-scale systems. In future systems, more power will be consumed moving data than computing on it. The power efficiencies, performance enhancements and application scalability enabled by Memory-Driven Computing allow us to address these challenges.

Researchers at HPE and elsewhere are already actively exploring some of the software components of Memory-Driven Computing, including persistent memory-aware file systems and databases, persistent memory-aware programming models, and disaggregated memory architectures (albeit typically over RDMA, rather than memory-semantic fabrics). See below for highlights from our recent Memory-Driven Computing publications. Additionally, we’ve open sourced much of our work, from the operating system up through the programming model.

Large memory server offerings are already available commercially. For example, HPE’s Integrity Superdome X provides up to 48TB of DRAM with up to 16 processors (384 cores).

Persistent memory offerings are already available commercially, including persistent memory in DIMM form and SSD form (e.g., 3D XPoint), with proof points illustrating benefits to applications (e.g., Microsoft SQL Server 2016’s Tail of the Log functionality).

And with the formation of the Gen-Z Consortium (www.genzconsortium.org), memory-semantic fabric products are on the horizon. We’ve been using a memory-semantic fabric in our Machine prototypes, and we’re contributing what we’ve learned back to the Gen-Z Consortium.

insideHPC: Will Memory-Driven Computing be something that is only available from HPE at the outset?

Kimberly Keeton: Memory-Driven Computing is HPE’s term for this memory-centric view of future system architectures. We’re seeing other companies starting to think along the same lines we are.

Ultimately, we expect a multi-vendor ecosystem for both persistent memory and memory-semantic fabric technologies. The Gen-Z Consortium provides an open standard for a memory-semantic fabric, meaning that there’s an open ecosystem for fabric-attached memory, compute, networking and storage devices.

We’ve enthusiastically open sourced the software we’ve developed, from the OS to data management to programming models, to provide a foundation for others to innovate in the Memory-Driven Computing space.

insideHPC: What does the Storage Industry need to do to prepare for this new era of Memory-Driven Computing?

Kimberly Keeton: Since fabric-attached persistent memory blurs the traditional line between memory and storage, we’re continuing to develop approaches to ensure that it can be used to store persistent data reliably, securely and cost-effectively.

The storage industry has considerable expertise in storage services such as replication, erasure codes, encryption, compression, deduplication, snapshotting, wear leveling, etc. It’s time to revisit these traditional storage services, to see how they should be adapted to provide the same benefits in a Memory-Driven Computing environment, while operating at memory speeds. Software-only implementations can trade performance for reliability, security and cost-effectiveness, but will diminish the benefits possible from these faster memory technologies, so there’s a role for both software and hardware innovation.

Load/store-accessible persistent memory may not be the most cost-effective medium for “cold” persistent data, so the industry continues to need to consider how to manage a multi-tier environment, including DRAM, persistent memory, local storage (e.g., SSDs, disks, tape) and cloud storage. The goal is to ensure that data is in “the right place at the right time” to meet application needs.
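The “right place at the right time” idea can be sketched as a simple placement policy that maps each object’s access rate to the cheapest tier that can still serve it well. The tier names follow the hierarchy described above; the thresholds are invented for illustration and are not from the interview.

```python
def place(accesses_per_hour: float) -> str:
    """Toy tiering policy: hotter objects go to faster, costlier tiers.

    (tier, minimum access rate) pairs, fastest first; the numeric
    thresholds are hypothetical, chosen only to illustrate the idea.
    """
    policy = [
        ("DRAM", 1000),
        ("persistent-memory", 10),
        ("SSD", 0.1),
    ]
    for tier, threshold in policy:
        if accesses_per_hour >= threshold:
            return tier
    return "cloud/tape"   # cold data falls through to the cheapest tier

print(place(5000), place(50), place(1), place(0.01))
```

A real data mover would also weigh capacity limits, migration cost, and application hints, but the core decision has this shape: match the heat of the data to the cost of the tier.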

Memory-Driven Computing research publication highlights:

  • R. Achermann, C. Dalton, P. Faraboschi, M. Hoffman, D. Milojicic, G. Ndu, A. Richardson, T. Roscoe, A. Shaw, R. Watson. “Separating Translation from Protection in Address Spaces with Dynamic Remapping,” Proc. 16th Workshop on Hot Topics in Operating Systems (HotOS XVI), 2017.
  • T. Hsu, H. Brugner, I. Roy, K. Keeton, P. Eugster. “NVthreads: Practical Persistence for Multi-threaded Applications,” Proc. ACM EuroSys, 2017.
  • S. Nalli, S. Haria, M. Swift, M. Hill, H. Volos, K. Keeton. “An Analysis of Persistent Memory Use with WHISPER,” Proc. ACM Conf. on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2017.
  • H. Kimura, A. Simitsis, K. Wilkinson, “Janus: Transactional processing of navigational and analytical graph queries on many-core servers,” Proc. CIDR, 2017.
  • F. Chen, M. Gonzalez, K. Viswanathan, H. Laffitte, J. Rivera, A. Mitchell, S. Singhal. “Billion node graph inference: iterative processing on The Machine,” Hewlett Packard Labs Technical Report HPE-2016-101, December 2016.
  • P. Laplante and D. Milojicic. “Rethinking operating systems for rebooted computing,” Proc. IEEE International Conference on Rebooting Computing (ICRC), 2016.
  • D. Chakrabarti, H. Volos, I. Roy, and M. Swift. “How Should We Program Non-volatile Memory?”, tutorial at ACM Conf. on Programming Language Design and Implementation (PLDI), 2016.
  • K. Viswanathan, M. Kim, J. Li, M. Gonzalez. “A memory-driven computing approach to high-dimensional similarity search,” Hewlett Packard Labs Technical Report HPE-2016-45, May 2016.
  • N. Farooqui, I. Roy, Y. Chen, V. Talwar, and K. Schwan. “Accelerating Graph Applications on Integrated GPU Platforms via Instrumentation-Driven Optimization,” Proc. ACM Conf. on Computing Frontiers (CF’16), May 2016.
  • El Hajj, A. Merritt, G. Zellweger, D. Milojicic, W. Hwu, K. Schwan, T. Roscoe, R. Achermann, P. Faraboschi. “SpaceJMP: Programming with multiple virtual address spaces,” Proc. ACM ASPLOS, 2016.
  • J. Izraelevitz, T. Kelly, A. Kolli. “Failure-atomic persistent memory updates via JUSTDO logging,” Proc. ACM ASPLOS, 2016.
  • D. Milojicic, T. Roscoe. “Outlook on Operating Systems,” IEEE Computer, January 2016.
  • K. Bresniker, S. Singhal, and S. Williams. “Adapting to thrive in a new economy of memory abundance,” IEEE Computer, December 2015.
  • H. Volos, G. Magalhaes, L. Cherkasova, J. Li. “Quartz: A lightweight performance emulator for persistent memory software,” Proc. of ACM/USENIX/IFIP Conference on Middleware, 2015.
  • J. Li, C. Pu, Y. Chen, V. Talwar, and D. Milojicic. “Improving Preemptive Scheduling with Application-Transparent Checkpointing in Shared Clusters,” Proc. ACM Middleware, 2015.
  • H. Kimura. “FOEDUS: OLTP engine for a thousand cores and NVRAM,” Proc. ACM SIGMOD, 2015.
  • P. Faraboschi, K. Keeton, T. Marsland, D. Milojicic. “Beyond processor-centric operating systems,” Proc. HotOS XV, 2015.
  • S. Gerber, G. Zellweger, R. Achermann, K. Kourtis, and T. Roscoe, D. Milojicic. “Not your parents’ physical address space,” Proc. HotOS, 2015.
  • F. Nawab, D. Chakrabarti, T. Kelly, C. Morrey III. “Procrastination beats prevention: Timely sufficient persistence for efficient crash resilience,” Proc. Conf. on Extending Database Technology (EDBT), 2015.
  • S. Novakovic, K. Keeton, P. Faraboschi, R. Schreiber, E. Bugnion. “Using shared non-volatile memory in scale-out software,” Proc. ACM Workshop on Rack-scale Computing (WRSC), 2015.
  • M. Swift and H. Volos. “Programming and usage models for non-volatile memory,” Tutorial at ACM ASPLOS, 2015.
  • D. Chakrabarti, H. Boehm and K. Bhandari. “Atlas: Leveraging locks for non-volatile memory consistency,” Proc. ACM Conf. on Object-Oriented Programming, Systems, Languages & Applications (OOPSLA), 2014.
  • H. Volos, S. Nalli, S. Panneerselvam, V. Varadarajan, P. Saxena, M. Swift. “Aerie: Flexible file-system interfaces to storage-class memory,” Proc. ACM EuroSys, 2014.

