State of Linux Containers

Christian Kniep from Docker Inc. gave this talk at the Stanford HPC Conference. “This talk will recap the history of, and what constitutes, Linux Containers, before laying out how the technology is employed by various engines and what problems these engines have to solve. Afterward, Christian will elaborate on why the advent of standards for images and runtimes moved the discussion from building and distributing containers to orchestrating containerized applications at scale.”
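
To make the building blocks concrete, here is a minimal sketch of driving a container engine programmatically with the Docker SDK for Python; the image name and command are illustrative placeholders, not anything taken from the talk.

```python
# Minimal sketch: pull an OCI image and run a short-lived container
# via the Docker SDK for Python (pip install docker).
# The image name and command below are placeholder assumptions.
import docker

client = docker.from_env()  # connect to the local Docker daemon

output = client.containers.run(
    "alpine:latest",                     # image pulled from a registry
    ["echo", "hello from a container"],  # command run inside the container
    remove=True,                         # delete the container on exit
)
print(output.decode())
```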

Samsung Unveils 30.72TB Enterprise SSD

Today Samsung announced that it has begun mass producing the industry’s largest capacity SAS solid state drive – the PM1643 – for use in next-generation enterprise storage systems. Leveraging Samsung’s latest V-NAND technology with 64-layer, 3-bit 512-gigabit chips, the 30.72 terabyte drive delivers twice the capacity and performance of the previous 15.36TB high-capacity lineup introduced in March 2016. “With our launch of the 30.72TB SSD, we are once again shattering the enterprise storage capacity barrier, and in the process, opening up new horizons for ultra-high capacity storage systems worldwide,” said Jaesoo Han, executive vice president, Memory Sales & Marketing Team at Samsung Electronics. “Samsung will continue to move aggressively in meeting the shifting demand toward SSDs over 10TB and at the same time, accelerating adoption of our trail-blazing storage solutions in a new age of enterprise systems.”
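
For a sense of what those chip-level numbers imply, a back-of-envelope calculation (illustrative arithmetic only; Samsung's exact die count and package layout are not given here) converts 512-gigabit dies into the drive's 30.72TB capacity:

```python
# Back-of-envelope: how many 512-Gbit V-NAND dies add up to 30.72 TB?
# Illustrative arithmetic only, not Samsung's published package layout.
GBIT_PER_DIE = 512              # one 64-layer, 3-bit V-NAND die
GB_PER_DIE = GBIT_PER_DIE / 8   # 512 Gbit = 64 GB
DRIVE_GB = 30.72 * 1000         # 30.72 TB expressed in GB

dies = DRIVE_GB / GB_PER_DIE
print(f"~{dies:.0f} dies of {GB_PER_DIE:.0f} GB each")  # ~480 dies
```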

WekaIO: Making Machine Learning Compute-Bound Again

“We are going to present WekaIO, the lowest-latency, highest-throughput file system solution that scales to hundreds of petabytes in a single namespace, supporting the most challenging deep learning projects running today. We will present real-life benchmarks comparing WekaIO performance to a local SSD file system, showing that we are the only coherent shared storage solution that is faster than current caching solutions, while allowing customers to linearly scale performance by adding more GPU servers.”

HPE wins $57 Million Supercomputing Contract for DoD Modernization Program

Today HPE announced it has been selected to provide new supercomputers for the DoD High Performance Computing Modernization Program (HPCMP) to accelerate the development and acquisition of advanced national security capabilities. “The DoD’s continuous investment in supercomputing innovation is a clear testament to this development and an important contribution to U.S. national security. HPE has been a strategic partner with the HPCMP for two decades, and we are proud that the DoD now significantly extends this partnership, acknowledging HPE’s sustained leadership in high performance computing.”

Video: The Sierra Supercomputer – Science and Technology on a Mission

Adam Bertsch from LLNL gave this talk at the Stanford HPC Conference. “Our next flagship HPC system at LLNL will be called Sierra. A collaboration between multiple government and industry partners, Sierra and its sister system Summit at ORNL will pave the way toward exascale computing architectures and predictive capability.”

Call for Participation: MSST Mass Storage Conference 2018

The 34th International Conference on Massive Storage Systems and Technologies (MSST 2018) has issued its Call for Participation. The event takes place May 14-16 in Santa Clara, California. “The conference invites you to share your research, ideas and solutions, as we continue to face challenges in the rapidly expanding need for massive, distributed storage solutions. Join us and learn about disruptive storage technologies and the challenges facing data centers, as the demand for massive amounts of data continues to increase. Join the discussion on webscale IT, and the demand on storage systems from IoT, healthcare, scientific research, and the continuing stream of smart applications (apps) for mobile devices.”

Take the Exascale Resilience Survey from AllScale Europe

The European Horizon 2020 AllScale project has launched a survey on exascale resilience. “As we approach exascale, compute node failure will become commonplace. AllScale wants to know how HPC software developers view fault tolerance today, and how they plan to incorporate fault tolerance in their software in the exascale era.”
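
As background for the survey's topic, the most common fault-tolerance pattern in today's HPC codes is application-level checkpoint/restart. The sketch below is a generic illustration of that idea (the file name and state layout are invented for the example), not the AllScale runtime's actual API:

```python
# Generic checkpoint/restart sketch: persist solver state periodically
# so a long-running job can resume after a node failure.
# File name and state layout are invented for illustration.
import os
import pickle

CKPT = "state.ckpt"

def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "value": 0.0}

def save_state(state):
    """Write to a temp file, then rename atomically to avoid torn files."""
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)

state = load_state()
while state["step"] < 1_000_000:
    state["value"] += 1e-6      # stand-in for one unit of real work
    state["step"] += 1
    if state["step"] % 100_000 == 0:
        save_state(state)       # checkpoint every 100k steps
```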

Supercomputing Graphene Applications in Nanoscale Electronics

Researchers at North Carolina State University are using the Blue Waters Supercomputer to explore graphene’s applications, including its use in nanoscale electronics and electrical DNA sequencing. “We’re looking at what’s beyond Moore’s law, whether one can devise very small transistors based on only one atomic layer, using new methods of making materials,” said Professor Jerry Bernholc of North Carolina State University. “We are looking at potential transistor structures consisting of a single layer of graphene, etched into lines of nanoribbons, where the carbon atoms are arranged in a chicken-wire pattern. We are looking at which structures will function well at a width of just a few atoms.”

Agenda Posted: OpenPOWER 2018 Summit in Las Vegas

The OpenPOWER Summit has posted its speaker agenda. Held in conjunction with IBM Think 2018, the event takes place March 19 in Las Vegas. “The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry.”

Video: Computing Challenges at the Large Hadron Collider

CERN’s Maria Girone gave this talk at the HiPEAC 2018 conference in Manchester. “The Large Hadron Collider (LHC) is one of the largest and most complicated scientific instruments ever constructed. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has produced to date.”