Qumulo Unified File Storage Reduces Administrative Burden While Consolidating HPC Workloads
Qumulo showcased its scalable file storage for high-performance computing workloads at SC19. The company helps innovative organizations gain real-time visibility, scale, and control of their data across on-prem and public cloud environments. “More and more HPC institutes are looking to modern solutions that help them gain insights from their data, faster,” said Molly Presley, global product marketing director for Qumulo. “Qumulo helps the research community consolidate diverse workloads into a unified, simple-to-manage file storage solution. Workgroups focused on image data, analytics, and user home directories can share a single solution that delivers real-time visibility into billions of files while scaling performance on-prem or in the cloud to meet the demands of the most intensive research environments.”
STFC Machine Learning Group Deploys Elastic NVMe Storage to Power GPU Servers
At SC19, Excelero announced that the Science and Technology Facilities Council (STFC) has deployed a new HPC architecture to support computationally intensive analysis including machine learning and AI-based workloads using the NVMesh elastic NVMe block storage solution. “Done in partnership with Boston Limited, the deployment is enabling researchers from STFC and the Alan Turing Institute to complete machine learning training tasks that formerly took three to four days, in just one hour – and other foundational scientific computations that researchers formerly could not perform.”
NVIDIA Launches GPU-Accelerated Reference Design for Arm Servers
At SC19, NVIDIA introduced a reference design platform that enables companies to quickly build GPU-accelerated Arm®-based servers, driving a new era of high performance computing for a growing range of applications in science and industry. The new reference design platform — consisting of hardware and software building blocks — responds to growing demand in the HPC community to harness a broader range of CPU architectures. It allows supercomputing centers, hyperscale-cloud operators and enterprises to combine the advantage of NVIDIA’s accelerated computing platform with the latest Arm-based server platforms.
NVIDIA Announces GPU-Accelerated Supercomputer on Azure
At SC19, NVIDIA announced the availability of a new kind of GPU-accelerated supercomputer in the cloud on Microsoft Azure. “Built to handle the most demanding AI and high performance computing applications, the largest deployments of Azure’s new NDv2 instance rank among the world’s fastest supercomputers, offering up to 800 NVIDIA V100 Tensor Core GPUs interconnected on a single Mellanox InfiniBand backend network.”
TYAN Launches AMD EPYC Server Platforms at SC19
Today at SC19, TYAN rolled out its latest lineup of HPC and storage server platforms based on the AMD EPYC 7002 Series processors and aimed at the datacenter market. “The 2nd Gen AMD EPYC processor was designed to provide customers with leadership in architecture, performance, and security,” said Scott Aylor, corporate vice president and general manager, Data Center Solutions Group, AMD. “We’re excited to see our partners, like TYAN, continue to build their portfolios around 2nd Gen EPYC to provide new capabilities for their customers and partners.”
2019 Demand for Rescale-managed Cloud HPC Exceeds All Previous Years Combined
Today Rescale announced that cloud high performance computing (HPC) has reached a major inflection point, with more server hours consumed this year on the Rescale platform than in all prior years of the company’s history combined. Every major cloud provider now offers integrations with the Rescale platform, including AWS, Microsoft Azure, and IBM, as well as new offerings from Google and Oracle announced this week. New FedRAMP security and compliance milestones are a further signal that mainstream companies can adopt cloud HPC.
Dell Technologies taps AMD EPYC processors for Expanse Supercomputer at SDSC
Dell Technologies has been selected to power the next-generation supercomputer at the San Diego Supercomputer Center (SDSC), expected to deploy in mid-2020. “With the compute-dense PowerEdge C6525, including next-generation AMD EPYC processors and NVIDIA GPUs, Expanse is projected to have a peak performance of up to five petaflops. This nearly doubles the performance of SDSC’s current Comet system, allowing SDSC to support more researchers and projects.”
GigaIO Optimizes FabreX Architecture with GPU Sharing and Composition Technology
Today GigaIO announced the FabreX implementation of GPU Direct RDMA (GDR) technology, accelerating communication for GPU storage devices with the industry’s highest throughput and lowest latency. “It is imperative for the supercomputing community to have a system architecture that can handle the compute-intensive workloads being deployed today,” says Alan Benjamin, CEO of GigaIO. “Our team has created that solution with FabreX, which offers unparalleled composability and the lowest hardware latency on the market. Moreover, incorporating GDR technology only enhances the fabric’s cutting-edge capabilities – delivering accelerated performance and increased scalability for truly effortless composing. Combining our new GDR support with our previously announced NVMe-oF capabilities, we are excited to bring real composition without compromise to our customers.”
OpenMP API Specification 5.0 is Major Upgrade of OpenMP Language
The OpenMP Architecture Review Board (ARB) announced Version 5.0 of the OpenMP API Specification, a major upgrade of the OpenMP language. OpenMP 5.0 adds many new features that will be useful for highly parallel and complex applications and now covers the entire hardware spectrum from embedded and accelerator devices to multicore systems with shared-memory. Vendors have made reference implementations of parts of the standard, and user courses will soon be given at OpenMP workshops and major conferences.
DDN steps up with Professional Support for Lustre Clients on Arm Platforms
At SC18 in Dallas, DDN announced that its Whamcloud division is delivering professional support for Lustre clients on Arm architectures. With this support offering, organizations can confidently use Lustre in production environments, introduce new clients into existing Lustre infrastructures, and deploy Arm-based clusters of any size within test, development or production environments. As the use of Lustre continues to expand across HPC, artificial intelligence and data-intensive, performance-driven applications, the deployment of alternative architectures is on the rise.