MareNostrum 4 Named Most Beautiful Datacenter in the World

The MareNostrum 4 supercomputer at the Barcelona Supercomputing Centre has been named the winner of the Most Beautiful Data Center in the World prize, hosted by Datacenter Dynamics. “Aside from being the most beautiful, MareNostrum has been dubbed the most interesting supercomputer in the world due to the heterogeneity of the architecture it will include once installation of the supercomputer is complete. Its total speed will be 13.7 Petaflops. Its main memory totals 390 Terabytes, and it has the capacity to store 14 Petabytes (14 million Gigabytes) of data. A high-speed network connects all the components in the supercomputer to one another.”

Innovate UK Award to Confirm Business Case for Quantum-enhanced Optimization Algorithms

Today D-Wave Systems announced its involvement in a grant-funded UK project to improve logistics and planning operations using quantum computing algorithms. “Advancing AI planning techniques could significantly improve operational efficiency across major industries, from law enforcement to transportation and beyond,” said Robert “Bo” Ewald, president of D-Wave International. “Advancing real-world applications for quantum computing takes dedicated collaboration from scientists and experts in a wide variety of fields. This project is an example of that work and will hopefully lead to faster, better solutions for critical problems.”

Accelerating HPC with Intel FPGAs

FPGAs can improve performance per watt, bandwidth, and latency. In this guest post, Intel explores how Field Programmable Gate Arrays (FPGAs) can be used to accelerate high performance computing. “Tightly coupled programmable multi-function accelerator platforms, such as FPGAs from Intel, offer a single hardware platform that enables servers to address many different workload needs—from HPC needs for the highest capacity and performance, through data center requirements for load balancing capabilities, to address different workload profiles.”

Supercomputing How First Supernovae Altered Early Star Formation

Over at LBNL, Kathy Kincade writes that cosmologists are using supercomputers to study how heavy metals expelled from exploding supernovae helped the first stars in the universe regulate subsequent star formation. “In the early universe, the stars were massive and the radiation they emitted was very strong,” Chen explained. “So if you have this radiation before that star explodes and becomes a supernova, the radiation has already caused significant damage to the gas surrounding the star’s halo.”

Dell EMC Powers HPC at University of Liverpool with Alces Flight

Today Dell EMC announced a joint solution with Alces Flight and AWS to provide HPC for the University of Liverpool. Dell EMC will provide a fully managed on-premises HPC cluster, while a cloud-based HPC account will enable students and researchers to burst computational capacity into the cloud. “We are pleased to be working with Dell EMC and Alces Flight on this new venture,” said Cliff Addison, Head of Advanced Research Computing at the University of Liverpool. “The University of Liverpool has always maintained cutting-edge technology and by architecting flexible access to computational resources on AWS we’re setting the bar even higher for what can be achieved in HPC.”

Data Vortex Technologies Teams with Providentia Worldwide for HPC

Data Vortex Technologies has formalized a partnership with Providentia Worldwide, LLC. Providentia is a technologies and solutions consulting venture which bridges the gap between traditional HPC and enterprise computing. The company works with Data Vortex and potential partners to develop novel solutions for Data Vortex technologies and to assist with systems integration into new markets. This partnership will leverage the deep experience in enterprise and hyperscale environments of Providentia Worldwide founders, Ryan Quick and Arno Kolster, and merge the unique performance characteristics of the Data Vortex with traditional systems.

Intel Supports Open Source Software for HPC

In this video from SC17, Thomas Krueger describes how Intel supports Open Source High Performance Computing software like OpenHPC and Lustre. “As the Linux initiative demonstrates, a community-based, vendor-catalyzed model like this has major advantages for enabling software to keep pace with requirements for HPC computing and storage hardware systems. In this model, stack development is driven primarily by the open source community and vendors offer supported distributions with additional capabilities for customers that require and are willing to pay for them.”

System Fabric Works adds support for BeeGFS Parallel File System

Today System Fabric Works announced its support and integration of the BeeGFS file system with the latest NetApp E-Series All Flash and HDD storage systems. This makes BeeGFS available on the family of NetApp E-Series Hyperscale Storage products as part of System Fabric Works’ (SFW) Converged Infrastructure solutions for high-performance enterprise computing, data analytics, and machine learning. “We are pleased to announce our Gold Partner relationship with ThinkParQ,” said Kevin Moran, President and CEO, System Fabric Works. “Together, SFW and ThinkParQ can deliver, worldwide, a highly converged, scalable computing solution based on BeeGFS, engineered with NetApp E-Series, with a choice of InfiniBand, Omni-Path, RDMA over Ethernet, and NVMe over Fabrics for targeted performance and 99.9999 percent reliability, utilizing customer-chosen clustered servers and clients and SFW’s services for architecture, integration, acceptance, and ongoing support.”

DDN’s HPC Trends Survey: Complex I/O Workloads are the #1 Challenge

Today DDN announced the results of its annual HPC Trends survey, which reflects the continued adoption of flash-based storage as essential to respondents’ overall data center strategy. While flash is deemed essential, respondents anticipate needing additional technology innovations to unlock the full performance of their HPC applications. Managing complex I/O workload performance remains far and away the largest challenge, with 60 percent of end-users citing it as their number one concern.

Cray Joins Big Data Center at NERSC for AI Development

Today Cray announced it has joined the Big Data Center at NERSC. The collaboration between the two organizations reflects Cray’s commitment to leveraging its supercomputing expertise, technologies, and best practices to advance the adoption of artificial intelligence, deep learning, and data-intensive computing. “We are really excited to have Cray join the Big Data Center,” said Prabhat, Director of the Big Data Center and Group Lead for Data and Analytics Services at NERSC. “Cray’s deep expertise in systems, software, and scaling is critical in working towards the BDC mission of enabling capability applications for data-intensive science on Cori. Cray and NERSC, working together with Intel and our IPCC academic partners, are well positioned to tackle performance and scaling challenges of Deep Learning.”