How First Supernovae Altered Early Star Formation

Over at LBNL, Kathy Kincade writes that cosmologists are using supercomputers to study how the heavy metals expelled by the first exploding supernovae helped regulate subsequent star formation in the early universe. “In the early universe, the stars were massive and the radiation they emitted was very strong,” explained researcher Chen. “So if you have this radiation before that star explodes and becomes a supernova, the radiation has already caused significant damage to the gas surrounding the star’s halo.”

IBM Spectrum LSF Powers HPC at SC17

In this video from SC17, Gabor Samu describes how IBM Spectrum LSF helps users orchestrate HPC workloads. “This week we celebrate the release of our second agile update to IBM Spectrum LSF 10. And it’s our silver anniversary… 25 years of IBM Spectrum LSF! The IBM Spectrum LSF Suites portfolio redefines cluster virtualization and workload management by providing a tightly integrated solution for demanding, mission-critical HPC environments that can increase both user productivity and hardware utilization while decreasing system management costs.”
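
To make the workload-orchestration piece concrete, here is a minimal sketch of driving LSF's standard bsub and bjobs commands from Python. The queue name, core count, and application path are illustrative assumptions, not tied to any particular cluster.

```python
# Minimal sketch: submitting and tracking an LSF batch job from Python.
# Assumes the LSF command-line tools (bsub, bjobs) are on PATH; the
# queue "normal" and the application path are placeholders.
import subprocess

def submit_lsf_job(command, ntasks=4, queue="normal"):
    """Submit a job via bsub and return the numeric job ID."""
    result = subprocess.run(
        ["bsub", "-n", str(ntasks), "-q", queue,
         "-o", "job.%J.out", "-e", "job.%J.err", command],
        capture_output=True, text=True, check=True,
    )
    # bsub prints e.g.: Job <12345> is submitted to queue <normal>.
    return result.stdout.split("<")[1].split(">")[0]

def job_status(job_id):
    """Query the job's current state (PEND, RUN, DONE, ...) via bjobs."""
    result = subprocess.run(
        ["bjobs", "-noheader", "-o", "stat", job_id],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    jid = submit_lsf_job("./my_mpi_app", ntasks=8)
    print(f"Submitted job {jid}, state: {job_status(jid)}")
```

The same wrapper pattern extends to the rest of the LSF command set, such as bkill and bmod.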

Dell EMC Powers HPC at University of Liverpool with Alces Flight

Today Dell EMC announced a joint solution with Alces Flight and AWS to provide HPC for the University of Liverpool. Dell EMC will provide a fully managed on-premises HPC cluster, while a cloud-based HPC account will let students and researchers burst computational capacity to AWS. “We are pleased to be working with Dell EMC and Alces Flight on this new venture,” said Cliff Addison, Head of Advanced Research Computing at the University of Liverpool. “The University of Liverpool has always maintained cutting-edge technology and by architecting flexible access to computational resources on AWS we’re setting the bar even higher for what can be achieved in HPC.”
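
The hybrid design is worth unpacking: jobs run on the on-premises cluster until it is saturated, and the overflow "bursts" to elastic AWS capacity. The sketch below is a deliberately simplified illustration of that placement policy, not Alces Flight's actual implementation; all names and capacities are made up.

```python
# Illustrative sketch of a cloud-bursting placement policy (hypothetical,
# not Alces Flight's implementation): jobs go to the on-premises cluster
# until its cores are fully committed, then overflow to cloud capacity.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    total_cores: int
    busy_cores: int = 0

    def free_cores(self):
        return self.total_cores - self.busy_cores

def place_job(cores_needed, on_prem, cloud):
    """Return the name of the cluster that should run the job."""
    target = on_prem if on_prem.free_cores() >= cores_needed else cloud
    target.busy_cores += cores_needed
    return target.name

on_prem = Cluster("onprem-cluster", total_cores=1024)
cloud = Cluster("aws-burst", total_cores=10**6)  # effectively elastic

for cores in (512, 256, 512, 128):
    print(f"{cores:>4} cores -> {place_job(cores, on_prem, cloud)}")
```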

Video: Enabling the Future of Artificial Intelligence

In this video from the Intel HPC Developer Conference, Andres Rodriguez describes his presentation on Enabling the Future of Artificial Intelligence. “Intel has the industry’s most comprehensive suite of hardware and software technologies that deliver broad capabilities and support diverse approaches for AI—including today’s AI applications and more complex AI tasks in the future.”

Data Vortex Technologies Teams with Providentia Worldwide for HPC

Data Vortex Technologies has formalized a partnership with Providentia Worldwide, LLC. Providentia is a technologies and solutions consulting venture which bridges the gap between traditional HPC and enterprise computing. The company works with Data Vortex and potential partners to develop novel solutions for Data Vortex technologies and to assist with systems integration into new markets. This partnership will leverage the deep experience in enterprise and hyperscale environments of Providentia Worldwide founders, Ryan Quick and Arno Kolster, and merge the unique performance characteristics of the Data Vortex with traditional systems.

Intel Select Solutions: BIGstack 2.0 for Genomics

BIGstack 2.0 incorporates the latest Intel Xeon Scalable processors, Intel 3D NAND SSDs, and Intel FPGAs, while also leveraging the latest genomic tools from the Broad Institute, GATK 3.8 and GATK 4.0. The new stack provides a 3.34x speedup in whole-genome analysis and a 2.2x increase in daily throughput, at a cost of just $5.68 per whole genome analyzed. The result: researchers will be able to analyze more genomes, more quickly and at lower cost, enabling new discoveries, new treatment options, and faster diagnosis of disease.
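
For readers new to the toolchain, the GATK pipelines that BIGstack accelerates are driven from the command line. Below is a minimal sketch of one stage, variant calling with GATK 4's HaplotypeCaller, wrapped in Python; the file paths are placeholders, and a real whole-genome run has several upstream stages (alignment, duplicate marking, base-quality recalibration).

```python
# Minimal sketch of one stage of a GATK 4 germline pipeline:
# per-sample variant calling with HaplotypeCaller. All paths are
# placeholders; assumes the gatk launcher script is on PATH.
import subprocess

def call_variants(reference, bam, out_vcf):
    """Run GATK 4 HaplotypeCaller on an aligned, indexed BAM file."""
    subprocess.run(
        ["gatk", "HaplotypeCaller",
         "-R", reference,   # indexed FASTA reference genome
         "-I", bam,         # analysis-ready BAM
         "-O", out_vcf],    # output VCF (bgzipped if it ends in .gz)
        check=True,
    )

call_variants("ref/Homo_sapiens_assembly38.fasta",
              "samples/NA12878.bam",
              "calls/NA12878.vcf.gz")
```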

Cray Joins Big Data Center at NERSC for AI Development

Today Cray announced that the company has joined the Big Data Center at NERSC. The collaboration reflects Cray’s commitment to leverage its supercomputing expertise, technologies, and best practices to advance the adoption of artificial intelligence, deep learning, and data-intensive computing. “We are really excited to have Cray join the Big Data Center,” said Prabhat, Director of the Big Data Center, and Group Lead for Data and Analytics Services at NERSC. “Cray’s deep expertise in systems, software, and scaling is critical in working towards the BDC mission of enabling capability applications for data-intensive science on Cori. Cray and NERSC, working together with Intel and our IPCC academic partners, are well positioned to tackle performance and scaling challenges of Deep Learning.”
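
As a rough illustration of the scaling challenge mentioned, the sketch below shows the core primitive of synchronous data-parallel deep learning: every MPI rank computes gradients on its own data shard, then all ranks average them with an allreduce. The NumPy "model" and random gradients are stand-ins; this is not the Big Data Center's actual software stack.

```python
# Illustrative sketch of synchronous data-parallel training: each MPI
# rank computes a local gradient, then all ranks average gradients with
# an allreduce. Toy NumPy model, not the BDC's actual software stack.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

weights = np.zeros(10)                    # toy model parameters
rng = np.random.default_rng(seed=rank)    # each rank "sees" different data

for step in range(100):
    # Local gradient on this rank's data shard (random stand-in here).
    local_grad = rng.normal(size=weights.shape)
    # Average gradients across all ranks: the core scaling primitive.
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    global_grad /= size
    weights -= 0.01 * global_grad  # identical SGD update on every rank

if rank == 0:
    print("final weights:", weights)
```

Launched as, say, mpirun -n 64 python train.py, each rank applies the same averaged update, so the model replicas stay in lockstep; at the scale of a machine like Cori, the cost of that allreduce becomes exactly the kind of bottleneck such collaborations work to address.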

SDSC Earthquake Codes Used in 2017 Gordon Bell Prize Research

A Chinese team of researchers awarded this year’s prestigious Gordon Bell prize for simulating the devastating 1976 earthquake in Tangshan, China, used an open-source code developed by researchers at the San Diego Supercomputer Center (SDSC) at UC San Diego and San Diego State University (SDSU) with support from the Southern California Earthquake Center (SCEC). “We congratulate the researchers for their impressive innovations porting our earthquake software code, and in turn for advancing the overall state of seismic research that will have far-reaching benefits around the world,” said Yifeng Cui, director of SDSC’s High Performance Geocomputing Laboratory, who along with SDSU Geological Sciences Professor Kim Olsen, Professor Emeritus Steven Day and researcher Daniel Roten developed the AWP-ODC code.

Microsoft Azure Becomes First Global Cloud Provider to Deploy AMD EPYC

Today AMD announced the first public cloud instances powered by the AMD EPYC processor. Microsoft Azure has deployed AMD EPYC processors in its datacenters ahead of the preview of its latest L-Series Virtual Machines (VMs) for storage-optimized workloads. The Lv2 VM family will take advantage of the high core count and connectivity support of […]

VMware Moves Virtualized HPC Forward at SC17

In this video from SC17, Martin Yip and Josh Simons from VMware describe how the company is moving virtualized HPC forward. “In recent years, virtualization has started making major inroads into the realm of High Performance Computing, an area that was previously considered off-limits. In application areas such as life sciences, electronic design automation, financial services, Big Data, and digital media, people are discovering that there are benefits to running a virtualized infrastructure that are similar to those experienced by enterprise applications, but also unique to HPC.”