ScaleMP Powers Largest Shared-Memory Systems in Canada

ScaleMP announced that the government of Canada has extended the contract for its large shared-memory systems acquired from Dell. These SMP systems use vSMP Foundation to aggregate more than 64 Intel Xeon processors each, totaling more than 1,500 CPUs per system. The systems are used for a variety of HPC workloads, including computer-aided engineering (CAE) and computational fluid dynamics (CFD). “Together with our hardware partners, we have been providing technology to the government of Canada since 2012, and are proud of repeatedly earning their business,” said Shai Fultheim, founder and CEO of ScaleMP. “Repeat customers are a big part of the vSMP Foundation user community, and we continue to see expansion of our footprint with existing customers, along with strong growth in deployments of vSMP Foundation with new ones.”

ISC 2018 is Now Open for Registration

Registration is now open for ISC 2018 at discounted Early Bird rates. The event takes place June 24-28 in Frankfurt. “Various topical and interest-specific Birds-of-a-Feather sessions, the fast-paced Vendor Showdown, and the Exhibitor Forums will again take place this year. Plus, the three-day ISC exhibition will feature about 150 exhibits from leading HPC companies and research organizations.”

Researchers Using HPC to Help Fight Bioterrorism

Researchers are using computational models powered by HPC to develop better strategies for protecting us from bioterrorism. “Recent advances in data analytics and artificial intelligence systems are fundamentally transforming our ability to personalize treatments to the specific needs of a patient under treat-to-target paradigms,” said Josep Bassaganya-Riera, co-director of the Biocomplexity Institute’s Nutritional Immunology and Molecular Medicine Laboratory. “Our goal in this project will be to leverage the power of modeling and advanced machine learning methods, so a group of people exposed to a harmful pathogen or its toxins can receive faster, safer, more effective and personalized treatments.”

The Use of HPC to Model the California Wildfires

Ilkay Altintas from the San Diego Supercomputer Center gave this talk at the HPC User Forum. “WIFIRE is an integrated system for wildfire analysis, with specific regard to changing urban dynamics and climate. The system integrates networked observations such as heterogeneous satellite data and real-time remote sensor data, with computational techniques in signal processing, visualization, modeling, and data assimilation to provide a scalable method to monitor such phenomena as weather patterns that can help predict a wildfire’s rate of spread.”
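WIFIRE's own pipeline assimilates real-time sensor and satellite data into far more sophisticated models, but the basic idea of simulating a wind-driven fire front can be illustrated with a toy cellular automaton. Everything below (grid size, ignition probabilities, wind boost) is invented for illustration and has no relation to WIFIRE's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def step(grid, wind=(0, 1), base_prob=0.25, wind_boost=0.35):
    """Advance the fire front one step on a 2D grid.

    Cell states: 0 = unburned fuel, 1 = burning, 2 = burned out.
    wind is a (dy, dx) neighbor offset; the downwind neighbor of a
    burning cell ignites with probability base_prob + wind_boost.
    """
    new = grid.copy()
    for y, x in np.argwhere(grid == 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy, dx) == (0, 0):
                    continue
                if not (0 <= ny < grid.shape[0] and 0 <= nx < grid.shape[1]):
                    continue
                if grid[ny, nx] == 0:
                    p = base_prob + (wind_boost if (dy, dx) == wind else 0.0)
                    if rng.random() < p:
                        new[ny, nx] = 1
        new[y, x] = 2  # a burning cell burns out after one step
    return new

grid = np.zeros((50, 50), dtype=int)
grid[25, 10] = 1  # ignition point; wind (0, 1) pushes the fire east
for _ in range(30):
    grid = step(grid)
print("cells burned:", int((grid == 2).sum()), "of", grid.size)
```

Repeating such simulations over ensembles of wind and fuel conditions, and correcting them against live observations, is the kind of workload that pushes systems like WIFIRE toward HPC scale.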

Earth-Modeling System Steps Up to Exascale

“Unveiled today by the DOE, E3SM is a state-of-the-science modeling project that uses the world’s fastest computers to more accurately understand how Earth’s climate works and can evolve into the future. The goal: to support DOE’s mission to plan for robust, efficient, and cost-effective energy infrastructures now and into the distant future.”

Quantum Computing at NIST

Carl Williams from NIST gave this talk at the HPC User Forum in Tucson. “Quantum information science research at NIST explores ways to employ phenomena exclusive to the quantum world to measure, encode and process information for useful purposes, from powerful data encryption to computers that could solve problems intractable with classical computers.”

Radio Free HPC Looks at the New CORAL-2 RFP for Exascale Computers

In this podcast, the Radio Free HPC team looks at the Department of Energy’s new CORAL-2 RFP for exascale computers. “As far as predictions go, Dan thinks one machine will go to IBM and the other will go to Intel. Rich thinks HPE will win one of the bids with an ARM-based system designed around The Machine memory-centric architecture. They have a wager, so listen in to find out where the smart money is.”

Using Singularity Containers on HPC

Abhinav Thota from Indiana University gave this talk at the 2018 Swiss HPC Conference. “Container use is becoming more widespread in the HPC field. There are various reasons for this, including the broadening of the user base and applications of HPC. One of the popular container tools on HPC is Singularity, an open source project coming out of the Berkeley Lab. In this talk, we will introduce Singularity, discuss how users at Indiana University are using it, and share our experience supporting it. This talk will include a brief demonstration as well.”
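As a sketch of how Singularity fits into an HPC workflow, the following Python wrapper (the kind of thing a batch job might invoke) launches an application inside a container with `singularity exec`. The image name, bind path, and script here are hypothetical; only the `exec` subcommand and `--bind` flag reflect the real CLI:

```python
import subprocess

# Hypothetical container image and host directory for this sketch
image = "analysis.sif"
workdir = "/scratch/project"

# Run a script inside the container, mounting host data at /data
cmd = [
    "singularity", "exec",
    "--bind", f"{workdir}:/data",   # expose host directory inside container
    image,
    "python", "/data/analysis.py",  # hypothetical application entry point
]
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
print(result.stdout)
if result.returncode != 0:
    print("container run failed:", result.stderr)
```

The appeal for HPC centers is that the container carries the application's userland with it, so the same image runs unmodified on a laptop or a cluster node without root privileges.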

Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios

Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. “In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the OLCF’s industrial partnership program, KatRisk received 5 million processor hours on Titan.”
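The “50,000 years” refers to a stochastic event catalog: simulating tens of thousands of synthetic years of flood events so that rare, severe losses can be estimated statistically rather than from the short historical record. A minimal sketch of that idea follows, with an invented event rate and loss distribution that bear no relation to KatRisk’s physically based models:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 50_000
mean_events_per_year = 0.8  # hypothetical flood frequency

# Draw the number of flood events in each synthetic year, then sum
# hypothetical heavy-tailed per-event losses (in $M) into annual losses.
annual_loss = np.zeros(n_years)
n_events = rng.poisson(mean_events_per_year, size=n_years)
for year, k in enumerate(n_events):
    if k:
        annual_loss[year] = rng.lognormal(mean=1.0, sigma=1.2, size=k).sum()

# Annual loss exceeded with 1% probability (the "100-year loss")
loss_100yr = np.quantile(annual_loss, 1 - 1 / 100)
print(f"estimated 100-year annual loss: ${loss_100yr:.1f}M")
```

At production scale, each synthetic year involves hydrodynamic simulation over high-resolution terrain rather than a one-line loss draw, which is why KatRisk needed millions of processor hours on Titan.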

Exascale Computing for Long-Term Design of Urban Systems

In this episode of Let’s Talk Exascale, Charlie Catlett from Argonne National Laboratory and the University of Chicago describes how extreme-scale HPC will be required to better build smart cities. “Urbanization is a bigger set of challenges in the developing world than in the developed world, but it’s still a challenge for us in US and European cities and Japan.”