Video: MIT Makes Billion-Dollar Bet on AI and Machine Learning

Today MIT announced a new $1 billion commitment to address the global opportunities and challenges presented by the prevalence of computing and the rise of artificial intelligence. The initiative marks the single largest investment in computing and AI by an American academic institution, and will help position the United States to lead the world in preparing for the rapid evolution of computing and AI. “As we look to the future, we must utilize these important technologies to shape our world for the better and harness their power as a force for social good.”

ExaFLOW Project takes on High-order CFD

After three years of working on key algorithmic challenges in CFD, the European ExaFLOW Project is touting a series of industry milestones. With three flagship runs, ExaFLOW has tackled specific CFD use cases that highlight the importance of its outcomes for both industry and academia.

The Rising AI Tide in HPC – Are You Ready?

This guest article from Dr. Bhushan Desam, Lenovo’s Director, Global Artificial Intelligence Business, covers how new HPC tools like Lenovo’s LiCO (Lenovo Intelligent Computing Orchestration) address the growing popularity of AI in HPC and simplify the convergence of HPC and AI.

Call for Papers: Special Issue on Advanced Parallel HPC for AI Applications

Elsevier has launched a Call for Papers for its Special Issue on Advances on Parallel and High Performance Computing for AI Applications. “Clusters of computers and accelerators (e.g. GPUs) are routinely used to train and run models, both in research and production. On the other hand, ML and AI have also become a “killer application” for HPC and, consequently, have driven much of the research in this area. For example, tailored computer architecture has been devised and new parallel programming frameworks developed to accelerate AI/ML models. The objective of this special issue is to bring together the HPC and AI/ML communities to present their applications and solutions to performance issues, and also to present how AI/ML can be used to solve HPC problems.”

Video: Graphical User Interfaces for HPC

Mark Dawson gave this talk at the Supercomputing Wales Swansea Symposium 2018. “Supercomputing Wales is funded to provide university research teams access to powerful computing facilities to undertake high-profile science and innovation projects within the consortium universities. Software engineers across Wales develop algorithms and customized software that harness the power of the facilities. The aim of the investment is to capture more external research funding, increase scientific partnerships, create highly-skilled research jobs and support collaborations with industrial and other partners. This will provide a step change in supercomputing-enabled scientific research in Wales.”

ESCAPE-2 Project to Develop Algorithms for Weather and Climate Prediction at Exascale

Today ECMWF launched the ESCAPE-2 project on energy-efficient scalable algorithms for weather and climate prediction at exascale. “It brings together 12 partners, including national meteorological and hydrological services, HPC centers, hardware vendors and universities. The ESCAPE project aims to prepare NWP (numerical weather prediction) and climate models for new computing architectures towards exascale computing, with a focus on energy efficiency.”

Intel Powers New AI Research Center at Technion in Israel

Today the Technion technological institute in Israel announced that Intel is collaborating with the institute on its new artificial intelligence (AI) research center. “AI is not a one-size-fits-all approach, and Intel has been working closely with a range of industry leaders to deploy AI capabilities and create new experiences. Our collaboration with Technion not only reinforces Intel Israel’s AI operations, but we are also seeing advancements to the field of AI from the joint research that is under way and in the pipeline.”

New Paper Looks at Solidification of High-Pressure Ice in Newly Discovered Ocean Worlds

A team of theorists from Lawrence Livermore National Laboratory (LLNL) has solved a long-standing puzzle in the nucleation of a high-pressure phase of ice known as ice VII, which is believed to exist near the core of “ocean world” planets recently detected outside of the solar system, and has recently been discovered to exist within the Earth’s mantle. “By dissecting the thermodynamics and kinetics of interfaces, there are entirely new classes of problems that can be studied and, ultimately, controlled. A holy grail is to design self-regulating dynamic systems and machines that can utilize far-from-equilibrium dissipative dynamics to perform complex tasks, as in biological systems — control of nucleation is a step on this path.”

DownUnder GeoSolutions Moves to Skybox Datacenters for Oil & Gas Exploration

Today Australia-based DownUnder GeoSolutions (DUG) announced a move to Skybox Datacenters in Houston for its global expansion of a revolutionary oil and gas exploration technology. “This was an exhaustive world-wide search for a data center location,” said Dr. Matthew Lamont, co-founder of DUG. “Houston was a natural choice given the low cost of power and the fact that Skybox had the available infrastructure ready to go. This facility will allow us to install the fastest supercomputer in the world at this time to meet the ever-increasing demand for energy. We are excited to expand our presence in Houston and expect to be operational by February 2019.”

Video: The March to Exascale

As the trend toward exascale HPC systems continues, the complexity of optimizing the parallel applications that run on them increases as well. Performance limitations can arise at the application level, which relies on MPI (Message Passing Interface). While small-scale HPC systems are forgiving of tiny MPI latencies, large systems running at scale are far more sensitive: small inefficiencies can snowball into significant lag.
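To make that sensitivity concrete, a standard way to estimate per-message cost is an MPI ping-pong microbenchmark. The sketch below (not taken from the video, just a minimal illustrative example in C) measures approximate one-way latency between two ranks; a few microseconds per message is invisible on a small cluster but, multiplied across millions of messages on an exascale-class run, becomes exactly the kind of lag described above.

```c
/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Build: mpicc -O2 pingpong.c -o pingpong
 * Run:   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "This sketch needs at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    const int iters = 10000;
    char byte = 0;

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    /* Ranks 0 and 1 bounce a single byte back and forth. */
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;

    if (rank == 0) {
        /* Half the average round-trip time approximates one-way latency. */
        printf("approx. one-way latency: %.3f us\n",
               elapsed / (2.0 * iters) * 1e6);
    }

    MPI_Finalize();
    return 0;
}
```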