Earth-modeling System steps up to Exascale

“Unveiled today by the DOE, E3SM is a state-of-the-science modeling project that uses the world’s fastest computers to more accurately understand how Earth’s climate works and how it may evolve in the future. The goal: to support DOE’s mission to plan for robust, efficient, and cost-effective energy infrastructures now, and into the distant future.”

Quantum Computing at NIST

Carl Williams from NIST gave this talk at the HPC User Forum in Tucson. “Quantum information science research at NIST explores ways to employ phenomena exclusive to the quantum world to measure, encode and process information for useful purposes, from powerful data encryption to computers that could solve problems intractable with classical computers.”

Fujitsu Upgrades RAIDEN at RIKEN Center for Advanced Intelligence Project

Fujitsu reports that the company has significantly boosted the performance of the RAIDEN supercomputer. RAIDEN is a computer system for artificial intelligence research originally deployed in 2017 at the RIKEN Center for Advanced Intelligence Project (AIP Center). “The upgraded RAIDEN has increased its performance by a considerable margin, moving from an initial total theoretical computational performance of 4 AI Petaflops to 54 AI Petaflops, placing it in the top tier of Japan’s systems. In building this system, Fujitsu demonstrates its commitment to supporting cutting-edge AI research in Japan.”

Intel Open Sources nGraph Deep Neural Network model for Multiple Devices

Over at Intel, Scott Cyphers writes that the company has open-sourced nGraph, a framework-neutral Deep Neural Network (DNN) model compiler that can target a variety of devices. With nGraph, data scientists can focus on data science rather than worrying about how to adapt their DNN models to train and run efficiently on different devices. In the post, Cyphers highlights the team’s engineering challenges and design decisions, and points to GitHub, the project documentation, and the team’s SysML paper for additional details.

Exascale Computing for Long Term Design of Urban Systems

In this episode of Let’s Talk Exascale, Charlie Catlett from Argonne National Laboratory and the University of Chicago describes how extreme-scale HPC will be required to better build Smart Cities. “Urbanization is a bigger set of challenges in the developing world than in the developed world, but it’s still a challenge for us in US and European cities and Japan.”

Video: Addressing Key Science Challenges with Adversarial Neural Networks

Wahid Bhimji from NERSC gave this talk at the 2018 HPC User Forum in Tucson. “Machine Learning and Deep Learning are increasingly used to analyze scientific data, in fields as diverse as neuroscience, climate science and particle physics. In this page you will find links to examples of scientific use cases using deep learning at NERSC, information about what deep learning packages are available at NERSC, and details of how to scale up your deep learning code on Cori to take advantage of the compute power available from Cori’s KNL nodes.”

Ceph on the Brain: Storage and Data-Movement Supporting the Human Brain Project

Adrian Tate from Cray and Stig Telfer from StackHPC gave this talk at the 2018 Swiss HPC Conference. “This talk will describe how Cray, StackHPC and the HBP co-designed a next-generation storage system based on Ceph, exploiting complex memory hierarchies and enabling next-generation mixed workload execution. We will describe the challenges, show performance data and detail the ways that a similar storage setup may be used in HPC systems of the future.”

Industry Insights: Download the Results of our AI & HPC Perceptions Survey

The results from our HPC & AI perceptions survey are here. “90 percent of all respondents felt that their business will ultimately be impacted by AI. Although almost all respondents see AI as playing a role in the future of the business, the survey also revealed the top three industries that will see the most impact. Healthcare came in first, followed by life sciences, with finance and transportation tied in third place. The possibilities of AI are seemingly endless. And the shift has already begun.”

DDN Builds New Engineering Facility in Colorado focused on AI, Cloud, and Enterprise Data Challenges

Today DDN announced the opening of a new facility in Colorado Springs, Colorado, including a significant expansion of lab, testing and benchmarking facilities. The enhanced capabilities will enable DDN to accelerate development efforts and increase in-house capabilities to mimic customer applications and workflows. “Our Enterprise, AI, HPC and Cloud customers have always relied upon us to develop the world’s leading data storage solutions at-scale, and for our long-term focus and sustained investments in research, technology and innovation,” said Alex Bouzari, chief executive officer, chairman and co-founder of DDN. “We are excited to add our new Colorado Springs facility to the DDN R&D centers worldwide and to expand our team of very talented engineers and technologists who will continue to drive innovation for our customers in the years to come.”

Shifter – Docker Containers for HPC

Alberto Madonna gave this talk at the Swiss HPC Conference. “In this work we present an extension to the container runtime of Shifter that provides containerized applications with a mechanism to access GPU accelerators and specialized networking from the host system, effectively enabling performance portability of containers across HPC resources. The presented extension makes it possible to rapidly deploy high-performance software on supercomputers from containerized applications that have been developed, built, and tested on non-HPC commodity hardware, e.g. the laptop or workstation of a researcher.”