France to Double Supercomputing Capacity with Jean Zay AI System from HPE

HPE is building a new AI supercomputer for GENCI in France. Called Jean Zay, the 14-petaflop system is part of an artificial intelligence initiative called for by President Macron to bolster the nation’s scientific and economic growth. “At Hewlett Packard Enterprise, we continue to fuel the next frontier and unlock discoveries with our end-to-end HPC and AI offerings that hold a strong presence in France and have been further strengthened just in the past couple of years.”

HiPEAC Vision 2019 Looks to the Future of Computing

“Today, the possibilities of an interconnected, heterogeneous and intelligent world are only just beginning to make themselves known. This stunning advancement in digital technology was made possible by ever-increasing performance at ever lower costs. However, physical limits mean we won’t be able to keep shrinking computing components while increasing performance for much longer. So where do we go from here? What are the main challenges and conditions for future developments, and where? The HiPEAC Vision 2019 explores all these questions, and more.”

Video: HPC Containers – Democratizing HPC

In this video from SC18 in Dallas, CJ Newburn from NVIDIA describes how developers can quickly containerize their applications and how users can benefit from running their workloads with containers from the NVIDIA GPU Cloud. “A container essentially creates a self-contained environment. Your application lives in that container along with everything the application depends on, so the whole bundle is self-contained.” A rough sketch of what that looks like in practice follows below.
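
The snippet below uses the Docker SDK for Python to run a GPU-enabled container; the image tag is a placeholder rather than a specific NGC release, and the GPU request assumes a Docker engine and Python SDK recent enough to support device requests.

import docker

# The image bundles the application and everything it depends on (CUDA, libraries),
# so the host only needs a GPU driver and a container runtime.
client = docker.from_env()

logs = client.containers.run(
    "nvcr.io/nvidia/cuda:latest",   # placeholder tag; NGC hosts framework images under nvcr.io
    command="nvidia-smi",           # quick check that GPUs are visible inside the container
    device_requests=[               # request all GPUs (Docker 19.03+ / docker SDK 4.3+)
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    remove=True,
)
print(logs.decode())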

A Future for R: Parallel and Distributed Processing in R for Everyone

In this video from the European R Users Meeting, Henrik Bengtsson from the University of California San Francisco presents: A Future for R: Parallel and Distributed Processing in R for Everyone. “The future package is a powerful and elegant cross-platform framework for orchestrating asynchronous computations in R. It’s ideal for working with computations that take a long time to complete; that would benefit from using distributed, parallel frameworks to make them complete faster; and that you’d rather not have locking up your interactive R session.”
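
The talk itself is about R, but the core idea translates directly: submit work as futures, keep the interactive session responsive, and collect results when they are ready. Below is a minimal Python analogy using concurrent.futures (not the R future package itself), with the workload sizes chosen purely for illustration.

from concurrent.futures import ProcessPoolExecutor, as_completed
import math

def slow_task(n):
    # stand-in for a long-running computation
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    # submit() returns futures immediately; the work runs in parallel worker processes
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(slow_task, n) for n in (10**6, 2 * 10**6, 4 * 10**6)]
        for fut in as_completed(futures):   # results arrive as each worker finishes
            print(fut.result())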

Interview: HPC Thought Leaders Looking Forward to SC19 in Denver

In this special guest feature, SC19 General Chair Michela Taufer catches up with Sunita and Jack Dongarra to discuss the way forward for the November conference in Denver. “By augmenting our models and our ability to do simulation, HPC enables us to understand and do things so much faster than we could in the past – and it will only get better in the future.”

Interview: Gary Grider from LANL on the new Efficient Mission-Centric Computing Consortium

At SC18 in Dallas, I had a chance to catch up with Gary Grider from LANL. “So we’re forming a consortium to chase efficient computing. We see that many of the HPC sites today seem to be headed down the path of buying machines that work really well with very dense linear algebra problems. The problem is that hardcore simulation is often not a great fit on machines built for high Linpack numbers.”

NVIDIA and Pure Storage Power AI for Oncology with PAIGE

The battle against cancer is now being fought at the data level. A new AI startup called PAIGE hopes to revolutionize clinical diagnosis and treatment in pathology and oncology through the use of artificial intelligence. “Through its partnership with Igneous, PAIGE will be able to securely and efficiently manage 8 petabytes of unstructured data, including anonymized tumor scan images and clinical notes, as part of an integrated machine learning-based healthcare AI workflow anchored by an industry-leading Pure Storage FlashBlade and NVIDIA GPU compute cluster.”

Dan Reed Panel on Energy Efficient Computing at SC18

In this video from SC18, Dr. Daniel Reed moderates a panel discussion entitled “‘If you can’t measure it, you can’t improve it’ — Software Improvements from Power/Energy Measurement Capabilities.” “We have made major gains in improving the energy efficiency of the facility as well as computing hardware, but there are still large gains to be had with software, particularly application software. Just tuning code for performance isn’t enough; the same time to solution can have very different power profiles.”
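
As a hedged illustration of measuring rather than guessing, the sketch below samples GPU power draw while a workload runs, using the pynvml bindings to NVIDIA’s NVML; the sampling window and interval are arbitrary stand-ins, and energy is only approximated by integrating the sampled power.

import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU

samples = []
start = time.time()
while time.time() - start < 5.0:                # sample for 5 seconds while the workload runs
    samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)  # milliwatts -> watts
    time.sleep(0.1)

avg_watts = sum(samples) / len(samples)
print(f"average power: {avg_watts:.1f} W, approx energy: {avg_watts * 5.0:.1f} J")
pynvml.nvmlShutdown()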

Looking Back at SC18 and the Road Ahead to Exascale

In this special guest feature from Scientific Computing World, Robert Roe reports on new technology and 30 years of the US supercomputing conference at SC18 in Dallas. “From our volunteers to our exhibitors to our students and attendees – SC18 was inspirational,” said SC18 general chair Ralph McEldowney. “Whether it was in technical sessions or on the exhibit floor, SC18 inspired people with the best in research, technology, and information sharing.”

Researchers Gear Up for Exascale at ECP Meeting in Houston

Scientists and engineers at Berkeley Lab are busy preparing for exascale supercomputing this week at the ECP Annual Meeting in Houston. With a full agenda running five days, LBL researchers will contribute two plenaries, five tutorials, 15 breakouts, and 20 posters. “Sponsored by the Exascale Computing Project, the ECP Annual Meeting centers around the many technical accomplishments of our talented research teams, while providing a collaborative working forum that includes featured speakers, workshops, tutorials, and numerous planning and co-design meetings in support of integrated project understanding, team building and continued progress.”