Has the Decades-Old Floating Point Error Problem been Solved?

Today a company called Bounded Floating Point announced a breakthrough patent in processor design, which allows representation of real numbers accurate to the last digit “for the first time in computer history.” “This bounded floating point system is a game changer for the computing industry, particularly for computationally intensive functions such as weather prediction, GPS, and autonomous vehicles,” said the inventor, Alan Jorgensen, PhD. “By using this system, it is possible to guarantee that the display of floating point values is accurate to plus or minus one in the last digit.”
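The error the patent targets is easy to reproduce: many decimal values have no exact binary floating point representation, so results can differ from the mathematically correct value in the trailing digits. A minimal sketch in standard Python (not the patented bounded floating point system itself, just the classic illustration of the underlying problem):

```python
import math

# 0.1 and 0.2 cannot be represented exactly in binary floating point,
# so their sum is not exactly equal to 0.3.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False

# The discrepancy is tiny: within a couple of units in the last place (ULPs),
# i.e. the spacing between adjacent representable floats near 0.3.
print(abs(a - 0.3) <= 2 * math.ulp(0.3))  # True
```

Conventional IEEE 754 arithmetic gives no per-value record of how large this accumulated error has become; the announced system claims to carry such a bound alongside each value so the displayed digits can be guaranteed.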

Red Hat steps up to POWER9 for HPC

In this video from SC17 in Denver, Dan McGuan from Red Hat describes the company’s Multi-Architecture HPC capabilities with the POWER9 architecture. “Red Hat and IBM have a long history of collaborating on Linux, going back more than 18 years. We laid the groundwork for supporting POWER9 processors several years ago and continue to collaborate with IBM to enable broader architecture support for IBM Power Systems across Red Hat’s portfolio.”

ClusterVision White Paper Looks at HPC Performance Impact of Spectre and Meltdown

While various kernel patches are already out for Spectre and Meltdown, the impact of these patches on HPC performance has been a big question. Now ClusterVision has published a timely white paper on this important topic. “These vulnerabilities have only been discovered recently, so information is still developing. Therefore, this document should not be interpreted as a complete overview of the situation but as an informative view of the potential impact on HPC.”

Steve Oberlin from NVIDIA Presents: HPC Exascale & AI

Steve Oberlin from NVIDIA gave this talk at SC17 in Denver. “HPC is a fundamental pillar of modern science. From predicting weather to discovering drugs to finding new energy sources, researchers use large computing systems to simulate and predict our world. AI extends traditional HPC by letting researchers analyze massive amounts of data faster and more effectively. It’s a transformational new tool for gaining insights where simulation alone cannot fully predict the real world.”

Video: Deep Learning for Science

Prabhat from NERSC and Michael F. Wehner from LBNL gave this talk at the Intel HPC Developer Conference in Denver. “Deep Learning has revolutionized the fields of computer vision, speech recognition and control systems. Can Deep Learning (DL) work for scientific problems? This talk will explore a variety of Lawrence Berkeley National Laboratory’s applications that are currently benefiting from DL.”

Top 10 HPC White Papers for 2017: Machine Learning, AI, the Cloud & More

Many of the top 10 2017 HPC white papers deal with the next steps in the HPC journey, including moving to the cloud, and discovering the potential of machine learning and AI. The most downloaded reports of the year were written with industry partners such as Red Hat, Dell EMC, Intel, HPE and more.

The U.S. DOE Exascale Computing Project – Goals and Challenges

Paul Messina from Argonne gave this Invited Talk at SC17. “Balancing evolution with innovation is challenging, especially since the ecosystem must be ready to support critical mission needs of DOE, other Federal agencies, and industry, when the first DOE exascale systems are delivered in 2021. The software ecosystem needs to evolve both to support new functionality demanded by applications and to use new hardware features efficiently. We are utilizing a co-design approach that uses over two dozen applications to guide the development of supporting software and R&D on hardware technologies as well as feedback from the latter to influence application development.”

Rescale Brings HPC Workloads to the Cloud at SC17

In this video from SC17, Gabriel Broner from Rescale describes how the company brings HPC Workloads to the Cloud. “Rescale offers HPC in the cloud for engineers and scientists, delivering computational performance on-demand. Using the latest hardware architecture at cloud providers and supercomputing centers, Rescale enables users to extend their on-premise system with optimized HPC in the cloud.”

Dr. Eng Lim Goh on HPE’s Spaceborne Supercomputer

In this video from SC17 in Denver, Dr. Eng Lim Goh describes the spaceborne supercomputer that HPE built for NASA. “The research objectives of the Spaceborne Computer include a year-long experiment of operating high performance commercial off-the-shelf (COTS) computer systems on the ISS with its changing radiation climate. During high radiation events, the electrical power consumption and, therefore, the operating speeds of the computer systems are lowered in an attempt to determine if such systems can still operate correctly.”

Video: Comanche Collaboration Moves ARM HPC forward at National Labs

In this video from SC17 in Denver, Rick Stevens from Argonne leads a discussion about the Comanche Advanced Technology Collaboration. “By initiating the Comanche collaboration, HPE brought together industry partners and leadership sites like Argonne National Laboratory to work in a joint development effort,” said HPE’s Chief Strategist for HPC and Technical Lead for the Advanced Development Team Nic Dubé. “This program represents one of the largest customer-driven prototyping efforts focused on the enablement of the HPC software stack for ARM. We look forward to further collaboration on the path to an open hardware and software ecosystem.”