Enterprise customers consistently demand improved performance from every component in their HPC infrastructure, including their workload manager. Ian Lumb, Solutions Architect at Univa Corp., speaks on the importance of an innovative workload manager to overall success.
In this video from the Switzerland HPC Conference, Jeffrey Stuecheli from IBM presents: OpenCAPI, A New Standard for High Performance Attachment of Memory, Acceleration, and Networks. “OpenCAPI sets a new standard for the industry, providing a high bandwidth, low latency open interface design specification. This session will introduce the new standard and its goals. This includes details on how the interface protocol provides unprecedented latency and bandwidth to attached devices.”
Genomic sequencing has progressed so rapidly that researchers can now analyze the genetic profiles of healthy individuals to uncover mutations that will almost certainly lead to a genetic condition. These breakthroughs are demonstrating that the future of genomic medicine will focus not just on the ability to reactively treat diseases, but on predicting and preventing them before they occur.
In this podcast, the Radio Free HPC team looks at the week’s top stories: quantum startup Rigetti Computing raises $64 million in funding, Rex Computing unveils its low-power chip, and Intel begins shipping its Optane SSDs.
“In this keynote, Al Geist will discuss the need for future Department of Energy supercomputers to solve emerging data science and machine learning problems in addition to running traditional modeling and simulation applications. The ECP goals are intended to enable the delivery of capable exascale computers in 2022 and one early exascale system in 2021, which will foster a rich exascale ecosystem and work toward ensuring continued U.S. leadership in HPC. He will also share how the ECP plans to achieve these goals and the potential positive impacts for OFA.”
Altair will host the PBS Works User Group May 22-25 in Las Vegas. This four-day event, which includes two days of user presentations and round-table discussions surrounded by hands-on workshops, is the global user event of the year for PBS Professional and other PBS Works products. “This year we are excited to announce that we will be hosting a tour of the Switch data center facility on Tuesday afternoon.”
Next-generation sequencing (NGS) tools produce vast quantities of genetic data, posing a growing number of challenges for life sciences organizations. Accelerating analytics, providing adequate storage and memory capacity, speeding time-to-solution, and reducing costs are major concerns for IT departments operating on traditional computing systems. In this week’s Sponsored Post, Bill Mannel, Vice President & General Manager of HPC Segment Solutions and Apollo Servers, Data Center Infrastructure Group at Hewlett Packard Enterprise, explains how next-generation sequencing is altering the patient care landscape.
Today Fujitsu announced that it has received RIKEN’s order for a deep learning system, one of the largest supercomputers in Japan specializing in AI research. “NVIDIA DGX-1, the world’s first all-in-one AI supercomputer, is designed to meet the enormous computational needs of AI researchers,” said Jim McHugh, VP & GM at Nvidia. “Powered by 24 DGX-1s, the RIKEN Center for Advanced Intelligence Project’s system will be the most powerful DGX-1 customer installation in the world. Its breakthrough performance will dramatically speed up deep learning research in Japan, and become a platform for solving complex problems in healthcare, manufacturing and public safety.”
In this week’s Sponsored Post, Katie Garrison, of One Stop Systems explains how GPUs and Flash solutions are used in radar simulation and anti-submarine warfare applications. “High-performance compute and flash solutions are not just used in the lab anymore. Government agencies, particularly the military, are using GPUs and flash for complex applications such as radar simulation, anti-submarine warfare and other areas of defense that require intensive parallel processing and large amounts of data recording.”
High-performance computing (HPC) tools are helping financial firms survive and thrive in this highly demanding and data-intensive industry. As financial models grow in complexity and greater amounts of data must be processed and analyzed on a daily basis, firms are increasingly turning to HPC solutions to exploit the latest technology performance improvements. Suresh Aswani, Senior Manager, Solutions Marketing, at Hewlett Packard Enterprise, shares how to overcome the learning curve of new processor architectures.