Recent announcements, analyst reports, conferences, and anecdotal evidence all point to an upswing in high performance computing in industry. Many industries have reaped the benefits of HPC for a considerable time and are now stepping up a gear with their systems – some even on a par with national facilities – in order to maintain or extend their advantage. Whether in upstream exploration, engine design, or aerodynamics: if you can scale up or scale out, you can derive an advantage.
“The thing that really excites me is looking at OpenMP 4.0. We’ve got virtually a complete set of 4.0 features. OpenMP 4.0 brings together tasking, which it’s had since its start in ’97, with new capabilities for vectorization and for offload. Bringing those together, and being able to do them at the same time, is extraordinarily powerful. I love teaching classes about it and seeing what people can do with it. And now it’s fully supported in our products.”
The CREST Center for Research in Extreme Scale Technologies is hosting the 20 Years of Beowulf workshop in Annapolis, MD, on Oct. 13–14. “The initial target of the Beowulf cluster project was to develop inexpensive, smaller parallel computing platforms—to bring supercomputing to the masses. The approach was extremely successful, and Beowulf/commodity clusters are being used worldwide across a diverse spectrum of uses, from teams of high school students to some of the world’s most powerful supercomputers.”
This article is the third in an editorial series that explores the benefits the HPC community can achieve by adopting HPC virtualization and secure private cloud technologies. Virtualization has been proven to be a viable architectural approach that addresses the many challenges mentioned in last week’s article. This week and next we look at the benefits of creating a virtualized infrastructure.
“We are excited about launching NESAP in partnership with Cray and Intel to help transition our broad user base to energy-efficient architectures,” said Sudip Dosanjh, director of NERSC, the primary HPC facility for the DOE’s Office of Science. “We expect to see many aspects of Cori in an exascale computer, including dramatically more concurrency and on-package memory. The response from our users has been overwhelming—they recognize that Cori will allow them to do science that can’t be done on today’s supercomputers.”
In this video, Professor Heinz Wolff explains the Optalysys Optical Processor. The Cambridge, UK-based startup announced today that it is only months away from launching a prototype optical processor with “the potential to deliver Exascale levels of processing power on a standard-sized desktop computer.”
In a quest to design synthetic microorganisms for alternate fuel sources, Howard Salis from Penn State leveraged AWS to bring supercomputing resources to scientists. “The DNA Compiler has fundamentally changed the way that genetic engineering takes place by providing a way to quantitatively control and optimize the expression of many proteins working together, instead of performing trial-and-error DNA mutagenesis.”