“The HPC community demands performance, transparency, and value, exactly what Red Hat and open source offer. Red Hat is the standard choice for Linux in HPC clusters worldwide. But it doesn’t stop there: our cloud, virtualization, storage, platform and service-oriented solutions bring real freedom and collaboration to federal, state, local, and academic programs. And Red Hat’s worldwide support, training and consulting services bring the power of open source to your agency. We are part of a larger community working together to drive innovation.”
“In the long run, if you need orders of magnitude more bandwidth than is currently available, there’s a set of technologies sometimes referred to as processor in memory – I call it processor at memory – that involve cheap processors distributed adjacent to the memory chips. These processors are cheaper, simpler, and lower power. That could allow a significant reduction in the cost to build the systems, which allows you to build them a lot bigger and therefore deliver significantly higher memory bandwidth. That’s a very revolutionary change.”
In this video from SC16, Joe Yaworsky describes how Intel Omni-Path is gaining traction on the TOP500. As the interconnect for the Intel Scalable System Framework, Omni-Path is focused on delivering the best possible application performance. “In the nine months since Intel Omni-Path Architecture (Intel OPA) began shipping, it has become the standard fabric for 100 gigabit systems. Intel OPA is featured in 28 of the TOP500 most powerful supercomputers in the world announced at Supercomputing 2016 and now has 66 percent of the 100Gb market. TOP500 designs include Oakforest-PACS, MIT Lincoln Lab and CINECA.”
In this video from SC16, Steve Conway from IDC moderates a panel discussion on Precision Medicine. “Recently, DOE Secretary Moniz, VA Secretary MacDonald, NCI Director Lowy and GSK CEO Andrew Witty announced that the Nation’s leading supercomputers would be applied to the challenge of the Cancer Moonshot initiative. This partnership of nontraditional groups collectively sees the path to unraveling the complexities of cancer through the power of new machines, operating systems, and applications that leverage simulation, data science and artificial intelligence to accelerate bringing precision oncology to the patients who are waiting. This initiative is one of many research efforts in the race to solve some of our most challenging medical problems.”
As an HPC technology vendor, Mellanox is in the business of providing the leading-edge interconnects that drive many of the world’s fastest supercomputers. To learn more about what’s new for SC16, we caught up with Michael Kagan, CTO of Mellanox. “Moving InfiniBand beyond EDR to HDR is critical not only for HPC, but also for the numerous industries that are adopting AI and Big Data to make real business sense out of the amount of data available and that we continue to collect on a daily basis.”
OpenACC is a directive-based programming model that gives C/C++ and Fortran programmers the ability to write parallel programs simply by augmenting their code with pragmas. Pragmas are advisory messages that expose optimization, parallelization, and accelerator offload opportunities to the compiler so it can generate efficient parallel code for a variety of target architectures, including AMD and NVIDIA GPUs plus ARM, x86, Intel Xeon Phi, and IBM POWER processors.
“InfiniBand’s advantages of highest performance, scalability and robustness enable users to maximize their data center return on investment. InfiniBand was chosen by far more end-users than any proprietary offering, resulting in a more than 85 percent market share. We are happy to see our open Ethernet adapter and switch solutions enable all of the 40G and the first 100G Ethernet systems on the TOP500 list, resulting in a total of 194 systems using Mellanox for their compute and storage connectivity.”
The new TOP500 list is out, and Radio Free HPC is here podcasting the scoop in their own special way. With two new systems in the TOP 10, there are many different perspectives to share. “The Cori supercomputer, a Cray XC40 system installed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), slipped into the number 5 slot with a Linpack rating of 14.0 petaflops. Right behind it at number 6 is the new Oakforest-PACS supercomputer, a Fujitsu PRIMERGY CX1640 M1 cluster, which recorded a Linpack mark of 13.6 petaflops.”
“We go to the show for the technology, the engineering, the science, and the math. It’s HPCMatters and STEM. The vendors are showcasing their technology and the science their technology has enabled. The research exhibits are showing how they are contributing to the scientific process with the largest supercomputers that have cool names. That’s what’s so great about SC: It brings together many of the brilliant minds behind these technologies.”
Intel Omni-Path Architecture (Intel OPA) volume shipments started a mere nine months ago in February of this year, but Intel’s high-speed, low-latency fabric for HPC has covered significant ground around the globe, including integration in HPC deployments making the TOP500 list for June 2016. Intel’s fabric makes up 48 percent of installations running 100 Gbps fabrics on the June TOP500 list, and the company expects a significant increase in TOP500 deployments, including one that could end up in the stratosphere among the top ten machines on the list.