“We go to the show for the technology, the engineering, the science, and the math. It’s HPCMatters and STEM. The vendors are showcasing their technology and the science their technology has enabled. The research exhibits are showing how they are contributing to the scientific process with the largest supercomputers that have cool names. That’s what’s so great about SC: It brings together many of the brilliant minds behind these technologies.”
Intel Omni-Path Architecture (Intel OPA) volume shipments started a mere nine months ago in February of this year, but Intel’s high-speed, low-latency fabric for HPC has covered significant ground around the globe, including integration in HPC deployments that made the June 2016 Top500 list. Intel’s fabric accounts for 48 percent of the installations running 100 Gbps fabrics on the June list, and the company expects a significant increase in Top500 deployments, including one that could land among the top ten machines on the list.
In this special guest feature, Bill Mannel from Hewlett Packard Enterprise writes that the upcoming Intel HPC Developer Conference in Salt Lake City is a great opportunity to learn about code modernization for the next generation of high performance computing applications. “As computing systems grow increasingly complex and new architecture designs become mainstream, training developers to write code which runs on future HPC systems will require a collaborative environment and the expertise of the best and brightest in the industry.”
In this podcast, the Radio Free HPC team previews the ancillary events around SC16 in Salt Lake City. With a full week in store, this could be the best conference yet. After the event roundup, the team shares its predictions for total SC16 attendance.
SC16 is just around the corner and there is so much to explore. One suggestion is to visit with Intel to learn how you can power your breakthrough innovations and discoveries with the Intel Scalable System Framework. Read on for a list of SC16 events where you can experience how Intel is transforming HPC, from traditional modeling and simulation to artificial intelligence, analytics, and visualization.
Scot Schultz from Mellanox writes that the company is moving the industry forward to a world-class offload network architecture that will pave the way to Exascale. “Mellanox, alongside many industry thought-leaders, is a leader in advancing the Co-Design approach. The key value and core goal is to strive for more CPU offload capabilities and acceleration techniques while maintaining forward and backward compatibility of new and existing infrastructures; the result is nothing less than the world’s most advanced interconnect, which continues to yield the most powerful and efficient supercomputers ever deployed.”
In this special guest feature, Kim McMahon writes that SC16 will reflect a concerted effort to improve diversity and inclusivity at the conference. “If you want to get the best ideas, you need to ask all of the smart people what they think. HPC is really hard, and we have big problems to solve whose solutions will impact humanity. If we are only talking to one-third of the workforce, are we finding the best ideas and solutions? By tapping into the other two-thirds, we get more people and ideas to solve the big HPC problems.”
“SC16 is really unique among conferences in the HPC community. There is simply no other conference where you can go to talk with every major participant in the HPC vendor community, see the latest research results, get HPC-specific training from the authorities in our field, mentor the next generation of leaders, and attend workshops that will shape tomorrow’s technology agenda.”
Are supercomputers practical for Deep Learning applications? Over at the Allinea Blog, Mark O’Connor writes that a recent experiment with machine learning optimization on the Archer supercomputer shows that relatively simple models run at sufficiently large scale can readily outperform more complex but less scalable models. “In the open science world, anyone running an HPC cluster can expect to see a surge in the number of people wanting to run deep learning workloads over the coming months.”
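To make that scaling intuition concrete, here is a minimal data-parallel training sketch, assuming mpi4py and NumPy are available; the model, data, and hyperparameters are hypothetical illustrations, not details from the Allinea experiment. Each rank computes gradients on its own data shard, and an allreduce averages them, so adding nodes effectively grows the batch the simple model learns from.

```python
# Minimal data-parallel SGD sketch: a simple model scaled across MPI ranks.
# Hypothetical illustration only -- not the code from the Archer experiment.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank holds its own shard of a synthetic regression dataset.
rng = np.random.default_rng(seed=rank)
X = rng.normal(size=(10_000, 32))
true_w = np.arange(32, dtype=float)
y = X @ true_w + rng.normal(scale=0.1, size=10_000)

w = np.zeros(32)  # simple linear model: one weight vector
lr = 0.1
for step in range(100):
    # Local gradient of mean-squared error on this rank's shard.
    grad = 2.0 / len(X) * X.T @ (X @ w - y)
    # Sum gradients across all ranks, then divide by the rank count:
    # the whole cluster acts as one large batch for the simple model.
    global_grad = np.empty_like(grad)
    comm.Allreduce(grad, global_grad, op=MPI.SUM)
    w -= lr * global_grad / size

if rank == 0:
    print("distance from true weights:", np.linalg.norm(w - true_w))
```

Launched with something like `mpirun -n 64 python sketch.py`, the same few lines of model code run unchanged from a laptop to a cluster, which is the property that lets simple, scalable models overtake more complex but less scalable ones.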
“Over the past six weeks, we took NVIDIA’s developer conference on a world tour. The GPU Technology Conference (GTC) was started in 2009 to foster a new approach to high performance computing using massively parallel processing GPUs. GTC has become the epicenter of GPU deep learning — the new computing model that sparked the big bang of modern AI. It’s no secret that AI is spreading like wildfire. The number of GPU deep learning developers has leapt 25 times in just two years.”