SC17 General Chair Bernd Mohr introduced the theme of the upcoming conference with these fine words: “One connection can change your life. Our community is making millions of connections every day: by bringing together people at workshops, conferences, in research teams and projects, by connecting extreme-scale supercomputers to instruments and visualization and data analytics systems, by inspiring collaborations between different fields of science. And all with the goal of making the greatest impact on society and changing our world. I invite you to continue on this journey of creating meaningful connections at SC17.”
Today Cycle Computing announced that Dell EMC will offer its software and services as an option with Dell EMC HPC Systems.
“At Dell EMC, we are constantly looking for the best ways to serve our customers, and Cycle Computing is a valuable addition to our HPC offerings,” said Jim Ganthier, senior vice president, Validated Solutions and HPC organization, Dell EMC. “With Cycle, Dell EMC will be the first to offer ‘crate to cloud’ for HPC in a matter of hours and will help our customers accelerate time to results while reducing cost and complexity.”
In this video from SC16, Steve Conway from IDC moderates a panel discussion on Precision Medicine. “Recently, DOE Secretary Moniz, VA Secretary MacDonald, NCI Director Lowy and the GSK CEO Andrew Witty announced that the Nation’s leading supercomputers would be applied to the challenge of the Cancer Moonshot initiative. This partnership of nontraditional groups collectively sees the path to unraveling the complexities of cancer through the power of new machines, operating systems, and applications that leverage simulations, data science and artificial intelligence to accelerate bringing precision oncology to the patients who are waiting. This initiative is one of many research efforts in the race to solve some of our most challenging medical problems.”
If you were not able to attend SC16, have we got a video for you! Courtesy of Asetek, this time-lapse walk-through of the exhibit hall sure looks familiar to this reporter who spent the last four days shooting over 50 interviews.
“Watson and cognitive computing in general can serve significantly in every single arena in which we grapple with multi-layered, data-intensive problems: how to best treat cancers; how to adapt to conditions brought about by climate change; how to quickly and effectively harness new kinds of sustainable energy; how to untangle intractable governmental or community development challenges,” Frase stated. “Now more than ever, visionary thinking will drive an endless and transformative array of applications for Watson and cognitive computing in general, along with whatever comes next.”
“The majority of deep learning frameworks provide good out-of-the-box performance on a single workstation, but scaling across multiple nodes is still a wild, untamed borderland. This discussion follows the story of one researcher trying to make use of a significant compute resource to accelerate learning over a large number of CPUs. Along the way we note how to find good multiple-CPU performance with Theano* and TensorFlow*, how to extend a single-machine model with MPI and optimize its performance as we scale out and up on both Intel Xeon and Intel Xeon Phi architectures.”
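The core pattern behind the multi-node scaling described above is synchronous data parallelism: each worker computes a gradient on its shard of data, an allreduce averages the gradients, and every worker applies the same update. The sketch below is an illustrative, in-process simulation of that pattern (the worker count, gradient values, and `allreduce_mean` helper are assumptions, not code from the talk); a real deployment would replace the helper with MPI_Allreduce via a library such as mpi4py.

```python
# Minimal sketch of synchronous data-parallel SGD. Each "worker" computes a
# local gradient on its data shard; allreduce_mean stands in for the MPI
# allreduce that averages gradients across ranks so all workers stay in sync.

def allreduce_mean(local_grads):
    """Average per-worker gradients element-wise (stand-in for MPI_Allreduce)."""
    n_workers = len(local_grads)
    dim = len(local_grads[0])
    return [sum(g[i] for g in local_grads) / n_workers for i in range(dim)]

def sgd_step(weights, grad, lr=0.1):
    """Apply one gradient-descent update with learning rate lr."""
    return [w - lr * g for w, g in zip(weights, grad)]

# Illustrative gradients, one per worker (four workers, two parameters).
local_grads = [
    [1.0, 2.0],   # worker 0
    [3.0, 4.0],   # worker 1
    [5.0, 6.0],   # worker 2
    [7.0, 8.0],   # worker 3
]

avg = allreduce_mean(local_grads)     # every worker sees the same average
weights = sgd_step([0.0, 0.0], avg)   # so every worker applies the same step
print(avg)      # [4.0, 5.0]
```

Because the averaged gradient is identical on every rank, the model replicas never diverge; the scaling challenge the talk addresses is keeping that allreduce fast as the number of nodes grows.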
In this video from SC16, Figen Ulgen from Intel and Maurizio Davini from the IT Center University of Pisa describe the newly announced Intel® HPC Orchestrator software. “With Intel® HPC Orchestrator, based on the OpenHPC system software stack, you can take advantage of the innovation driven by the open source community – while also getting peace of mind from Intel® support across the HPC system software stack.”
“Maximizing value of data in the kinds of extraordinary environments represented by supercomputing is all about being able to handle extreme, unpredictable storage bandwidth and capacity needs at scale,” said Ken Claffey, vice president and general manager, Seagate HPC systems business. “Seagate’s ClusterStor 300N expands on our proven, engineered systems approach that delivers performance efficiency and value for HPC environments of any size, using a hybrid technology architecture to handle tough workloads at a fraction of the cost of all-flash approaches.”
In this video from the Intel HPC Developer Conference, Prabhat from NERSC describes how high performance computing techniques are being used to scale Machine Learning to over 100,000 compute cores. “Using TB-sized datasets from three science applications: astrophysics, plasma physics, and particle physics, we show that our implementation can construct a kd-tree of 189 billion particles in 48 seconds utilizing ∼50,000 cores.”
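For readers unfamiliar with the data structure mentioned above: a kd-tree recursively partitions points by splitting on alternating axes at the median, enabling fast nearest-neighbor queries. The toy 2-D builder below is an illustrative serial sketch (the sample points are assumptions), not the distributed implementation from the talk, which partitions the point set across compute nodes.

```python
# Toy 2-D kd-tree construction: split on alternating axes (x, y, x, ...)
# at the median point of the current subset.

def build_kdtree(points, depth=0):
    """Recursively build a kd-tree; each node is (point, left, right)."""
    if not points:
        return None
    axis = depth % 2                      # alternate split axis: x then y
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                   # median point becomes this node
    return (pts[mid],
            build_kdtree(pts[:mid], depth + 1),
            build_kdtree(pts[mid + 1:], depth + 1))

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(tree[0])  # root splits on x at the median point: (7, 2)
```

The median-finding step is what dominates at scale, which is why a parallel construction over hundreds of billions of particles is a substantial HPC achievement.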
Couldn’t make it to SC16? Tune in right here on insideHPC to watch all the Nvidia Theater talks this week. “Come join NVIDIA at SC16 to learn how AI supercomputing is breaking open a world of limitless possibilities. This is an era of multigenerational discoveries taking place in a single lifetime. See how other leaders in the field are advancing computational science across domains, get free hands-on training with the newest GPU-accelerated solutions, and connect with NVIDIA experts.”