In this podcast, Shahin Khan from OrionX joins the Radio Free HPC team for a look at the new TOP500 list of the world’s fastest supercomputers. “The 93 Petaflop Sunway TaihuLight supercomputer is not a one-time effort from China. Not only do they now have the top two supercomputers, China also hosts the world’s largest state-sponsored Student Cluster Competition, with over 170 university teams. The takeaway from today: China is serious about supercomputing, they are in it for the long haul, and they are willing to write the checks to make it happen.”
A new machine called Sunway TaihuLight in China is the fastest supercomputer on the planet. Announced today with the release of the latest TOP500 list, the 93 Petaflop machine sports over 10.6 million compute cores. “The latest list marks the first time since the inception of the TOP500 that the U.S. is not home to the largest number of systems. With a surge in industrial and research installations registered over the last few years, China leads with 167 systems and the U.S. is second with 165. China also leads the performance category, thanks to the No. 1 and No. 2 systems.”
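A quick back-of-the-envelope check puts those two figures in perspective. The snippet below is purely illustrative; both numbers are taken from the article, and the division simply shows the per-core throughput they imply:

```python
# Rough per-core throughput implied by the figures quoted above:
# ~93 petaflops of performance spread across ~10.6 million cores.
rmax_flops = 93e15   # ~93 PFLOPS, as reported on the TOP500 list
cores = 10.6e6       # ~10.6 million compute cores

per_core_gflops = rmax_flops / cores / 1e9
print(f"~{per_core_gflops:.1f} GFLOPS per core")  # ~8.8 GFLOPS per core
```

That relatively modest per-core number reflects the design of the machine: performance comes from an enormous core count rather than from exceptionally fast individual cores.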
“Accelerated computing is the only path forward to keep up with researchers’ insatiable demand for HPC and AI supercomputing,” said Ian Buck, vice president of accelerated computing at NVIDIA. “Deploying CPU-only systems to meet this demand would require large numbers of commodity compute nodes, leading to substantially increased costs without proportional performance gains. Dramatically scaling performance with fewer, more powerful Tesla P100-powered nodes puts more dollars into computing instead of vast infrastructure overhead.”
Beth Wingate from the University of Exeter presented this talk at the PASC16 conference in Switzerland. “For weather or climate models to achieve exascale performance on next-generation heterogeneous computer architectures they will be required to exploit on the order of million- or billion-way parallelism. This degree of parallelism far exceeds anything possible in today’s models even though they are highly optimized. In this talk I will discuss the mathematical issue that leads to the limitations in space- and time-parallelism for climate and weather prediction models – oscillatory stiffness in the PDE.”
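The “oscillatory stiffness” mentioned in the abstract is commonly written in the following generic form (a sketch of the standard setting, not Wingate’s exact notation):

```latex
\frac{\partial u}{\partial t} + \frac{1}{\varepsilon}\, L\, u = N(u),
\qquad 0 < \varepsilon \ll 1,
```

where $L$ is a skew-Hermitian linear operator generating fast oscillations on time scales of order $\varepsilon$, and $N$ is the slow nonlinear part. Standard explicit time stepping must resolve the fast oscillations, forcing $\Delta t = O(\varepsilon)$; it is this time-step restriction, rather than spatial resolution, that ultimately caps how much parallelism in time such models can exploit.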
“Supercomputers are key to the Cancer Moonshot. These exceptionally high-powered machines have the potential to greatly accelerate the development of cancer therapies by finding patterns in massive datasets too large for human analysis. Supercomputers can help us better understand the complexity of cancer development, identify novel and effective treatments, and help elucidate patterns in vast and complex data sets that advance our understanding of cancer.”
“For SC16, we’re beginning a three-year thrust that will expand state-of-the-practice discussions with content throughout the conference tracks that emphasizes the innovation happening in operations, tools, and software at today’s HPC centers. I’ve spent my career so far in HPC operations of one kind or another, and I know firsthand that there is an incredible wealth of knowledge and expertise that gets developed in supercomputing centers. SC is well established as the place to share academic results; we believe SC can have a large impact on our community by also giving developers and researchers with a more operational focus a forum to share their results.”
Management pressure for cost containment can be answered by improving software maintenance procedures and automating many of the repetitive activities that have traditionally been handled manually. This lowers Total Cost of Ownership (TCO), boosts IT productivity, and increases return on investment (ROI).
“Today, scalable compute and storage systems suffer from data bottlenecks that limit research and product development and constrain application services. ConnectX-5 will help unleash business potential with faster, more effective, real-time data processing and analytics. With its smart offloading, ConnectX-5 will enable dramatic increases in CPU, GPU, and FPGA performance that will enhance effectiveness and maximize the return on data center investments.”
The National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign is helping change the way genetic medicine is researched and practiced in Africa. Members of the Blue Waters team recently made it possible to discover genomic variants in over 300 deeply sequenced human samples to help construct a genotyping chip specific for […]
For universities and colleges with a traditional infrastructure, adding new programs and applications is a huge endeavor. The IT staff must determine whether all of the hardware meets the installation requirements and how to deploy these new programs on different models of desktops and notebooks. With a VDI environment that uses simple boot-up devices connecting to virtual desktops on the school’s server, the IT staff doesn’t have to worry about the age and capability of each individual PC when installing new software.