
Video: The Sierra Supercomputer – Science and Technology on a Mission

Adam Bertsch from LLNL gave this talk at the Stanford HPC Conference. “Our next flagship HPC system at LLNL will be called Sierra. A collaboration between multiple government and industry partners, Sierra and its sister system Summit at ORNL will pave the way towards Exascale computing architectures and predictive capability.”

HACC: Fitting the Universe inside a Supercomputer

Nicholas Frontiere from the University of Chicago gave this talk at the DOE CSGF Program Review meeting. “In response to the plethora of data from current and future large-scale structure surveys of the universe, sophisticated simulations are required to obtain commensurate theoretical predictions. We have developed the Hardware/Hybrid Accelerated Cosmology Code (HACC), capable of sustained performance on powerful and architecturally diverse supercomputers to address this numerical challenge. We will investigate the numerical methods utilized to solve a problem that evolves trillions of particles, with a dynamic range of a million to one.”
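The core problem HACC tackles is gravitational N-body evolution. The sketch below is a toy direct-summation integrator in Python, included only to illustrate the underlying physics; it is not HACC's method. HACC's actual solvers use hybrid particle-mesh techniques (and hardware-specific short-range kernels) to reach trillions of particles, whereas this O(N²) version is only practical for a handful.

```python
def accelerations(positions, masses, G=1.0, soft=1e-3):
    """O(N^2) pairwise gravitational accelerations with Plummer softening.

    Each particle feels the gravity of every other particle; the
    softening length `soft` avoids the singularity at zero separation.
    """
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + soft * soft
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += G * masses[j] * dx[k] * inv_r3
    return acc


def leapfrog_step(positions, velocities, masses, dt):
    """One kick-drift-kick leapfrog step, the standard symplectic
    integrator family used by cosmological N-body codes."""
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        for k in range(3):
            velocities[i][k] += 0.5 * dt * acc[i][k]   # half kick
            positions[i][k] += dt * velocities[i][k]   # drift
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        for k in range(3):
            velocities[i][k] += 0.5 * dt * acc[i][k]   # half kick
    return positions, velocities
```

Because gravitational forces come in equal and opposite pairs, total momentum is conserved by construction, which makes a handy sanity check on any implementation.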

Inside SATURNV – Insights from NVIDIA’s Deep Learning Supercomputer

Phil Rogers from NVIDIA gave this talk at SC17. “In this talk, we describe the architecture of SATURNV, and how we use it every day at NVIDIA to run our deep learning workloads for both production and research use cases. We explore how the NVIDIA GPU Cloud software is used to manage and schedule work on SATURNV, and how it gives us the agility to rapidly respond to business-critical projects. We also present some of the results of our research in operating this unique GPU-accelerated data center.”

Podcast: Supercomputing Better Semiconductors for Solar Energy

Researchers are using XSEDE supercomputers to develop better semiconductors for solar energy. “Dr. Levine models the behavior caused by defects in materials, such as doping bulk silicon to transform it into semiconductors in transistors, LEDs, and solar cells. Levine and his team have used over 975,000 compute hours on the Maverick supercomputer, a dedicated visualization and data analysis resource architected with 132 NVIDIA Tesla K40 “Atlas” GPUs, providing remote visualization and GPU computing to the national community.”

Application Readiness Projects for the Summit Supercomputer Architecture

Dr. Tjerk P. Straatsma from ORNL gave this talk at SC17. “The Center for Accelerated Application Readiness (CAAR) projects are using an Early Access Power8+/Pascal system named SummitDev to prepare for the Power9/Volta system Summit. This presentation highlights achievements on this system, and the experience of the teams that will be a valuable resource for other development teams.”

Bright Computing Release 8.1 Adds New Features for Deep Learning, Kubernetes, and Ceph

Today Bright Computing released version 8.1 of the Bright product portfolio with new capabilities for cluster workload accounting, cloud bursting, OpenStack private clouds, deep learning, AMD accelerators, Kubernetes, Ceph, and a new lightweight daemon for monitoring VMs and non-Bright clustered nodes. “The response to our last major release, 8.0, has been tremendous,” said Martijn de Vries, Chief Technology Officer of Bright Computing. “Version 8.1 adds many new features that our customers have asked for, such as better insight into cluster utilization and performance, cloud bursting, and more flexibility with machine learning package deployment.”

Video: Inside Volta GPUs

Stephen Jones from NVIDIA gave this talk at SC17. “The NVIDIA Volta architecture powers the world’s most advanced data center GPU for AI, HPC, and Graphics. Features like Independent Thread Scheduling and game-changing Tensor Cores enable Volta to simultaneously deliver the fastest and most accessible performance of any comparable processor. Join us for a tour of the features that will make Volta the platform for your next innovation in AI and HPC supercomputing.”
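Volta's Tensor Cores perform matrix multiply-accumulate with half-precision (FP16) inputs and full-precision (FP32) accumulation. The plain-Python sketch below is only an illustration of that mixed-precision principle, not actual CUDA Tensor Core code: inputs are rounded to IEEE-754 half precision, while the running sum stays at higher precision, which preserves accuracy across the long dot products inside a matrix multiply.

```python
import struct


def to_fp16(x):
    """Round a Python float to IEEE-754 half precision and back,
    simulating the FP16 input precision of a Tensor Core operand."""
    return struct.unpack("e", struct.pack("e", x))[0]


def tensor_core_style_dot(a, b):
    """Dot product in the Tensor Core style: half-precision inputs,
    higher-precision accumulation of the products."""
    acc = 0.0  # accumulator kept at full precision
    for x, y in zip(a, b):
        acc += to_fp16(x) * to_fp16(y)
    return acc
```

The rounding step is visible for values like 0.1 that are not exactly representable in FP16, while small integers pass through unchanged; keeping the accumulator at full precision is what prevents those per-element rounding errors from compounding over long sums.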

OpenFabrics Alliance Workshop 2018 – An Emphasis on Fabric Community Collaboration

In this special guest feature, Parks Fields and Paul Grun from the OpenFabrics Alliance write that the upcoming OFA Workshop in Boulder is an excellent opportunity to collaborate on the next generation of network fabrics. “Come join the community in Boulder this year to lend your voice to shaping the direction of fabric technology in big ways or small, or perhaps just to listen and learn about the latest trends coming down the pike, or to pick up tips and tricks to make you more effective in your daily job.”

Stanford HPC Conference Posts Preliminary Agenda

The Stanford HPC Conference has posted its preliminary agenda. The two-day event takes place Feb. 20-21 at Stanford University in California. “Join the Stanford High Performance Computing Center, HPC Advisory Council, its members and experts from all over the world for two days of invited and contributed talks and immersive tutorials on topics of great societal impact and responsibility! February’s open forum brings industry luminaries and leading subject matter experts together to examine emerging and major domains and share in-depth insights on AI, Data Sciences, HPC, Machine Learning and more.”

Analytic Engineering Moves AI GPU Infrastructure to Verne Global in Iceland

Today HPC cloud provider Verne Global announced that Analytic Engineering of Germany is moving its GPU infrastructure to Iceland. “At Verne Global’s campus, we can grow our business faster and apply more compute resources to our programs than at any other data center that we evaluated,” said Tobias Seifert, Co-CEO at Analytic Engineering. “This is a critical competitive advantage to us, as we look to deliver highly complex software solutions that enable our customers to iterate faster through applications driven by AI and Machine Learning.”