Video: Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze Research Breakthroughs

Nick Nystrom from the Pittsburgh Supercomputing Center gave this talk at the Stanford HPC Conference. “The Artificial Intelligence and Big Data group at Pittsburgh Supercomputing Center converges Artificial Intelligence and high performance computing capabilities, empowering research to grow beyond prevailing constraints. The Bridges supercomputer is a uniquely capable resource for empowering research by bringing together HPC, AI and Big Data.”

Video: Machine Learning for Weather Forecasts

Peter Dueben from ECMWF gave this talk at the Stanford HPC Conference. “I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate model components of weather forecast models and to enhance usability of weather forecasts. I will then talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future.”

Interview: Fighting the Coronavirus with TACC Supercomputers

In this video from the Stanford HPC Conference, Dan Stanzione from the Texas Advanced Computing Center describes how their powerful supercomputers are helping to fight the coronavirus pandemic. “In times of global need like this, it’s important not only that we bring all of our resources to bear, but that we do so in the most innovative ways possible,” said TACC Executive Director Dan Stanzione. “We’ve pivoted many of our resources towards crucial research in the fight against COVID-19, but supporting the new AI methodologies in this project gives us the chance to use those resources even more effectively.”

The Incorporation of Machine Learning into Scientific Simulations at LLNL

Katie Lewis from Lawrence Livermore National Laboratory gave this talk at the Stanford HPC Conference. “Today, data science, including machine learning, is one of the fastest growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness.”

How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of-core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented.”
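At the heart of the data-parallel approach mentioned in the abstract, each worker computes gradients on its own data shard, then all workers average them (typically via an MPI allreduce collective) so every rank applies an identical update. The minimal sketch below illustrates that averaging step in plain Python; `allreduce_mean` is a hypothetical stand-in for the MPI collective, not the speaker's actual implementation.

```python
# Data-parallel DNN training, in miniature: each simulated "rank" holds a
# gradient computed on its own data shard; an allreduce-style average makes
# every rank apply the same weight update.

def allreduce_mean(per_rank_grads):
    """Average same-shaped gradient vectors across ranks.
    Hypothetical stand-in for an MPI_Allreduce collective."""
    n = len(per_rank_grads)
    return [sum(vals) / n for vals in zip(*per_rank_grads)]

def sgd_step(weights, grads, lr=0.1):
    """Apply one SGD update using the globally averaged gradient."""
    return [w - lr * g for w, g in zip(weights, grads)]

# Two simulated ranks with gradients from different data shards.
per_rank_grads = [[1.0, 2.0], [3.0, 4.0]]
avg = allreduce_mean(per_rank_grads)   # [2.0, 3.0]
weights = sgd_step([0.5, 0.5], avg)
```

In a real MPI-driven setup the averaging would be a single collective call over the communicator rather than a Python loop, which is where the co-design work between the deep learning stack and the MPI library pays off.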

Video: Major Market Shifts in IT

Shahin Khan from OrionX gave this talk at the Stanford HPC Conference. “We will discuss the digital infrastructure of the future enterprise and the state of these trends. OrionX works with clients on the impact of Digital Transformation on them, their customers, and their messages. Generally, they want to track, in one place, trends like IoT, 5G, AI, Blockchain, and Quantum Computing. And they want to know what these trends mean, how they affect each other, when they demand action, and how to formulate and execute an effective plan. If that describes you, we can help.”

Surprising Mechanical Lessons About Predicting Innovation Success

Thomas Thurston from WR Hambrecht Ventures gave this talk at the Stanford HPC Conference. “For a century, corporate innovators, entrepreneurs and venture capitalists have had to rely on their instincts when deciding which strategies to pursue, and where to invest for growth. Now data science is turning human instinct on its head with powerful decision technologies that are giving rise to counter-intuitive discoveries about market behavior and predicting innovation success. Learn how venture capital firm WR Hambrecht is using big data and machine learning to better identify growth opportunities, predict new business success and rewrite the rules of innovation.”

Video: HPC and AI Market Update from Intersect360 Research

Addison Snell from Intersect360 Research gave this talk at the Stanford HPC Conference. “As the global shutdown continues, inquiring minds want to know what the effects will be on the HPC and AI market. Intersect360 Research has released a new report guiding its clients that the market for HPC products and services will fall significantly short of its previous 2020 forecast, due to the global COVID-19 pandemic. The newly-revised forecast predicts the overall worldwide HPC market will be flat to down 12% in 2020.”

High Schoolers from Michigan Step Up to High Performance Computing at Stanford

In this video from the Stanford HPC Conference, Surya Sanjay and Tanay Bhangale from Troy High School in Michigan describe how their proposal on combating prion-based disease resulted in their first experiences with HPC at the Stanford High Performance Computing Center. “Using a model of the infectious murine prion and many mutagenized variants of the benign murine prion as PrPX, created with Rosetta’s ab initio software, we plan to perform molecular dynamics (MD) simulations with GROMACS 2020 to determine the success of models based on the described criteria.”

Update on the HPC AI Advisory Council

Setting the stage for the Stanford HPC Conference this week, Gilad Shainer describes how the HPC AI Advisory Council fosters innovation in the high performance computing community. “The HPC-AI Advisory Council’s mission is to bridge the gap between high-performance computing and Artificial Intelligence use and its potential, bring the beneficial capabilities of HPC and AI to new users for better research, education, innovation and product manufacturing, bring users the expertise needed to operate HPC and AI systems, provide application designers with the tools needed to enable parallel computing, and to strengthen the qualification and integration of HPC and AI system products.”