Using AI to Identify Brain Tumors with Federated Learning

Researchers at Intel Labs and the Perelman School of Medicine are using a privacy-preserving technique called federated learning to train AI models that identify brain tumors. With federated learning, research institutions can collaborate on deep learning projects without sharing patient data. “AI shows great promise for the early detection of brain tumors, but it will require more data than any single medical center holds to reach its full potential,” said Jason Martin, principal engineer at Intel Labs.
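To make the idea concrete, here is a minimal federated-averaging sketch in Python. It is illustrative only, not Intel's actual implementation: the `Site` class, its synthetic data, and the linear model are all hypothetical stand-ins. The point is the pattern: each institution trains on its own data, and only model weights cross site boundaries.

```python
# Minimal federated-averaging sketch (illustrative; not Intel's code).
# Each institution trains on its own private data and shares only
# model weights -- raw patient records never leave the site.
import numpy as np

class Site:
    """Hypothetical institution holding private data and a local linear model."""
    def __init__(self, seed):
        r = np.random.default_rng(seed)
        self.X = r.normal(size=(200, 8))                  # private local features
        self.y = self.X @ np.arange(8.0) + r.normal(size=200)

    def train_local(self, w_global, lr=0.01, epochs=5):
        w = w_global.copy()                               # start from the shared model
        for _ in range(epochs):
            grad = self.X.T @ (self.X @ w - self.y) / len(self.y)
            w -= lr * grad                                # uses only local data
        return w                                          # only weights are shared

sites = [Site(seed) for seed in range(3)]
w_global = np.zeros(8)
for _ in range(20):                                       # federated rounds
    updates = [s.train_local(w_global) for s in sites]
    w_global = np.mean(updates, axis=0)                   # average weights, never data
```

Real deployments layer secure aggregation and differential privacy on top of this loop, but the collaboration-without-data-sharing structure is the same.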

Video: Machine Learning for Weather Forecasts

Peter Dueben from ECMWF gave this talk at the Stanford HPC Conference. “I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate model components of weather forecast models and to enhance usability of weather forecasts. I will then talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future.”
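One of the approaches mentioned, emulating a model component with a neural network, can be sketched as follows. This is a toy illustration under stated assumptions, not ECMWF's pipeline: `toy_physics_scheme` is a hypothetical stand-in for an expensive parameterization, and the data are synthetic.

```python
# Sketch: train a neural network to emulate an expensive model component,
# then call the cheap surrogate at forecast time. Purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_physics_scheme(X):
    """Hypothetical stand-in for a costly physics routine (e.g., radiation)."""
    return np.tanh(X[:, :30]) - 0.5 * X[:, 30:]

X = rng.normal(size=(5000, 60))       # toy column state at 60 vertical levels
y = toy_physics_scheme(X)             # reference tendencies to learn from

emulator = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300,
                        random_state=0)
emulator.fit(X, y)                    # trained once, offline
dT = emulator.predict(X[:1])          # fast surrogate call inside the model
```

The appeal is speed: a trained emulator can replace a routine that dominates runtime, at the cost of the stability and accuracy questions the talk describes.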

XSEDE Supercomputers Simulate Tsunamis from Volcanic Events

Researchers at the University of Rhode Island are using XSEDE supercomputers to show that high-performance computer modeling can accurately simulate tsunamis from volcanic events. Such models could lead to early-warning systems that could save lives and help minimize catastrophic property damage. “As our understanding of the complex physics related to tsunamis grows, access to XSEDE supercomputers such as Comet allows us to improve our models to reflect that, whereas if we did not have access, the amount of time it would take to run such simulations would be prohibitive.”

Interview: Fighting the Coronavirus with TACC Supercomputers

In this video from the Stanford HPC Conference, Dan Stanzione, Executive Director of the Texas Advanced Computing Center (TACC), describes how its powerful supercomputers are helping to fight the coronavirus pandemic. “In times of global need like this, it’s important not only that we bring all of our resources to bear, but that we do so in the most innovative ways possible,” said Stanzione. “We’ve pivoted many of our resources towards crucial research in the fight against COVID-19, but supporting the new AI methodologies in this project gives us the chance to use those resources even more effectively.”

A Data-Centric Approach to Extreme-Scale Ab initio Dissipative Quantum Transport Simulations

Alexandros Ziogas from ETH Zurich gave this talk at Supercomputing Frontiers Europe. “The computational efficiency of a state-of-the-art ab initio quantum transport (QT) solver, capable of revealing the coupled electro-thermal properties of atomically-resolved nano-transistors, has been improved by up to two orders of magnitude through a data-centric reorganization of the application. The approach yields coarse- and fine-grained data-movement characteristics that can be used for performance and communication modeling, communication-avoidance, and dataflow transformations.”
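The core idea of a data-centric transformation is to reorganize the computation around data movement rather than arithmetic. The schematic NumPy example below stands in for that idea; it is not the actual QT solver code, just a toy showing how eliminating an intermediate removes a round trip through memory.

```python
# Schematic data-movement illustration (toy code, not the QT solver).
import numpy as np

x = np.random.rand(5_000_000)

# Two-pass version: materializes the intermediate 'tmp', so an x-sized
# array makes an extra trip through memory.
tmp = np.exp(x)
y_two_pass = tmp * tmp + 1.0

# Transformed version: the same result in one pass over x, with the
# intermediate eliminated algebraically (exp(x) * exp(x) == exp(2x)).
y_fused = np.exp(2.0 * x) + 1.0

assert np.allclose(y_two_pass, y_fused)
```

At extreme scale, the same principle applies to communication: analyzing where data moves between nodes, not just what is computed, is what enables the communication-avoiding schedules the talk describes.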

Slidecast: The Sad State of Affairs in HPC Storage (But there is light at the end of the tunnel)

In this video, Robert Murphy from Panasas describes the current state of the HPC storage market and how Panasas is stepping up with high-performance products that deliver economical performance without risk. “According to a recent study published by Hyperion Research, total cost of ownership (TCO) now rivals performance as a top criterion for purchasing HPC storage systems. Newly retooled with COTS hardware and a unique architecture, Panasas delivers surprising performance at a lower TCO than competitive solutions.”

The Incorporation of Machine Learning into Scientific Simulations at LLNL

Katie Lewis from Lawrence Livermore National Laboratory gave this talk at the Stanford HPC Conference. “Today, data science, including machine learning, is one of the fastest-growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness.”

Xilinx Establishes FPGA Adaptive Compute Clusters at Leading Universities

“We will build novel, experimental FPGA-centric compute systems and develop domain-specific compilers and system tools targeting high-performance computing. We will focus on several important application domains, including AI with deep learning, large-scale graph processing, and computational genomics.”

Supercomputing the San Andreas Fault with CyberShake

With help from DOE supercomputers, a USC-led team expands models of the fault system beneath its feet, aiming to predict its outbursts. For their 2020 INCITE work, SCEC scientists and programmers will have access to 500,000 node-hours on Argonne’s Theta supercomputer, which delivers as much as 11.69 petaflops. The team is using Theta “mostly for dynamic earthquake ruptures,” Goulet says. “That is using physics-based models to simulate and understand details of the earthquake as it ruptures along a fault, including how the rupture speed and the stress along the fault plane changes.”

How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of-core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented.”
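The MPI-driven, data-parallel pattern at the heart of these solutions can be sketched in a few lines with mpi4py: each rank computes gradients on its own data shard, and an allreduce averages them so all ranks take the same step. This is a toy linear model under illustrative hyperparameters, not the group's actual software stack.

```python
# Sketch of MPI-driven data-parallel training: gradient allreduce.
# Toy linear model; illustrative only, not the talk's actual code.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

rng = np.random.default_rng(rank)            # each rank holds its own shard
X = rng.normal(size=(256, 10))
y = rng.normal(size=(256, 1))

w = np.zeros((10, 1))
comm.Bcast(w, root=0)                        # identical initial weights everywhere

for step in range(100):
    grad_local = X.T @ (X @ w - y) / len(X)  # gradient on the local shard
    grad = np.empty_like(grad_local)
    comm.Allreduce(grad_local, grad, op=MPI.SUM)
    w -= 0.01 * (grad / size)                # average gradient across all ranks
```

Launched as, say, `mpirun -np 4 python train.py`, every rank sees the averaged gradient, so the model stays synchronized. Production designs like those in the talk layer GPU-aware communication, overlap, and model parallelism on top of this same collective pattern.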