
Big Compute 20 Conference Announces Speaker Lineup

Today the Big Compute Conference announced the sponsors and speakers for its inaugural event, to be held February 11-12, 2020 in San Francisco. The two-day conference will feature business leaders and scientists describing how they are transforming their industries with access to unlimited cloud compute. “Big Compute 20 brings together thought leaders in the aerospace, automotive, AI, biotech, medical, academic, technology, and chemical industries. In addition to inspiring talks, the event will feature workshops, networking, panels and a hackathon sprint, all focused on the freedom to think big.”

Deep Learning for Predicting Severe Weather

Researchers from Rice University have introduced a data-driven framework that formulates extreme weather prediction as a pattern recognition problem, employing state-of-the-art deep learning techniques. “In this paper, we show that with deep learning you can do analog forecasting with very complicated weather data — there’s a lot of promise in this approach.”
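The “analog forecasting” idea the researchers mention — find past weather patterns similar to the current one and reuse their observed outcomes as the forecast — can be sketched in a few lines of plain Python. The toy data and distance measure here are purely illustrative and are not from the Rice study; the deep-learning version replaces the hand-coded similarity measure with learned pattern recognition:

```python
# Toy analog forecasting: predict tomorrow's weather by finding the most
# similar historical pattern (the "analog") and reusing its known outcome.
import math

def analog_forecast(history, current_pattern):
    """history: list of (pattern, next_day_outcome) pairs."""
    best_outcome, best_dist = None, math.inf
    for pattern, outcome in history:
        dist = math.dist(pattern, current_pattern)  # similarity measure
        if dist < best_dist:
            best_dist, best_outcome = dist, outcome
    return best_outcome

# Each pattern is a tiny grid of pressure anomalies; outcomes are labels.
history = [
    ([1.0, 0.2, -0.5], "storm"),
    ([0.1, 0.0, 0.1], "calm"),
    ([0.9, 0.3, -0.4], "storm"),
]
print(analog_forecast(history, [0.95, 0.25, -0.45]))  # closest analogs are stormy
```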

Atos completes acquisition of Maven Wave

Today Atos announced the completion of its acquisition of Maven Wave, a U.S.-based cloud and technology consulting firm specializing in digital transformation solutions for large enterprises. With this acquisition, Atos reinforces its global leadership in cloud solutions for applications, data analytics, and machine learning on hybrid and multi-cloud platforms. “Together, Maven Wave and Atos create the strongest Google Cloud services portfolio offered anywhere, providing customers proven expertise and knowledge in executing their digital transformation and delivering outstanding experiences to their customers,” commented Maven Wave Founders Brian Farrar, Jason Lee and Jeff Lee.

Samsung Launches Flashbolt High Bandwidth Memory 2E

Today Samsung Electronics launched ‘Flashbolt,’ its third-generation High Bandwidth Memory 2E (HBM2E). The new 16-gigabyte (GB) HBM2E is designed to maximize HPC system performance, helping system manufacturers advance their supercomputers, AI-driven data analytics, and state-of-the-art graphics systems in a timely manner. “With the introduction of the highest performing DRAM available today, we are taking a critical step to enhance our role as the leading innovator in the fast-growing premium memory market,” said Cheol Choi, executive vice president of Memory Sales & Marketing at Samsung Electronics.

How NVIDIA Enables Scientific Research for HPC Developers

“Researchers, scientists, and developers are advancing science by accelerating their high performance computing applications on NVIDIA GPUs using specialized libraries, directives, and language-based programming models. From computational science to AI, CUDA-X HPC, OpenACC, and CUDA are GPU-accelerating applications to deliver groundbreaking scientific discoveries. And popular languages like C, C++, Fortran, and Python are being used to develop, optimize, and deploy these applications.”

Call for Papers: Deep Learning on Supercomputers workshop

The Deep Learning on Supercomputers workshop has issued its Call for Papers. The event takes place June 25 as part of ISC 2020 in Frankfurt, Germany. “The workshop provides a forum for practitioners working on any and all aspects of DL for scientific research in the High Performance Computing context to present their latest research results and development, deployment, and application experiences. The general theme of this workshop series is the intersection of DL and HPC, while the theme of this particular workshop is centered around the applications of deep learning methods in scientific research: novel uses of deep learning methods, e.g., convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), and reinforcement learning (RL), for both natural and social science research, and innovative applications of deep learning in traditional numerical simulation.”

Tachyum Processor to power 2021 AI/HPC Supercomputer

Today semiconductor company Tachyum announced that its Prodigy Processor AI/HPC Reference Design will be used in a supercomputer at an unnamed customer site in 2021. According to the company, the Prodigy processor slated for 2021 delivery “handily outperforms the fastest processors while consuming one-tenth the electrical power, and it is one-third the cost and outperforms GPUs and TPUs on neural net training and inference workloads.”

DIII-D Researchers Use Machine Learning to Steer Fusion Plasmas

Researchers at the DIII-D National Fusion Facility achieved a scientific first this month when they used machine learning calculations to automatically prevent fusion plasma disruptions in real time, while simultaneously optimizing the plasma for peak performance. The new experiments are the first of what they expect to be a wave of research in which machine learning–augmented controls could broaden the understanding of fusion plasmas. The work may also help deliver reliable, peak performance operation of future fusion reactors.
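The closed-loop idea described above — a machine-learning model scoring disruption risk in real time while a controller pushes performance as high as the risk budget allows — can be caricatured in a few lines of Python. Everything here (the stand-in risk model, the gains, the threshold) is invented for illustration and is in no way the DIII-D control algorithm:

```python
# Illustrative risk-aware control loop (NOT the DIII-D code).
# A stand-in "ML model" maps drive power to a disruption risk score; the
# controller raises power for performance but backs off whenever the
# predicted risk crosses a safety threshold.

RISK_LIMIT = 0.5

def predicted_risk(drive_power):
    # Stand-in for a trained model: risk grows with drive power.
    return min(1.0, drive_power / 10.0)

def control_step(drive_power):
    if predicted_risk(drive_power) >= RISK_LIMIT:
        return drive_power * 0.9   # back off to avoid a disruption
    return drive_power + 0.5       # otherwise push toward peak performance

power = 1.0
for _ in range(50):
    power = control_step(power)

# The loop settles near the highest power that stays within the risk budget.
print(round(power, 2), predicted_risk(power))
```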

Efficient Model Selection for Deep Neural Networks on Massively Parallel Processing Databases

Frank McQuillan from Pivotal gave this talk at FOSDEM 2020. “In this session we will present an efficient way to train many deep learning model configurations at the same time with Greenplum, a free and open source massively parallel database based on PostgreSQL. The implementation involves distributing data to the workers that have GPUs available and hopping model state between those workers, without sacrificing reproducibility or accuracy.”
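The core pattern in the talk — evaluating many model configurations concurrently by spreading them across parallel workers and keeping the best — can be sketched with Python's standard library. This is a generic illustration of parallel model selection, not the actual Greenplum/MADlib implementation (which runs inside the database and hops model state between GPU-equipped segments); the quadratic “loss” stands in for real training so the example is runnable:

```python
# Generic parallel model-selection sketch (not the Greenplum internals):
# each worker scores one hyperparameter configuration, and the
# lowest-loss configuration wins.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def evaluate(config):
    lr, depth = config
    # Toy stand-in for "train the model and return validation loss".
    loss = (lr - 0.01) ** 2 + (depth - 4) ** 2 * 1e-4
    return loss, config

def model_selection(grid):
    # One configuration per worker; all configurations run concurrently.
    with ThreadPoolExecutor(max_workers=len(grid)) as pool:
        results = list(pool.map(evaluate, grid))
    return min(results)  # (best_loss, best_config)

grid = list(product([0.001, 0.01, 0.1], [2, 4, 8]))
best_loss, best_config = model_selection(grid)
print(best_config)
```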

vScaler Launches AI Reference Architecture

A new AI reference architecture from vScaler describes how to simplify the configuration and management of software and storage in a cost-effective, easy-to-use environment. “vScaler – an optimized cloud platform built with AI and Deep Learning workloads in mind – provides you with a production-ready environment with integrated Deep Learning application stacks, RDMA-accelerated fabric and optimized NVMe storage, eliminating the administrative burden of setting up these complex AI environments manually.”