GPCNeT or GPCNoT?

In this special guest feature, Gilad Shainer from Mellanox Technologies writes that the new GPCNeT benchmark is actually a measure of relative performance under load rather than a measure of absolute performance. “When it comes to evaluating high-performance computing systems or interconnects, there are much better benchmarks available for use. Moreover, the ability to benchmark real workloads is obviously a better approach for determining system or interconnect performance and capabilities. The drawbacks of the GPCNeT benchmark can far outweigh its benefits.”
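
To make the relative-versus-absolute distinction concrete, here is a minimal, hypothetical sketch (not part of GPCNeT itself) of a ratio-based "performance under load" metric. Two networks whose absolute latencies differ by an order of magnitude can receive identical relative scores:

```python
# Hypothetical illustration: a relative "congestion impact" metric,
# computed as loaded latency divided by isolated latency, hides
# absolute performance differences between networks.

def congestion_impact(isolated_latency_us: float, loaded_latency_us: float) -> float:
    """Ratio of latency under congestion to latency in isolation."""
    return loaded_latency_us / isolated_latency_us

# Network A: 1 us in isolation, 2 us under load.
# Network B: 10 us in isolation, 20 us under load.
net_a = congestion_impact(isolated_latency_us=1.0, loaded_latency_us=2.0)
net_b = congestion_impact(isolated_latency_us=10.0, loaded_latency_us=20.0)

print(net_a, net_b)  # 2.0 2.0 -- same relative score, even though
                     # Network B is 10x slower in absolute terms
```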

Video: Data Parallel Deep Learning

Huihuo Zheng from Argonne National Laboratory gave this talk at ATPESC 2019. “The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides two intensive weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

Podcast: Spell startup looks to bring AI to the people

In this AI Podcast, Serkan Piantino from Spell describes how his company is making machine learning easier. “We want to empower and transform the global workforce by making deep learning and artificial intelligence accessible to everyone. We believe that as organizations and individuals harness the power of machine learning, our world will change quickly. Our mission is to make sure the technology driving this change is not mysterious and locked away but open and available for everyone.”

Deep Learning State of the Art in 2020

Lex Fridman gave this talk as part of the MIT Deep Learning series. “This lecture is on the most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general.”

How Deep Learning is Driving New Science

In this special guest feature, Robert Roe from Scientific Computing World looks at the development of deep learning and its impact on scientific applications. “In general, it depends on the use case, but you can think of two cases where AI is useful. The first is solving problems that are hard to solve in a rule-based way, a domain similar to what you find outside science in speech recognition or image recognition.”

Optimizing in a Heterogeneous World is (Algorithms x Devices)

In this guest article, our friends at Intel discuss why CPUs prove better for some important deep learning workloads, and why you should keep your GPUs handy. Heterogeneous computing ushers in a world where we must consider permutations of algorithms and devices to find the best platform solution. No single device will win all the time, so we need to constantly reassess our choices and assumptions, as the sketch below illustrates.
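
Here is a minimal sketch of what that constant reassessment can look like in practice (the algorithm and device names below are placeholders for illustration, not Intel's methodology):

```python
import time
from itertools import product

def run(algorithm: str, device: str, n: int = 200_000) -> float:
    """Time one (algorithm, device) pairing and return elapsed seconds.

    Stand-in workload; a real harness would dispatch the named
    kernel to the named device (CPU, GPU, etc.).
    """
    start = time.perf_counter()
    _ = sum(i * i for i in range(n))
    return time.perf_counter() - start

algorithms = ["direct_conv", "winograd_conv", "fft_conv"]
devices = ["cpu", "gpu"]

# Evaluate every algorithm x device permutation and keep the fastest.
timings = {(a, d): run(a, d) for a, d in product(algorithms, devices)}
best = min(timings, key=timings.get)
print(f"best pairing: {best} ({timings[best]:.4f}s)")
```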

Data Center Transformation: Why a Workload-Driven and Scalable Architecture Matters

In this whitepaper, QCT (Quanta Cloud Technology) explains why a workload-driven and scalable architecture matters for data center transformation. The company is offering its QCT Platform on Demand (QCT POD) solution, which empowers enterprises to kickstart their transformation journey. It combines advanced technology with a unique user experience to help enterprises reach better performance and gain more insights. With flexibility and scalability, QCT POD enables enterprises to address a broader range of HPC, deep learning, and data analytics demands across various applications.

Progress and Challenges for the Use of Deep Learning to Improve Weather Forecasts

Peter Dueben from ECMWF gave this talk at the UK HPC Conference. “I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate model components of weather forecast models and to enhance usability of weather forecasts. I will then talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future.”

Deep Learning for Natural Language Processing – Choosing the Right GPU for the Job

In this new whitepaper from our friends over at Exxact Corporation, we take a look at the important topic of deep learning for Natural Language Processing (NLP) and choosing the right GPU for the job. Focus is given to the latest developments in neural networks and deep learning systems, in particular the neural network architecture called the transformer. Researchers have shown that transformer networks are particularly well suited for parallelization on GPU-based systems.
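
The intuition behind that suitability: the core of a transformer, scaled dot-product attention, reduces to dense matrix multiplications over the whole sequence at once, exactly the operation GPUs excel at. A minimal NumPy sketch (illustrative only, not taken from the whitepaper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Every step is a dense matrix operation over the full sequence,
    which is why transformers parallelize so well on GPUs.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq, seq) matmul
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (seq, d_model) matmul

seq_len, d_model = 8, 16
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq_len, d_model)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (8, 16)
```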

Designing Scalable HPC, Deep Learning, Big Data, and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the UK HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, Big Data and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models by taking into account support for multi-core systems (Xeon, ARM and OpenPower), high-performance networks, and GPGPUs (including GPUDirect RDMA).”