

Video: IBM Powers AI at the GPU Technology Conference

In this video from the GPU Technology Conference, Sumit Gupta from IBM describes how IBM is powering production-level AI and Machine Learning. “IBM PowerAI provides the easiest on-ramp for enterprise deep learning. PowerAI helped users break deep learning training benchmarks AlexNet and VGGNet thanks to the world’s only CPU-to-GPU NVIDIA NVLink interface. See how new feature development and performance optimizations will advance the future of deep learning in the next twelve months, including NVIDIA NVLink 2.0, leaps in distributed training, and tools that make it easier to create the next deep learning breakthrough.”

Video: NVIDIA Showcases Programmable Acceleration of Multiple Domains with One Architecture

In this video from GTC 2019 in Silicon Valley, Marc Hamilton from NVIDIA describes how accelerated computing is powering AI, computer graphics, data science, robotics, automotive, and more. “Well, we always make so many great announcements at GTC. But one of the traditions Jensen has now started a few years ago is coming up with a new acronym to really make our messaging for the show very, very simple to remember. So PRADA stands for Programmable Acceleration Multiple Domains One Architecture. And that’s really what the GPU has become.”

Mellanox HDR 200G InfiniBand Speeds Machine Learning with NVIDIA

Today Mellanox announced that its HDR 200G InfiniBand with the “Scalable Hierarchical Aggregation and Reduction Protocol” (SHARP) technology has set new performance records, doubling deep learning operations performance. The combination of Mellanox In-Network Computing SHARP with NVIDIA V100 Tensor Core GPU technology and the NVIDIA Collective Communications Library (NCCL) delivers leading efficiency and scalability to deep learning and artificial intelligence applications.
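The collective operation at the heart of this announcement is allreduce: every rank contributes its local gradient buffer and every rank receives the element-wise sum, which NCCL implements on GPUs and SHARP offloads into the network fabric. A minimal pure-Python sketch of the allreduce semantics (not the NCCL API itself):

```python
# Sketch of the allreduce collective that NCCL accelerates and SHARP
# offloads in-network: each rank contributes a buffer of gradients and
# each rank receives the element-wise sum across all ranks.
# This simulates the semantics only; real code would call NCCL via
# a framework such as PyTorch or Horovod.

def allreduce_sum(rank_buffers):
    """Element-wise sum over all ranks; every rank gets the full result."""
    n = len(rank_buffers[0])
    total = [sum(buf[i] for buf in rank_buffers) for i in range(n)]
    return [list(total) for _ in rank_buffers]

# Three ranks, each holding two local gradient values:
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(allreduce_sum(grads))  # every rank ends with [9.0, 12.0]
```

In data-parallel deep learning training this sum is performed once per batch, which is why moving the reduction into the switch hardware (SHARP) can roughly double collective throughput.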

Agenda Posted for ExaComm 2018 Workshop in Frankfurt

The ExaComm 2018 workshop has posted its Speaker Agenda. Held in conjunction with ISC 2018, the Fourth International Workshop on Communication Architectures for HPC, Big Data, Deep Learning and Clouds at Extreme Scale takes place June 28 in Frankfurt. “The goal of this workshop is to bring together researchers and software/hardware designers from academia, industry and national laboratories who are involved in creating network-based computing solutions for extreme scale architectures. The objectives of this workshop will be to share the experiences of the members of this community and to learn the opportunities and challenges in the design trends for exascale communication architectures.”

Brain Research: A Pathfinder for Future HPC

Dirk Pleiter from the Jülich Supercomputing Centre gave this talk at the NVIDIA GPU Technology Conference. “One of the biggest and most exciting scientific challenges requiring HPC is to decode the human brain. Many of the research topics in this field require scalable compute resources or the use of advanced data analytics methods (including deep learning) for processing extreme scale data volumes. GPUs are a key enabling technology and we will thus focus on the opportunities for using these for computing, data analytics and visualization. GPU-accelerated servers based on POWER processors are here of particular interest due to the tight integration of CPU and GPU using NVLink and the enhanced data transport capabilities.”

Video: Deep Learning for the Enterprise with POWER9

Sumit Gupta from IBM gave this talk at H2O World. “From chat bots, to recommendation engines, to Google Voice and Apple Siri, AI has begun to permeate our lives. We will demystify what AI is, present the difference between machine learning and deep learning, why the huge interest now, show some fun use cases and demos, and then discuss use cases of how deep learning based AI methods can be used to garner insights from data for enterprises. We will also talk about what IBM is doing to make deep learning and machine learning more accessible and useful to a broader set of data scientists, and how to build out the right hardware infrastructure.”

Video: IBM Launches POWER9 Nodes for the World’s Fastest Supercomputers

In this video from SC17, Adel El Hallak from IBM unveils the POWER9 servers that will form the basis of the world’s fastest CORAL supercomputers coming to ORNL and LLNL. “In addition to arming the world’s most powerful supercomputers, IBM POWER9 Systems are designed to enable enterprises around the world to scale unprecedented insights, driving scientific discovery and enabling transformational business outcomes across every industry.”

Video: Introducing the 125 Petaflop Sierra Supercomputer

In this video, researchers from Lawrence Livermore National Laboratory describe Sierra, LLNL’s next-generation supercomputer. “The IBM-built advanced technology high-performance system is projected to provide four to six times the sustained performance and be at least seven times more powerful than LLNL’s current most advanced system, Sequoia, with a 125 petaFLOP/s peak. At approximately 11 megawatts, Sierra will also be about five times more power efficient than Sequoia.”
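The quoted figures can be sanity-checked with a quick back-of-the-envelope calculation using only the numbers in the article (125 petaFLOP/s peak at roughly 11 megawatts, and the stated 5x power-efficiency gain over Sequoia); the implied Sequoia figure below is simply that ratio worked out, not an independently sourced number:

```python
# Back-of-the-envelope check of the Sierra figures quoted above.
sierra_peak_pflops = 125.0   # peak performance, petaFLOP/s (from article)
sierra_power_mw = 11.0       # approximate power draw, MW (from article)

# 1 PFLOP/s = 1e6 GFLOP/s; 1 MW = 1e6 W
sierra_gflops_per_watt = sierra_peak_pflops * 1e6 / (sierra_power_mw * 1e6)
print(f"Sierra: ~{sierra_gflops_per_watt:.1f} GFLOP/s per watt")  # ~11.4

# Article states Sierra is about 5x more power efficient than Sequoia,
# which implies (hypothetically, from that ratio alone):
implied_sequoia = sierra_gflops_per_watt / 5
print(f"Implied Sequoia: ~{implied_sequoia:.1f} GFLOP/s per watt")  # ~2.3
```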

For HPC and Deep Learning, GPUs are here to stay

In this special guest feature from Scientific Computing World, David Yip, HPC and Storage Business Development at OCF, provides his take on the place of GPU technology in HPC. “Using GPUs in the HPC datacenter in place of CPUs can dramatically increase the power requirements needed, but if your computational performance goes through the roof, then I’d argue it’s a trade-off worth making.”

No speed limit on NVIDIA Volta with rise of AI

In this special guest feature, Brad McCredie from IBM writes that the launch of Volta GPUs from NVIDIA heralds a new era of AI. “We’re excited about the launch of NVIDIA’s Volta GPU accelerators. Together with the NVIDIA NVLink ‘information superhighway’ at the core of our IBM Power Systems, it provides what we believe to be the closest thing to an unbounded platform for those working in machine learning and deep learning and those dealing with very large data sets.”