Interview: HPC Thought Leaders Looking Forward to SC19 in Denver

In this special guest feature, SC19 General Chair Michela Taufer catches up with Sunita and Jack Dongarra to discuss the way forward for the November conference in Denver. “By augmenting our models and our ability to do simulation, HPC enables us to understand and do things so much faster than we could in the past – and it will only get better in the future.”

NVIDIA T4 GPUs Come to Google Cloud for High Speed Machine Learning

Today Google Cloud announced Public Beta availability of NVIDIA T4 GPUs for Machine Learning workloads. Starting today, NVIDIA T4 GPU instances are available in the U.S. and Europe, as well as several other regions across the globe, including Brazil, India, Japan and Singapore. “The T4 is the best GPU in our product portfolio for running inference workloads. Its high-performance characteristics for FP16, INT8, and INT4 allow you to run high-scale inference with flexible accuracy/performance tradeoffs that are not available on any other accelerator.”
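As a rough illustration of the accuracy/performance tradeoff mentioned above, the minimal Python sketch below compares FP32 and FP16 inference on a GPU-backed instance such as a Cloud VM with a T4 attached. The choice of PyTorch, torchvision, the ResNet-50 model and the batch size are illustrative assumptions, not part of Google's announcement.

```python
# Minimal sketch: FP32 vs FP16 inference on a GPU instance
# (e.g. a cloud VM with an NVIDIA T4 attached).
# Assumes PyTorch and torchvision are installed on the instance.
import torch
import torchvision.models as models

# Load a pretrained model and move it to the GPU
model = models.resnet50(pretrained=True).eval().cuda()
x = torch.randn(8, 3, 224, 224, device="cuda")

# FP32 baseline
with torch.no_grad():
    out_fp32 = model(x)

# FP16: the T4's Tensor Cores accelerate half-precision math,
# trading a small amount of numerical precision for throughput.
model_fp16 = model.half()
with torch.no_grad():
    out_fp16 = model_fp16(x.half())

print(out_fp32.dtype, out_fp16.dtype)  # torch.float32, torch.float16
```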

Video: Five Things to Know About SUSE Linux Enterprise for HPC

In this video, Jay Kruemcke from SUSE presents: Five Things to Know About SLE HPC. “SUSE Linux Enterprise for High Performance Computing provides a parallel computing platform for high performance data analytics workloads such as artificial intelligence and machine learning. Fueled by the need for more compute power and scale, businesses around the world today are recognizing that a high performance computing infrastructure is vital to supporting the analytics applications of tomorrow.”

Call for Papers: High Performance Machine Learning Workshop (HPML2019) in Cyprus

The 2nd High Performance Machine Learning Workshop (HPML2019) has issued its Call for Papers. The May 14 workshop will be co-located with CCGrid (the 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing) in Cyprus. “This workshop is intended to bring together the Machine Learning (ML), Artificial Intelligence (AI) and High Performance Computing (HPC) communities. In recent years, much progress has been made in Machine Learning and Artificial Intelligence in general. This progress required heavy use of high performance computers and accelerators. Moreover, ML and AI have become a “killer application” for HPC and, consequently, driven much research in this area as well. These facts point to an important cross-fertilization that this workshop intends to nourish.”

Video: Overview of DDN’s Accelerated, Any-Scale AI

In this video from the DDN booth at SC18, Kurt Kuckein from DataDirect Networks presents an overview of DDN A³I (Accelerated, Any-Scale AI). “Engineered from the ground up for the AI-enabled data center, DDN A³I solutions are fully optimized to accelerate AI applications and streamline DL workflows for greatest productivity. DDN A³I solutions make AI-powered innovation easy, with faster performance, effortless scale, and simplified operations—all backed by the data at scale experts.”

Video: How DDN Powers HPC & AI Applications

In this video from SC18 in Dallas, James Coomer from DDN describes how the company powers HPC and Machine Learning applications. “Organizations around the world are leveraging DDN’s people, technology, performance and innovation to achieve their greatest visions and make revolutionary insights and discoveries! Designed, optimized and right-sized for Commercial HPC, Higher Education and Exascale Computing, our full range of DDN products and solutions are changing the landscape of HPC and delivering the most value with the greatest operational efficiency. Meet with our team of technologists to see how DDN is delivering the most optimized and efficient storage solutions for HPC, AI, and Hybrid Cloud.”

IBM Publishes Compendium of AI Research Papers

Today IBM Research released a 2018 retrospective and blog essay by Dr. Dario Gil, COO of IBM Research, that provides a sneak peek into the future of AI. “We have curated a collection of one hundred IBM Research AI papers we have published this year, authored by talented researchers and scientists from our twelve global Labs. These scientific advancements are core to our mission to invent the next set of fundamental AI technologies that will take us from today’s “narrow” AI to a new era of “broad” AI, where the potential of the technology can be unlocked across AI developers, enterprise adopters and end-users.”

How Lenovo is Helping Build AI Solutions to Solve the World’s Toughest Challenges

In this video from SC18 in Dallas, Madhu Matta from Lenovo describes how the company is driving HPC & AI technologies for science, research, and enterprises across the globe. “Lenovo cares about solving real-world problems, and working with researchers is one of the best ways to gather insights from those whose daily work involves high-computing data and analytics to do just that.”

Intel Pushes the Envelope at SC18

Intel has a long history of making important announcements at the annual Supercomputing conference, and this year was no exception. This guest post from Intel covers the new technology that was front and center from Intel at SC18, including its Cascade Lake advanced performance processors, Intel Optane Persistent Memory and more. Learn more about these new technologies designed to accelerate the convergence of high-performance computing and AI.

New Paper: A First Step towards Quantum-Powered Machine Learning

“Given the remarkable performance improvements over many generations of classical microprocessors [7] and the impressive algorithmic improvements in mixed-integer programming tools like Gurobi [29] over the past several decades, it is surprising that D-Wave’s third generation hardware and our straightforward algorithm can be competitive at all. In the series of four chips that D-Wave has released, the number of qubits has approximately doubled from one generation to the next while the number of couplers per qubit has remained essentially unchanged. D-Wave’s fifth generation chip is expected to at least double the number of couplers per qubit [30, 3]. If this comes to fruition, it would likely have a significant, positive impact on the performance of the D-Wave for the problems we consider here.”