Adaptive Deep Reuse Technique cuts AI Training Time by more than 60 Percent

North Carolina State University researchers have developed a technique that reduces training time for deep learning networks by more than 60 percent without sacrificing accuracy, accelerating the development of new artificial intelligence applications. “One of the biggest challenges facing the development of new AI tools is the amount of time and computing power it takes to train deep learning networks to identify and respond to the data patterns that are relevant to their applications. We’ve come up with a way to expedite that process, which we call Adaptive Deep Reuse. We have demonstrated that it can reduce training times by up to 69 percent without accuracy loss.”
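
While the announcement does not spell out the mechanism, the core idea behind deep reuse is to exploit similarity among the input vectors a layer sees during training: similar vectors are grouped, the layer's computation is performed once per group, and the result is reused for every member. The NumPy sketch below only illustrates that reuse idea on a single matrix multiply; the function name, the simple k-means-style grouping, and the fixed cluster count are illustrative assumptions, not NC State's implementation, which (per the technique's name) adapts its reuse strategy as training progresses.

```python
import numpy as np

def deep_reuse_matmul(X, W, n_clusters):
    """Approximate X @ W by clustering similar rows of X and computing
    the product once per cluster centroid, then reusing that result for
    every row in the cluster (a conceptual sketch, not the paper's method)."""
    rng = np.random.default_rng(0)
    centroids = X[rng.choice(len(X), n_clusters, replace=False)]
    # Assign each input row to its nearest centroid (one refinement pass).
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    for k in range(n_clusters):
        members = X[labels == k]
        if len(members):
            centroids[k] = members.mean(axis=0)
    # Only n_clusters matrix-vector products instead of len(X).
    Y_centroid = centroids @ W
    # Reuse each centroid's output for all rows mapped to it.
    return Y_centroid[labels]

X = np.random.rand(1024, 256).astype(np.float32)   # e.g. unrolled conv patches
W = np.random.rand(256, 64).astype(np.float32)
Y_approx = deep_reuse_matmul(X, W, n_clusters=64)
print(np.abs(Y_approx - X @ W).mean())             # approximation error
```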

Spectra Logic and Arcitecta team up for Genomics Data Management

Spectra Logic is teaming up with Arcitecta to tackle the massive datasets used in the life sciences. The two companies will showcase their joint solutions at the Bio-IT World conference this week in Boston. “Addressing the needs of the life sciences market with reliable data storage lies at the heart of the Spectra and Arcitecta relationship,” said Spectra CTO Matt Starr. “This joint solution enables customers to better manage their data and metadata by optimizing multiple storage targets, retrieving data efficiently and tracking content and resources.”

Quobyte Distributed File System adds TensorFlow Plug-In for Machine Learning

Today Quobyte announced that the company’s Data Center File System is the first distributed file system to offer a TensorFlow plug-in, providing increased throughput and linear scalability for ML-powered applications, enabling faster training across larger data sets while achieving higher-accuracy results. “By providing the first distributed file system with a TensorFlow plug-in, we are ensuring as much as a 30 percent faster throughput performance improvement for ML training workflows, helping companies better meet their business objectives through improved operational efficiency,” said Bjorn Kolbeck, Quobyte CEO.
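
The announcement does not show the plug-in's API, so the sketch below is only a generic TensorFlow input pipeline reading TFRecord shards from a shared file system path; the /quobyte mount point and file pattern are hypothetical, and any throughput gain from the actual plug-in would come from how it delivers data to TensorFlow, not from this pipeline itself.

```python
import tensorflow as tf

# Hypothetical mount point for a Quobyte volume holding TFRecord shards.
DATA_DIR = "/quobyte/training-volume/imagenet"

files = tf.data.Dataset.list_files(DATA_DIR + "/train-*.tfrecord")
dataset = (
    files.interleave(tf.data.TFRecordDataset,
                     cycle_length=8,                  # read several shards in parallel
                     num_parallel_calls=tf.data.AUTOTUNE)
         .shuffle(10_000)
         .batch(256)
         .prefetch(tf.data.AUTOTUNE)                  # overlap I/O with training
)
```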

Wolfram Research Releases Mathematica Version 12 for Advanced Data Science

Today Wolfram Research released Version 12 of Mathematica for advanced data science and computational discovery. “After three decades of continuous R&D since the introduction of Mathematica Version 1.0, Wolfram Research has released its most powerful software offering with Version 12 of the Wolfram Language, the symbolic backbone of Mathematica. The latest version includes over a thousand new functions and features for multiparadigm data science, automated machine learning, and blockchain manipulation for modern software development and technical computing.”

Wave Computing Launches TritonAI 64 Platform for High-Speed Inferencing

Today AI startup Wave Computing announced its new TritonAI 64 platform, which integrates a triad of powerful technologies into a single, future-proof intellectual property (IP) licensable solution. Wave’s TritonAI 64 platform delivers 8-to-32-bit integer-based support for high-performance AI inferencing at the edge now, with bfloat16 and 32-bit floating point-based support for edge training in the future.
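
As a rough illustration of what 8-bit integer-based inferencing means (not a depiction of Wave's hardware or tools), the sketch below quantizes float32 weights and activations to int8, performs the multiply-accumulate in integer arithmetic, and rescales the result; the simple max-based scaling scheme is an illustrative assumption.

```python
import numpy as np

def quantize(x):
    """Map float32 values to int8 with a single per-tensor scale factor."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

W = np.random.randn(64, 128).astype(np.float32)   # layer weights
a = np.random.randn(128).astype(np.float32)       # input activations

Wq, w_scale = quantize(W)
aq, a_scale = quantize(a)

# Integer multiply-accumulate with int32 accumulation, as integer MAC units do.
y_int = Wq.astype(np.int32) @ aq.astype(np.int32)
y = y_int * (w_scale * a_scale)                   # dequantize back to float

print(np.abs(y - W @ a).max())                    # small quantization error
```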

Podcast: Enterprises go HPC at GPU Technology Conference

In this podcast, the Radio Free HPC team looks at news from the GPU Technology Conference. “Dan has been attending GTC since well before it became the big and important conference that it is today. We get a quick update on what was covered: the long keynote, automotive and robotics, the Mellanox acquisition, how a growing fraction of enterprise applications will be AI.”

AMD Powers Corona Cluster for HPC Analytics at Livermore

Lawrence Livermore National Lab has deployed a 170-node HPC cluster from Penguin Computing. Based on AMD EPYC processors and Radeon Instinct GPUs, the new Corona cluster will support the NNSA Advanced Simulation and Computing (ASC) program at an unclassified site dedicated to partnerships with American industry. “Even as we do more of our computing on GPUs, many of our codes have serial aspects that need really good single-core performance. That lines up well with AMD EPYC.”

Arm A64fx and Post-K: A Game-Changing CPU & Supercomputer

Satoshi Matsuoka from RIKEN gave this talk at the HPC User Forum in Santa Fe. “Post-K is the flagship next-generation national supercomputer being developed by RIKEN and Fujitsu in collaboration. Post-K will have hyperscale-class resources in one exascale machine, with well more than 100,000 nodes of server-class A64fx many-core Arm CPUs, realized through an extensive co-design process involving the entire Japanese HPC community.”

Scaling Deep Learning for Scientific Workloads on the #1 Summit Supercomputer

Jack Wells from ORNL gave this talk at the GPU Technology Conference. “HPC centers have traditionally been configured for simulation workloads, but deep learning is increasingly being applied alongside simulation on scientific datasets. These frameworks do not always fit well with job schedulers, large parallel file systems, and MPI backends. We’ll share benchmarks comparing native compiled code versus containers on Power systems like Summit, as well as best practices for deploying learning frameworks and models on HPC resources for scientific workflows.”

Podcast: Intel to power Anthos on Google Cloud Platform

In this Chip Chat podcast, Paul Nash from Google Cloud Platform discusses the industry trends impacting IaaS and how Google Cloud Platform and Intel are driving innovation in the cloud. “The two companies will collaborate on Anthos, a new reference design based on the 2nd-Generation Intel Xeon Scalable processor and an optimized Kubernetes software stack that will deliver increased workload portability to customers who want to take advantage of hybrid cloud environments. Intel will publish the production design as an Intel Select Solution, as well as a developer platform.”