Excelero Powers AI as a Service with Shared NVMe at InstaDeep

“InstaDeep offers a pioneering AI as a Service solution enabling organizations of any size to leverage the benefits of AI and Machine Learning without the time, costs and expertise required to run their own AI stacks. Excelero’s NVMesh, in turn, allows InstaDeep to access the low-latency, high-bandwidth performance that is essential for running customer AI and ML workloads efficiently – and gain the scalability vital to InstaDeep’s own rapid growth.”

CoolIT Systems Launches Liquid Cooling Solution for Intel Server System S9200WK

Today CoolIT Systems announced an integrated liquid cooling solution to support the Intel Server System S9200WK. “The Intel Server System S9200WK uses CoolIT’s innovative Rack DLC coldplate solution, featuring a patented Split-Flow design. The liquid cooling solution for this 2U, four-node server manages heat from the recently announced dual Intel Xeon Platinum 9200 processors, voltage regulators, and memory.”

DDN Moves Closer to the Edge with Nexenta Acquisition

Today DDN announced its intent to acquire Nexenta, the market leader in Software Defined Storage for 5G and Internet of Things (IoT). “Our clients benefit from the flexibility and performance of Nexenta’s robust SDS solutions and platform-agnostic strategy, which provide great differentiation for the HPC, AI, and high-performance data analytics (HPDA) verticals we serve. With escalating demands from our clients for ultra-scalable compute and data storage platforms, we look forward to the exciting developments which will result from this new relationship.”

‘AI on the Fly’: Moving AI Compute and Storage to the Data Source

The impact of AI is just starting to be realized across a broad spectrum of industries. Tim Miller, Vice President of Strategic Development at One Stop Systems (OSS), highlights a new approach — ‘AI on the Fly’ — in which specialized high-performance accelerated computing resources for deep learning training are deployed in the field, close to the data source. Moving AI computation to the data is another important step in realizing the full potential of AI.

Podcast: Accelerating AI Inference with Intel Deep Learning Boost

In this Chip Chat podcast, Jason Kennedy from Intel describes how Intel Deep Learning Boost works as an embedded AI accelerator in the CPU designed to speed deep learning inference workloads. “The key to Intel DL Boost – and its performance kick – is augmentation of the existing Intel Advanced Vector Extensions 512 (Intel AVX-512) instruction set. This innovation significantly accelerates inference performance for deep learning workloads optimized to use vector neural network instructions (VNNI). Image classification, language translation, object detection, and speech recognition are just a few examples of workloads that can benefit.”
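As a rough illustration of what that AVX-512 augmentation looks like at the code level, the sketch below uses the AVX512-VNNI intrinsic _mm512_dpbusd_epi32 to accumulate an int8 dot product, the core operation in quantized inference. The function name and loop structure are illustrative only and are not drawn from Intel’s materials; it assumes a CPU and compiler with AVX512-VNNI support (e.g., compile with -mavx512f -mavx512vnni).

#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative int8 dot-product kernel of the kind Intel DL Boost accelerates.
 * Assumes n is a multiple of 64 and the CPU reports the avx512_vnni flag. */
int32_t dot_u8s8_vnni(const uint8_t *a, const int8_t *b, size_t n)
{
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512((const void *)(a + i)); /* 64 unsigned 8-bit activations */
        __m512i vb = _mm512_loadu_si512((const void *)(b + i)); /* 64 signed 8-bit weights */
        /* VNNI: multiply 4 adjacent u8*s8 pairs and accumulate into 32-bit lanes
         * in a single instruction, replacing a multi-instruction AVX-512 sequence. */
        acc = _mm512_dpbusd_epi32(acc, va, vb);
    }
    return _mm512_reduce_add_epi32(acc); /* horizontal sum of the 16 lanes */
}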

E4 teams with bluechip Computer AG in Germany for HPC

Today Italian HPC and AI specialist E4 Computer Engineering announced a strategic partnership with bluechip Computer AG, a leading German manufacturer of server, storage, workstation and client systems. Under the agreement, the two companies will co-market bluechip servers and storage along with consulting, installation, and support services for HPC and AI applications. “Thanks to the strategic partnership with E4 Computer Engineering, we will be able to implement HPC solutions in a much more efficient way in the DACH region. Both companies will benefit from their complementary expertise, which will enable bluechip to deliver excellent German-language support to its customers in this area,” said Bogdan Kruszewski, Head of Product Marketing at bluechip Computer AG.

New Funding and DARPA Grant to Propel Optical Interconnects at Ayar Labs

Today Ayar Labs announced that the company has secured additional funding to fuel its growth as it drives to productize its TeraPHY optical I/O chiplets and SuperNova multi-wavelength lasers in 2019. The company aims to disrupt the traditional performance, cost, and efficiency curves of the semiconductor and computing industries by driving a “1000x improvement” in interconnect bandwidth density at 10x lower power.

Bill Dally from NVIDIA presents: Accelerating AI

Bill Dally from NVIDIA gave this talk at the Matroid Scaled Machine Learning Conference. “The world of computing is experiencing an incredible change with the introduction of deep learning and AI. Deep learning relies on GPU acceleration, both for training and inference, and NVIDIA delivers it everywhere you need it—to data centers, desktops, laptops, the cloud, and the world’s fastest supercomputers.”

Liqid Enables Multi-Fabric Support for Composable Infrastructure

Today Liqid announced unified multi-fabric support for composability across all major fabric types, including PCIe Gen 3, PCIe Gen 4, Ethernet, and InfiniBand, while laying the foundation for the upcoming Gen-Z specification. “Providing Ethernet and InfiniBand composability in addition to PCIe is a natural extension of our expertise in fabric management and aligns with our mission to facilitate data center disaggregation,” said Sumit Puri, CEO and Co-founder, Liqid.

Long Live Posix – HPC Storage and the HPC Datacenter

Robert Triendl from DDN gave this talk at the Swiss HPC Conference. “The Portable Operating System Interface (POSIX) is a family of standards specified by the IEEE Computer Society for maintaining compatibility between operating systems. Since it was developed over 30 years ago, storage has changed dramatically. To improve the IO performance of applications, many users have called for a relaxation of POSIX IO that could lead to the development of new storage mechanisms to improve not only application performance, but also management, reliability, portability, and scalability.”
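For concreteness, the sketch below shows the kind of POSIX IO calls whose strict semantics are at issue: once write()/pwrite() returns, the data must be visible to any subsequent read from any process, a strong consistency guarantee that parallel file systems must honor and that relaxed-IO proposals would loosen. The file name is illustrative and the example is not taken from the talk.

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* Standard POSIX IO path: open, positional write, flush to stable storage. */
    int fd = open("checkpoint.dat", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char buf[] = "rank 0 checkpoint block";
    if (pwrite(fd, buf, sizeof buf, 0) < 0) { perror("pwrite"); return 1; }

    /* fsync() forces the written data to stable storage before returning. */
    if (fsync(fd) < 0) { perror("fsync"); return 1; }

    close(fd);
    return 0;
}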