TYAN adds 2nd Generation Intel Xeon Scalable Processors

Today TYAN announced support for the extended 2nd generation Intel Xeon Scalable processor lineup (Cascade Lake-SP Refresh). TYAN’s full line of HPC, cloud computing and storage server platforms continues to offer advanced performance and hardware-enhanced security to enterprises, cloud and hyperscale data centers. “Customers from the data center to the enterprise are facing the challenge of getting more value from enormous amounts of data. The demand requires IT infrastructure migration to faster I/O throughput, shorter data process period, and higher storage capacity,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation’s TYAN Business Unit. “Thanks to Intel’s improvements in CPU clocks, cores and cache, the 2nd generation Intel Xeon Scalable processors let our customers enjoy performance jumps while running cloud computing, HPC and storage applications.”

Intel and AWS Team for HPC Performance in the Cloud

In this video from SC19, Trish Damkroger from Intel and Ian Colle from AWS describe how the two companies collaborate to deliver the best possible application performance in the Cloud. “HPC on AWS, powered by Intel Xeon Scalable processors, offers the most elastic, scalable cloud infrastructure to run HPC applications, and the range of services makes it easier than ever to get started quickly, securely, and cost-effectively.”

Converging Workflows Pushing Converged Software onto HPC Platforms

Are we witnessing the convergence of HPC, big data analytics, and AI? Once, these were separate domains, each with its own system architecture and software stack, but the data deluge is driving their convergence. Traditional big science HPC is looking more like big data analytics and AI, while analytics and AI are taking on the flavor of HPC.

Podcast: Accelerating AI Inference with Intel Deep Learning Boost

In this Chip Chat podcast, Jason Kennedy from Intel describes how Intel Deep Learning Boost works as an embedded AI accelerator in the CPU designed to speed deep learning inference workloads. “The key to Intel DL Boost – and its performance kick – is augmentation of the existing Intel Advanced Vector Extensions 512 (Intel AVX-512) instruction set. This innovation significantly accelerates inference performance for deep learning workloads optimized to use vector neural network instructions (VNNI). Image classification, language translation, object detection, and speech recognition are just a few examples of workloads that can benefit.”
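To make the VNNI idea concrete, here is a minimal scalar sketch of what a single 32-bit lane of the VNNI `VPDPBUSD` instruction computes: four unsigned-8-bit by signed-8-bit products fused into one 32-bit accumulate. Real hardware performs this across 16 lanes of a 512-bit register at once; the function name and values below are illustrative, not an Intel API.

```python
def vpdpbusd_lane(acc: int, a_bytes: list[int], b_bytes: list[int]) -> int:
    """Scalar model of one 32-bit lane of VNNI VPDPBUSD:
    acc += sum of four unsigned-int8 x signed-int8 products."""
    assert len(a_bytes) == len(b_bytes) == 4
    for u, s in zip(a_bytes, b_bytes):
        assert 0 <= u <= 255        # e.g. quantized activations: unsigned int8
        assert -128 <= s <= 127     # e.g. quantized weights: signed int8
        acc += u * s
    # The hardware accumulator is 32 bits wide and wraps; model that too.
    acc &= 0xFFFFFFFF
    return acc - 0x100000000 if acc >= 0x80000000 else acc

# One lane of the INT8 dot products that dominate convolution inner loops.
result = vpdpbusd_lane(0, [10, 20, 30, 40], [1, -2, 3, -4])  # -> -100
```

This per-lane fused multiply-accumulate is why INT8-quantized inference workloads such as image classification and object detection see large speedups on DL Boost-capable Xeons.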

Video: New Cascade Lake Xeons to Speed AI with Intel Deep Learning Boost

This week at the Data-Centric Innovation Summit, Intel laid out their near-term Xeon roadmap and plans to augment their AVX-512 instruction set to boost machine learning performance. “This dramatic performance improvement and efficiency – up to twice as fast as the current generation – is delivered by using a single instruction to handle INT8 convolutions for deep learning inference workloads which required three separate AVX-512 instructions in previous generation processors.”
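The three-instruction sequence the quote refers to can be sketched in scalar form as well. Pre-VNNI processors compute an INT8 dot product with `VPMADDUBSW` (pairwise u8 x s8 multiply-add with signed 16-bit saturation), `VPMADDWD` against a vector of ones (widen and sum to 32 bits), and `VPADDD` (accumulate). The sketch below models one 32-bit lane; function names are illustrative, and note that the 16-bit saturation in step 1 means results can differ from the fused VNNI path for large-magnitude inputs.

```python
def saturate_i16(x: int) -> int:
    """Clamp to the signed 16-bit range, as VPMADDUBSW does."""
    return max(-32768, min(32767, x))

def vpmaddubsw_pair(u8_pair, s8_pair):
    """Step 1 (VPMADDUBSW): two u8 x s8 products summed with i16 saturation."""
    return saturate_i16(u8_pair[0] * s8_pair[0] + u8_pair[1] * s8_pair[1])

def vpmaddwd_pair(w0: int, w1: int) -> int:
    """Step 2 (VPMADDWD vs. a vector of ones): widen two i16 values and sum."""
    return w0 * 1 + w1 * 1

def int8_dot_legacy(acc: int, a_bytes, b_bytes) -> int:
    """acc += dot(a, b) over four int8 pairs via the three-step sequence."""
    lo = vpmaddubsw_pair(a_bytes[0:2], b_bytes[0:2])
    hi = vpmaddubsw_pair(a_bytes[2:4], b_bytes[2:4])
    widened = vpmaddwd_pair(lo, hi)
    return acc + widened            # step 3: VPADDD into the accumulator

result = int8_dot_legacy(0, [10, 20, 30, 40], [1, -2, 3, -4])  # -> -100
```

For inputs that stay clear of 16-bit saturation, this matches the single fused VNNI instruction, which is where the quoted up-to-2x throughput claim for INT8 convolutions comes from: one instruction issued instead of three.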