This guest post from Intel covers the new technology that was front and center at SC18, including its Cascade Lake advanced performance processors, Intel Optane DC Persistent Memory and more.
Intel has a long history of making important announcements at the annual Supercomputing conference, and this year was no exception.
At SC18, Intel:
- Previewed its next generation Cascade Lake advanced performance processors, a new class of Intel® Xeon® Scalable processors designed to support converged high-performance computing and AI workloads,
- Disclosed performance results for Cascade Lake advanced performance across a range of HPC and AI benchmarks as well as real HPC applications, and
- Highlighted the use of Intel® Optane™ DC Persistent Memory in next generation supercomputers.
According to Rajeeb Hazra, Intel corporate vice president and general manager, Enterprise & Government Business, “The pace of innovation within today’s supercomputers is staggering, and it shows no signs of slowing. From the scientific community to the commercial HPC industry, customers are demanding the ability to handle the convergence of AI and HPC at unprecedented scale. Intel is uniquely positioned to deliver on the promise of this next-generation intelligent infrastructure, opening the door to new research and industrial insights previously unimagined.”
Major technology advances, such as the Cascade Lake processors, make this possible. Available in the first half of 2019, the processors are designed to accelerate applications in the fields of physics, weather modeling, manufacturing, and life and material science.
Cascade Lake advanced performance processors feature Intel® DLBoost™ technology, which improves AI/deep learning inference performance by up to 17 times[1] compared with Intel Xeon Platinum processor measurements at its 2017 launch.
The Texas Advanced Computing Center (TACC) is a good example of how the industry is changing – moving beyond classical high-performance computing and into a new data-centric era, where HPC becomes intertwined with analytics and AI at a massive scale. Intel’s data-centric portfolio, which delivers world-class technologies to move, store and process data, is addressing the insatiable requirements of supercomputing as the industry drives toward the convergence of HPC and AI workloads on a common high-performance infrastructure. For the past four years, more than 90 percent of Top500 supercomputing customers have chosen Intel[2].
The North German Supercomputing Alliance will also adopt Cascade Lake advanced performance processors in its next-generation supercomputer to enable significant computation gains and improved efficiency. Several OEMs have also announced intent to support Cascade Lake advanced performance, including Bull Atos*, Colfax*, Cray*, HPE*, Inspur*, Lenovo*, Megware*, Penguin*, Quanta*, Sugon*, and Supermicro*.
In its Supercomputing 18 booth, Intel engineers and scientists demonstrated key products and technologies designed to propel the convergence of high-performance computing and AI workloads. Demonstrations included:
- Cascade Lake with Intel® DLBoost™: A vector neural network instruction (VNNI) set based on Intel® AVX-512 for faster inference – work that previously took multiple instructions can be done with just one (a minimal intrinsic sketch follows this list).
- Intel® Quantum and Neuromorphic Computing: New computing paradigms with the potential to unlock further performance and power efficiency.
- Cloudified AI with Containers: A scalable AI cloud reference architecture for enterprise, government and next-wave cloud service providers that delivers the performance benefits of Intel® Architecture for the most common AI workloads.
- 3D AI: Optimized deep learning training and inference on 3D volumetric data with 3D convolutions on Intel® Xeon® processor-based servers.
- Intel® Rendering Framework: A leadership modeling and visualization solution (with Intel SSDs and Intel® Omni-Path Architecture) that provides a CPU-based approach to visual analysis of even the largest and most complex data.
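The DL Boost item above refers to the AVX-512 Vector Neural Network Instructions (VNNI), which fold the multiply, widen and accumulate steps of an int8 dot product into a single vpdpbusd instruction. The sketch below is a minimal, hypothetical illustration of the corresponding intrinsic, assuming a Cascade Lake-class CPU and a compiler flag such as -march=cascadelake; the function name and test values are illustrative, not taken from any Intel sample.

```c
/* Minimal sketch of an int8 dot product using AVX-512 VNNI (Intel DL Boost).
 * Assumes AVX512_VNNI hardware and a compiler flag such as -march=cascadelake.
 * Names and values here are illustrative only. */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Dot product of n activations (unsigned int8) and weights (signed int8).
 * For this simplified sketch n must be a multiple of 64. */
static int32_t dot_u8s8(const uint8_t *act, const int8_t *wgt, size_t n)
{
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i a = _mm512_loadu_si512((const void *)(act + i));
        __m512i b = _mm512_loadu_si512((const void *)(wgt + i));
        /* vpdpbusd: multiply 4 adjacent u8*s8 pairs, widen, and add the partial
         * sums into 32-bit accumulators -- one instruction where pre-VNNI
         * AVX-512 needed separate multiply, widen and add steps. */
        acc = _mm512_dpbusd_epi32(acc, a, b);
    }
    return _mm512_reduce_add_epi32(acc);  /* horizontal sum of the 16 lanes */
}

int main(void)
{
    uint8_t act[64];
    int8_t  wgt[64];
    for (int i = 0; i < 64; i++) { act[i] = 1; wgt[i] = 2; }
    printf("%d\n", dot_u8s8(act, wgt, 64));  /* expect 128 */
    return 0;
}
```

Inference frameworks apply the same idea across quantized convolution and matrix-multiply kernels, which is where the cited speedups over fp32 paths come from.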
Intel also announced momentum for the use of Intel Optane DC Persistent Memory in the HPC arena, with TACC’s forthcoming Frontera system being the first supercomputer to adopt the technology. Intel Optane DC Persistent Memory, which will be delivered with Cascade Lake, is an innovative new technology that increases server memory capacity, accelerates application performance and, unlike DRAM, offers the benefits of data persistence in supercomputing.
Intel’s persistent memory offers the potential for in-memory database-style computations, large-memory applications, nearly “instant boot” of full racks and advanced checkpointing at extreme scale. Based on next-generation Intel Xeon Scalable processors and set to be operational in 2019, TACC’s Frontera is expected to be the fastest supercomputer on a university campus. Frontera will allow academic researchers to make important discoveries in all fields of science, from astrophysics to zoology.
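The persistence and checkpointing benefits described above depend on software flushing data out of the CPU caches to the persistent media. A minimal sketch of that pattern is shown below using libpmem from the open-source Persistent Memory Development Kit (PMDK); the mount point, file name, region size and checkpoint string are illustrative assumptions, not details of TACC’s Frontera configuration.

```c
/* Minimal sketch of writing a checkpoint to persistent memory with libpmem
 * (PMDK). Assumes Optane DC Persistent Memory exposed in App Direct mode as a
 * DAX filesystem; the path, size and payload below are hypothetical. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define CKPT_PATH "/mnt/pmem/checkpoint"   /* hypothetical DAX-mounted file */
#define CKPT_SIZE (64UL * 1024 * 1024)     /* 64 MiB region for the example */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map (creating if needed) a file on the persistent-memory device. */
    char *buf = pmem_map_file(CKPT_PATH, CKPT_SIZE, PMEM_FILE_CREATE,
                              0666, &mapped_len, &is_pmem);
    if (buf == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Write application state directly into the mapped region. */
    const char *state = "iteration=42";
    strcpy(buf, state);

    /* Flush CPU caches so the data is durable; unlike DRAM, it survives a
     * power cycle. Fall back to msync if the mapping is not real pmem. */
    if (is_pmem)
        pmem_persist(buf, strlen(state) + 1);
    else
        pmem_msync(buf, strlen(state) + 1);

    pmem_unmap(buf, mapped_len);
    return 0;
}
```

Built with something like `cc ckpt.c -lpmem`, the same code falls back to msync on an ordinary file when no persistent memory is present, which is why the is_pmem check is there; at extreme scale this direct load/store path is what makes checkpointing and “instant boot” style restarts practical.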
Learn more about these exciting new technologies designed to accelerate the convergence of high-performance computing and AI.
Intel, the Intel logo, Xeon, Optane and Intel DLBoost are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com/hpc.
[1] DL Inference: Platform: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to “performance” via intel_pstate driver, 384GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC). Performance measured with: Environment variables: KMP_AFFINITY=’granularity=fine,compact’, OMP_NUM_THREADS=56, CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe, revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with “caffe time --forward_only” command, training measured with “caffe time” command. For “ConvNet” topologies, a dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from ResNet-50 and ConvNet benchmarks (files were updated to use the newer Caffe prototxt format but are functionally equivalent). Intel C++ compiler ver. 17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with “numactl -l”. Tested by Intel as of July 11, 2017, compared to 1-node, 2-socket 48-core Cascade Lake advanced performance processor projections by Intel as of 10/7/2018.
[2] The 52nd edition of the TOP500 list.