
Intel Xeon Processors and Intel Omni-Path Architecture Offer Breakthroughs for Top500 Systems

Last week, the rhetorical one-two punch of the Intel HPC Developer Conference and Supercomputing 2017 offered global HPC aficionados new insights into the direction of advanced HPC technologies, and how those tools will empower the future of discovery and innovation. In case you missed it, here is a breakdown of all the action.

Catch Today’s ‘Artificial Intelligence and The Virtuous Cycle of Compute’ Presentation by Intel

At SC17, Intel is reinforcing its ongoing commitment to HPC and AI technologies. If you want to learn more about Intel's investment in Artificial Intelligence, and its convergence with HPC, be sure to catch the future-focused presentation "Artificial Intelligence and The Virtuous Cycle of Compute" by Pradeep Dubey, Intel Fellow and Director of the Parallel Computing Lab.

Building Fast Data Compression Code with Intel Integrated Performance Primitives (Intel IPP) 2018

Intel® Integrated Performance Primitives (Intel IPP) is a highly optimized, production-ready library for image, signal, and data processing, including lossless data compression/decompression and cryptography. Intel IPP includes more than 2,500 image processing, 1,300 signal processing, 500 computer vision, and 300 cryptography optimized functions for creating digital media, enterprise data, embedded, communications, and scientific, technical, and security applications.

Composable Infrastructure: Composing Greater HPC Breakthroughs

Composable infrastructure allows any number of CPU nodes to dynamically map the optimum number of GPU and NVMe storage resources each node requires to complete a specific task. In this sponsored post, Katie Rivera of One Stop Systems explores the power of GPUs and the potential benefits of composable infrastructure for HPC.

Intel at SC17: Showcasing HPC technologies, luminary speakers, and a virtual motorsports experience

This year at SC17, Intel offers many opportunities to learn about the newest technologies, emerging fields like Artificial Intelligence, and the ways organizations are applying those capabilities to real-world applications.

The Inflection Point of Wattage in HPC, Deep Learning and AI

Magnified in 2017 by machine learning and AI, there is a heightened concern in the HPC community over wattage trends in CPUs, GPUs and emerging neural chips required to meet accelerating computational demands in HPC clusters. In this sponsored post from Asetek, the company examines how high wattage trends in HPC, deep learning and AI might be reaching an inflection point.

Intel Compilers 18.0 Tune for AVX-512 ISA Extensions

Intel Compilers 18.0 and Intel Parallel Studio XE 2018 tuning software fully support the AVX-512 instructions. With wider 512-bit vector registers and added enhancements, the new instructions let the compiler squeeze more vector parallelism out of applications than before. Applications compiled with the -xCORE-AVX512 option will generate an executable that utilizes these new high-performance instructions.

Register Now For the Intel HPC Developer Conference 2017

Powerful technologies today fuel tomorrow’s HPC and High-Performance Data Analytics innovations and help organizations accelerate toward discoveries. Get ahead of the curve at the Intel HPC Developer Conference 2017 in Denver, Colorado on November 11-12.

Visualization in Software using Intel Xeon Phi processors

“Intel has been at the forefront of working with software partners to develop solutions for visualization of data that will scale in the future as many-core systems such as the Intel Xeon Phi processor scale. The Intel Xeon Phi processor is extremely capable of producing visualizations that allow scientists and engineers to interactively view massive amounts of data.”

1000x Faster Deep-Learning at Petascale Using Intel Xeon Phi Processors

A cumulative effort over several years to scale the training of deep-learning neural networks has resulted in the first demonstration of petascale deep-learning training performance, and, further, the delivery of that performance on real science problems. The result reflects the combined efforts of NERSC (National Energy Research Scientific Computing Center), Stanford, and Intel to solve real-world use cases rather than simply report on performance benchmarks.