UMass Dartmouth Speeds Research with Hybrid Supercomputer from Microway

UMass Dartmouth’s powerful new cluster from Microway affords the university five times the compute performance its researchers enjoyed previously, with over 85% more total memory and over four times the aggregate memory bandwidth. “The UMass Dartmouth cluster reflects a hybrid design to appeal to a wide array of the campus’ workloads. Over 50 nodes include Intel Xeon Scalable processors, DDR4 memory, SSDs, and Mellanox ConnectX-5 EDR 100Gb InfiniBand. A subset of systems also features NVIDIA V100 GPU accelerators. Equally important is a second subset of IBM Power Systems AC922 compute nodes with POWER9 processors and 2nd-generation NVLink.”

Video: FPGAs and Machine Learning

James Moawad and Greg Nash from Intel gave this talk at ATPESC 2019. “FPGAs are a natural choice for implementing neural networks, as they can handle different algorithms using computing, logic, and memory resources in the same device. They offer faster performance compared to competing implementations because the user can hard-code operations into the hardware. Software developers can use the OpenCL device C-level programming standard to target FPGAs as accelerators to standard CPUs without having to deal with hardware-level design.”
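
To give a flavor of that OpenCL host-side flow, here is a minimal sketch assuming an FPGA board exposed as an OpenCL accelerator device and a kernel already compiled offline into a bitstream (the file name my_kernels.aocx and the kernel name vec_add below are placeholders for illustration, not code from the talk):

```cpp
// Minimal OpenCL host sketch: offload a vector-add kernel to an FPGA device.
// Assumes an offline-compiled bitstream ("my_kernels.aocx" is a placeholder)
// containing: __kernel void vec_add(__global const float*, __global const float*, __global float*)
#include <CL/cl.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

int main() {
    // Pick the first platform and the first accelerator (FPGA) device on it.
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, nullptr);

    cl_int err;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    // Load the offline-compiled bitstream and build a program from the binary.
    std::ifstream f("my_kernels.aocx", std::ios::binary);
    std::vector<unsigned char> bin((std::istreambuf_iterator<char>(f)),
                                    std::istreambuf_iterator<char>());
    const unsigned char* bin_ptr = bin.data();
    size_t bin_size = bin.size();
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &bin_size,
                                                &bin_ptr, nullptr, &err);
    clBuildProgram(prog, 1, &device, "", nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "vec_add", &err);

    // Host data and device buffers.
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), a.data(), &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), b.data(), &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float),
                               nullptr, &err);

    // Set arguments, launch one work-item per element, and read back results.
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dc);
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, n * sizeof(float), c.data(),
                        0, nullptr, nullptr);
    std::printf("c[0] = %f\n", c[0]);
    return 0;
}
```

The key difference from a GPU flow is that the kernel is typically compiled offline into a board-specific bitstream and loaded with clCreateProgramWithBinary rather than built from source at run time.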

Full Roundup: SC19 Booth Tour Videos from insideHPC

Now that SC19 is behind us, it’s time to gather our booth tour videos in one place. Throughout the course of the show, insideHPC talked to dozens of HPC innovators showcasing the very latest in hardware, software, and cooling technologies.

Podcast: Advancing Deep Learning with Custom-Built Accelerators

In this Chip Chat podcast, Carey Kloss from Intel outlines the architecture and potential of the Intel Nervana NNP-T. He gets into major issues like memory and how the architecture was designed to avoid problems like becoming memory-locked, how the accelerator supports existing software frameworks like PaddlePaddle and TensorFlow, and what the NNP-T means for customers who want to keep an eye on power usage and lower TCO.

Theta and the Future of Accelerator Programming at Argonne

Scott Parker from Argonne gave this talk at ATPESC 2019. “Designed in collaboration with Intel and Cray, Theta is a 6.92-petaflops (Linpack) supercomputer based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.”

Reimagining HPC Compute and Storage Architecture with Intel Optane Technology

Andrey Kudryavtsev from Intel gave this talk at the DDN User Group in Denver. “Intel Optane DC SSDs are proven technologies that have helped data centers remove storage bottlenecks and accelerate application performance for the past two years. Now, the launch of Intel Optane persistent memory and the integrated support of 2nd Generation Intel Xeon Scalable processors are further shrinking the gap between storage and DRAM. Together, Intel Optane DC persistent memory and Intel Optane SSDs deliver value across four crucial vectors.”
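
As a rough illustration of how persistent memory "shrinks the gap" between storage and DRAM, the sketch below memory-maps a file on a DAX-mounted persistent-memory filesystem so the application reads and writes it with ordinary loads and stores. The path /mnt/pmem/data is a placeholder, and production code would typically use a persistent-memory library rather than raw POSIX calls:

```cpp
// Sketch: treat persistent memory as directly byte-addressable storage by
// memory-mapping a file on a DAX-mounted filesystem.
// "/mnt/pmem/data" is a placeholder path for illustration only.
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const size_t size = 64 * 1024 * 1024;  // 64 MiB region
    int fd = open("/mnt/pmem/data", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, size) != 0) { perror("ftruncate"); return 1; }

    // With filesystem DAX, loads and stores to this mapping go straight to
    // the persistent media with no page cache in between.
    void* base = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    // Ordinary memory writes become durable once flushed; a real pmem
    // application would use cache-line flush primitives (for example via
    // PMDK) instead of a whole-range msync.
    std::strcpy(static_cast<char*>(base), "hello, persistent memory");
    msync(base, size, MS_SYNC);

    std::printf("wrote: %s\n", static_cast<char*>(base));
    munmap(base, size);
    close(fd);
    return 0;
}
```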

Mapping Disasters with Artificial Intelligence

In this Chip Chat podcast, Intel’s Alexei Bastidas describes the technology behind making maps from satellite images. “Good maps rely on good data. The Red Cross’ Missing Maps Project leverages AI to provide governments and aid workers with the tools they need to navigate disasters like hurricanes and floods. Using a wealth of satellite imagery and machine learning, the Missing Maps Project is working to make villages, roads, and bridges more accessible in the wake of devastation.”

Podcast: The Evolution of Neuromorphic Computing

Intel’s Mike Davies describes Intel’s Loihi, a neuromorphic research chip that contains over 130,000 “neurons.” “To be sure, neuromorphic computing isn’t biomimicry or about reconstructing the brain in silicon. Rather, it’s about understanding the processes and structures of neuroscience and using those insights to inform research, engineering, and technology.”
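
To make the "neuron" terminology concrete, here is a generic leaky integrate-and-fire model in a few lines. It is a textbook abstraction for intuition only, not Intel's Loihi hardware model or its SDK, and all constants are arbitrary:

```cpp
// Generic leaky integrate-and-fire neuron: the membrane potential leaks over
// time, accumulates weighted input spikes, and emits a spike when it crosses
// a threshold. Illustrative only; not the Loihi chip's model or API.
#include <cstdio>
#include <vector>

int main() {
    const double leak = 0.9;       // fraction of potential retained per step
    const double threshold = 1.0;  // firing threshold
    const double weight = 0.3;     // synaptic weight of the single input
    double potential = 0.0;

    // A toy input spike train: 1 = a presynaptic spike arrived this timestep.
    std::vector<int> input = {1, 0, 1, 1, 0, 1, 1, 1, 0, 0};

    for (size_t t = 0; t < input.size(); ++t) {
        potential = leak * potential + weight * input[t];
        bool fired = potential >= threshold;
        if (fired) potential = 0.0;  // reset after spiking
        std::printf("t=%zu potential=%.3f %s\n", t, potential,
                    fired ? "SPIKE" : "");
    }
    return 0;
}
```

Unlike a dense matrix multiply, computation here is event-driven: work happens only when spikes arrive, which is the property neuromorphic hardware is designed to exploit.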

Intel Optane comes to new Inspur Storage Products

Inspur recently launched all-flash storage systems with dual-port Intel Optane SSDs. “With the Optane SSD as the high-speed cache layer, AS5000G5-F combines innovative technologies such as intelligent data tiering, on-board hardware acceleration, and online compression/deduplication to achieve up to 8 million IOPS at 0.1 ms latency, making it one of the highest-performing mid-range storage systems available in the industry.”

Intel HPC Devcon Keynote: Exascale for Everyone

The convergence of HPC and AI is driving a paradigm shift in computing. Learn about Intel’s software-first strategy to further accelerate this convergence and extend the boundaries of HPC as we know it today. oneAPI will ease application development and accelerate innovation in the xPU era. Intel delivers a diverse mix of scalar, vector, spatial, and matrix architectures deployed across a range of silicon platforms, including CPUs, GPUs, FPGAs, and specialized accelerators, all unified by an open, industry-standard programming model. The talk concludes with innovations in a new graphics architecture and the capabilities it will bring to the Argonne exascale system in 2021.
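
To show what a single programming model across CPUs, GPUs, and FPGAs looks like in practice, below is a minimal SYCL-style vector add of the kind oneAPI's DPC++ compiler accepts. The same source can be retargeted by changing the device selector; this is a sketch under those assumptions, not official oneAPI sample code:

```cpp
// Minimal SYCL/DPC++-style sketch: one kernel source, retargetable across
// CPUs, GPUs, and FPGAs by swapping the device selector.
#include <CL/sycl.hpp>
#include <cstdio>
#include <vector>

namespace sycl = cl::sycl;

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // default_selector picks whatever device is available; gpu_selector,
    // cpu_selector, etc. would pin a specific device type instead.
    sycl::queue q{sycl::default_selector{}};

    {   // Buffers hand the host data to the SYCL runtime for this scope.
        sycl::buffer<float, 1> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float, 1> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float, 1> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            auto pa = ba.get_access<sycl::access::mode::read>(h);
            auto pb = bb.get_access<sycl::access::mode::read>(h);
            auto pc = bc.get_access<sycl::access::mode::write>(h);
            // One work-item per element.
            h.parallel_for<class vec_add>(sycl::range<1>(n),
                                          [=](sycl::id<1> i) {
                pc[i] = pa[i] + pb[i];
            });
        });
    }   // Buffers synchronize back to the host vectors here.

    std::printf("c[0] = %f\n", c[0]);
    return 0;
}
```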