Radio Free HPC Recaps the Hot Chips Conference

In this podcast, the Radio Free HPC team is joined by Glenn Heinle to review the highlights of the Hot Chips conference. “Since it started in 1989, HOT CHIPS has been known as one of the semiconductor industry’s leading conferences on high-performance microprocessors and related integrated circuits. The conference is held once a year in August in the center of the world’s capital of electronics activity, Silicon Valley.”

Intel Talks at Hot Chips Gear Up for “AI Everywhere”

Today at Hot Chips 2019, Intel revealed new details of upcoming high-performance AI accelerators: Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O. insideHPC has all the details here, in one place.

UPMEM Puts CPUs Inside Memory to Allow Apps to Run 20 Times Faster

Today UPMEM announced a Processing-in-Memory (PIM) acceleration solution that allows big data and AI applications to run 20 times faster and with 10 times less energy. Instead of moving massive amounts of data to CPUs, the silicon-based technology from UPMEM puts CPUs right in the middle of data, saving time and improving efficiency. By allowing compute to take place directly in the memory chips where data already resides, data-intensive applications can be substantially accelerated.

Radio Free HPC Looks at Hot Chips for 2018

In this podcast, the Radio Free HPC team looks at the latest developments in processor technology coming out of the recent Hot Chips conference. “The HOT CHIPS conference typically attracts more than 500 attendees from all over the world. It provides an opportunity for chip designers, computer architects, system engineers, press and analysts, as well as attendees from national laboratories and academia to mix, mingle and see presentations on the latest technologies and products.”

Tachyum Touts Benefits of Universal Processor at HOT CHIPS

This week at the Hot Chips conference, Tachyum CEO Dr. Radoslav Danilak described how the company’s Prodigy Universal Processor Chip combines the best attributes of CPU, GPU and TPU architectures to overcome HPC challenges. “I look forward to sharing with attendees at HOT CHIPS how a new approach is needed to overcome the challenges faced by all those in the hyperscale datacenter, HPC and AI markets.”

Phytium from China Unveils 64-core ARM HPC Processor

This week at the Hot Chips conference, Phytium Technology from China unveiled a 64-core CPU and a related prototype computer server. “Phytium says the new CPU chip, with 64-bit arithmetic compatible with ARMv8 instructions, is able to perform 512 GFLOPS at base frequency of 2.0 GHz and on 100 watts of power dissipation.”
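The 512 GFLOPS figure is consistent with a simple peak-throughput calculation, assuming (this per-core rate is not stated in the article) that each core sustains 4 double-precision FLOPs per cycle, e.g. one 2-wide fused multiply-add unit:

```python
# Back-of-the-envelope check of Phytium's claimed peak performance.
# Assumption (ours, not the article's): 4 DP FLOPs per core per cycle,
# e.g. a single 2-wide FMA unit (2 operations x 2 FLOPs each).
cores = 64
clock_ghz = 2.0
flops_per_cycle = 4  # assumed per-core throughput

peak_gflops = cores * clock_ghz * flops_per_cycle
print(peak_gflops)  # 512.0 GFLOPS, matching the quoted figure
```

Under that assumption the quoted 2.0 GHz base clock and 64 cores give exactly the claimed 512 GFLOPS.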

Fujitsu Unveils Processor Details for Post-K Computer

The Fujitsu Journal has posted details on a recent Hot Chips presentation by Toshio Yoshida about the instruction set architecture (ISA) of the Post-K processor. “The Post-K processor employs the ARM ISA, developed by ARM Ltd., with enhancements for supercomputer use. Meanwhile, Fujitsu has been developing the microarchitecture of the processor. In Fujitsu’s presentation, we also explained that our development of mainframe processors and UNIX server SPARC processors will continue into the future. The reason that Fujitsu is able to continuously develop multiple processors is our shared microarchitecture approach to processor development.”

ARM Ramps up for HPC with SVE Scalable Vector Extensions

Over at the ARM Community Blog, Nigel Stephens writes that the company has introduced scalable vector extensions (SVE) to its A64 instruction set to bolster high performance computing. Fujitsu is developing a new HPC processor conforming to ARMv8-A with SVE for the Post-K computer.

Call for Contributions: Hot Chips 2016

The Hot Chips 2016 conference has issued its Call for Proposals. The event takes place August 21-23 in Cupertino, California. “Presentations at HOT CHIPS are in the form of 30 minute talks using PowerPoint or PDF. Presentation slides will be published in the HOT CHIPS Proceedings. Participants are not required to submit written papers, but a select group will be invited to submit a paper for inclusion in a special issue of IEEE Micro.”

Video: AMD’s Next-Generation GPU and High Bandwidth Memory Architecture

“HBM is a new type of CPU/GPU memory (“RAM”) that vertically stacks memory chips, like floors in a skyscraper. In doing so, it shortens your information commute. Those towers connect to the CPU or GPU through an ultra-fast interconnect called the “interposer.” Several stacks of HBM are plugged into the interposer alongside a CPU or GPU, and that assembled module connects to a circuit board. Though these HBM stacks are not physically integrated with the CPU or GPU, they are so closely and quickly connected via the interposer that HBM’s characteristics are nearly indistinguishable from on-chip integrated RAM.”