Solving AI Hardware Challenges

For many deep learning startups, buying AI hardware and a large quantity of powerful GPUs is not feasible, so many of these companies are turning to cloud GPU computing to crunch their data and run their algorithms. Katie Rivera of One Stop Systems explores some of the AI hardware challenges that can arise, as well as the new tools designed to tackle these issues.

Scaling Hardware for In-Memory Computing

The two methods of scaling processors are defined by how the memory architecture is scaled, and are called scale-out and scale-up. Beyond the basic processor/memory architecture, accelerators and parallel file systems are also used to provide scalable performance. “High performance scale-up designs for scaling hardware require that programs have concurrent sections that can be distributed over multiple processors. Unlike the distributed memory systems described below, there is no need to copy data from system to system because all the memory is globally usable by all processors.”
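The shared-memory property described in the quote can be illustrated with a minimal sketch (a hypothetical example, not from the article): in a scale-up design, concurrent workers operate on one globally addressable dataset in place, with no data copied between nodes as a distributed scale-out system would require.

```python
# Minimal scale-up (shared-memory) sketch: worker threads read the
# same in-memory array directly -- nothing is copied between "nodes".
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000_000))  # one globally addressable dataset

def partial_sum(bounds):
    lo, hi = bounds
    # Each worker reads its slice of the shared array in place.
    return sum(data[lo:hi])

# Split the index range into four concurrent sections.
chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # equals sum(data)
```

In a scale-out version of the same computation, each chunk of `data` would first have to be serialized and sent to a separate node's private memory, which is exactly the copying step the quoted passage says scale-up avoids.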

Fujitsu Unveils Processor Details for Post-K Computer

The Fujitsu Journal has posted details on a recent Hot Chips presentation by Toshio Yoshida about the instruction set architecture (ISA) of the Post-K processor. “The Post-K processor employs the ARM ISA, developed by ARM Ltd., with enhancements for supercomputer use. Meanwhile, Fujitsu has been developing the microarchitecture of the processor. In Fujitsu’s presentation, we also explained that our development of mainframe processors and UNIX server SPARC processors will continue into the future. The reason that Fujitsu is able to continuously develop multiple processors is our shared microarchitecture approach to processor development.”