Radio Free HPC Looks at NREL’s new Eagle Supercomputer

In this podcast, the Radio Free HPC team looks at the new Eagle supercomputer under construction at NREL. “The new machine from HPE will run more detailed models that simulate complex processes, systems, and phenomena to advance early research and development on energy technologies across fields including vehicles, wind power, and data sciences.”

Dr. Eng Lim Goh on HPE’s Spaceborne Supercomputer

In this video from SC17 in Denver, Dr. Eng Lim Goh describes the spaceborne supercomputer that HPE built for NASA. “The research objectives of the Spaceborne Computer include a year-long experiment of operating high performance commercial off-the-shelf (COTS) computer systems on the ISS with its changing radiation climate. During high radiation events, the electrical power consumption and, therefore, the operating speeds of the computer systems are lowered in an attempt to determine if such systems can still operate correctly.”

Gabriel Broner on Why Cloud is the Next Disruption in HPC

“Like the previous disruptions of clusters vs. monolithic systems or Linux vs. proprietary operating systems, cloud changes the status quo, takes us out of our comfort zone, and gives us a sense of lack of control. But the effect of price, the flexibility to dynamically change your system size and choose the best architecture for the job, the availability of applications, the ability to select system cost based on the needs of a particular workload, and the ability to provision and run immediately, will prove very attractive for HPC users.”

Dr. Eng Lim Goh on HPE’s Recent PathForward Award for Exascale Computing

In this video from ISC 2017, Dr. Eng Lim Goh from HPE discusses the company’s recent PathForward award as well as the challenges of designing energy efficient Exascale systems. After that, he gives his unique perspective on HPE’s “The Machine” architecture for memory-driven computing. “The work funded by PathForward will include development of innovative memory architectures, higher-speed interconnects, improved reliability systems, and approaches for increasing computing power without prohibitive increases in energy demand.”

Agenda Posted for HP-CAST at ISC 2017

Hewlett Packard Enterprise has posted its Preliminary Agenda for HP-CAST. As HPE’s user group meeting for high performance computing, the event takes place June 16-17 in Frankfurt, just prior to ISC 2017. “The High Performance Consortium for Advanced Scientific and Technical computing (HP-CAST) users group works to increase the capabilities of Hewlett Packard Enterprise solutions for large-scale, scientific and technical computing. HP-CAST provides guidance to Hewlett Packard Enterprise on the essential development and support issues for such systems. HP-CAST meetings typically include corporate briefings and presentations by HPE executives and technical staff (under NDA), and discussions of customer issues related to high-performance technical computing.”

Pascal GPUs to Accelerate TSUBAME 3.0 Supercomputer at Tokyo Tech

“TSUBAME3.0 is expected to deliver more than two times the performance of its predecessor, TSUBAME2.5,” writes Marc Hamilton from Nvidia. “It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double precision performance. That would rank it among the world’s 10 fastest systems according to the latest TOP500 list, released in November. TSUBAME3.0 will excel in AI computation, expected to deliver more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest performing AI supercomputer.”

Five Ways Scale-Up Systems Save Money and Improve TCO

The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems.
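As a concrete illustration of what modifying a single-core application for a scale-up (shared-memory) system can look like, here is a minimal sketch, not taken from the report, that parallelizes a serial dot-product loop with OpenMP; the array size and variable names are placeholders.

/* Minimal sketch (assumed example, not from the report): a serial loop
 * modified with OpenMP so the extra cores of a scale-up system share the
 * work. All threads operate on the same arrays in shared memory. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    /* The one-line change: distribute the iterations over all available cores. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("dot product = %f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}

Built with an OpenMP-enabled compiler (for example, gcc -fopenmp), the single added pragma spreads the loop over the cores while every thread reads the same shared arrays.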

Scaling Software for In-Memory Computing

“The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems.”
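For contrast, the scale-out side of the same computation typically has to be rewritten around message passing, which is one reason no single programming solution covers both models. The following MPI version is an assumed illustration, not code from the report; the rank layout and names are hypothetical.

/* Minimal sketch (assumed example): the same dot product written for a
 * scale-out, distributed-memory cluster with MPI. Each rank owns only its
 * slice of the data, and partial results must be combined explicitly. */
#include <stdio.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;              /* assume N divides evenly, for brevity */
    int start = rank * chunk;
    double local = 0.0, total = 0.0;

    for (int i = start; i < start + chunk; i++)
        local += (double)i * (2.0 * i);   /* each rank computes its slice only */

    /* Unlike shared memory, results must be moved between systems. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product = %f across %d ranks\n", total, size);

    MPI_Finalize();
    return 0;
}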

Scaling Hardware for In-Memory Computing

The two methods of scaling processors are determined by how the memory architecture is scaled and are called scale-out and scale-up. Beyond the basic processor/memory architecture, accelerators and parallel file systems are also used to provide scalable performance. “High performance scale-up designs for scaling hardware require that programs have concurrent sections that can be distributed over multiple processors. Unlike the distributed memory systems described below, there is no need to copy data from system to system because all the memory is globally usable by all processors.”
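To make the “no copying” point concrete, here is a small assumed sketch (not from the report) in which every thread on a scale-up node updates its portion of one large array in place; because the whole dataset is globally visible, there is no partitioning or send/receive step.

/* Minimal sketch (assumed example): on a scale-up node the entire dataset
 * lives once in shared memory, so threads update their portions in place
 * and nothing is copied from system to system. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 10000000L

int main(void) {
    double *data = malloc(N * sizeof(double));  /* one copy, visible to every thread */
    if (!data) return 1;

    for (long i = 0; i < N; i++) data[i] = (double)i;

    /* Each thread works directly on the shared array. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        data[i] *= 0.5;

    printf("data[N-1] = %f (up to %d threads)\n", data[N - 1], omp_get_max_threads());
    free(data);
    return 0;
}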

In-Memory Computing for HPC

To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: scale-up or scale-out. The scale-up in-memory system provides a much better total cost of ownership and can deliver value in a variety of ways. “If the application program has concurrent sections, then it can be executed in a “parallel” fashion, much like using multiple bricklayers to build a brick wall. It is important to remember that the amount and efficiency of the concurrent portions of a program determine how much faster it can run on multiple processors. Not all applications are good candidates for parallel execution.”
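The observation that the amount and efficiency of the concurrent portions determine how much faster a program can run is commonly captured by Amdahl’s law, which the article does not name. Here is a back-of-the-envelope sketch with illustrative numbers chosen for this example.

/* Illustrative sketch of Amdahl's law: speedup on n processors when a
 * fraction p of the work can run concurrently. The example values are
 * chosen for illustration, not taken from the article. */
#include <stdio.h>

static double speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    /* If 90% of a program is concurrent, 16 cores give only ~6.4x, not 16x. */
    printf("p = 0.90, n = 16 -> %.1fx speedup\n", speedup(0.90, 16));
    /* A fully serial program gains nothing from additional processors. */
    printf("p = 0.00, n = 16 -> %.1fx speedup\n", speedup(0.00, 16));
    return 0;
}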