Gabriel Broner on Why Cloud is the Next Disruption in HPC

“Like the previous disruptions of clusters vs. monolithic systems or Linux vs. proprietary operating systems, cloud changes the status quo, takes us out of our comfort zone, and gives us a sense of lack of control. But the effect of price, the flexibility to dynamically change your system size and choose the best architecture for the job, the availability of applications, the ability to select system cost based on the needs of a particular workload, and the ability to provision and run immediately, will prove very attractive for HPC users.”

Dr. Eng Lim Goh on HPE’s Recent PathForward Award for Exascale Computing

In this video from ISC 2017, Dr. Eng Lim Goh from HPE discusses the company’s recent PathForward award as well as the challenges of designing energy efficient Exascale systems. After that, he gives his unique perspective on HPE’s “The Machine” architecture for memory-driven computing. “The work funded by PathForward will include development of innovative memory architectures, higher-speed interconnects, improved reliability systems, and approaches for increasing computing power without prohibitive increases in energy demand.”

Agenda Posted for HP-CAST at ISC 2017

Hewlett Packard Enterprise has posted its preliminary agenda for HP-CAST. As HPE’s user group meeting for high performance computing, the event takes place June 16-17 in Frankfurt, just prior to ISC 2017. “The High Performance Consortium for Advanced Scientific and Technical computing (HP-CAST) users group works to increase the capabilities of Hewlett Packard Enterprise solutions for large-scale, scientific and technical computing. HP-CAST provides guidance to Hewlett Packard Enterprise on the essential development and support issues for such systems. HP-CAST meetings typically include corporate briefings and presentations by HPE executives and technical staff (under NDA), and discussions of customer issues related to high-performance technical computing.”

Pascal GPUs to Accelerate TSUBAME 3.0 Supercomputer at Tokyo Tech

“TSUBAME3.0 is expected to deliver more than two times the performance of its predecessor, TSUBAME2.5,” writes Marc Hamilton from Nvidia. “It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double precision performance. That would rank it among the world’s 10 fastest systems according to the latest TOP500 list, released in November. TSUBAME3.0 will excel in AI computation, expected to deliver more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest performing AI supercomputer.”
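
As a quick sanity check, the combined figure is simple addition of the two systems’ AI throughput. Assuming TSUBAME3.0’s commonly cited 47.2 PFLOPS (the excerpt says only “more than 47 PFLOPS”, so this is an inference, not a figure stated above), the implied TSUBAME2.5 contribution is:

64.3 PFLOPS - 47.2 PFLOPS = 17.1 PFLOPS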

Five Ways Scale-Up Systems Save Money and Improve TCO

Scaling Software for In-Memory Computing

“The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems.”
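
As a concrete illustration of the scale-up side of that trade-off, here is a minimal sketch, not taken from the report, of how a serial loop is commonly modified to use extra cores on a shared-memory system with OpenMP in C (the array size is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void) {
    /* Illustrative problem size; all data lives in one shared memory pool. */
    const size_t n = 100000000;
    double *a = malloc(n * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < n; i++)
        a[i] = 1.0 / (double)(i + 1);

    double sum = 0.0;
    /* One added directive spreads the iterations across cores; every
       thread reads the same shared array, so no data is copied. */
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++)
        sum += a[i];

    printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
    free(a);
    return 0;
}

Compile with, e.g., gcc -fopenmp sum.c -o sum. The appeal of the scale-up model is visible here: a single directive parallelizes the loop, and nothing has to be partitioned or sent over a network.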

Scaling Hardware for In-Memory Computing

The two methods of scaling processors follow from how the memory architecture is scaled and are called scale-out and scale-up. Beyond the basic processor/memory architecture, accelerators and parallel file systems are also used to provide scalable performance. “High performance scale-up designs for scaling hardware require that programs have concurrent sections that can be distributed over multiple processors. Unlike the distributed memory systems described below, there is no need to copy data from system to system because all the memory is globally usable by all processors.”
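
For contrast, here is a sketch of the same kind of reduction in the distributed-memory (scale-out) style with MPI; again illustrative rather than taken from the report. Each rank owns only its slice of the work, and partial results must be explicitly copied across the network to be combined:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const long n = 100000000;
    /* Each rank computes only its own strided slice: memory is not shared. */
    double local = 0.0;
    for (long i = rank; i < n; i += nprocs)
        local += 1.0 / (double)(i + 1);

    /* Unlike the shared-memory case, partial results have to be
       communicated between systems to produce the final answer. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %f across %d ranks\n", total, nprocs);

    MPI_Finalize();
    return 0;
}

Run with, e.g., mpicc sum_mpi.c -o sum_mpi && mpirun -np 4 ./sum_mpi. The explicit MPI_Reduce is exactly the data movement that the scale-up design avoids.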

In-Memory Computing for HPC

To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: scale-up or scale-out. The scale-up in-memory approach can deliver a much better total cost of ownership and provide value in a variety of ways. “If the application program has concurrent sections then it can be executed in a “parallel” fashion. Much like using multiple bricklayers to build a brick wall. It is important to remember that the amount and efficiency of the concurrent portions of a program determine how much faster it can run on multiple processors. Not all applications are good candidates for parallel execution.”
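
The point about concurrent portions is the intuition behind Amdahl’s law, which the excerpt does not state explicitly: if a fraction p of a program can run in parallel on N processors, the overall speedup is

S(N) = 1 / ((1 - p) + p / N)

For example, even a program that is 90% parallel (p = 0.9) speeds up by only a factor of 1 / (0.1 + 0.9/16) = 6.4 on 16 processors, because the serial 10% dominates; this is why not every application is a good candidate for parallel execution.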

insideHPC Research Report on In-Memory Computing

To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: a scale-up design that allows multiple cores to share a large global pool of memory, and a scale-out design that distributes data sets across the memory on separate host systems in a computing cluster. To learn more about in-memory computing, download this guide from insideHPC and SGI.

Interview: Bill Mannel and Dr. Eng Lim Goh on What’s Next for HPE & SGI

In this video, Bill Mannel, VP & GM of High-Performance Computing and Big Data at HPE, and Dr. Eng Lim Goh, SVP & CTO of SGI, join Dave Vellante and Paul Gillin at HPE Discover 2016. “The combined HPE and SGI portfolio, including a comprehensive services capability, will support private and public sector customers seeking larger high-performance computing installations, including U.S. federal agencies as well as enterprises looking to leverage high-performance computing for business insights and a competitive edge.”