SC15 HPC Transforms – but how, what, where, when, why and for whom?


In this special guest feature from the Print’nFly Guide to SC15 in Austin, Peter ffoulkes from OrionX looks at how HPC Transforms.

Peter ffoulkes, OrionX

It seems almost no time at all since we were in New Orleans at SC14 with the theme “HPC Matters.” While that is a message that resonates well within the community, it is also something that frequently appears to fall on deaf ears in some government and enterprise computing circles. Last year I was attending SC in my role as a research director with 451 Research, one of the small band of analysts that actively cover HPC vendors and the user community. Perhaps my strongest recollection of last year’s conference was when a former colleague fixed me with a shrewd look and said, “HPC is boring right now!” After several days of the conference I was inclined to agree with her. It’s not that there weren’t any new products or advancements being made, but they mostly seemed like incremental or iterative improvements, nothing obviously disruptive or game changing.

For SC15 the theme is much more dynamic and active, “HPC Transforms”, which raises many questions: how, what, where, when, and why does HPC transform, and for whom? If this year’s conference answers those questions, then perhaps the reasons why HPC matters will become a little clearer to the wider community.

HPC Matters: The TOP500 and the road to exascale

Performance

Although the High Performance Linpack (HPL) benchmark is only one measure of the performance of the world’s most capable machines, the TOP500 is still a good proxy for tracking the pace of supercomputing technology development.

The first petascale machine, ‘Roadrunner’, debuted in June 2008, twelve years after the first terascale machine, ASCI Red, in 1996. Until just a few years ago, 2018 was the target for reaching exascale capability. As 2015 comes to its close, the first exascale machine seems much more likely to debut in the first half of the next decade, and probably later in that window rather than earlier. So where are we with the TOP500, and what can we expect in the next few lists?
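
A quick back-of-envelope calculation makes the slippage concrete. The sketch below (plain Python, using only the milestone dates quoted above; the projected date is illustrative, not a forecast) applies the terascale-to-petascale cadence of a thousand-fold gain in twelve years and shows when a naive continuation of that trend would have delivered exascale.

```python
import math

# TOP500 milestones quoted above
terascale_year = 1996   # ASCI Red, ~1 Tflop/s
petascale_year = 2008   # Roadrunner, ~1 Pflop/s

# A 1,000x improvement separated the two milestones
years_per_1000x = petascale_year - terascale_year                    # 12 years
doubling_time_months = 12 * years_per_1000x / math.log2(1000)
print(f"Implied doubling time: {doubling_time_months:.1f} months")   # ~14.5

# If that cadence had held, exascale (another 1,000x) would arrive around:
print(f"Naive exascale projection: {petascale_year + years_per_1000x}")  # 2020

# The original 2018 target was more aggressive still, while current
# expectations have slipped to the early-to-mid 2020s.
```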

 

Source: TOP500.org

Observations from the June 2015 TOP500 List on performance:

  • The total combined performance of all 500 systems had grown to 361 Pflop/s, compared to 309 Pflop/s last November and 274 Pflop/s a year previously, indicating a noticeable slowdown in growth compared to the previous long-term trend.
  • Almost 20% of the systems (a total of 90) use accelerator/co-processor technology, up from 75 in November 2014.
  • As of June 2015 there were 68 petascale systems on the list, up from 50 a year earlier, and more than double the number two years earlier.

So what do we conclude from this? Certainly that the road to exascale is significantly harder than we may have thought, not just from a technology perspective, but even more importantly from a geo-political and commercial perspective. The aggregate performance level of all of the TOP500 machines is less than 40% of the HPL metric for an exascale machine.
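
The arithmetic behind those observations is easy to reproduce. The snippet below (a rough sketch using only the aggregate figures quoted above, in Pflop/s) computes the list-over-list growth rates and the fraction of a nominal exaflop that the entire TOP500 represents.

```python
# Aggregate TOP500 Linpack performance quoted above, in Pflop/s
aggregate = [("June 2014", 274), ("Nov 2014", 309), ("June 2015", 361)]

for (prev_date, prev), (date, pflops) in zip(aggregate, aggregate[1:]):
    growth = (pflops - prev) / prev * 100
    print(f"{date}: {pflops} Pflop/s ({growth:+.1f}% vs {prev_date})")
    # Both steps are well below the list's long-term trend of
    # roughly a doubling per year.

# Fraction of a nominal 1 Eflop/s (= 1,000 Pflop/s) machine
print(f"Entire list vs. one exascale system: {361 / 1000:.0%}")
```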

Hybrid architectures using math accelerators are gaining traction and momentum in addressing the computational bottlenecks in HPL performance, which may point towards hybrid approaches in other aspects of system technology going forward.

Most importantly, if HPC actually does matter, then doubling the number of petascale-capable resources available to scientists, researchers, and other users in a two-year period moves the needle much more significantly. From a useful outcome and transformational perspective, it is much more important to support advances in science, research and analysis than to ring the bell with the world’s first exascale system on the TOP500 in 2018, 2023 or 2025.

Architecture

HPL and the TOP500 performance benchmark are only one part of the HPC equation. Building a world-leading system involves overcoming system bottlenecks which shift over time. For a long while floating-point computational performance was a major bottleneck, but in recent years the balance has shifted to other areas, including system interconnect and memory performance, which are not directly measured by the HPL benchmark.

The combination of modern multi-core 64-bit CPUs and math accelerators from Nvidia, Intel and others has addressed many of the issues related to computational performance. The focus on bottlenecks has shifted away from computational strength towards data-centric and energy issues, which influence HPL results from a performance perspective but are not explicitly measured by the benchmark. However, from an architectural perspective the TOP500 lists still provide useful insight into the trends.
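
One way to see why the bottleneck has moved is a simple roofline-style check: compare a kernel’s arithmetic intensity (flops per byte moved) with the ratio of a node’s peak compute rate to its memory bandwidth. The figures below are round, illustrative numbers rather than the specification of any particular system, but they show why an HPL-style dense solve can run near peak while stencil or sparse kernels are limited by memory, not by floating-point units.

```python
# Illustrative node figures (round numbers, not any specific machine)
peak_flops = 3.0e12       # 3 Tflop/s peak double-precision compute
mem_bandwidth = 1.5e11    # 150 GB/s sustained memory bandwidth

def attainable_gflops(ai):
    """Roofline model: attainable Gflop/s at arithmetic intensity ai (flops/byte)."""
    return min(peak_flops, ai * mem_bandwidth) / 1e9

# The machine needs this many flops per byte moved to keep its FPUs busy
print(f"Machine balance: {peak_flops / mem_bandwidth:.0f} flops/byte")        # 20

# HPL-style dense factorization: high arithmetic intensity -> compute-bound
print(f"Dense solve (AI ~ 60):      {attainable_gflops(60):,.0f} Gflop/s")    # 3,000

# Stencil / sparse kernels: well under 1 flop/byte -> memory-bound
print(f"Sparse/stencil (AI ~ 0.25): {attainable_gflops(0.25):,.0f} Gflop/s")  # 38
```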

 

Source: TOP500.org

Observations from the June 2015 TOP500 List on system interconnects:

  • After two years of strong growth from 2008 to 2010, InfiniBand-based TOP500 systems plateaued at around 40% of the list, while compute performance grew aggressively with the focus on hybrid, accelerated systems.
  • Starting in June 2014 there were signs of renewed focus on system interconnects, with InfiniBand-based systems exceeding 50% of the TOP500 list for the first time in June 2015.

From a technology perspective we clearly want to see improvements in computational performance, but if the bottlenecks are shifting to system interconnect, memory and software architectures, then we need to look to developments in those areas to maintain or accelerate progress towards exascale capabilities and the transformational potential of HPC in both scientific and enterprise computing.
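
The same point can be made with a toy strong-scaling model: as node counts grow, per-node compute shrinks, but the cost of a global reduction across the interconnect does not. The numbers below are purely illustrative (not measurements from any system), but they show how interconnect latency, rather than floating-point capability, comes to dominate at exascale-class node counts.

```python
import math

# Toy strong-scaling model with illustrative parameters
serial_compute = 1000.0   # node-seconds of compute per timestep
comm_per_hop = 0.002      # seconds per reduction hop, latency-dominated

for p in (64, 1024, 16384, 262144):
    compute = serial_compute / p          # shrinks with node count
    comm = comm_per_hop * math.log2(p)    # grows slowly, never shrinks
    print(f"{p:>7} nodes: parallel efficiency {compute / (compute + comm):6.1%}")
```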

HPC Transforms: What can we expect in the next few years?

The TOP500

Probably not much that moves the needle significantly any time soon. The next milestone is to exceed the 100 Pflop/s mark. There are systems under development in China that are expected to challenge the 100 Pflop/s barrier within the next twelve months, though informed sources don’t expect that to happen before 2016. From the USA, both the CORAL and Trinity initiatives are expected to significantly exceed 100 Pflop/s, targeting around 200 Pflop/s, but not before the 2017 time frame. None of these systems is expected to deliver more than one third of exascale capability.

The road to exascale requires a different and more challenging focus. It is a system-level development involving processing, networking, storage and software that is beyond the capabilities of any individual company, and quite possibly beyond the capabilities of any single country. In this scenario it is not surprising that the pace of development is slowing, and geo-political and commercial economic conditions are not helping. At the same time, this feeds the requirement for a collaborative approach including open standards, open source, and co-design, which are themselves impeded by a deteriorating political and economic context.

Vendors

The entire IT industry is in a transformational state, the biggest since the introduction of the IBM PC in 1981, which led to the era of the “industry standard server” and the dominance of the Intel x86 architecture. That era appears to be drawing to a close. Intel remains a technology powerhouse, but the rules that enabled the company’s success over that 30-plus-year period are changing, and even Intel needs to continue to adapt and evolve. These are the times when giants fall and new contenders have a chance to emerge.

Some of the best-established enterprise IT giants, including HP, IBM and Cisco, are undergoing major restructuring and transformation. Dell is on a path to acquire EMC and gain influence over its satellite federation companies. There is a widely held perception that there is no money to be made in the HPC market, but a quick look at the stock market over the last five years shows a different picture. Certainly the market for specialist HPC technology companies can be volatile, but despite that volatility, market-beating growth can be achieved. Cray, perhaps the most iconic hardware company associated with HPC, stands out with exceptional performance over the last five years under Peter Ungaro’s leadership, increasing revenues from $284M in 2009 to $562M in 2014.

Technology

With the market shifting away from a compute-centric focus towards data-centric issues, the spotlight also shifts towards a holistic system design perspective, and we are seeing increasing interest in convergence, rack-scale integration and overall optimization at the system level.

At the processor level we are moving towards parallelism and system-on-a-chip designs integrating FPGAs, DSPs, graphics and other technologies. At the interconnect level, InfiniBand-based systems have broken the 50% penetration level in the TOP500 list for the first time. Although in the wider enterprise market Ethernet rules supreme, the shift towards appliance architectures and cloud-based services provides a significant opportunity for InfiniBand technology to be leveraged while hiding any additional complexity behind abstraction layers, which could accelerate adoption.

 

Source: Yahoo Finance

Over the next five years, technologies such as silicon photonics and new memory architectures promise to enable a fundamental rethinking of system-level design. These are no longer laboratory curiosities, but technologies on the cusp of mainstream deployment from established vendors including Intel, Micron, IBM, Mellanox and a host of others.

Silicon photonics promises to improve connectivity and system design with higher bandwidth, lower latency, lower cost, longer distances and reduced energy consumption. New memory architectures such as the Intel/Micron 3D XPoint, memristor, phase-change memory and others promise to close the gap between dynamic RAM and current persistent storage technologies, addressing persistence, performance, durability, and energy consumption.

Although these new architectures have a significant development road ahead, they could materially alter conventional system design criteria and have an even larger impact on software architectures. What will be the effects on software design if all memory is persistent? What will be the effect upon legacy software built on the assumption that dynamic RAM is not persistent, once that assumption no longer holds true?
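
As a thought experiment, the sketch below uses a memory-mapped file as a crude stand-in for byte-addressable persistent memory; it is not a real NVM programming model, and the file name is hypothetical. The point it illustrates is that once “memory” survives a restart, a reboot no longer wipes partial updates, so the program itself becomes responsible for flushing data and keeping it consistent.

```python
import mmap, os, struct

PATH = "counter.pmem"   # hypothetical file standing in for persistent memory

# Create and map one page; unlike DRAM, its contents outlive the process.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * mmap.PAGESIZE)

with open(PATH, "r+b") as f:
    pmem = mmap.mmap(f.fileno(), mmap.PAGESIZE)

    # Whatever the *previous* run left behind is still here -- including
    # a torn or half-finished update, since restarting clears nothing.
    (count,) = struct.unpack_from("<Q", pmem, 0)
    print(f"value found at startup: {count}")

    # Update in place, then flush explicitly: with persistence, ordering
    # and consistency are the program's job, not the reboot's.
    struct.pack_into("<Q", pmem, 0, count + 1)
    pmem.flush()
    pmem.close()
```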

Markets

If we thought the last five years were disruptive, we may not have seen anything yet, and in many ways the HPC community will continue to lead that transformation, even if it does not always receive recognition for that leadership. The general enterprise market shift towards a data-centric focus, based upon “big data”, the impending deluge of sensor data from “The Internet of Things”, and real-time analytics using in-memory databases could be the best thing that has happened to the HPC community in decades. There is increasing awareness of the need to reduce data movement and to bring compute capabilities to the data, not just for efficiency reasons but also for security, compliance and regulatory concerns.

From the market opportunity perspective, the shift towards data-centric system design and an increasing desire for real-time analysis of business information may not be considered to be “HPC” in a classic sense, but the skills and techniques required to deliver results seem to be extremely similar. The distinctions between HPC and enterprise computing are continuing to dissolve. The language used to communicate with enterprise customers may be different, the cultural motivations and approaches may be different, but the technologies and computational techniques required continue to converge, which provides a significant opportunity for HPC to transform business in a material way.

Looking to the future, new disciplines such as robotics and machine learning clearly offer significant opportunity for both business and scientific computing. Perhaps SC15 will point the way to many more.

HPC Transforms: “To the Future and Beyond!”

Does HPC matter? I think yes, probably more so than ever. However, HPC needs to be active, transformational and demonstrably so. Perhaps this is the challenge to be addressed at this year’s conference. What are the paths to the future? What will be truly transformational? What specific results can be demonstrated, how, when and to what purpose?

Does the future lie with esoteric technologies such as quantum computing or quantum annealing and startups such as D-Wave Systems that are pushing the boundaries of achievement? Does it lie with “blue sky” research from companies like Google that can afford such luxuries? It may, but probably not significantly so in this decade. However, pioneering companies such as D-Wave Systems still lie at the heart of HPC evolution.

Perhaps the biggest question revolves around how the community can collaborate. If no single company, nor even a single country, can drive the agenda forwards by itself, then how is progress to be made? The principles of co-design, open source and open standards are well understood and appreciated, but are also constrained by geo-political and commercial considerations.

The HPC community has a reputation for being the bellwether of technology development, the leading light that points to the future. Perhaps SC15 will be the conference that moves beyond a defensive footing, where HPC merely matters, to be the conference where HPC is demonstrated to transform the world, and leads the industry “To boldly go where no woman has gone before!” If so, we may be able to head home confidently believing that HPC is truly exciting once more.

Peter ffoulkes is a partner at OrionX. On the leading edge of IT transitions since 1980, his experience has been focused on the technologies required to build the agile, automated and adaptable data center environments that are the essential foundation of cloud-ready data centers for both enterprise and high performance scientific computing. Mr. ffoulkes will also be a featured presenter at the StartupHPC Summit, Nov. 16 in Austin.

This article was originally published in the Print ’n Fly Guide to SC15 in Austin. We designed this Guide to be an in-flight magazine custom tailored for your journey to SC15, the world’s largest gathering of high performance computing professionals.

Download the Print’nFly Guide to SC15 in Austin