
Andrew's Corner: Personal Supercomputing

You’ve probably seen the rash of offerings from the various computing vendors advertising the “personal supercomputing” mantra.  GPU-laden workstations, deskside clusters, and green-computing stickers have become a normal part of the HPC soup.  But does anyone really know what “personal supercomputing” is?  In his latest ZDNet article, Andrew Jones takes a stab at navigating the potential pitfalls of defining personal supercomputing.

The first and most obvious definition of personal supercomputing involves pulling the compute closer to the user, e.g., desktop or deskside machines.  However, given the general definition of supercomputing, this situation immediately violates what we consider “super.”

No one really agrees on the precise definition of a supercomputer, but few would deny that it represents the class of computers that are at least a couple of orders of magnitude more capable than a prospective user’s desktop machine.

The problem is further exacerbated when you consider the recent explosion of GPU-based computing.  In this case, dozens or hundreds of threads execute concurrently, which does in fact constitute a significant increase in compute power over one’s “normal” commodity desktop.  Can you say monkey wrench?

So, how do I resolve this self-conflict? It’s all relative. For a researcher who has only used desktop computers, experiencing a 10-fold increase in speed on one of these cheap ‘personal HPC’ platforms is a step-change.

And that is the root of HPC — enabling a step-change in the time to solution, or in the size of problem that can be investigated. The higher your starting point — already using clusters? — the more your step-change needs to deliver, for example, multi-thousand node clusters.

The second definition of personal supercomputing is socio-political.  Consider a national laboratory with a large computational resource.  This resource is blessed by the government to host dozens [if not hundreds] of researchers in order to further their respective projects.  However, we often read about single projects commanding large percentages of the machine’s runtime, thus achieving a quantum leap in progress.  If 70% of the batch allocation on any one large resource goes to a single user or project, does that constitute personal supercomputing?  Technically speaking, such an allocation is limited in scope, but for those unfamiliar with national lab computing systems and projects, this situation does occur.  [I’m talking to you, Dr. Kerr.]

During their active phase, each user might be considered as having a pseudo-personal supercomputer. In fact, many major supercomputer centres can identify a small group of users who consume most of the resource over the course of a year.

However, there are occasionally stories of real personal supercomputers — single users who have a majority share of a facility that is unambiguously a supercomputer, maybe among the top 50 supercomputers in the world. This situation may occur because they take the lead for the modelling activities of their company, or because the nature of their work can justify such a dedicated resource.

Either way, “personal supercomputing” can be debated back and forth until we’re all blue in the face.  As always, Andrew has written a very good evaluation.  I suggest you take a read over at ZDNet/UK.
