Six Myths of HPC for Engineering Simulation

Wim Slagter, Lead Product Manager for HPC at ANSYS

Over at the Ansys Blog, Wim Slagter writes that there are six common myths about HPC for engineering simulation.

Myth Number 1. High-performance computing is available on supercomputers only. Although engineers usually know what HPC stands for, I've met quite a few who do not realize that HPC is already available on their desktop. A decade ago, HPC may indeed have been associated primarily with big supercomputers. Since then, however, the computer industry has delivered enormous increases in computing speed and power at consistently lower cost: more compute cores per CPU, I/O integrated on the processor die (yielding higher memory bandwidth), more and faster memory channels, larger L3 caches, faster disk storage (such as solid-state drives for ANSYS Mechanical), faster interconnects, AVX support, and so on. These advances counter myth #1: HPC is available today across the entire computing spectrum, from tablets running ANSYS Mechanical at the entry level, through multi-core laptops, desktops, and workstations, to full computer clusters at the other end.
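
To make the point concrete, here is a minimal sketch (my illustration, not from Slagter's article; it assumes nothing beyond Python's standard library) that reports how many cores an ordinary desktop exposes and splits an embarrassingly parallel workload across them:

```python
# A rough illustration (not from the article): even a plain desktop
# exposes multiple cores, and Python's standard library can use them.
import os
import time
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over a half-open range -- a stand-in for solver work."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    cores = os.cpu_count() or 4  # fall back to 4 if the count is unknown
    print(f"Cores available on this machine: {cores}")

    n = 20_000_000
    # Split [0, n) into one contiguous chunk per core.
    chunks = [(i * n // cores, (i + 1) * n // cores) for i in range(cores)]

    t0 = time.perf_counter()
    serial = partial_sum((0, n))
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(cores) as pool:
        parallel = sum(pool.map(partial_sum, chunks))
    t_parallel = time.perf_counter() - t0

    assert serial == parallel
    print(f"serial:   {t_serial:.2f} s")
    print(f"parallel: {t_parallel:.2f} s on {cores} cores")
```

On a typical multi-core desktop the parallel run finishes several times faster than the serial one, which is exactly the kind of "HPC on your desktop" the myth overlooks.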

Slagter lists the other HPC myths as follows:

  • HPC is only useful for CFD simulations.
  • I don’t need HPC – my job is running fast enough.
  • Without internal IT support, HPC cluster adoption is undoable.
  • Parallel scalability is all about the same, right? (the sketch after this list illustrates why it is not)
  • HPC software and hardware are relatively expensive.
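
On the scalability myth in particular, a worked example helps. The sketch below (my illustration, not from the article) applies Amdahl's law, speedup = 1 / (s + (1 - s)/N) for serial fraction s on N cores, to three hypothetical codes; the serial fractions are made-up values chosen only to show how differently codes can scale:

```python
# Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N), where s is the
# fraction of the run that stays serial. Small differences in s lead
# to very different scaling behavior.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical serial fractions for three different codes.
for s in (0.01, 0.05, 0.20):
    for n in (8, 64, 1024):
        print(f"s = {s:4.0%}, {n:4d} cores -> {amdahl_speedup(s, n):5.1f}x speedup")
```

A code that is 99% parallel approaches a 90x speedup on 1024 cores, while one that is only 80% parallel saturates below 5x, so parallel scalability is emphatically not "all about the same."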

Read the Full Story.

Comments

  1. I do appreciate this point of view. However, when I started my career in HPC, I understood HPC (or supercomputers) to mean computers that are much more powerful than the desktop. Many of the innovations discussed here are also found in HPC servers. From my point of view, HPC is still the supercomputer, used either to solve problems more quickly than a desktop can or to solve larger problems (that cannot be executed on a desktop at all) in a reasonable time. I have written a short article about solving problems with ~1 billion degrees of freedom on around 32,000 cores here: http://tinyurl.com/pbe3fgm

  2. I am with Lee here. HPC is by definition "high performance" within the spectrum of computing, and since Google and Microsoft run their businesses on a million servers or more, that would be the simple litmus test of what HPC workloads are.

    The fact that many simple computational mechanics problems can now comfortably be executed on a laptop means that the workload and the use case are no longer taking advantage of the computational capacity and capability that is available. In particular, the CAE industry has been completely outclassed by the web, gaming and media industries, which have kept moving up the capability curve. Even marketers sifting through social media data leverage more computational power in their day-to-day work than most engineers. Commercial CAE software in many cases still uses FORTRAN modules written three decades ago. Instead, we should have scalable multiphysics engines available to the CAE community that seamlessly connect the laptop/desktop to the ultrascale infrastructures now available.