How HPC is increasing speed and accuracy

Sponsored Post

Mark Gunn, Sr. VP, One Stop Systems

The overwhelming task of high performance computing today is processing huge amounts of data quickly and accurately. Simply adding more, and more sophisticated, servers only partially solves the problem. Fields where applications are running up against slow, inefficient data processing include finance, medicine, scientific research, seismic exploration, and defense. In this article I’ll touch on some of these applications and how technological advances are helping to remedy the issue. In the following months I’ll explore each application’s issues in more depth and show how those problems are being solved.

The financial derivatives market is a high-stakes game in which even the slightest error in the valuation of a contract can lead to big losses. Consequently, traders rely on complex mathematical models to arrive at the value and risk sensitivity of a contract. The fast pace of the financial markets makes it imperative that derivative valuations be both quick and accurate. One of the most widely used approaches, the Monte Carlo method, can simulate millions of scenarios for underlying contract variables such as stock prices, commodity prices, and interest rates, but its computation times are exceedingly long. The complexity of derivative contracts, combined with the need for both rapid model development and fast, accurate valuations, highlights some of the challenges the derivatives market faces.
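To make the compute burden concrete, below is a minimal sketch (not from any particular trading system) of Monte Carlo valuation for a plain European call option, assuming a simple geometric Brownian motion model; the function name and parameter values are purely illustrative. Because the estimate's error shrinks only with the square root of the number of simulated paths, millions of scenarios are needed for tight accuracy, which is where the long computation times come from.

```python
import numpy as np

def monte_carlo_call_price(s0, strike, rate, sigma, maturity, n_paths):
    """Estimate a European call price by simulating terminal stock prices
    under geometric Brownian motion (an illustrative model only)."""
    z = np.random.standard_normal(n_paths)
    # Terminal price in a single step; path-dependent contracts need many
    # time steps per path, multiplying the work accordingly.
    s_t = s0 * np.exp((rate - 0.5 * sigma ** 2) * maturity
                      + sigma * np.sqrt(maturity) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    # Discount the average simulated payoff back to today.
    return np.exp(-rate * maturity) * payoff.mean()

# Error decreases roughly as 1/sqrt(n_paths), so tight error bars require
# millions of paths -- hence the long run times.
price = monte_carlo_call_price(s0=100.0, strike=105.0, rate=0.02,
                               sigma=0.25, maturity=1.0, n_paths=1_000_000)
print(f"Estimated call price: {price:.4f}")
```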

Medical applications like CT (computed tomography) scanning and MRI (magnetic resonance imaging) require complex algorithms to deliver quick, accurate results, so reducing the required compute time is a primary challenge for manufacturers of CT and MRI equipment. Other significant challenges include the cost of the computers needed to achieve the necessary performance and the space those computers occupy.

Molecular Dynamics (MD) is a field of research in which a computer simulates the physical movements of atoms and molecules, and it relies heavily on computational power. The details of a simulation have to be chosen carefully so that the calculation can finish within a reasonable time while the simulated time span remains long enough for the results to be useful. For the conclusions to be statistically valid, the simulated span should match the kinetics of the natural process; you can’t draw conclusions about how humans walk by looking at only one step. For many MD applications, several CPU-days or even CPU-years are needed to process the simulations. Programs that run their algorithms in parallel distribute the computations among many CPUs, making more complicated, time-consuming simulations practical.
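The point about distributing work across CPUs can be illustrated with a small, hedged sketch: independent simulation replicas farmed out to separate cores with Python's multiprocessing module. The "replica" here is a toy single-particle integration, not a real MD engine, and all names are hypothetical.

```python
from multiprocessing import Pool

import numpy as np

def run_replica(seed, n_steps=10_000):
    """Toy stand-in for one molecular dynamics replica: integrate a single
    particle in a harmonic well and return its mean potential energy.
    A real MD code would evaluate forces among thousands of atoms."""
    rng = np.random.default_rng(seed)
    pos, vel, dt = rng.standard_normal(3), np.zeros(3), 1e-3
    energy = 0.0
    for _ in range(n_steps):
        force = -pos                      # harmonic force, F = -k*x with k = 1
        vel += force * dt                 # semi-implicit Euler step
        pos += vel * dt
        energy += 0.5 * np.dot(pos, pos)  # accumulate potential energy
    return energy / n_steps

if __name__ == "__main__":
    # Distribute independent replicas across CPU cores; longer or more
    # detailed simulations simply mean more work per replica.
    with Pool(processes=4) as pool:
        results = pool.map(run_replica, range(8))
    print("Mean potential energy per replica:", results)
```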

Oil and gas are becoming increasingly hard to find. Large reservoirs are now found at greater depths and in sediments that are much harder to analyze, like the recent Jack Field discovery in the Gulf of Mexico, which lies more than 20,000 feet below the sea floor. Interpreting and discovering these reservoirs requires acquiring and processing huge amounts of seismic data. And because of the complexity of the sediment layers, better image resolution is needed, which means acquiring even more data.

Traditionally, processing terabyte data sets required months of manual labor followed by more months of compute time for number crunching. Today, geophysicists can apply advanced filters to their data and see results almost instantly, even on multi-terabyte data sets, and they can analyze the original acquired (“pre-stack”) seismic data in multiple dimensions as part of their daily workflow.
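As a rough illustration of how such filtering can stay tractable on very large volumes, here is a hedged sketch (not any vendor's actual workflow) that applies a zero-phase bandpass filter to seismic traces block by block from a memory-mapped file, so memory use stays bounded even when the pre-stack data set runs to terabytes. The file layout, sampling rate, and corner frequencies are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_traces(traces, fs, low=5.0, high=60.0):
    """Zero-phase Butterworth bandpass along each trace (last axis).
    Corner frequencies are illustrative; real surveys tune them to the data."""
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, traces, axis=-1)

def filter_volume(path, shape, fs, chunk=1024):
    """Stream a (hypothetical) flat binary volume of float32 traces through
    the filter in blocks so only one block is in memory at a time."""
    data = np.memmap(path, dtype=np.float32, mode="r+", shape=shape)
    for start in range(0, shape[0], chunk):
        block = np.asarray(data[start:start + chunk])    # load one block
        data[start:start + chunk] = bandpass_traces(block, fs)
    data.flush()                                         # write results back
```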

Geospatial Intelligence (GeoInt) applications used by the military to create real-time maps of the battlefield require heavy compute acceleration to deliver the necessary data quickly. The military gathers vast amounts of information from a variety of sources, and that information must be manipulated to generate the 2D and 3D mapping required by field operations. Today these calculations are performed by specialized software running on GPU cards, coprocessors, or FPGA cards; GPU cards, with thousands of cores each, offload the number crunching and image processing from the CPUs.
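As a minimal sketch of what offloading image processing to a GPU looks like, the snippet below uses CuPy, a NumPy-like GPU array library chosen here purely for illustration; it is not the specialized software described above, and the tile size and filter are arbitrary.

```python
import numpy as np
import cupy as cp
from cupyx.scipy.ndimage import gaussian_filter

def smooth_tile_on_gpu(tile):
    """Offload a smoothing pass over one imagery tile to the GPU.
    Illustrative only, not the software the article refers to."""
    gpu_tile = cp.asarray(tile)                       # copy host -> device
    smoothed = gaussian_filter(gpu_tile, sigma=2.0)   # runs on the GPU's cores
    return cp.asnumpy(smoothed)                       # copy device -> host

tile = np.random.rand(4096, 4096).astype(np.float32)  # synthetic imagery tile
print(smooth_tile_on_gpu(tile).shape)
```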

Many of these applications use sophisticated software packages that provide effective approaches to meeting these challenges, but in most cases additional hardware advances are required to improve speed and accuracy. Three such advances have significantly helped to overcome these issues: the adoption of PCI Express (PCIe) over cable, the emergence of compute acceleration cards (GPUs), and PCIe flash storage cards. In the following articles, I’ll describe how these advances have helped increase the speed and accuracy of results for many of these applications.

This article was written by Mark Gunn, Senior Vice President, One Stop Systems.