Big Energy Breaks New Ground in Supercomputing

Meike Chabowski, Product Marketing Manager for Enterprise Linux Servers at SUSE

In this special guest feature, Meike Chabowski from SUSE describes how the latest generation of high performance computing keeps us going by finding new energy reserves.

Ever wonder how the world’s energy reserves are discovered? With the global oil supply being consumed four times faster than it is currently being found, supercomputers have become an integral component in the discovery of new energy reserves, making the field of oil exploration and development much more competitive.

The importance of supercomputers has been decades in the making. The past 25 years have seen significant changes in high performance computing, driven at least in part by the emergence of open source software and new clustering technologies. One to two decades ago, UNIX variants such as AIX, HP-UX, Tru64 UNIX, Solaris, Digital UNIX and Irix ruled the field, and building supercomputers by clustering independent, commodity-class machines was still a controversial idea as recently as 15-20 years ago.

Today, energy companies are among the world leaders in commercial supercomputing. Companies like Total are utilizing high performance computing (HPC) to deliver an optimal combination of performance, price and efficiency. Total's supercomputer "Pangea," for example, delivers 10 times the computing capacity of the system it replaced, helping the company identify and exploit new reserves more effectively. Designed by SGI (Silicon Graphics International), Pangea has a computing capacity of 2.3 petaflops. Its architecture is based on more than 110,000 compute cores, 7 PB of storage and an innovative cooling system whose circuit is integrated with the processors. The system requires 2.8 MW of electric power, and the heat it generates is recovered, making it possible to heat the entire Scientific and Technical Centre.
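To give a rough sense of what those headline figures imply, the ratios below are simple arithmetic on the numbers quoted above (2.3 petaflops, roughly 110,000 cores, 2.8 MW). They are back-of-envelope estimates only, not specifications published by Total or SGI.

```python
# Back-of-envelope ratios derived from the Pangea figures quoted in the text.
# These are illustrative estimates, not published specifications.
peak_flops = 2.3e15        # 2.3 petaflops
cores      = 110_000       # "over 110,000 calculation cores"
power_w    = 2.8e6         # 2.8 MW of electric power

print(f"per-core performance : {peak_flops / cores / 1e9:.1f} Gflops")   # ~20.9 Gflops
print(f"energy efficiency    : {peak_flops / power_w / 1e9:.2f} Gflops per watt")  # ~0.82
```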

As finding new energy resources becomes ever more complex, challenging and demanding, the use of HPC technologies and supercomputers, as the example from Total shows, has become key to enterprise success in three critical ways:

  • Discovery. Historically, companies found energy resources through test drilling, "simple" geological analysis and two-dimensional computational calculations. Yet oil and gas production grew exponentially throughout the 20th century, and the easily accessible reservoirs have largely been exploited. New fields are more difficult to discover and to access, and newer oil fields are increasingly located in geologically complex areas where traditional data processing techniques do not work.
    • Oil and gas companies use HPC technology to minimize the time and cost of processing large amounts of data and to speed up production. In the search for and exploitation of new energy reserves, the industry uses methods such as "reservoir simulation," in which information about a particular location is analyzed and visualized to determine how the maximum amount of resources can be extracted. Supercomputers then run computationally intensive seismic algorithms such as Kirchhoff time and depth migration (KTM/KDM), wave equation migration (WEM), spectral inversion and reverse time migration (RTM). This requires a massive amount of computing power: a single analysis to locate a subsea oil reservoir produces approximately 10 TB of data, and each byte needs roughly 10⁶ operations to evaluate (a back-of-envelope sketch of this load follows the list below). The end result is about 10 times that amount of data, which subsequently has to be visualized. Since the processing and visualization of this data is time-critical, high performance computing is essential to the company's business success.
  • Efficiency. Companies in the energy industry operate under uniquely demanding, competitive conditions where speed is paramount. Analysis that takes months to complete risks running past the lease term of prospective well sites. Further, delays in analysis at existing sites can idle equipment and personnel, increasing operating costs.
    • In this high-stakes environment, where energy companies must place informed bets on the presence and character of prospective oil and gas deposits, the ultimate leverage is information. Energy companies conduct detailed seismic surveys, deploying thousands of arrayed sensors to capture precise data via reflections of seismic waves through subsurface structures. These surveys allow companies to bolster the efficiency of fracking and the unconventional horizontal drilling methods used today. It is one thing to survey a field and capture the acoustic and seismic data that comes back. It is quite another to transform that raw data into actionable insight that enables oil and gas companies to make good decisions about where and how to drill.
    • By utilizing high performance computers, these companies can quickly and accurately produce better predictions of oil reserve locations and volumes. A stable, consistent platform lets them crunch as much data as possible. The greater processing power even allows seismic data to reveal underground areas previously invisible to scientific models, which helps maximize output while reducing an unnecessary and wasteful drilling footprint.
  • Ingenuity. As supercomputing has become integral to the oil and gas market, it has pushed companies to develop new cooling methods and other energy-efficient techniques so that their supercomputers consume less energy. Virtually every industry has adopted Linux clusters to attain the performance improvements needed to deliver on organizational goals, and the oil and gas industry is one of the key drivers for the development and adoption of new HPC technologies. These companies were (and are) using Linux for their HPC requirements because Linux on a cluster of x86 servers is more economical. Linux clusters have also become easy to set up and simple to manage. More importantly, there are many resources available for HPC on Linux, many of them free (a minimal cluster sketch follows this list).
    • More recently, parallel processing and storage have enabled successive generations of seismic imaging and reservoir simulation techniques, together with increased scalability and reduced cost. The challenge now is to increase overall HPC throughput and productivity while overcoming physical limits, whether electrical power, heat dissipation, footprint, cost or a combination of all of these factors. Likewise, increased performance depends on optimizing the application software to exploit hardware scalability and generate faster turnaround times. For these reasons, the oil and gas industry is looking closely at accelerator technologies to speed up the parts of the application code that are suitable for parallelization. Many-core technologies show strong potential for accelerating these parts of the code, but they require programming changes (see the vectorization sketch after this list).
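The scale of the processing load mentioned under "Discovery" can be sanity-checked with the figures quoted in the text: roughly 10 TB of recorded data at roughly 10⁶ operations per byte. The ideal runtime on a Pangea-class machine below is my own arithmetic and deliberately ignores I/O, communication and efficiency losses.

```python
# Back-of-envelope estimate of one subsea-reservoir analysis, using the
# figures quoted in the article. Ideal scaling only; real runs are slower.
survey_bytes  = 10e12      # ~10 TB of acquired seismic data
ops_per_byte  = 1e6        # ~10**6 operations per byte to evaluate
machine_flops = 2.3e15     # Pangea-class sustained rate, assumed fully usable

total_ops = survey_bytes * ops_per_byte            # 1e19 operations
seconds   = total_ops / machine_flops
print(f"total operations : {total_ops:.1e}")
print(f"ideal runtime    : {seconds / 3600:.1f} hours")   # roughly 1.2 hours
```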
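As a minimal illustration of the "Ingenuity" point about free HPC resources on Linux clusters, the sketch below distributes a batch of seismic shot gathers across the ranks of a cluster. It assumes mpi4py and an MPI runtime such as Open MPI are installed; neither tool, nor the shot-gather framing or the function names, comes from the article, and the processing kernel is a placeholder.

```python
# Hypothetical sketch: round-robin distribution of shot gathers across an
# MPI-based Linux cluster using the freely available mpi4py library.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N_SHOTS = 1000                            # hypothetical number of shot gathers
my_shots = range(rank, N_SHOTS, size)     # round-robin assignment to this rank

def process_shot(shot_id: int) -> float:
    """Stand-in for a real migration kernel; returns a dummy per-shot metric."""
    data = np.random.default_rng(shot_id).standard_normal(1_000)
    return float(np.sum(data ** 2))

local_results = [process_shot(s) for s in my_shots]
all_results = comm.gather(local_results, root=0)

if rank == 0:
    total = sum(len(part) for part in all_results)
    print(f"processed {total} shots on {size} ranks")
```

Launched with, for example, `mpirun -n 8 python process_shots.py`, each rank handles its own slice of the survey independently, which is the embarrassingly parallel pattern that made commodity Linux clusters attractive for this workload.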
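The article does not prescribe a specific accelerator API, so the sketch below only illustrates the kind of programming change involved: a scalar loop over a simple trace-stacking step is rewritten as a whole-array, data-parallel operation, which is the restructuring typically needed before such a kernel can be offloaded to many-core hardware (for example via CUDA or OpenCL). The array sizes and function names are invented for the example.

```python
# Illustrative only: contrasting a scalar loop with a data-parallel
# formulation of a simple trace-stacking step.
import time
import numpy as np

rng = np.random.default_rng(0)
traces = rng.standard_normal((2_000, 1_500))   # hypothetical (traces, samples)

def stack_loop(data):
    """Scalar formulation: one sample at a time, hard to accelerate as written."""
    n_traces, n_samples = data.shape
    out = np.zeros(n_samples)
    for t in range(n_traces):
        for s in range(n_samples):
            out[s] += data[t, s]
    return out / n_traces

def stack_vectorized(data):
    """Data-parallel formulation: each column's reduction is independent work."""
    return data.mean(axis=0)

t0 = time.perf_counter(); a = stack_loop(traces);       t1 = time.perf_counter()
b = stack_vectorized(traces);                           t2 = time.perf_counter()
assert np.allclose(a, b)
print(f"loop: {t1 - t0:.2f}s  vectorized: {t2 - t1:.4f}s")
```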

HPC’s future continues to brighten as companies see the benefits of investing in supercomputers. In the coming years supercomputing will become an essential technology across more industries. The U.S. Department of Energy is currently working on exascale supercomputers capable of 1,000 petaflops (one exaflop) of sustained performance, for industries that may adopt HPC in the coming decade. For the time being, the current supercomputers of the world provide the power necessary to uncover the discoveries hidden in the data.
