HPC in Seismic Processing, Interpretation and Reservoir Modeling

Sponsored Post

Katie (Garrison) Rivera, Marketing Communications, One Stop Systems

The majority of today’s oil comes from older reservoirs where the easy-to-reach oil has already been extracted, so oil and gas companies must drill deeper and use more advanced techniques. As sensor technology advances, the volume of seismic data that needs to be processed and interpreted grows with it. To keep up, oil and gas companies are using cutting-edge technology, like high performance computing (HPC) servers, compute accelerators and flash storage arrays, for applications such as seismic processing, seismic interpretation and reservoir modeling. In the past, these huge seismic datasets took weeks or months to process, slowing down the entire oil and gas operation.

“Oil and gas companies use complex algorithms to process, analyze and visualize huge seismic datasets,” states Steve Briggs, former VP of Systems Integration at Headwave, Inc. “Processing these terabyte datasets traditionally required months of manual labor and more months of compute time for number crunching.”

Today, seismic processing and interpretation can be sped up with GPUs, which excel at the parallel computations these applications rely on. Huge speed increases have been achieved using NVIDIA Tesla GPUs, and many oil and gas companies already use GPUs for parallel processing of large seismic datasets. Because the algorithms were designed with GPUs in mind, they can take advantage of the additional compute power that comes from adding even more GPUs. The OSS High Density Compute Accelerator can accommodate up to 16 NVIDIA Tesla K80 GPUs, adding 140 TFLOPS of compute power to servers used for seismic processing and interpretation.
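To illustrate why these workloads map so naturally onto GPUs, the sketch below performs a single finite-difference time step of 2D acoustic wave propagation, the kind of stencil computation at the heart of seismic imaging algorithms such as reverse-time migration. This is a minimal illustration, not code from OSS or NVIDIA; it assumes the CuPy library so that NumPy-style array expressions execute on an NVIDIA GPU, where every grid point updates in parallel.

```python
# Minimal sketch: one explicit finite-difference time step of the 2D
# acoustic wave equation, a stencil pattern typical of seismic imaging
# kernels. Assumes CuPy so the array math runs on an NVIDIA GPU; swap
# "cupy" for "numpy" to run the identical code on the CPU for comparison.
import cupy as xp

def wave_step(p_curr, p_prev, vel, dt, dx):
    """Advance the pressure field one time step (second order in time and space)."""
    lap = (
        xp.roll(p_curr, 1, axis=0) + xp.roll(p_curr, -1, axis=0) +
        xp.roll(p_curr, 1, axis=1) + xp.roll(p_curr, -1, axis=1) -
        4.0 * p_curr
    ) / dx**2
    # Every grid point is updated independently -- ideal for GPU parallelism.
    return 2.0 * p_curr - p_prev + (vel * dt) ** 2 * lap

# Illustrative run: a 4096 x 4096 grid (~16.7M points) updated on the GPU.
n = 4096
p_curr = xp.zeros((n, n), dtype=xp.float32)
p_prev = xp.zeros((n, n), dtype=xp.float32)
p_curr[n // 2, n // 2] = 1.0                      # point source at grid center
vel = xp.full((n, n), 3000.0, dtype=xp.float32)   # uniform 3000 m/s velocity model
p_next = wave_step(p_curr, p_prev, vel, dt=5e-4, dx=10.0)
```

In production imaging codes this update runs millions of times across many GPUs, which is why adding accelerators translates so directly into shorter turnaround.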

Reservoir modeling is another application that benefits from the use of GPUs. Reservoir modeling consists of constructing a computer model of a petroleum reservoir in order to improve the estimate of the petroleum in place and to aid in the development of the oil and gas field. A reservoir model represents the reservoir’s physical space as an array of discrete compartments, demarcated by either a regular or irregular grid. The model is usually three-dimensional, though one- and two-dimensional models are sometimes used. Each compartment holds many values representing properties of the reservoir, such as its porosity, its permeability and its water saturation. These models are very detailed and contain large amounts of data. A typical server used for reservoir modeling may have multiple CPUs, but at the data volumes this field demands, GPUs can process the data much more quickly.
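As a concrete illustration of the data volumes involved, the sketch below builds a toy reservoir grid as one flat array per property. The grid dimensions, property names and value ranges are illustrative assumptions, not figures from any real field model; the structure-of-arrays layout is shown because contiguous per-property arrays are the form that transfers efficiently to GPU memory for bulk processing.

```python
# Minimal sketch of a reservoir model grid: a 3D array of compartments,
# each carrying per-cell properties. All dimensions and value ranges are
# illustrative assumptions.
import numpy as np

nx, ny, nz = 500, 500, 200   # 50 million cells on a regular grid

# Structure-of-arrays layout: one contiguous array per property.
porosity  = np.random.uniform(0.05, 0.35, size=(nx, ny, nz)).astype(np.float32)
perm_md   = np.random.lognormal(3.0, 1.0, size=(nx, ny, nz)).astype(np.float32)  # millidarcies
water_sat = np.random.uniform(0.1, 0.9, size=(nx, ny, nz)).astype(np.float32)

# Even this modest model is sizeable: three float32 properties alone
# occupy 3 * 50e6 * 4 bytes = 600 MB before any simulation output.
total_bytes = sum(a.nbytes for a in (porosity, perm_md, water_sat))
print(f"{nx * ny * nz / 1e6:.0f}M cells, {total_bytes / 1e9:.1f} GB of static properties")
```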

Reservoir modeling also involves uncertainty, so it is often necessary to create several different models, and storing these huge datasets requires a large amount of storage. The One Stop Systems Flash Storage Array (FSA) has over 200TB of flash storage capacity. Adding the FSA to any server dramatically increases its storage capacity, allowing for a high volume of reservoir modeling data.
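A back-of-the-envelope calculation shows how quickly the storage requirement grows once uncertainty forces an ensemble of models. Every input below (grid size, property count, snapshot count, number of realizations) is an illustrative assumption chosen to show the scale, not a specification from One Stop Systems or any particular study.

```python
# Back-of-the-envelope storage sizing for an ensemble of reservoir models.
# All inputs are illustrative assumptions.
cells          = 100_000_000  # grid compartments per model
props_per_cell = 8            # porosity, permeability, saturations, pressure, ...
bytes_per_val  = 4            # float32
snapshots      = 365          # saved time steps over the simulated period
realizations   = 100          # ensemble size to capture geological uncertainty

bytes_total = cells * props_per_cell * bytes_per_val * snapshots * realizations
print(f"Ensemble output: {bytes_total / 1e12:.0f} TB")   # -> 117 TB
```

Even with these conservative assumptions, a single ensemble study approaches the capacity of a 200TB array, which is why flash storage at that scale matters for this workload.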

As technology continues to improve, the datasets produced by applications such as seismic processing, seismic interpretation and reservoir modeling will become even larger and require even more compute power to process quickly. HPC build-outs are relying increasingly on GPUs to offload computationally intensive tasks, and GPU technology continues to advance to keep pace with those growing demands.

This guest article was written by Katie (Garrison) Rivera, Marketing Communications at One Stop Systems.