The SGI Data Management Framework for Personalized Medicine

SGI’s Data Management Framework (DMF) software, when used within personalized medicine applications, provides a large-scale storage virtualization and tiered data management platform engineered to administer the billions of files and petabytes of structured and unstructured fixed content generated by highly scalable, highly dynamic life sciences applications.
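
Tiered data management is easiest to picture as a policy engine that migrates files between fast and inexpensive storage based on access patterns. The following is a minimal, hypothetical Python sketch of such a policy. It is not DMF’s actual interface (DMF manages tiers transparently behind a single namespace), and the tier paths and age threshold are illustrative assumptions.

import os
import shutil
import time

# Hypothetical tier locations; a real tiered system virtualizes these
# behind one namespace rather than exposing literal directory moves.
FAST_TIER = "/mnt/ssd/active"
ARCHIVE_TIER = "/mnt/tape_cache/archive"
AGE_THRESHOLD = 90 * 24 * 3600  # migrate files not accessed in 90 days

def migrate_cold_files(src=FAST_TIER, dst=ARCHIVE_TIER, max_age=AGE_THRESHOLD):
    """Move files whose last access time exceeds max_age to the archive tier."""
    now = time.time()
    for name in os.listdir(src):
        path = os.path.join(src, name)
        if os.path.isfile(path) and now - os.stat(path).st_atime > max_age:
            shutil.move(path, os.path.join(dst, name))

if __name__ == "__main__":
    migrate_cold_files()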

NVIDIA Tesla P100 GPU Review

Accelerated computing continues to gain momentum as the HPC community moves toward exascale. Our recent Tesla P100 GPU review shows how these accelerators deliver substantial performance gains over traditional CPU-based systems, and even over NVIDIA’s previous-generation K80 GPU. We’ve got benchmarks, case studies, and more in the insideHPC Research Report on GPU Accelerators.

FPGA Myths

Data center sprawl is now understood to be expensive, and adding hardware does not deliver performance increases for all types of applications, so new technologies are coming to the rescue. A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing, hence “field-programmable”. While the use of GPUs and HPC accelerators is generally understood today, a number of misconceptions about FPGAs need to be dispelled.

High-Throughput Genomic Sequencing Workflow

A workflow to support genomic sequencing requires a collaborative effort among many research groups and a well-defined process from initial sampling to final analysis. Learn the four steps involved in pre-processing.
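
As a rough illustration only: a common pre-processing flow runs quality control, aligns reads to a reference, sorts the alignments, and marks duplicates. The Python sketch below chains standard open-source tools; the tool choices (FastQC, BWA, samtools, Picard) and file paths are assumptions for illustration, not necessarily the exact four steps the workflow describes.

import subprocess

# Hypothetical inputs; substitute real paths for your data.
READS = "sample.fastq"
REFERENCE = "hg38.fa"

def run(cmd):
    """Run one pipeline stage, stopping the workflow if it fails."""
    print(">>", cmd)
    subprocess.run(cmd, shell=True, check=True)

# A typical pre-processing sequence (assumed; see the workflow for
# the exact steps it describes):
run(f"fastqc {READS}")                                   # 1. quality control
run(f"bwa mem {REFERENCE} {READS} > aligned.sam")        # 2. align to reference
run("samtools sort -o aligned.sorted.bam aligned.sam")   # 3. sort alignments
run("picard MarkDuplicates I=aligned.sorted.bam "
    "O=dedup.bam M=dup_metrics.txt")                     # 4. mark duplicates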

Can FPGAs Help You?

FPGAs will become increasingly important for organizations with a wide range of applications that can benefit from performance increases. Rather than taking the brute-force approach of purchasing and maintaining additional racks of hardware, with all their associated costs, organizations may find that FPGAs can equal or exceed the performance of additional servers while also reducing costs.

GPU Accelerators in Today’s Data Center: Performance & Efficiency

NVIDIA is a leading provider of the GPU accelerators used in many high performance computing environments. This research paper from Gabriel Consulting Group explains the need for this new generation of hardware in today’s data center and examines which new technologies actual users are asking for.

Exascale – A Race to the Future of HPC

From megaflops to gigaflops to teraflops to petaflops, and soon to exaflops (10^18 floating-point operations per second), the march of HPC performance moves ever ahead. This whitepaper details some of the technical challenges that must be addressed in the coming years to reach exascale computing.

Next Generation Sequencing

With a massive surge in genomics research, the ability to quickly process very large amounts of data is now required of any organization involved in genomics. While the cost of sequencing has dropped significantly, the amount of data produced has increased as well. This article describes next generation sequencing and how a combination of hardware and innovative software can decrease the time needed to sequence genomes.

Co-design for Data Analytics And Machine Learning

The big data analytics market has seen rapid growth in recent years. Part of this trend is the increased use of machine learning (deep learning) technologies; indeed, machine learning speed has been drastically increased through the use of GPU accelerators. The issues facing the HPC market are similar to those facing the analytics market: efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.

Network Co-design as a Gateway to Exascale

Achieving better scalability and performance at exascale will require full data reach: the ability to analyze data wherever it resides in the system. Without this capability, onload architectures force all data to move to the CPU before any analysis can take place. When data can be analyzed everywhere, every active component in the cluster contributes to the computing capability and boosts performance. In effect, the interconnect becomes its own “CPU” and provides in-network computing capabilities.
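
For a concrete sense of the distinction: in an onload design, a collective operation such as a global sum pulls every rank’s data through the host CPUs; with in-network computing, the same operation can complete inside the fabric itself. The mpi4py sketch below (a minimal illustration, assuming mpi4py and NumPy are installed) issues a standard allreduce. Whether the reduction runs on the CPUs or in the network is a property of the interconnect and MPI library, not of the code, which is precisely the appeal of a co-designed stack.

# Requires mpi4py and an MPI launcher, e.g.: mpirun -n 4 python allreduce_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a local partial result.
local = np.full(1_000_000, rank, dtype=np.float64)
total = np.empty_like(local)

# A single collective call: on an onload architecture the reduction is
# computed by the host CPUs; on an offload-capable fabric the switches
# can perform it in-network, transparently to this code.
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print("sum of ranks:", total[0])  # 0 + 1 + ... + (n-1)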