Paradigm and DDN: Achieving the Ultimate Efficiency for Seismic Data Analysis

For this report, DDN performed a series of experimental benchmarks to determine optimal I/O rates for Paradigm Echos application workloads. The report presents results from I/O-intensive Echos micro-benchmarks to illustrate the performance benefits of DDN GRIDScaler, and provides detail to aid optimal job packing in 40G Ethernet clusters. Download this guide to see the results.

Parallel File System Delivers Better Strategies, Faster

A parallel file system offers several advantages over a single direct-attached file system. By using fast, scalable, external disk systems with massively parallel access to data, researchers can perform analysis against much larger datasets than they can by batching large datasets through memory. To learn more about parallel file systems, download this guide.

In-Memory Data Grids

This white paper provides an overview of in-memory computing technology with a focus on in-memory data grids. It discusses the advantages and uses of in-memory data grids and introduces the GridGain In-Memory Data Fabric. Download this guide to learn more.

Accelerating the speed and accessibility of artificial intelligence technologies

As AI technologies become faster and more accessible, the computing community will be positioned to help organizations achieve the levels of efficiency needed to solve the world's most complex problems and to increase safety, productivity, and prosperity. To learn more about AI technologies, download this white paper.

insideHPC Research Report on In-Memory Computing

To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: a scale-up design, in which multiple cores share a large global pool of memory, and a scale-out design, which distributes data sets across the memory of separate host systems in a computing cluster. To learn more about in-memory computing, download this guide from insideHPC and SGI.
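The scale-out methodology described above can be illustrated with a toy sketch: keys are hashed to one of several hosts, so the aggregate dataset lives in the combined memory of the cluster rather than in a single shared pool. The node names and partitioning scheme below are hypothetical, not taken from any product mentioned here.

```python
# Toy sketch of scale-out, in-memory data placement across a
# hypothetical three-node cluster. Real in-memory data grids use
# similar hash-based sharding, plus replication and rebalancing.

NODES = ["node-a", "node-b", "node-c"]  # hypothetical cluster hosts

# Each node's in-memory store, keyed by node name.
stores = {node: {} for node in NODES}

def owner(key: str) -> str:
    """Deterministically map a key to the node that holds it."""
    return NODES[hash(key) % len(NODES)]

def put(key: str, value) -> None:
    stores[owner(key)][key] = value

def get(key: str):
    return stores[owner(key)].get(key)

# Spread a dataset across the cluster's combined memory.
for i in range(1000):
    put(f"record-{i}", i * i)

# Every record is reachable, but no single node holds the whole set.
assert get("record-42") == 42 * 42
assert sum(len(s) for s in stores.values()) == 1000
```

A scale-up system, by contrast, would keep all 1000 records in one shared store; the trade-off is that scale-out capacity grows by adding hosts, at the cost of a routing step on every access.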

insideHPC Research Report: Are FPGAs the Answer to the “Compute Gap”?

With the deluge of new data from new sources, it isn’t surprising to find that data centers are running short on compute capacity. In this research report, we explore the world of accelerators, primarily FPGAs, to see if they’re the right answer to fill the “compute gap.”

Drivers and Barriers to Using HPC in the Cloud

This executive briefing is a preliminary report from a larger study on demand-side barriers and drivers of cloud computing adoption for HPC. A more comprehensive report and analysis will be published later in 2016. From June to August 2016, the CloudLightning project surveyed over 170 discrete HPC end users worldwide in the academic, commercial, and government sectors on their HPC use, their perceived drivers and barriers to using cloud computing, and their uses of cloud computing for HPC.

Redefining Scalable OpenMP and MPI Price-to-Performance with Numascale’s NumaConnect

Using commodity hardware and the “plug-and-play” NumaConnect interconnect, Numascale delivers true shared memory programming and simpler administration at standard HPC cluster price points. Download this white paper to learn more.

The Cray XC Supercomputer Series: Energy Efficient Computing

As global energy costs climb, Cray has taken its long-standing expertise in optimizing power and cooling and focused it on developing overall system energy efficiency. The resulting Cray XC supercomputer series integrates into modern datacenters and achieves high levels of efficiency while minimizing system and infrastructure costs. To learn more, download this white paper.