

Accelerate MySQL for Demanding OLAP and OLTP Use Cases with Apache Ignite

MySQL is a widely used, open source relational database management system (RDBMS) and an excellent solution for many applications, including web-scale applications. To learn more about accelerating MySQL for demanding OLAP and OLTP use cases with Apache Ignite, download this guide.

insideBIGDATA Guide to Use of Big Data on an Industrial Scale

In this document, our focus is on “industrializing” big data infrastructure—bringing operational maturity to the Hadoop data ecosystem, making it easier and cost-effective to deploy at enterprise scale, and moving companies from the proof of concept stage into production-ready deployments. Download this Guide to Big Data on an Industrial Scale to learn more.

Nine Critical Features for Object Stores

Object stores represent a simpler, more scalable solution, one that is easily accessed over standard web-based protocols. To learn more about object stores, download this guide.

Paradigm and DDN: Achieving the Ultimate Efficiency for Seismic Data Analysis

For this report, DDN performed a number of experimental benchmarks to attain optimal IO rates for Paradigm Echos application workloads. It presents results from IO-intensive Echos micro-benchmarks to illustrate the performance benefits of DDN GRIDScaler and provides detail to aid optimal job packing in 40G Ethernet clusters. To find out the results, download this guide.

Parallel File System Delivers Better Strategies, Faster

A parallel file system offers several advantages over a single direct-attached file system. By using fast, scalable, external disk systems with massively parallel access to data, researchers can perform analysis against much larger datasets than they can by batching large datasets through memory. To learn more about parallel file systems, download this guide.

Exascale: A race to the future of HPC

As exponential data growth reshapes industry, engineering, and scientific discovery, success has come to depend on the ability to analyze and extract insight from incredibly large data sets. Exascale computing will allow us to process data, run systems, and solve problems at a totally new scale, which will become vitally important as problems grow ever larger and more difficult. Our unmatched ability to bring new technology to the mainstream will provide systems that are markedly more affordable, usable, and efficient at handling growing workloads. To learn more, download this white paper.

Redefining Scalable OpenMP and MPI Price-to-Performance with Numascale’s NumaConnect

Using commodity hardware and the “plug-and-play” NumaConnect interconnect, Numascale delivers true shared memory programming and simpler administration at standard HPC cluster price points. Download this white paper to learn more.

Satisfying NCSA appetite for performance and insight

When the National Center for Supercomputing Applications (NCSA) was created at the University of Illinois 27 years ago, it had a unique proposition: its computing, data, and networking resources were designed for industry as well as academia. Over the years, NCSA's efforts to serve industry have grown and matured. Download this white paper to learn more.

HPC in the data center

The Central Processing Unit (CPU) has been at the heart of High Performance Computing (HPC) for decades. However, in recent years, advances in parallel processing technology mean the landscape has changed dramatically. To learn more download this white paper.

Big Workflow: More than Just Intelligent Workload Management for Big Data

Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. To learn more, download this white paper.