Parallel File System Delivers Better Strategies, Faster

A parallel file system offers several advantages over a single direct-attached file system. By using fast, scalable, external disk systems with massively parallel access to data, researchers can perform analysis against much larger datasets than they can by batching large datasets through memory. To learn more about parallel file systems, download this guide.
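As a rough illustration of what "massively parallel access to data" can look like at the application level, the sketch below uses MPI-IO to let every rank read its own slice of one large file directly from the parallel file system, rather than staging the whole dataset through a single node's memory. The file name and the assumption that each slice fits in an int-sized count are hypothetical.

```c
/* Minimal MPI-IO sketch: each rank reads its own slice of one large file
 * directly from the parallel file system (file name and sizes are made up). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "dataset.bin",          /* hypothetical file */
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    MPI_Offset total;
    MPI_File_get_size(fh, &total);

    /* Divide the file into contiguous, non-overlapping slices, one per rank
     * (remainder bytes are ignored to keep the sketch short). */
    MPI_Offset slice  = total / nranks;
    MPI_Offset offset = (MPI_Offset)rank * slice;
    char *buf = malloc((size_t)slice);

    /* All ranks read their slice concurrently; a parallel file system can
     * serve these requests from many storage targets at once. */
    MPI_File_read_at_all(fh, offset, buf, (int)slice, MPI_BYTE, MPI_STATUS_IGNORE);

    printf("rank %d read %lld bytes at offset %lld\n",
           rank, (long long)slice, (long long)offset);

    free(buf);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```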

insideHPC Research Report on In-Memory Computing

To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: a scale-up design that allows multiple cores to share a large global pool of memory, and a scale-out design that distributes data sets across the memory of separate host systems in a computing cluster. To learn more about in-memory computing, download this guide from insideHPC and SGI.
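To make the scale-out side of that distinction concrete, here is a minimal sketch (not from the guide) in which the data set is partitioned across the memory of separate MPI processes, each host works on its local portion, and a reduction combines the partial results. The array size and contents are placeholders; on a scale-up system the same sum could instead be computed by threads over one shared array.

```c
/* Scale-out sketch: each MPI process holds part of the data set in its own
 * memory and computes a partial sum; MPI_Reduce combines the results. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each process allocates and fills only its share of the data set. */
    const long local_n = 1000000;            /* arbitrary local portion */
    double *local = malloc(local_n * sizeof *local);
    for (long i = 0; i < local_n; i++)
        local[i] = 1.0;                      /* stand-in for real data */

    double local_sum = 0.0, global_sum = 0.0;
    for (long i = 0; i < local_n; i++)
        local_sum += local[i];

    /* Combine the partial results across the cluster. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum over %d hosts: %.0f\n", nranks, global_sum);

    free(local);
    MPI_Finalize();
    return 0;
}
```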

Redefining Scalable OpenMP and MPI Price-to-Performance with Numascale’s NumaConnect

Using commodity hardware and the “plug-and-play” NumaConnect interconnect, Numascale delivers true shared memory programming and simpler administration at standard HPC cluster price points. Download this white paper to learn more.
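At the source level, the shared memory programming the paper refers to is expressed with ordinary OpenMP: every thread sees one global pool of memory, so a large array can be processed with a single parallel loop and no explicit data distribution or message passing. The sketch below is generic OpenMP with an arbitrary problem size, not Numascale-specific code.

```c
/* Generic OpenMP sketch of shared-memory programming: all threads operate on
 * one array in a single global address space (not Numascale-specific code). */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long n = 10000000;                 /* arbitrary size for illustration */
    double *a = malloc(n * sizeof *a);

    double sum = 0.0;

    /* One parallel loop; no message passing, no explicit data partitioning. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        a[i] = (double)i;
        sum += a[i];
    }

    printf("max threads: %d, sum: %.0f\n", omp_get_max_threads(), sum);
    free(a);
    return 0;
}
```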

Unleash your HPC Performance with Bull

This white paper identifies HPC performance inhibitors and presents best practices for avoiding them while optimizing energy efficiency. Download now to learn more.

Five Essential Strategies for a Successful HPC Cluster

A successful HPC cluster is a powerful asset for an organization, but these powerful racks are also a multifaceted resource to manage. If not properly managed, software complexity, cluster growth, scalability, and system heterogeneity can introduce project delays and reduce the overall productivity of an organization. In addition, cloud computing models and the processing of Hadoop workloads are emerging challenges that can stifle business agility if not properly implemented. The following essential strategies are guidelines for the effective operation of an HPC cluster resource. Download this insideHPC guide to learn more.

The insideHPC Guide to Flexible HPC

Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. For organizations from small manufacturing suppliers to national research institutions, significant computing technology is critical to creating innovative products and leading-edge research. No two HPC installations are the same. For maximum return, budget, software requirements, performance, and customization all must be considered before installing and operating a successful environment. To learn more, download this white paper.

Many-Task Computing for Grids

Many-task computing aims to bridge the gap between two computing paradigms: high-throughput computing and high-performance computing.
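Loosely speaking, a many-task workload is a large bag of short, independent jobs rather than one tightly coupled parallel program. The sketch below assumes a POSIX system and a hypothetical ./task executable: it simply forks one process per task and waits for them all, while real many-task frameworks add scheduling, throttling, and fault handling on top of this basic pattern.

```c
/* Minimal many-task sketch for POSIX systems: launch N independent tasks as
 * separate processes and wait for all of them. "./task" is hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    const int ntasks = 8;                    /* arbitrary task count */

    for (int i = 0; i < ntasks; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: run one independent task with its own input index. */
            char arg[16];
            snprintf(arg, sizeof arg, "%d", i);
            execl("./task", "task", arg, (char *)NULL);
            _exit(127);                      /* exec failed */
        }
    }

    /* Parent: wait for every task; the tasks never talk to each other. */
    int status;
    while (wait(&status) > 0)
        ;

    return 0;
}
```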

GPU Clusters for High-Performance Computing

Large-scale GPU clusters are gaining popularity in the scientific computing community. However, their deployment and production use are associated with a number of new challenges. In this paper, we present our efforts to address some of the challenges with building and running GPU clusters in HPC environments. We touch upon such issues as balanced cluster […]

Drop Testing with Dell, Intel and Altair

Impact analysis, or drop testing, is one of the most important stages of product design and development, and software that can simulate this testing accurately yields dramatic cost and time-to-market benefits for manufacturers. Dell, Intel and Altair have collaborated to analyze a virtual drop test solution with integrated simulation and optimization analysis, delivering proven gains in speed and accuracy. With this solution, engineers can explore more design alternatives.

PRIMEHPC FX10 Fujitsu Supercomputer

Fujitsu developed the first Japanese supercomputer in 1977. In the thirty-plus years since then, we have been leading the development of supercomputers with the application of advanced technologies. We now introduce the PRIMEHPC FX10, a state-of-the-art supercomputer that makes the petascale computing achieved by the “K computer” (*1) more accessible.