Unlocking the Power of Parallel Coding to Achieve Better Performance in Multi-Core Environments

A number of different frameworks and standards can be employed for parallel coding, and the most suitable choice depends on the purpose of the application, its overall requirements, and the target execution environment. Selecting the right framework, weighing factors such as available memory, overheads, control, and support, is imperative to obtaining the best possible performance increase.
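As one concrete illustration of the shared-memory end of this spectrum, here is a minimal OpenMP sketch in C. It is not drawn from any particular application discussed here; the loop size and arithmetic are arbitrary, and it assumes a compiler with OpenMP support (for example, gcc -fopenmp).

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* The reduction clause gives each thread a private partial
           sum and combines them safely when the loop completes. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 0.5 * i;

        printf("sum = %.1f using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }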

insideHPC Research Report on In-Memory Computing

To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: a scale-up design, which allows multiple cores to share a large global pool of memory, and a scale-out design, which distributes data sets across the memory of separate host systems in a computing cluster. To learn more about in-memory computing, download this guide from insideHPC and SGI.
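To make the contrast concrete, the following is a minimal MPI sketch of the scale-out model, an illustration rather than code from the guide; the data sizes are arbitrary and it assumes an MPI implementation such as MPICH or Open MPI. Each rank holds only its own slice of the data in local memory, and results are combined by explicit message passing.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank owns a local slice of the data set -- the
           essence of the scale-out model. */
        const int local_n = 1000000;
        double local_sum = 0.0;
        for (int i = 0; i < local_n; i++)
            local_sum += (double)rank * local_n + i;

        /* Partial results are combined across hosts with an
           explicit message-passing step. */
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                   MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %f across %d ranks\n", global_sum, size);

        MPI_Finalize();
        return 0;
    }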

The Cray CS300 Cluster's Warm Water Cooling Is at the Forefront of an HPC Industry Trend

This Technology Spotlight reviews the liquid cooling trend and the innovative use of warm water cooling in the Cray CS300 cluster supercomputer to reduce capital expense and operating costs. Download this white paper to learn more.

The Fujitsu PRIMEHPC FX10 Supercomputer

Fujitsu developed the first Japanese supercomputer in 1977. In the more than thirty years since, we have led the development of supercomputers through the application of advanced technologies. We now introduce the PRIMEHPC FX10, a state-of-the-art supercomputer that makes the petascale computing achieved by the “K computer” more accessible.

Parallel Storage Solutions for Better Performance

Using high-performance parallel storage solutions, geologists and researchers can now incorporate larger data sets and execute more seismic and reservoir simulations faster than ever before, enabling higher-fidelity geological analysis and significantly reduced exploration risk. Given the high cost of exploration, oil and gas companies are increasingly turning to high-performance DDN storage solutions that eliminate I/O bottlenecks and minimize risk and cost while delivering a larger number of higher-fidelity simulations in the same time as traditional storage architectures.
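For readers curious how applications actually drive such parallel storage, here is a minimal, illustrative MPI-IO sketch in which every rank writes its own slice of a data set to a single shared file at disjoint offsets; the file name and buffer size are invented for the example, and DDN's specific software stack is not shown.

    #include <stdio.h>
    #include <mpi.h>

    #define LOCAL_N 1048576  /* doubles per rank; illustrative size */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each rank fills its own slice of the volume. */
        static double slice[LOCAL_N];
        for (int i = 0; i < LOCAL_N; i++)
            slice[i] = rank + i * 1e-6;

        /* All ranks write to one shared file concurrently; a
           parallel file system services the requests in parallel
           instead of funneling them through a single server. */
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "volume.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        MPI_Offset offset = (MPI_Offset)rank * LOCAL_N * sizeof(double);
        MPI_File_write_at_all(fh, offset, slice, LOCAL_N, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }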

SAS Analytics Using Direct Memory Access

Using Remote Direct Memory Access (RDMA) based analytics and fast, scalable, external disk systems with massively parallel access to data, SAS analytics-driven organizations can deliver timely and accurate execution for data-intensive workflows such as risk management, while incorporating larger data sets than with traditional NAS.
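As a rough sketch of the mechanism underneath such systems, the fragment below registers a buffer with an RDMA-capable NIC through the standard libibverbs API; registration pins the memory so a remote peer can read it directly, with no CPU involvement on the host. This is only the first step of an RDMA exchange: queue-pair setup and the actual transfer are deliberately omitted, and error handling is minimal.

    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void) {
        /* Enumerate RDMA-capable devices (e.g., InfiniBand HCAs). */
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA device found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register (pin) a buffer so the NIC can access it directly,
           bypassing the CPU -- the zero-copy core of RDMA. */
        size_t len = 1 << 20;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
            IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ);
        if (!mr) {
            fprintf(stderr, "memory registration failed\n");
            return 1;
        }

        /* A remote peer would use mr->rkey plus the buffer address
           to issue RDMA reads against this region. */
        printf("registered %zu bytes, rkey=0x%x\n",
               len, (unsigned)mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }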

The insideHPC Guide to Co-Design Architecture

Co-design and offloading are important tools for achieving Exascale computing. Application developers and system designers can take advantage of network offload and emerging co-design protocols to accelerate their current applications. Applying basic co-design and offloading methods to smaller-scale systems can extract more performance from less hardware, resulting in lower cost and higher throughput. Learn more by downloading this guide.
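One small, concrete instance of offloading is a non-blocking MPI collective, which an offload-capable interconnect can progress in hardware while the host keeps computing. The sketch below, an illustration under MPI-3 rather than code from the guide, overlaps an MPI_Iallreduce with independent work; the loop standing in for that work is arbitrary.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = (double)rank, global = 0.0;
        MPI_Request req;

        /* Start the reduction; an offload-capable network can
           progress it in hardware while the CPU computes. */
        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* Independent work overlapped with the in-flight collective. */
        double busy = 0.0;
        for (int i = 0; i < 1000000; i++)
            busy += i * 1e-9;

        MPI_Wait(&req, MPI_STATUS_IGNORE);
        if (rank == 0)
            printf("reduced value = %f (overlap work = %f)\n",
                   global, busy);

        MPI_Finalize();
        return 0;
    }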