While some domains that rely on computing systems are satisfied with the performance gains continuously delivered by processor manufacturers, others require performance that can only be attained through massive parallelism. High-performance computing serves those who need the highest levels of performance to solve many of today’s most difficult scientific challenges.
In this document, our focus is on “industrializing” big data infrastructure: bringing operational maturity to the Hadoop data ecosystem, making it easier and more cost-effective to deploy at enterprise scale, and moving companies from the proof-of-concept stage into production-ready deployments. Download this Guide to Big Data on an Industrial Scale to learn more.
The high-performance interconnect landscape is in transition. InfiniBand and Intel Omni-Path will compete for the performance crown, while Ethernet will remain the ubiquitous standard for commercially oriented systems.
To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: a scale-up design, which allows multiple cores to share a large global pool of memory, and a scale-out design, which distributes data sets across the memory of separate host systems in a computing cluster. To learn more about in-memory computing, download this guide from insideHPC and SGI.
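To make the distinction concrete, the sketch below contrasts the two models in plain Python. It is a hypothetical illustration only, not tied to any particular in-memory product: the “hosts” in the scale-out case are simulated with worker processes that each own a shard of the data.

```python
# Hypothetical sketch contrasting the two scaling models described above.
# Illustration only; not tied to any vendor's in-memory computing product.

from multiprocessing import Pool

data = list(range(1_000_000))

# Scale-up view: every core sees one large global pool of memory, so the whole
# data set can be reduced in place by any core on the host.
scale_up_total = sum(data)

def shard(items, n_hosts):
    """Partition the data set into one shard per (simulated) host."""
    size = len(items) // n_hosts
    return [items[i * size:(i + 1) * size] for i in range(n_hosts)]

def partial_sum(chunk):
    """Work performed locally on each host's shard of the data."""
    return sum(chunk)

if __name__ == "__main__":
    # Scale-out view: the data set is distributed across separate hosts, here
    # simulated as worker processes that each own and reduce their own shard,
    # with the partial results combined afterwards.
    shards = shard(data, n_hosts=4)
    with Pool(processes=4) as pool:
        scale_out_total = sum(pool.map(partial_sum, shards))

    print(scale_up_total == scale_out_total)  # True: same answer, two designs
```

The design trade-off is visible even at this toy scale: the scale-up path is simple because all data lives in one address space, while the scale-out path adds partitioning and a final aggregation step in exchange for capacity beyond a single host.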
With the deluge of new data from new sources, it isn’t surprising to find that data centers are running short on compute capacity. In this research report, we explore the world of accelerators, primarily FPGAs, to see if they’re the right answer to fill the ‘compute gap’.
In this research report, we reveal recent research showing that customers are feeling the need for speed; in other words, they’re looking for more processing cores. Not surprisingly, we found that they’re investing more money in accelerators like GPUs and are seeing solid positive results from them. In the balance of this report, we take a look at the newest GPU technology from NVIDIA and how it performs compared with traditional servers and earlier GPU products. Download this guide to learn more.
A successful HPC cluster is a powerful asset for an organization, but these powerful racks also present a multifaceted resource to manage. If not properly managed, software complexity, cluster growth, scalability, and system heterogeneity can introduce project delays and reduce the overall productivity of an organization. At the same time, cloud computing models and the processing of Hadoop workloads are emerging challenges that can stifle business agility if not properly implemented. The following essential strategies are guidelines for the effective operation of an HPC cluster resource. Download this insideHPC guide to learn more.
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. This insideHPC special report explores the technologies, components and software required for creating successful deep learning environments.
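As a concrete (and deliberately tiny) illustration of the ingredients named above, the following sketch trains a two-layer neural network on a toy data set with plain NumPy. It is a hypothetical example only; production deep learning environments rely on the frameworks, accelerators, and large data sets the report covers.

```python
# Minimal sketch of a multi-layer neural network and a training loop.
# Pure NumPy, hypothetical example only; real deep learning environments use
# dedicated frameworks, far larger models, and far larger data sets.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic problem that needs more than one layer to solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation of the mean-squared-error gradient.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates for weights and biases.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [0, 1, 1, 0]
```

The same three ingredients appear at every scale: a layered model, a training loop that adjusts the weights from data, and enough data to make the learned decision boundary useful.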
Co-design and offloading are important tools for achieving Exascale computing. Application developers and system designers can take advantage of network offload and emerging co-design protocols to accelerate their current applications. Applying basic co-design and offloading methods to smaller-scale systems can deliver more performance on less hardware, resulting in lower cost and higher throughput. Learn more by downloading this guide.
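As an illustration of the kind of pattern offloading enables, the hypothetical sketch below overlaps local computation with a non-blocking MPI collective (assuming mpi4py and an MPI library with MPI-3 non-blocking collectives); an offload-capable interconnect can progress the reduction in hardware while the CPU keeps computing.

```python
# Hypothetical sketch of a co-design-friendly pattern: overlapping independent
# computation with a non-blocking collective, so that interconnect hardware
# capable of offloading the reduction can progress it without CPU involvement.
# Assumes mpi4py and an MPI library that supports MPI-3 non-blocking collectives.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.full(1_000_000, rank, dtype=np.float64)
total = np.empty_like(local)

# Start the reduction and return immediately.
req = comm.Iallreduce(local, total, op=MPI.SUM)

# Overlap: do work that does not depend on the reduction while it is in flight.
independent_result = np.sin(local).sum()

req.Wait()  # the reduced values are now valid in `total`

if rank == 0:
    print(total[0], independent_result)
```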
The insideHPC Guide to Personalized Medicine and Genomics explains how genomics will accelerate personalized medicine and includes several case studies.