
Cloud Solutions for HPC Efficiency

This white paper reviews common HPC-environment challenges and outlines solutions that can help IT professionals deliver best-in-class HPC cloud solutions—without undue stress and organizational chaos.

Cloud Computing Guide

IT organizations are facing increasing pressure to deliver critical services to their users while their budgets are reduced or, at best, held at current levels. New technologies have the potential to deliver industry-changing information to users who need data in real time, but only if the IT infrastructure is designed and implemented to do so. While the cost of computing power continues to decline, the cost of managing and operating large data centers continues to rise; server administration alone consumes about 75 percent of the total cost of a computing asset over its lifetime.

High Performance Data Analysis

The traditional HPC and commercial markets have been converging as established HPC users increase their use of newer analytics methods and commercial firms turn to HPC for mission-critical analytics problems that enterprise technology alone can’t handle adequately. Key verticals exploiting HPDA include financial services, healthcare/bioinformatics, energy, cybersecurity/fraud, manufacturing, online retailers and service providers, digital content creation, telecommunications, government, and academia. Key horizontal applications include simulation, fraud and anomaly detection, business intelligence/business analytics, machine learning/deep learning, affinity marketing, and advanced visualization.

IBM Platform Computing

IBM® Platform Computing™ cluster, grid and high-performance computing (HPC) cloud management software can help transform your environment to deliver results better, faster and at lower cost. IBM Platform Computing products are designed to save money by making an organization’s existing infrastructure work better.

Cloud Adoption in Your Community

In conference rooms worldwide, enterprise IT departments are evaluating entry into ‘the cloud’. Armed with media reports and marketing materials, they are considering questions like, “Is the cloud appropriate for critical workloads? Will the cloud really save time and money? Does the cloud pose a security risk?”
There’s only one problem with such due diligence: there’s no such thing as ‘the cloud’. Instead, there are multiple clouds, with different configurations, offered by different providers and representing different degrees of benefit and risk.

Managing High Performance GPU Clusters

Taking full advantage of NVIDIA GPUs requires several sound strategies. The goal of any HPC resource should be to increase the productivity of researchers and engineers; minimizing time to solution is the priority at many leading HPC installations. Keeping users and developers focused on their applications is one of the ways to increase productivity and minimize wasted time.

IBM Storage with OpenStack

This paper reviews the increasingly popular OpenStack cloud platform and the abilities that IBM storage solutions provide to enable and enhance OpenStack deployments. But before addressing those specifics, it is useful to remind ourselves of the “whys and wherefores” of cloud computing.

Guide to Scientific Research Evolution

The rapid evolution of big data technology in the past few years has forever changed the pursuit of scientific exploration and discovery. Alongside traditional experiment and theory, computational modeling and simulation is a third paradigm for science. Its value lies in exploring areas of science in which physical experimentation is unfeasible and insights cannot be revealed analytically, such as climate modeling, seismology and galaxy formation. More recently, big data has been called the “fourth paradigm” of science.

Extreme-scale Graph Analysis on Blue Waters

George Slota presented this talk at the Blue Waters Symposium. “In recent years, many graph processing frameworks have been introduced with the goal to simplify analysis of real-world graphs on commodity hardware. However, these popular frameworks lack scalability to modern massive-scale datasets. This work introduces a methodology for graph processing on distributed HPC systems that is simple to implement, generalizable to broad classes of graph algorithms, and scales to systems with hundreds of thousands of cores and graphs of billions of vertices and trillions of edges.”