The Open Compute Project is a way for organizations to increase computing power while lowering the costs associated with hyperscale computing. This article is the fourth in a series from insideHPC that showcases the benefits of open computing to specific industries.
With higher-density computing available, more discoveries in the areas of genomics and personalized medicine will be possible. The more efficient power and performance designs of Open Compute solutions will accelerate these computations and lead to faster discovery, whether the computational algorithms are designed to run serially or in parallel.
Computer-aided engineering (CAE) encompasses structural design and simulation with finite element analysis (FEA) as well as computational fluid dynamics (CFD). While large-scale deformation FEA and various CFD codes scale to hundreds of cores, other application algorithms do not lend themselves to massive scaling. The Open Compute environment benefits manufacturing organizations by supporting a mixture of scalable and non-scalable applications running on high-density servers in a single rack. An entire design department could use one or two racks of Open Compute servers to design (CAD) and simulate the behavior of new and optimized products.
Exploring new energy sources and determining optimal methods for energy extraction require tremendous raw compute power. Regardless of volatility in crude markets, increasing the accuracy of the discovery phase and applying the best extraction techniques will yield financial benefits. Higher-density computing will lead to better, and more informed, decisions for energy extraction.
Milliseconds count when trading highly volatile financial assets. However, the insight that drives trading algorithms takes considerable computational simulation, and myriad historical trends may factor into the pricing of financial instruments. The more computational simulation firms can support, the better they understand past and current market conditions, leading to faster and better decisions.
Research and discovery are driven, in part, by computational capability. As investigators gain access to more computation and algorithmic performance, research outcomes in domains such as climate change, astrophysics, disease containment, nuclear readiness and medicine will advance. As architectural innovation, chip speeds and cores per die increase, more compute power will be available for many of these domains. But without optimized compute density, the benefits are limited. By optimizing compute density with Open Compute servers and associated systems, new discoveries can be made in a wide range of industries and research areas.
HPC techniques are now being used to analyze tremendous amounts of structured and unstructured data. Higher-density computing helps users rapidly derive more insight from this data. Whether the application runtime is distributed across many cores and servers or runs on a single server, increased compute density allows for deeper analysis by supporting larger data sets. By increasing the number of servers that are easily accessible, more data can be analyzed in less time.
Next week’s article will look at the open computing vendor landscape. If you prefer, you can download the complete ‘insideHPC Guide to Open Computing’ from the insideHPC White Paper Library courtesy of Penguin Computing.