Even though it’s a new generation fabric, Intel OPA is still backwards compatible with the many applications in the HPC community that were written using the OpenFabrics Alliance* software stack for InfiniBand. So, existing InfiniBand users will be able to run their codes that are based on the OpenFabrics Enterprise Distribution (OFED) software on Intel OPA. Additionally, Intel has open sourced the key software elements of their fabric to allow integration of Intel OPA into the OFED stack, which several Linux* distributions include in their packages.
In this week’s Sponsored Post, Katie Garrison of One Stop Systems explains how Flash storage arrays are becoming more accessible as the economics of Flash becomes more attractive. “Comprised of a unique combination of a Haswell-based engine and 200TB Flash arrays, the FSA-SAN can be increased to a petabyte of storage with additional Flash arrays. Each 200TB array delivers 16 million IOPS, making it the ideal platform for high-speed data recording and processing with lightning fast data response time, high-availability and flexibility in the cloud.”
In this video, Bill Wagner of Bright Computing describes what attracted him to join the company as CEO and what’s ahead for system management software. “Bright addresses the exploding demand to manage increasingly complex IT infrastructures with a simple yet powerful ‘single pane of glass’ management platform that can extend across the datacenter and the cloud. I am excited to join Bright’s talented team and eager to build on the company’s upward growth trajectory.”
The Integrative Model for Parallelism at TACC is a new development in parallel programming. It allows for high-level expression of parallel algorithms, enabling efficient execution in multiple parallelism modes. We caught up with its creator, Victor Eijkhout, to learn more. “If you realize that both task dependencies and messages are really the dependency arcs in a dataflow formulation, you now have an intermediate representation, automatically derived, that can be interpreted in multiple parallelism modes.”
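The core idea Eijkhout describes can be sketched in a few lines. The following is a hypothetical illustration, not the actual IMP implementation: a tiny dataflow graph whose dependency arcs are interpreted two ways, as a task schedule for shared memory, or as send/receive messages when an arc crosses process boundaries. The `DataflowGraph` class and the node/owner names are invented for this example.

```python
# Illustrative sketch (NOT the IMP library): one dependency graph,
# two interpretations -- task ordering vs. message passing.
from collections import defaultdict, deque


class DataflowGraph:
    def __init__(self):
        self.deps = defaultdict(set)   # node -> prerequisite nodes
        self.owner = {}                # node -> "process" that computes it

    def add_node(self, name, owner, deps=()):
        self.owner[name] = owner
        self.deps[name] |= set(deps)

    def task_schedule(self):
        """Interpret arcs as task dependencies: a topological order."""
        indeg = {n: len(self.deps[n]) for n in self.owner}
        children = defaultdict(list)
        for n, ds in self.deps.items():
            for d in ds:
                children[d].append(n)
        ready = deque(n for n, d in indeg.items() if d == 0)
        order = []
        while ready:
            n = ready.popleft()
            order.append(n)
            for c in children[n]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    ready.append(c)
        return order

    def messages(self):
        """Interpret arcs as messages: any arc crossing owners is a
        (sender, receiver, data) communication."""
        return sorted((self.owner[d], self.owner[n], d)
                      for n, ds in self.deps.items() for d in ds
                      if self.owner[d] != self.owner[n])


# Two inputs on two "processes", one combining step on p0:
g = DataflowGraph()
g.add_node("x0", "p0")
g.add_node("x1", "p1")
g.add_node("y0", "p0", deps=["x0", "x1"])

print(g.task_schedule())  # shared-memory view: ['x0', 'x1', 'y0']
print(g.messages())       # distributed view: [('p1', 'p0', 'x1')]
```

The point of the sketch is that the same arc `x1 -> y0` appears once in the graph but shows up either as an ordering constraint between tasks or as a message from `p1` to `p0`, depending on which backend interprets it.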
“The findings of a recent IDC study on the cybersecurity practices of U.S. businesses reveal a wide spectrum of attitudes and approaches to the growing challenge of keeping corporate data safe. While the minority of cybersecurity ‘best practitioners’ set an admirable example, the study findings indicate that most U.S. companies today are underprepared to deal effectively with potential security breaches from outside or inside their firewalls.”
Manufacturing is enjoying an economic and technological resurgence with the help of high performance computing. In this insideHPC webinar, you’ll learn how the power of CAE and simulation is transforming the industry with faster time to solution, better quality, and reduced costs.
In this week’s Industry Perspective, Katie Garrison of One Stop Systems explains how GPUltima allows HPC professionals to create a highly dense compute platform that delivers a petaflop of performance at greatly reduced cost and space requirements, providing the compute power needed to quickly process the amount of data generated in intensive applications.
Although liquid cooling is considered by many to be the future for data centers, the fact remains that some do not yet need to make a full transformation to liquid cooling, while others are restricted until the next budget cycle. Whatever the reason, new technologies like Internal Loop are more affordable than liquid cooling and can replace less efficient air coolers. This enables HPC data centers to still utilize the highest-performing CPUs and GPUs.
Data accumulation is just one of the challenges facing today’s weather and climatology researchers and scientists. To understand and predict Earth’s weather and climate, they rely on increasingly complex computer models and simulations based on a constantly growing body of data from around the globe. “It turns out that in today’s HPC technology, the moving of data in and out of the processing units is more demanding in time than the computations performed. To be effective, systems working with weather forecasting and climate modeling require high memory bandwidth and fast interconnect across the system, as well as a robust parallel file system.”
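The claim that data movement, not arithmetic, is the bottleneck can be made concrete with a back-of-envelope roofline estimate. The numbers below are illustrative assumptions, not measurements of any particular system: a stencil-style weather kernel doing few flops per byte streamed from memory ends up capped by bandwidth, far below the machine’s peak flop rate.

```python
# Back-of-envelope roofline model (illustrative numbers, not measurements):
# attainable performance is the lesser of the compute peak and what the
# memory system can feed.
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Roofline bound: min(compute peak, bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)


# Hypothetical stencil update: a handful of flops per grid point while
# streaming whole arrays of doubles gives roughly 0.4 flops/byte.
print(attainable_gflops(peak_gflops=1000.0,
                        bandwidth_gbs=100.0,
                        flops_per_byte=0.4))   # 40.0 -- a small fraction of peak
```

With these assumed figures the kernel reaches only 40 of a possible 1000 GFLOP/s, which is exactly why the quoted passage emphasizes memory bandwidth and interconnect speed over raw compute.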
Dr. Lewey Anton reports on who’s moving on up in High Performance Computing. Familiar names in this edition include: Sharan Kalwani, John Lee, Jay Muelhoefer, Brian Sparks, and Ed Turkel. And be sure to let us know of HPC folks in new positions!