Kao Data Talks HPC-class Colocation Data Center Requirements in the UK’s ‘Innovation Corridor’

UK colocation provider Kao Data discusses the changing data center requirements of HPC organizations in the "Innovation Corridor" between London and Cambridge. Kao Data Vice President Spencer Lamb talks with the late Rich Brueckner about his company's adoption of hyperscaler infrastructure principles to offer state-of-the-art, high-density data centers. For more insights from Kao […]

In-Network Computing Technology to Enable Data-Centric HPC and AI Platforms

Mellanox Technologies’ Gilad Shainer explores one of the biggest technology transitions of the past 20 years: the move from CPU-centric data centers to data-centric data centers, and the role of in-network computing in that shift. “The latest technology transition is the result of a co-design approach, a collaborative effort to reach Exascale performance by taking a holistic system-level approach to fundamental performance improvements. As the CPU-centric approach has reached the limits of performance and scalability, the data center architecture focus has shifted to the data, and how to bring compute to the data instead of moving data to the compute.”
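To make the "bring compute to the data" idea concrete, here is a toy sketch (not Mellanox's actual SHARP protocol) of an in-network reduction: partial results are aggregated at each switch level on the way up the network tree, instead of shipping every node's data to a single CPU for summation.

```python
# Toy model of an in-network reduction: each "switch" sums the values
# of its children, level by level, so the host CPU receives one value
# instead of one value per node. Purely illustrative.

def in_network_reduce(node_values, fanout=2):
    """Reduce values hop by hop, as a switch hierarchy would."""
    level = list(node_values)
    hops = 0
    while len(level) > 1:
        # Aggregate each group of `fanout` children at the next switch level.
        level = [sum(level[i:i + fanout]) for i in range(0, len(level), fanout)]
        hops += 1
    return level[0], hops

total, hops = in_network_reduce([1, 2, 3, 4, 5, 6, 7, 8])
# total == 36; with fanout 2 and 8 nodes, 3 aggregation hops
```

The payoff is that the number of values crossing any link stays constant per hop, so the reduction completes in O(log N) network steps rather than funneling N messages to one endpoint.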

Why the Choice of DRAM in the Data Center is so Critical

By working with a DRAM manufacturer that delivers a diverse range of components, data center operators can avoid having to manage a patchwork of suppliers. This guest post from Kingston Technology explores different storage structures and why DRAM performance is so crucial to data center success.

Performance in the Data Center

Many modern applications are developed in so-called run-time languages, which are compiled at execution time. The performance of these applications in cloud data centers matters to anyone considering moving their applications and workloads to the cloud. Download the Intel Distribution for Python for free today to supercharge your applications.
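A quick sketch of why optimized distributions matter for a run-time language like Python: a vectorized NumPy call dispatches the whole computation to compiled numeric code (the Intel Distribution for Python ships NumPy built against optimized math libraries), while an element-by-element Python loop pays interpreter overhead on every operation. The example below is generic, not specific to Intel's distribution.

```python
# Illustrative comparison: interpreted per-element work vs. one call
# into compiled numeric code. Both compute the same dot product.
import numpy as np

def dot_pure_python(a, b):
    # Every iteration is dispatched through the interpreter.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

slow = dot_pure_python(a, b)   # ~100,000 interpreted operations
fast = float(np.dot(a, b))     # a single call into compiled BLAS code
assert abs(slow - fast) < 1e-6 * abs(fast)
```

Timing the two paths with `timeit` on a typical machine shows the compiled route running orders of magnitude faster, which is the gap optimized Python distributions aim to close for numeric workloads.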

Managed Implementation of Liquid Cooling

Providing high-reliability cooling for high-wattage nodes without reducing rack density is no easy task. “Asetek’s direct-to-chip liquid cooling provides a distributed cooling architecture to address the full range of heat rejection scenarios. It is based on low pressure, redundant pumps and sealed liquid path cooling within each server node.”

The Role of Quick Disconnect Couplings in Liquid Cooling: 5 Attributes that Contribute to Connector Reliability

The use of liquid cooling to mitigate heat generated by electronics can be found in a wide variety of applications, including gaming computers, supercomputers and medical equipment. Download the new report, courtesy of CPC, “The Role of Quick Disconnect Couplings in Liquid Cooling: Five Attributes that Contribute to Connector Reliability,” which explores how these couplings contribute to dependable liquid cooling systems.

Rack Scale Composable Infrastructure for Mixed Workload Data Centers

A more flexible, application-centric data center architecture is required to meet the needs of rapidly changing HPC applications and hardware. In this guest post, Katie Rivera of One Stop Systems explores how rack-scale composable infrastructure can be used in mixed-workload data centers.

Critical Liquid Cooling Considerations in Electronics

As data centers and high performance computing continue to drive demand for higher densities and increased efficiency, liquid cooling is expanding as a method of thermal management. Download the new white paper from CPC, a technical guide to choosing connectors when considering a liquid cooling system for your HPC and data center environments.

Thinking about Colocation? Do These 7 Things First

While the merits of in-house vs. cloud vs. colo have been vigorously debated — and will likely continue to be for some time — a new report is aimed at those already sold on the colo concept. Download the white paper from Instor for considerations, tips, and potential landmines to watch out for when pursuing colocation in 2018.

Data Center Workloads: How to Successfully Manage Convergence

Mixing workloads rather than creating separate application domains is key to efficiency and productivity. Specific software is typically needed only in certain phases of product development, leaving systems idle the rest of the time. Download the insideHPC guide that explores how a powerful scheduling and resource management solution — such as Bright Cluster Manager — can slot other workloads into those idle clusters, thereby gaining maximum value from the hardware and software investment, and rewarding IT administrators with satisfied users.
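The backfill idea described above can be sketched in a few lines. This is a minimal illustration of the scheduling concept, not Bright Cluster Manager's actual API or algorithm: the primary workload claims nodes first, then secondary jobs are slotted into whatever capacity would otherwise sit idle.

```python
# Minimal sketch of backfill scheduling (hypothetical job names; not
# Bright Cluster Manager's real interface): primary jobs get priority,
# then secondary jobs fill the remaining idle nodes, greedy first-fit.

def backfill(nodes, primary_jobs, secondary_jobs):
    """Return (schedule, idle): job assignments and leftover node count.

    nodes: total node count; each job is a (name, nodes_needed) pair.
    """
    free = nodes
    schedule = {}
    for name, need in primary_jobs:       # primary workload claims nodes first
        if need <= free:
            schedule[name] = need
            free -= need
    for name, need in secondary_jobs:     # backfill into what remains idle
        if need <= free:
            schedule[name] = need
            free -= need
    return schedule, free

schedule, idle = backfill(
    nodes=16,
    primary_jobs=[("cfd_run", 10)],
    secondary_jobs=[("ml_training", 4), ("render", 4), ("qa_tests", 2)],
)
# cfd_run takes 10 nodes; ml_training (4) and qa_tests (2) backfill the
# remaining 6; render (4) doesn't fit, leaving idle == 0
```

Production schedulers add time windows, preemption, and fairness policies on top of this, but the core value proposition is the same: idle capacity becomes usable capacity.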