

Open Compute Solutions

The Open Compute Project partners with leading CPU vendors, including Intel, AMD and suppliers of ARM-based processors, to create reference designs that can be used by board and system vendors. These designs are bare-bones systems with expansion options designed in for other types of I/O and storage. The reference design from Intel (REF) is 6.5 inches wide and 20 inches deep. These dimensions allow three servers to be placed side by side in a newly designed Open Compute rack, increasing density as discussed previously. For the same floor space, compute density is tripled.

This is the third article in a series, from the insideHPC Guide to Open Computing.

The density of a compute infrastructure can easily be calculated by determining the number of sockets (and thus cores) that can be contained in a given physical volume. Knowing the compute density helps manage cost by maximizing the use of floor space and reducing rack purchases, power delivery and networking costs. Based on standard two-socket servers in a traditional 42U-high rack, compute density would peak at 84 sockets (42U x two sockets per U). For a standard rack that is 482.6 millimeters wide x 736 millimeters deep, the computing density is 84 sockets / (42 x 44.45 millimeters x 482.6 millimeters x 736 millimeters). Converting to cubic feet, this works out to roughly 3.6 sockets per cubic foot, or about 0.28 cubic feet per socket.
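The arithmetic above can be checked with a short script. The rack dimensions and socket counts come from the text; the variable names and the cubic-foot conversion constant are my own.

```python
# Recompute the rack-density figures from the dimensions given in the text.

MM3_PER_CUBIC_FOOT = 28_316_846.6  # 1 cubic foot expressed in cubic millimeters

rack_units = 42            # 42U-high rack
unit_height_mm = 44.45     # height of 1U
rack_width_mm = 482.6      # 19 inches
rack_depth_mm = 736.0
sockets = rack_units * 2   # one two-socket server per 1U

volume_mm3 = (rack_units * unit_height_mm) * rack_width_mm * rack_depth_mm
volume_cubic_ft = volume_mm3 / MM3_PER_CUBIC_FOOT

print(f"rack volume: {volume_cubic_ft:.1f} cubic feet")
print(f"{sockets / volume_cubic_ft:.1f} sockets per cubic foot")
print(f"{volume_cubic_ft / sockets:.2f} cubic feet per socket")
```

Running this gives a rack volume of about 23.4 cubic feet, hence roughly 3.6 sockets per cubic foot.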

However, a two-socket system in every 1U is not an optimal design. When a server is designed with only the components required to satisfy an end user’s needs, computing density can increase. By increasing the number of servers per 1U to N, and thus the number of sockets from two to 2N, peak density increases by a factor of N. The benefits, moreover, go beyond simply increasing the number of sockets.

A server also provides communication capability to the rest of the compute environment. Designing N smaller servers with the same total compute power improves fault tolerance as well: if one of the N servers goes down, the other N-1 remain in operation, so a single failure removes only 1/N of the capacity rather than all of it. This increases the reliability of the entire cluster.
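The failure-isolation argument can be sketched numerically. This is an illustrative model under simplifying assumptions (independent failures, capacity split evenly across servers); the per-server failure probability is a made-up example value, not a measured figure.

```python
def worst_single_failure_loss(n_servers: int) -> float:
    """Fraction of total capacity lost when exactly one server fails,
    assuming capacity is split evenly across n_servers."""
    return 1.0 / n_servers

def prob_total_outage(n_servers: int, p_fail: float) -> float:
    """Probability that all n_servers are down at once, assuming
    independent failures with per-server probability p_fail."""
    return p_fail ** n_servers

# One monolithic server: a single failure loses everything.
print(worst_single_failure_loss(1))          # 1.0
# Four smaller servers: a single failure loses a quarter of capacity.
print(worst_single_failure_loss(4))          # 0.25
# Total outage becomes far less likely as N grows.
print(prob_total_outage(1, 0.01))            # 0.01
print(prob_total_outage(4, 0.01))            # 1e-08
```

The point is not the exact probabilities but the scaling: splitting the same compute across N machines bounds the impact of any one failure to 1/N.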

Traditional servers are designed for many applications and environments. Because general-purpose servers must serve a wide range of users, components for general-purpose functions are added whether those functions will be used or not. When designing for a high-density environment, however, only the essential features need to be built in: CPUs, memory and limited I/O. All other components may be omitted to maximize compute density. As a result, Open Compute Project reference designs are typically minimalistic, with the option of adding more features.

As discussed earlier, adding up all the sockets of all the servers in a rack and dividing by the volume of the rack gives socket density and cores per rack. However, some percentage of the total volume will be taken up by power shelves, keyboard/video/mouse (KVM) switches, storage and possibly other related hardware, which reduces the compute density per rack. When comparing different designs of the entire rack enclosure, it is important to account for a realistic operating environment rather than just looking at the maximum number of servers that can fit into a rack of a given size.
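One simple way to model this is to subtract the rack units consumed by non-compute gear before counting sockets. A minimal sketch, where the 6U of overhead is an illustrative assumption rather than a figure from any particular design:

```python
def effective_sockets_per_rack(total_u: int,
                               overhead_u: int,
                               sockets_per_u: int) -> int:
    """Sockets available once non-compute hardware (power shelves,
    KVM switches, storage, etc.) is accounted for."""
    compute_u = total_u - overhead_u
    return compute_u * sockets_per_u

# Nominal maximum: every U holds a two-socket server.
print(effective_sockets_per_rack(42, 0, 2))   # 84
# Realistic: assume 6U is lost to power shelves, KVM and switches.
print(effective_sockets_per_rack(42, 6, 2))   # 72
```

Even this crude model shows why quoting the nominal maximum overstates deployable density.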

Power Distribution

A major issue when installing servers in a rack is the multiple power conversion steps on the way to each server. Typically, power is delivered to the bottom of the rack and then distributed to the servers via power strips installed within the rack structure; in addition, each server must have its own power supply. Each of these conversion steps loses some power. By not specifying how power must be delivered to each server, the Open Compute designs give vendors the flexibility to discover the best way to minimize power loss in the entire system. Eliminating voltage conversion steps increases overall system efficiency (in servers, storage, network systems, etc.).
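The effect of removing conversion steps can be illustrated with a back-of-the-envelope calculation: end-to-end efficiency is the product of the per-step efficiencies. The step efficiencies and the two chains below are illustrative assumptions, not figures from any Open Compute specification.

```python
from math import prod

def end_to_end_efficiency(step_efficiencies: list[float]) -> float:
    """Overall delivery efficiency of a chain of power conversion steps."""
    return prod(step_efficiencies)

# Hypothetical traditional chain: UPS, rack power strip,
# per-server power supply, motherboard voltage regulation.
traditional = end_to_end_efficiency([0.94, 0.98, 0.90, 0.92])

# Hypothetical consolidated chain: rack-level power shelf
# feeding servers directly, plus on-board regulation.
consolidated = end_to_end_efficiency([0.96, 0.95])

print(f"traditional:  {traditional:.1%}")
print(f"consolidated: {consolidated:.1%}")
```

Multiplying small per-step losses compounds quickly, which is why collapsing the conversion chain at the rack level pays off across every server in the rack.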

Next week’s article will explore Open Computing for Different Industry Segments. If you prefer, you can download the complete ‘insideHPC Guide to Open Computing’ from the insideHPC White Paper Library, courtesy of Penguin Computing.

Resource Links: