Open Computing: Vendor Landscape

The Open Compute Project, initiated by Facebook to increase computing power while lowering the costs associated with hyperscale computing, has gained a significant industry following. Several vendors have released, or will soon release, servers based on Open Compute specifications. The complete list with contact information appears at: http://www.opencompute.org/about/open-compute-project-solution-providers/.

The list includes (in order of joining the Open Compute Project):

  • Hyve Solutions
  • AMAX
  • Penguin Computing
  • Racklive
  • Quanta
  • CTC
  • StackVelocity

Open Computing Vendor Profile: Penguin Computing 

While the initial specifications for Open Computing were created for a Web 2.0 environment, Penguin Computing has adapted these concepts into a complete hardware ecosystem that addresses those needs and more. As part of the Open Compute Project’s “grid to gates” philosophy, Penguin offers a number of innovative solutions. Penguin is developing a range of products that adhere to the Open Compute specification, with additional IP to optimize for HPC. The first of these is the Tundra product family, an innovative set of hardware solutions that reduces customers’ capital and operational expenditures while promoting future innovation. A Tundra solution can completely address an organization’s computational requirements without costly or redundant infrastructure.

This is the final article in a series from the insideHPC Guide to Open Computing.

            Tundra Compute Sled

The Tundra OpenHPC™ server, based on the latest Intel Xeon E5-26xx v3 processors, is an innovative, high-density design. This unique high-density server packs two processors into a sled measuring just 48 millimeters high by 173 millimeters wide. This sled design allows for three OpenHPC servers, side by side, in a Tundra Tray. The Tundra OpenHPC server offers up to 1TB of memory and may include:

  • A single hard-disk drive (HDD) or a solid-state drive
  • Dual 10GbE
  • A PCIe expansion slot for high-speed communications through Mellanox InfiniBand

Any interconnect can be installed as long as it adheres to the PCIe x16 MD2 form factor. The Tundra OpenHPC server achieves three times the compute density by fitting three servers into the volume that typically houses just one. Figure 1 shows a Tundra Sled with a dual-socket implementation.
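As a rough illustration of that density claim, the short Python sketch below compares node counts. It assumes the 48 mm sled height corresponds to one Open Rack “OpenU” and posits a hypothetical 40 OpenU of usable payload space per rack; neither figure is a published Tundra configuration.

```python
# Back-of-the-envelope density sketch. The 48 mm sled height equals one
# Open Rack "OpenU"; the 40 OpenU of usable payload space per rack is an
# assumption for illustration, not a published Tundra configuration.
SLEDS_PER_TRAY = 3      # three sleds side by side per tray (from the article)
USABLE_OPENU = 40       # hypothetical payload space in one rack

tundra_nodes = SLEDS_PER_TRAY * USABLE_OPENU   # one tray per OpenU of height
conventional_nodes = 1 * USABLE_OPENU          # ~one server per unit of height

print(tundra_nodes, "vs", conventional_nodes,
      "->", tundra_nodes // conventional_nodes, "x the node count")
```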

In addition to compute sleds, Tundra OpenHPC supports optimized storage sleds. The Tundra storage sled can contain up to four HDDs in addition to networking ports. Soon, other types of Tundra sleds will be developed, including sleds that are optimized for accelerators. That way, the same rack infrastructure (sleds, trays and power shelves) can be used for a variety of computing requirements. Because this family of sleds shares the same physical dimensions, different sleds can be mixed and matched to create the ideal environment for an organization’s requirements, as the sketch below illustrates.
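Here is a minimal sketch that models a tray as any three sleds sharing one form factor. The Sled class, its attributes and the example combination are illustrative only, not Penguin part numbers or a real configuration API.

```python
from dataclasses import dataclass

@dataclass
class Sled:
    """One Tundra-style sled; all kinds share the same physical dimensions."""
    kind: str          # "compute", "storage", or (eventually) "accelerator"
    sockets: int = 0   # CPU sockets, for compute sleds
    hdds: int = 0      # drives, for storage sleds

compute = Sled("compute", sockets=2)   # dual-socket compute sled
storage = Sled("storage", hdds=4)      # four-HDD storage sled

# A tray holds any three sleds, in whatever combination the workload needs.
tray = [compute, compute, storage]
print(sum(s.sockets for s in tray), "sockets,",
      sum(s.hdds for s in tray), "HDDs in one tray")
```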

With this forward-thinking design approach, companies can significantly reduce their capital and operational infrastructure costs. For instance, with Tundra OpenHPC systems, newer generations of CPUs and memory can be installed via a simple board exchange. In the past, the entire server (including all of the sheet metal) would have to be replaced. But with Tundra, component exchanges can be optimized for cost and computational requirements.

            Tundra Tray

The Tundra Tray works with the Tundra Sled to deliver extremely high-density computing. The tray is quite simple, holding just three servers. This design allows a variety of Tundra sleds to work side by side, in any combination the IT department desires. This simple approach increases rack density by bypassing costly and space-consuming server enclosures. The Tundra Tray is the key to the Tundra ecosystem, allowing various sleds to be mixed and matched. As an example of this flexibility, a typical 2U four-node server requires that all four nodes be homogeneous. With the Tundra Sled and Tundra Tray design approach, each node can house a different socket type, allowing for a tuned, cost-effective solution with more flexibility.

[Figure: Tundra Sled]

            Tundra Rack and Power Shelves

The Tundra Rack does not come with a built-in power distribution mechanism. Instead, it relies on power shelves to deliver power to the servers and storage within the rack, which reduces the cost of delivering power to the servers and eliminates a conversion step. The power shelves can be fed with 120/208V, 230/400V or 277/480V circuits. These higher input voltages mean lower current, with 12 volts delivered to each component. With this solution, optional A/B redundant feeds can be delivered, as well as N+1 redundant power supplies. Figure 2 shows the power zones created with three Tundra Power Shelves.
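The “higher voltage means lower current” point is easy to illustrate with a quick calculation. The sketch below assumes three-phase feeds at the listed line-to-line voltages and a 0.95 power factor (an assumption, not a figure from the guide), sized for a full 36 kW rack load.

```python
import math

LOAD_KW = 36.0        # full rack load, per the article
POWER_FACTOR = 0.95   # assumed; not specified in the guide

# Per-phase current for each supported three-phase feed (line-to-line volts).
for v_ll in (208, 400, 480):
    amps = LOAD_KW * 1000 / (math.sqrt(3) * v_ll * POWER_FACTOR)
    print(f"{v_ll} V feed: ~{amps:.0f} A per phase")

# The same power delivered at 12 V implies a far larger current, which is
# why distribution within the rack happens at the higher AC voltages.
print(f"12 V bus: ~{LOAD_KW * 1000 / 12:.0f} A")
```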

The benefits of using power shelves as compared with delivering power directly to the individual servers can be summarized as follows:

  • Density – Because the server (Tundra Sled) does not have to contain its own power supply, more rack space is available for computing and storage hardware.
  • Flexibility – Power can be delivered where needed, avoiding overdesigned power systems for the entire rack.

The Tundra Rack is an “Open Bridge Rack” that Penguin Computing has supercharged with the Emerson Power Shelf. This dual-power-shelf design supports all Tundra Compute Sleds with a 220V AC input and up to 36 kW of distributed power. Each power shelf holds nine 3 kW rectifiers, and the N+1 redundant configuration is rated at 36 kW. Of the nine rectifiers per shelf, six are active and three are on standby. The Open Compute Project concept is to disaggregate the power supplies from the individual servers into a smaller number of much more robust supplies.

Power efficiency is achieved when the active rectifiers are running at greater than 70 percent of their possible load. The Open Compute Project design includes a power management tool that can shut off rectifiers as needed. For example, if a shelf has six active rectifiers each operating at 50 percent load, the tool can turn off two rectifiers and spread the load over the remaining four, raising each to 75 percent of capacity for greater efficiency.
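A minimal sketch of that load-consolidation logic follows, under stated assumptions: the 3 kW rectifier capacity and 70 percent efficiency threshold come from the article, while the 90 percent utilization ceiling is an invented headroom limit, since the actual policy of the Open Compute power management tool is not spelled out in the guide.

```python
import math

RECTIFIER_KW = 3.0   # per-rectifier capacity, from the article
MAX_UTIL = 0.90      # assumed headroom ceiling; the real tool may differ
MIN_UTIL = 0.70      # efficiency threshold cited in the article

def active_rectifiers(load_kw: float, installed: int = 9) -> int:
    """Fewest rectifiers to keep active so none exceeds MAX_UTIL load."""
    needed = math.ceil(load_kw / (MAX_UTIL * RECTIFIER_KW))
    return min(max(needed, 1), installed)

# The article's example: six rectifiers each at 50 percent carry 9 kW.
load = 6 * 0.5 * RECTIFIER_KW                # 9.0 kW total
n = active_rectifiers(load)                  # -> 4 active rectifiers
util = load / (n * RECTIFIER_KW)             # -> 0.75, above MIN_UTIL
print(f"{n} active at {util:.0%} load each")
```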

The Tundra OpenHPC system is applicable to a wide range of HPC challenges and delivers the most requested features for data center architects.

You can download the complete ‘insideHPC Guide to Open Computing’ from the insideHPC White Paper Library, courtesy of Penguin Computing.