

Deploy AI Anywhere with ScaleMatrix DDC – No Data Center Required

At SC19, ScaleMatrix and its DDC subsidiary announced a collaboration with NVIDIA and Microway to deliver SKUs for 8 petaFLOPS and 13 petaFLOPS ‘Supercomputer Anywhere’ solutions built on DDC S-Series cabinets.

DDC Cabinet Technology, purpose-built for scaling dense computing, enables a modular ‘deploy anywhere at any scale’ approach through its S-Series platform: a pressurized, ‘clean room quality’ air conditioning system combined with a closed-loop, water-chilled liquid cooling system, all encased in a ruggedized cabinet with biometric security, air filtration, and fire suppression. The modular S-Series cabinets can be erected anywhere power and a roof exist. Through this system, ScaleMatrix and Microway will deliver flexible SKU options for customizable ‘Supercomputer Anywhere’ systems powered by NVIDIA DGX systems, which can be deployed virtually anywhere, regardless of data center resource availability, to meet the HPC and AI needs of any organization.

“The Dynamic Density Control S-Series cabinet technology has been used in our ScaleMatrix cloud and colocation data centers since 2010, from which we offer customers infinite scalability and density for computing deployments,” said Chris Orlando, co-founder and principal of ScaleMatrix and DDC. “DDC technology is a mature and proven system, which solves the density challenges other complex liquid cooling systems are trying to solve, but without the mess and hassle of immersion cooling or risky hardware modifications to expensive chips.”

DDC technology offers a familiar “plug-and-play” approach to managing computing at the rack level, giving IT managers peace of mind and expanding the options for where computing power can be placed and procured. In addition, DDC provides surgical control of supply-side airflow and temperature, delivering an ideal operating environment that ensures the best performance for critical AI and enterprise hardware. Artificial intelligence (AI) will one day be regarded as a broad service across industries, much as Internet and mobile access technology are today. Through the creation and delivery of these systems with its partners at NVIDIA and Microway, DDC is taking a big step toward making powerful computing at immense scale possible wherever it is needed, without the hassles associated with traditional data center facilities.

With ‘AI Anywhere’, you can deploy advanced AI capabilities faster, in nearly any environment, and scale the platform as needed to meet changing business demands.

See our complete coverage of SC19

The ‘AI Anywhere’ composable SKU will offer a design configuration based on the NVIDIA DGX-1 system, consisting of a single rack containing 13 DGX-1 units and delivering a computing payload of 13 petaFLOPS. An additional configuration, based on NVIDIA DGX-2 systems, houses a DGX POD of four DGX-2 systems delivering 8 petaFLOPS of compute power. Fully loaded, the composable ‘AI Anywhere’ SKU will operate between 42 kW and 49 kW within the precision-tuned temperature and airflow management system of the DDC S-Series cabinet. The units will be sold complete with storage and networking following DGX POD reference architecture designs such as NetApp’s ONTAP AI solution. Microway will integrate all hardware and software within the DDC cabinet prior to delivery, including the full NVIDIA DGX software stack, deep learning and AI framework containers, the DGX systems, NetApp ONTAP storage, and Mellanox switching. End users simply install the DDC cabinet platform, connect the network interfaces, power on the system(s), and begin loading data and starting training runs.
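As a quick sanity check on the configurations above, the rack totals quoted in the article imply a per-system AI performance of roughly 1 petaFLOPS per DGX-1 and 2 petaFLOPS per DGX-2; these per-unit figures are assumptions inferred from the totals, and the sketch below simply reproduces the arithmetic:

```python
# Back-of-the-envelope check of the rack-level compute figures cited above.
# Per-system AI performance implied by the article's totals (assumed values):
#   DGX-1 ~ 1.0 petaFLOPS, DGX-2 ~ 2.0 petaFLOPS.
PFLOPS_PER_SYSTEM = {"DGX-1": 1.0, "DGX-2": 2.0}

def rack_petaflops(system: str, count: int) -> float:
    """Aggregate AI petaFLOPS for `count` systems of the given type."""
    return PFLOPS_PER_SYSTEM[system] * count

print(rack_petaflops("DGX-1", 13))  # 13 DGX-1 units -> 13.0 petaFLOPS
print(rack_petaflops("DGX-2", 4))   #  4 DGX-2 units ->  8.0 petaFLOPS
```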

“Quickly building enterprise-grade AI infrastructure can be a challenge for some organizations which may not have an AI-ready data center,” said Charlie Boyle, vice president and general manager of DGX Systems at NVIDIA. “NVIDIA DGX systems provide world-leading AI compute performance, and DDC technology extends the value of DGX systems in a ‘deploy-anywhere’ form-factor that overcomes the challenge of finding the right facilities to host the infrastructure.”
