NCI Powers Research in the Cloud with Mellanox, RDMA, and OpenStack

Today Mellanox announced that the National Computational Infrastructure (NCI) at the Australian National University has selected the company’s interconnect technologies to support the nation’s researchers.

The NCI deployment combines the Mellanox CloudX solution with Red Hat OpenStack software to support high-performance workloads on a scalable, easy-to-manage cloud platform. CloudX simplifies and automates the orchestration of cloud platforms, cutting deployment time from days to hours. The deployment is built on Mellanox 40/56 Gb/s Virtual Protocol Interconnect (VPI) adapters and switches that support both InfiniBand and Ethernet, and the cloud also uses RoCE (RDMA over Converged Ethernet) to run a full fat-tree Ethernet configuration on OpenStack.
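As background on the VPI hardware used here: each port on a VPI adapter can present either an InfiniBand link layer or an Ethernet link layer, and RoCE runs over the Ethernet mode. The short Python sketch below is illustrative only (it is not drawn from the NCI configuration) and assumes a Linux host with the rdma-core tools installed; it simply calls the standard ibv_devinfo utility and reports the link layer of each local RDMA device.

```python
# Illustrative sketch: list the link layer (Ethernet = RoCE-capable, InfiniBand = native IB)
# reported for each RDMA device on this host. Assumes rdma-core's ibv_devinfo is installed.
import subprocess

def port_link_layers():
    """Parse ibv_devinfo output into (device, link_layer) pairs."""
    out = subprocess.run(["ibv_devinfo"], capture_output=True, text=True, check=True)
    layers = []
    device = None
    for line in out.stdout.splitlines():
        line = line.strip()
        if line.startswith("hca_id:"):
            device = line.split()[-1]                     # e.g. "mlx4_0" (example name only)
        elif line.startswith("link_layer:"):
            layers.append((device, line.split()[-1]))     # "Ethernet" or "InfiniBand"
    return layers

if __name__ == "__main__":
    for dev, layer in port_link_layers():
        print(f"{dev}: {layer}")
```

A port reporting "Ethernet" is the kind used for RoCE traffic in a deployment like the one described above, while "InfiniBand" indicates the port is running native InfiniBand.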

“We are pleased to partner with the NCI as they build a scalable, world-class, and efficient cloud platform based on our CloudX interconnect,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “NCI is the first CloudX deployment to take full advantage of RDMA, OpenStack plugins, and hypervisor offloads delivered by our end-to-end 40Gb/s Ethernet and 56Gb/s InfiniBand interconnect solution.”

As Australia’s national research computing service, NCI has a mission to raise the ambition, impact, and outcomes of Australian research through access to advanced computational and data-intensive methods, support, and high-performance infrastructure.