Today Mellanox announced that Monash University in Australia has selected the company’s CloudX platform, based on Mellanox’s Spectrum SN2700 Open Ethernet switches, ConnectX-4 NICs and LinkX cables, to provide the network for the world’s first 100Gb/s end-to-end OpenStack cloud.
Cloud computing as a service has been gaining traction in the education and public sectors as a way to provide users with software-defined, on-demand access to computing resources, boost Big Data driven innovation and achieve maximum infrastructure efficiency. Monash University has demonstrated great vision and leadership, and is in the vanguard in building the most efficient OpenStack cloud to support a broad range of compute, network and storage workloads with Research @ Cloud Monash (R@CMon).
“Monash University has a strategic mandate to deliver world-class research platforms that are essential to our rapid growth in world rankings despite our relatively young age. Our researchers are building the 21st century equivalent of microscopes to inspect and make sense of large amounts of data to derive insights, and our cloud-based eResearch platform must be able to support their initiatives in every possible way. Whether it be accelerating next generation sequencing to the clinic or making our cities smarter and more energy efficient, we need a mixture of scale, speed and flexible integration in our computing. Mellanox 100Gb/s Open Ethernet solutions enable the most efficient use of data for fast analytics and intelligence, and give us the ultimate freedom to grow and support our current and future workloads,” said Steve Quenette, deputy director of the Monash eResearch Centre. “In addition, as Big Data becomes increasingly prevalent in research, business and society, the experience we gather from building cutting-edge, high-performance cloud infrastructure can provide invaluable guidance to the rest of the world.”
Monash’s first venture into OpenStack was initiated with Mellanox, utilizing an Ethernet fabric built on Mellanox’s industry-leading 56GbE SwitchX-2 and CloudX technology. Following the success of the initial deployment targeting traditional cloud workloads, Monash University is expanding R@CMon to support additional High Performance Computing (HPC) and High Throughput Computing (HTC) workloads, and handle the explosive growth of data, users and applications on this platform. Mellanox’s industry-leading 100Gb/s CloudX Ethernet fabric with native SR-IOV, RDMA over Converged Ethernet (RoCE), and advanced network automation and monitoring capabilities helps Monash address R@CMon’s heightened requirements for cloud network performance, scalability and efficiency. The debut of the world’s first 100Gb/s end-to-end cloud at Monash University signifies the start of a new era where data movement stays in lock-step with the speed of human imagination and innovation.
“We are extremely pleased with our current cloud fabric based on Mellanox CloudX. Therefore we are continuing our partnership with Mellanox to incorporate their 100Gb/s end-to-end interconnect into R@CMon,” said Blair Bethwaite, senior cloud architect at the Monash eResearch Centre. “On top of R@CMon, we’ve established numerous ‘Virtual Laboratories’ for data-intensive characterization and analysis. These are virtual desktops and Docker-based tools already linked up to the data sources and the computing resources. They are becoming the standard operating environment for the modern-day researcher, supporting general-purpose HPC and HTC (including GPGPU capabilities and Hadoop), interactive visualization and analysis. Only Mellanox can support the intensity and variety of our workload and provide guaranteed performance and user experience to our research community.”
“Mellanox CloudX is a recipe for building the most efficient cloud infrastructure for modern workloads such as Big Data, Machine Learning and Artificial Intelligence,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “As a key element of our CloudX architecture, our Spectrum Open Ethernet switch delivers non-blocking 6.4Tb/s full wire speed switching and routing capacity with industry-leading latency and power efficiency. Spectrum is the world’s first non-blocking 100 Gigabit Ethernet switch, and its deterministic zero-packet-loss performance and mega-scale design make it the most efficient building block for cloud and HPC applications, processing and fulfilling requests in real time. Using Spectrum for the 100Gb/s backbone together with leaf-and-spine switching empowers Monash to connect high-performance storage at Spectrum’s full 100Gb/s, with 50Gb/s-connected hosts for HPC and 25Gb/s-connected hosts for general-purpose cloud, to more effectively support the University’s growth and research prowess.”