“What is important to researchers is ‘time to science,’ not the length of time a job takes to compute. If you can wait in line at a national supercomputing center and it takes five days in the queue for your job to run, and then you get 50,000 cores and your job runs in a few hours, that’s great. But what if you could get those 50,000 cores right now, no waiting, and your job took longer to run but still finished before your other job would have started on the big iron machine?”
Today Mellanox announced that Monash University in Australia has selected the company’s CloudX platform, built on its Spectrum SN2700 Open Ethernet™ switches, ConnectX-4 NICs, and LinkX cables, to provide the network for the world’s first 100Gb/s end-to-end OpenStack cloud.
While there is much discussion, and many products on the market, around cloud computing and the ability to spin up virtual machines quickly and efficiently, the fact remains that without planning for cloud-based storage, data can be lost. Simply put, without storage, there is no data.
Today Mellanox announced that it is the first certified end-to-end interconnect vendor for OpenStack. Leveraging Mellanox 10/40GbE or FDR 56Gb/s adapters and switches, together with the OpenStack Cinder block storage and Neutron plug-ins, cloud vendors can significantly improve storage access performance and run virtual machine traffic with bare-metal performance, while enjoying hardened security and QoS, all delivered in a simple, tightly integrated package.
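As a rough illustration of where such plug-ins are wired in, the Neutron side is typically enabled as an ML2 mechanism driver and the Cinder side as a volume backend using the iSER transport (iSCSI over RDMA). The sketch below is a config fragment only; option and driver names vary by OpenStack release and Mellanox plug-in version, so treat every value here as an assumption and consult the vendor documentation for your deployment:

```ini
; /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative fragment)
[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
; enable the Mellanox mechanism driver alongside the default switch driver
; ("mlnx" is an assumed driver alias; check your networking plug-in's docs)
mechanism_drivers = mlnx,openvswitch

; /etc/cinder/cinder.conf (illustrative fragment)
[DEFAULT]
enabled_backends = lvm-iser

[lvm-iser]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
; select the RDMA-based iSER transport instead of plain iSCSI
target_protocol = iser
volume_group = cinder-volumes
```

The design point the announcement is making is that the data path (RDMA-capable NICs and switches) and the control path (Cinder/Neutron plug-ins) are certified together, so storage and VM traffic can bypass much of the software overhead on the hypervisor.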