

Interview: Cray in Azure Steps up with Dedicated Supercomputing in the Cloud

Cray recently announced three new cloud offerings for Cray in Azure. To learn more, we caught up with Joseph George, Executive Director of Strategic Alliances at Cray.

insideHPC: When we talk about Cray in Azure, are we really talking about sizable Cray HPC clusters installed at a Microsoft datacenter? Can you tell us how big these systems are and how many there are?

Joseph George from Cray

Joseph George: The Cray and Microsoft Azure teams have been actively working on numerous customer engagements together to clearly understand the optimal cloud model for large-scale HPC workloads and the best consumption model for HPC users. With that research as a foundation, it became clear to us that a dedicated, reserved-instance cloud model allows for an ideal HPC experience at large scale. As a result, Cray began working specifically with the Azure Dedicated organization within Microsoft Azure to build market offers in this dedicated, reserved fashion.

The three new Cray in Azure offers that we’ve announced – the Cray ClusterStor in Azure offer, the Cray in Azure for Manufacturing offer, and the Cray in Azure for Electronic Design Automation (EDA) offer – are provided to customers in this exact model. It enables customers to have their own dedicated Cray system, just as they would in their own datacenter, but now with the benefit of Azure services available to them as well. Additionally, if they have existing workloads already running in Azure, those jobs can now have access to a dedicated Cray!

The three solutions vary in configuration based on the use case they support and are offered in small, medium, and large configurations, so customers pay only for what they need, not what they might need during an occasional burst. But the configurations can span more than a thousand compute nodes or 45PB of storage – and can be customized to scale even further, as the customer requires.

So yes, we are really talking about sizable Cray clusters and storage in Microsoft Azure datacenters!

insideHPC: Cray first partnered with Azure back in late 2017. Can you characterize the kinds of customers that are taking advantage of this offering?

Joseph George: Throughout 2018, Cray and Microsoft spent numerous cycles on market research, engaging customers, and testing configurations, all with the goal of truly understanding how to best serve the HPC market in the cloud. To that end, we made great strides in learning about customer decision-making criteria, we tested various cloud transaction models, and we introduced an early-access program for running mission-critical supercomputing workloads to enable proofs-of-concepts and the like.

These three offers are brand new to market, so we’re actively speaking with a number of customers about how Cray and Azure can help them with their HPC jobs. We’ve tailored the offers for commercial HPC customers, while still being able to provide a solution to traditional large-scale research and lab customers. And we have seen interest across a wide variety of verticals, from automotive players to energy companies to research institutes.

insideHPC: Cray acquired the assets of ClusterStor back in 2017 as well. How does the new Cray ClusterStor in Azure differ from previously available instances with Lustre?

Joseph George: We are very excited about the tremendous value Cray ClusterStor in Azure is going to provide, especially to customers who have already committed some HPC workloads to the Azure cloud, since it provides Azure HPC customers the most scalable, competitively priced option for their high-performance storage needs. In fact, Cray ClusterStor in Azure provides more than 3X the throughput in GB/s per Lustre object storage server (OSS) than the currently available Lustre offer.

And in Azure, you get the best of both worlds. Cray ClusterStor in Azure is a dedicated bare-metal offer for each customer, but it also fully integrates with the Azure fabric and gives customers access to a large selection of other Azure services, like Azure Blob storage and cold archive storage. And to make things simpler, customers have easy-to-consume small, medium, and large configurations.

We’re expecting it to be a big hit with today’s Azure HPC customer. And to the HPC customer that’s thinking about the cloud, this is a great incentive to finally check it out.

insideHPC: Another new offering is Cray in Azure for Manufacturing. What does this instance look like and how does it help streamline innovation? How does the new Cray in Azure for EDA offering meet the needs of electronic design automation?

Joseph George: Both the Cray in Azure for Manufacturing offer and the Cray in Azure for EDA offer were built based on direct customer interaction with players in the space, specifically tailoring each configuration with the right mix of processing power, memory, and storage capacity for the use case.

Cray in Azure for Manufacturing provides customers a dedicated, non-virtualized system featuring a robust AMD processor with an attached ClusterStor Lustre file system, designed and optimized to run tightly coupled crash-simulation or CFD workloads at scale in Azure. This system can be attached to a customer-provided Azure subscription, as well as to the customer’s CFD applications, whether commercial or open-source applications such as OpenFOAM, that are designed and optimized to scale on Cray systems.

Similarly, the Cray in Azure for EDA offer is a Cray system optimized to run workloads that require high clock frequency and high memory per node, a common trait among EDA applications, so this system features a powerful Intel processor that better enables EDA performance. Like the Cray in Azure for Manufacturing offer, this system will be attached to a customer’s Azure subscription, granting access to a variety of other Azure services.
