GPUs Power New AWS P2 Instances for Science & Engineering in the Cloud

Today Amazon Web Services announced the availability of P2 instances, a new GPU instance type for Amazon Elastic Compute Cloud (Amazon EC2) designed for compute-intensive applications that require massive parallel floating point performance, including artificial intelligence, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and rendering. With up to 16 NVIDIA Tesla K80 GPUs, P2 instances are the most powerful GPU instances available in the cloud.

“Two years ago, we launched G2 instances to support customers running graphics and compute-intensive applications,” said Matt Garman, Vice President, Amazon EC2. “Today, as customers embrace heavier GPU compute workloads such as artificial intelligence, high-performance computing, and big data processing, they need even higher GPU performance than what was previously available. P2 instances offer seven times the computational capacity for single precision floating point calculations and 60 times more for double precision floating point calculations than the largest G2 instance, providing the best performance for compute-intensive workloads such as financial simulations, energy exploration and scientific computing.”

P2 instances allow customers to build and deploy compute-intensive applications using the CUDA parallel computing platform or the OpenCL framework without up-front capital investments. To offer the best performance for these high performance computing applications, the largest P2 instance offers 16 GPUs with a combined 192 gigabytes (GB) of video memory, 40,000 parallel processing cores, 70 teraflops of single precision floating point performance, over 23 teraflops of double precision floating point performance, and GPUDirect technology for higher bandwidth and lower latency peer-to-peer communication between GPUs. P2 instances also feature up to 732 GB of host memory, up to 64 vCPUs using custom Intel Xeon E5-2686 v4 (Broadwell) processors, dedicated network capacity for I/O operations, and enhanced networking through the Amazon EC2 Elastic Network Adapter.
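The aggregate figures above follow from the per-GPU specifications of the Tesla K80, assuming (as AWS counts them) that each of the 16 "GPUs" is one GK210 chip, two of which sit on each K80 card. A quick back-of-the-envelope check in Python:

```python
# Sanity check of the p2.16xlarge aggregate figures, assuming per-chip
# Tesla K80 specs: each GK210 chip has 2,496 CUDA cores and 12 GB GDDR5.
CORES_PER_GK210 = 2496   # CUDA cores per GK210 chip
MEM_PER_GK210_GB = 12    # GDDR5 per GK210 chip, in GB
NUM_GPUS = 16            # "GPUs" in the largest P2 instance

total_cores = NUM_GPUS * CORES_PER_GK210    # 39,936 -- the "40,000 cores"
total_mem_gb = NUM_GPUS * MEM_PER_GK210_GB  # 192 GB of video memory

print(total_cores, total_mem_gb)
```

The core count comes out to 39,936, which the announcement rounds to 40,000, and the memory to exactly the 192 GB quoted.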

Altair Engineering empowers innovation and decision-making through technology that optimizes the analysis, management, and visualization of business and engineering information. “Simulation technology is at the core of Altair’s business and with our GPU solver partner FluiDyna GmbH, we’ve made significant investments in domain decomposition to optimize our computational fluid dynamics (CFD) software, nanoFluidX, for multi-GPU scaling for increased performance and reduced cost,” said Stephen Cosgrove, Director of CFD, Altair. “We’re able to leverage the massive amount of aggregate GPU memory and double precision floating point performance in Amazon EC2 P2 instances to fit more simulations into a single node, significantly reduce customer simulation times, and reduce the cost of running large simulations.”

MathWorks, the leading developer of mathematical computing software, helps millions of engineers, scientists, researchers, and students around the world analyze and design systems and products that are transforming the world. “MATLAB users moving their analytics and simulation workloads onto the AWS Cloud require their analyses to be processed quickly,” said Silvina Grad-Freilich, Senior Product Manager, MathWorks. “The massive parallel floating point performance of Amazon EC2 P2 instances, combined with up to 64 vCPUs and 732 GB host memory, will enable customers to realize results faster and process larger datasets than was previously possible.”

MapD is a GPU database for interactive SQL querying and visualization of multi-billion record datasets. “As the leader in GPU-powered databases and visual analytics applications, we are deeply invested in the emergence of large, cloud-based GPU instances and P2 is the most powerful we have seen,” said Todd Mostak, CEO and Founder, MapD. “Our performance on Amazon EC2 P2 instances is exceptional. On a dollar-to-dollar basis across a set of standard SQL benchmarks, MapD is 78 times faster on Amazon EC2 P2 instances than CPU-based solutions. Furthermore, these speedups were seen over multi-billion row datasets, speaking directly to our ability to deliver performance at scale with these instances. With this launch, our customers can now query and visualize billions of rows of data within milliseconds while enjoying the flexibility, scalability and reliability they have come to expect from AWS.”

Sonus delivers intelligent, secure, cloud-optimized solutions for real-time communications used by the world’s leading service providers and enterprises. “Real time communications are rapidly evolving, and they require transcoding between formats for use on multiple devices,” said Mykola Konrad, Vice President, Product Management and Marketing, Sonus. “GPUs are becoming more of a disruptor for transcoding services and they offer a cost effective solution for scaling our Session Border Controller application in the cloud. Because of our collaboration with AWS, Sonus has developed the industry’s first GPU optimized session border controller by leveraging the GPU parallel computing power and Enhanced Networking of Amazon EC2 P2 instances, which decreases network costs for our customers.”

Customers can launch P2 instances using the AWS Management Console, AWS Command Line Interface (CLI), AWS SDKs, and third-party libraries. P2 instances are available in three instance sizes: p2.16xlarge with 16 GPUs, p2.8xlarge with 8 GPUs, and p2.xlarge with 1 GPU. P2 instances are available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions.
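As a sketch of launching a P2 instance programmatically, the snippet below uses boto3, the AWS SDK for Python. The size-selection helper encodes the three sizes named above; the AMI ID, credentials, and region handling are placeholders and assumptions, not values from the announcement:

```python
# GPU counts for the three P2 sizes named in the announcement.
P2_SIZES = {
    "p2.xlarge": 1,
    "p2.8xlarge": 8,
    "p2.16xlarge": 16,
}

def pick_size(gpus_needed):
    """Return the smallest P2 instance type with at least gpus_needed GPUs."""
    for name, gpus in sorted(P2_SIZES.items(), key=lambda kv: kv[1]):
        if gpus >= gpus_needed:
            return name
    raise ValueError("no P2 size offers %d GPUs" % gpus_needed)

def launch_p2(gpus_needed, ami_id, region="us-east-1"):
    """Hedged sketch: launch the smallest adequate P2 instance via boto3.

    Requires configured AWS credentials; ami_id is a placeholder for a
    real AMI (e.g. one from the AWS Marketplace).
    """
    import boto3  # AWS SDK for Python
    ec2 = boto3.client("ec2", region_name=region)
    return ec2.run_instances(
        ImageId=ami_id,
        InstanceType=pick_size(gpus_needed),
        MinCount=1,
        MaxCount=1,
    )
```

For example, `pick_size(8)` selects `p2.8xlarge`, while asking for 9 or more GPUs steps up to `p2.16xlarge`.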

Amazon Machine Images (AMIs) from AWS, NVIDIA, and other sellers are available in the AWS Marketplace to help customers get started within minutes. The AWS Deep Learning AMI comes preinstalled with the MXNet and Caffe deep learning frameworks to enable customers to reduce model training time from weeks to hours. It also lets them experiment with artificial intelligence without making large upfront capital expenditures. The AMI from NVIDIA includes preinstalled drivers and the CUDA toolkit. It’s designed for developers working on a range of GPU-intensive workloads.
