“The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest “Pascal” generation NVIDIA GPUs and NVIDIA’s NVLink high-speed multi-GPU interconnect technology. It provides high-bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 chassis together. The HGX-1 AI accelerator delivers extreme performance scalability to meet the demanding requirements of fast-growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.”
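The scaling claim in the quote (eight GPUs per chassis, four chassis linked for up to 32 GPUs) can be sketched as a small topology model. The GPU and chassis counts come from the announcement; the notion of "same chassis" standing in for NVLink-local connectivity is an assumption for illustration, not the actual fabric design:

```python
# Illustrative model of the HGX-1 scaling claim: 8 GPUs per chassis,
# up to 4 chassis interconnected for a 32-GPU fabric.
# Figures are from the announcement; treating "same chassis" as the
# NVLink-local domain is a simplifying assumption for illustration.

GPUS_PER_CHASSIS = 8
MAX_CHASSIS = 4

def gpu_ids(num_chassis):
    """Return (chassis, local_gpu) pairs for every GPU in the fabric."""
    if not 1 <= num_chassis <= MAX_CHASSIS:
        raise ValueError(f"HGX-1 fabric supports 1-{MAX_CHASSIS} chassis")
    return [(c, g) for c in range(num_chassis) for g in range(GPUS_PER_CHASSIS)]

def same_chassis(a, b):
    """True when two GPUs share a chassis (NVLink-local in this model)."""
    return a[0] == b[0]

fabric = gpu_ids(4)
print(len(fabric))                   # 32 GPUs in a fully built-out fabric
print(same_chassis((0, 0), (0, 7)))  # True: both in chassis 0
print(same_chassis((0, 0), (3, 0)))  # False: reached over inter-chassis links
```

A single chassis (`gpu_ids(1)`) yields the eight-GPU configuration described for one HGX-1 unit.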
Today vScaler announced plans to showcase its HPC cloud platform March 15-16 at the upcoming Cloud Expo Europe Conference in London. Supported by two of its strategic technology partners, Aegis Data and Global Cloud Xchange, vScaler will showcase its application-specific cloud platform, with experts on hand to discuss use cases such as HPC, Broadcast & Media, Big Data, Finance and Storage, as well as data centre innovation and co-location. “We provide full application stacks for a range of verticals as well as on-demand consultancy from our expert team,” said David Power, vScaler CTO. “Our tailor-made, software-defined infrastructure cuts away time wasted on the distractions of setup and enables our users to concentrate on the task at hand.”
In this podcast, the Radio Free HPC team looks at a set of IT and Science stories. Microsoft Azure is making a big move to GPUs and the OCP Platform as part of their Project Olympus. Meanwhile, Huawei is gaining market share in the server market and IBM is bringing storage to the atomic level.
“Cybersecurity is a cat-and-mouse game where the mouse has long had the upper hand because it’s so easy for new malware to go undetected. Dr. Eli David, an expert in computational intelligence and CTO of Deep Instinct, wants to use AI to change that, bringing the GPU-powered deep learning techniques underpinning modern speech and image recognition to the vexing world of cybersecurity.”
“This video is from the opening session of the “Introduction to Programming Pascal (P100) with CUDA 8” workshop at CSCS in Lugano, Switzerland. The three-day course is intended to offer an introduction to computing on the Pascal architecture using CUDA 8.”
Today, Microsoft, NVIDIA, and Ingrasys announced a new industry-standard design to accelerate Artificial Intelligence in the next-generation cloud. “Powered by eight NVIDIA Tesla P100 GPUs in each chassis, HGX-1 features an innovative switching design based on NVIDIA NVLink interconnect technology and the PCIe standard, enabling a CPU to dynamically connect to any number of GPUs. This allows cloud service providers that standardize on the HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations.”
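The idea behind the switched design, a chassis-wide pool of GPUs from which a host CPU can be handed an arbitrary subset, can be sketched as a simple allocator. The pool size of eight per chassis is from the announcement; the allocation API itself is hypothetical and only illustrates the "dynamically connect to any number of GPUs" concept:

```python
# Illustrative sketch of the concept behind the HGX-1 switched design:
# a pool of GPUs behind a PCIe/NVLink switch fabric, from which a host
# CPU is assigned an arbitrary subset per machine instance.
# The pool size (8 per chassis) is from the announcement; this
# allocation API is hypothetical, for illustration only.

class GpuPool:
    def __init__(self, total=8):
        self.free = set(range(total))

    def attach(self, count):
        """Assign `count` GPUs to a host; returns their IDs."""
        if count > len(self.free):
            raise RuntimeError("not enough free GPUs in the chassis")
        picked = sorted(self.free)[:count]
        self.free -= set(picked)
        return picked

    def detach(self, ids):
        """Return GPUs to the pool when the instance is torn down."""
        self.free |= set(ids)

pool = GpuPool()
small = pool.attach(1)  # e.g. a 1-GPU inference instance
big = pool.attach(4)    # e.g. a 4-GPU training instance
print(small, big, sorted(pool.free))  # [0] [1, 2, 3, 4] [5, 6, 7]
```

The point of the sketch is that instance shapes (1, 2, 4, or 8 GPUs per CPU) fall out of the allocator rather than being fixed by the wiring, which is what lets providers offer a range of configurations on one chassis design.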
“GPUs offer exceptionally high memory bandwidth and performance for a wide range of applications. The challenge in utilizing such accelerators has been the difficulty of programming them. Enter GPU Hackathons: our mentors come from national laboratories, universities, and vendors, and besides having extensive experience in programming GPUs, many of them develop GPU-capable compilers and help define standards such as OpenACC and OpenMP.”
Today Fujitsu announced that it has received RIKEN’s order for the “Deep learning system,” one of the largest supercomputers in Japan specializing in AI research. “NVIDIA DGX-1, the world’s first all-in-one AI supercomputer, is designed to meet the enormous computational needs of AI researchers,” said Jim McHugh, vice president and general manager at NVIDIA. “Powered by 24 DGX-1s, the RIKEN Center for Advanced Intelligence Project’s system will be the most powerful DGX-1 customer installation in the world. Its breakthrough performance will dramatically speed up deep learning research in Japan, and become a platform for solving complex problems in healthcare, manufacturing and public safety.”