AMAX.AI Unveils [SMART]Rack Machine Learning Cluster

Today AMAX.AI launched the [SMART]Rack AI Machine Learning cluster, an all-inclusive rackscale platform maximized for performance, featuring up to 96 NVIDIA Tesla P40, P100 or V100 GPU cards and providing well over 1 PetaFLOP of compute power per rack. “The [SMART]Rack AI is revolutionary to Deep Learning data centers,” said Dr. Rene Meyer, VP of Technology, AMAX, “because it not only provides the most powerful application-based computing power, but it expedites DL model training cycles by improving efficiency and manageability through integrated management, network, battery and cooling all in one enclosure.”
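As a rough sanity check on that figure (assuming the published peak single-precision numbers of roughly 10.6 TFLOPS for the Tesla P100 and about 12 TFLOPS for the P40), a fully populated rack works out to 96 × ~10.6 TFLOPS ≈ 1.0 PFLOPS, consistent with the "well over 1 PetaFLOP" claim; V100-based configurations would land considerably higher, especially for mixed-precision Tensor Core workloads.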

IBM Moves Data Science Forward with Integrated Analytics System

Today IBM announced the Integrated Analytics System, a new unified data system designed to give users fast, easy access to advanced data science capabilities and the ability to work with their data across private, public or hybrid cloud environments. “Today’s announcement is a continuation of our aggressive strategy to make data science and machine learning more accessible than ever before and to help organizations like AMC begin harvesting their massive data volumes – across infrastructures – for insight and intelligence,” said Rob Thomas, General Manager, IBM Analytics.

Server Vendors Announce NVIDIA Volta Systems for Accelerated AI

Today NVIDIA and its systems partners Dell EMC, Hewlett Packard Enterprise, IBM and Supermicro unveiled more than 10 servers featuring NVIDIA Volta architecture-based Tesla V100 GPU accelerators — the world’s most advanced GPUs for AI and other compute-intensive workloads. “Volta systems built by our partners will ensure that enterprises around the world can access the technology they need to accelerate their AI research and deliver powerful new AI products and services,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA.

Accelerate Innovation and Insights with HPC and AI

Vineeth Ram from HPE gave this talk at the HPC User Forum in Milwaukee. “Organizations across all sectors are putting Big Data to work. They are optimizing their IT operations and enhancing the way they communicate, learn, and grow their businesses in order to harness the full power of artificial intelligence (AI). Backed by high performance computing technologies, AI is revolutionizing the world as we know it—from web searches, digital assistants, and translations; to diagnosing and treating diseases; to powering breakthroughs in agriculture, manufacturing, and electronic design automation.”

Bright Computing Announces Integration with IBM Power Systems

Today Bright Computing announced that Bright Cluster Manager 8.0 now integrates with IBM Power Systems. “The integration of Bright Cluster Manager 8.0 with IBM Power Systems has created an important new option for users running complex workloads involving high-performance data analytics,” said Sumit Gupta, VP, HPC, AI & Machine Learning, IBM Cognitive Systems. “Bright Computing’s emphasis on ease-of-use for Linux-based clusters within public, private and hybrid cloud environments speaks to its understanding that while data is becoming more complicated, the management of its workloads must remain accessible to a changing workforce.”

NVIDIA Brings Deep Learning to Hyperscale at GTC China

Today at GTC China, NVIDIA made a series of announcements around Deep Learning and GPU-accelerated computing for hyperscale datacenters. “Demand is surging for technology that can accelerate the delivery of AI services of all kinds. And NVIDIA’s deep learning platform — which the company updated Tuesday with new inferencing software — promises to be the fastest, most efficient way to deliver these services.”

NVIDIA P100 GPUs come to Google Cloud Platform

Today the good folks at Google Cloud Platform announced the availability of NVIDIA GPUs in the Cloud for multiple geographies. Cloud GPUs can accelerate workloads such as machine learning training and inference, geophysical data processing, simulation, seismic analysis, molecular modeling, genomics and many more high performance compute use cases. “Today, we’re happy to make some massively parallel announcements for Cloud GPUs. First, Google Cloud Platform (GCP) gets another performance boost with the public launch of NVIDIA P100 GPUs in beta.”

Preferred Networks in Japan Deploys 4.7 Petaflop Supercomputer for Deep Learning

Today Preferred Networks announced the launch of a private supercomputer designed to facilitate research and development of deep learning, including autonomous driving and cancer diagnosis. The new 4.7 Petaflop machine is one of the most powerful to be developed by the private sector in Japan and is equipped with NTT Com and NTTPC’s GPU platform, containing 1,024 NVIDIA Tesla P100 GPUs.

DeepL Deploys 5 Petaflop Supercomputer at Verne Global in Iceland

Today Verne Global announced that DeepL has deployed its 5.1 petaFLOPS supercomputer at the Verne Global campus in Iceland. The system is designed to support DeepL’s artificial intelligence-driven neural network translation service, which is viewed by many as the world’s most accurate and natural-sounding machine translation service. “We are seeing growing interest from companies using AI tools, such as deep neural network (DNN) applications, to revolutionize how they move their businesses forward, create change, and elevate how we work, live and communicate.”

Intel offers up AI Developer Resources in the Cloud

Today at the O’Reilly Artificial Intelligence Conference in San Francisco, Intel’s Lisa Spelman announced the Intel Nervana DevCloud, a cloud-hosted hardware and software platform where developers, data scientists, researchers, academics and startups can learn, sandbox and accelerate development of AI solutions, with free cloud compute access powered by Intel Xeon Scalable processors. By providing compute resources for machine learning and deep learning training and inference, Intel is enabling users to start exploring AI innovation without making their own up-front investments in compute resources. In addition to cloud compute resources, frameworks, tools and support are provided.