ARM has stepped into the artificial intelligence market with the announcement of DynamIQ, a new multi-core microarchitecture designed with AI workloads in mind. “DynamIQ technology is a monumental shift in multi-core microarchitecture for the industry and the foundation for future ARM Cortex-A processors. The flexibility and versatility of DynamIQ will redefine the multi-core experience across a greater range of devices from edge to cloud across a secure, common platform.”
“Intel sees the huge potential in AI and is moving mountains to take full advantage of it,” said Patrick Moorhead of Moor Insights & Strategy. “They have acquired Altera, Nervana Systems and other IP, which they need to connect to their home-grown IP, and now it’s time to accelerate its delivery. That’s where today’s announcement comes into play: a centralized organization, reporting directly to CEO Brian Krzanich, to make that happen. This is classic organizational strategy, accelerating delivery by having a cross-product group report directly to the CEO.”
“As the founding lead of the Google Brain project, and more recently through my role at Baidu, I have played a role in the transformation of two leading technology companies into ‘AI companies.’ But AI’s potential is far bigger than its impact on technology companies. I will continue my work to shepherd in this important societal change. Beyond transforming large companies to use AI, there are also rich opportunities in entrepreneurship and in further AI research.”
We are sad to report that HPC vendor Scalable Informatics has gone out of business. Headed up by CEO Joe Landman, Scalable Informatics spent the last 12 years building “Simply Faster” software-defined storage and compute solutions for the financial, research, scientific, and big data analytics markets. “There are days when this reporter wishes he wasn’t in the news business. Today is one of those days.”
“The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest “Pascal” generation NVIDIA GPUs and NVIDIA’s NVLink high-speed multi-GPU interconnect technology, and provides high-bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 chassis together. The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast-growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.”
Today ISC 2017 announced a day-long Deep Learning track on June 21 as part of its technical program. The full conference takes place June 18-21 in Frankfurt, Germany. “The overwhelming success of deep learning has triggered a race to build larger artificial neural networks, using growing amounts of training data in order to allow computers to take on more complex tasks. Such work will challenge the computational feasibility of deep learning of this magnitude, requiring massive data throughput and compute power. Hence, implementing deep learning at scale has become an emerging topic for the high performance computing community.”
In this podcast, the Radio Free HPC team looks at a set of IT and science stories. Microsoft Azure is making a big move to GPUs and the OCP Platform as part of its Project Olympus. Meanwhile, Huawei is gaining market share in the server market and IBM is bringing storage to the atomic level.
“Cybersecurity is a cat-and-mouse game where the mouse has long had the upper hand, because it’s so easy for new malware to go undetected. Dr. Eli David, an expert in computational intelligence and CTO of Deep Instinct, wants to use AI to change that, bringing the GPU-powered deep learning techniques underpinning modern speech and image recognition to the vexing world of cybersecurity.”
The Data Science with Spark Workshop addresses high-level parallelization for data analytics workloads using the Apache Spark framework. Participants will learn how to prototype with Spark and how to exploit large HPC machines like Piz Daint, the CSCS flagship system.
Today, Microsoft, NVIDIA, and Ingrasys announced a new industry-standard design to accelerate artificial intelligence in the next-generation cloud. “Powered by eight NVIDIA Tesla P100 GPUs in each chassis, HGX-1 features an innovative switching design based on NVIDIA NVLink interconnect technology and the PCIe standard, enabling a CPU to dynamically connect to any number of GPUs. This allows cloud service providers that standardize on the HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations.”