Cloud HPC Platform Nimbix Offers Hybrid Software and Services on Kubernetes Infrastructures
High performance computing cloud platform Nimbix today announced availability of Nimbix Cloud Everywhere, a single pane of glass for HPC and supercomputing applications converged with Kubernetes across HPC infrastructures. The hybrid service and software offering is designed to let customers deploy HPC on Kubernetes-enabled infrastructures, including their own HPC clusters, any cloud provider or […]
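For readers curious what running an HPC-style batch workload on a generic Kubernetes cluster looks like, here is a minimal sketch using the official Kubernetes Python client. It is not Nimbix-specific; the job name, container image, command, and resource figures are illustrative placeholders.

```python
# Minimal sketch: submit a batch-style HPC job to a Kubernetes cluster
# with the official Python client. Image, command, and resource values
# below are placeholders, not Nimbix specifics.
from kubernetes import client, config

def submit_hpc_job():
    config.load_kube_config()  # reads ~/.kube/config for the target cluster

    container = client.V1Container(
        name="solver",
        image="example.com/hpc/solver:latest",        # placeholder image
        command=["mpirun", "-np", "4", "./solver"],   # placeholder command
        resources=client.V1ResourceRequirements(
            requests={"cpu": "4", "memory": "16Gi"},
            limits={"cpu": "4", "memory": "16Gi"},
        ),
    )

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="hpc-solver-job"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    containers=[container],
                    restart_policy="Never",
                )
            ),
            backoff_limit=0,  # fail fast rather than retrying a solver run
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

if __name__ == "__main__":
    submit_hpc_job()
```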
2nd Generation Intel® Xeon® Scalable Processors Demonstrate Amazing HPC Performance
In this guest article, our friends at Intel discuss how benchmarks show key workloads averaging 31% better performance on the Intel Xeon Platinum 9282 than on the AMD EPYC “Rome” 7742. Intel’s analysis provides strong evidence that the 2nd Generation Intel Xeon Scalable processor (Cascade Lake “CLX”) architecture delivers dramatic performance gains on real-world workloads. An impressive array of benchmarks shows 2S systems built with Intel’s 56-core processor (Intel Xeon Platinum 9282) solidly ahead of systems built with AMD’s 64-core processor (AMD EPYC 7742).
Leadership Performance with 2nd-Generation Intel Xeon Scalable Processors
According to Intel, its new 2nd Generation Intel Xeon Scalable processor family includes Intel Deep Learning Boost for AI deep learning inference acceleration, new features, support for Intel Optane DC (data center) persistent memory, and more. Learn more about the offerings in a new issue of Parallel Universe Magazine.
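As a rough illustration of what Deep Learning Boost accelerates, the sketch below emulates in NumPy the int8 multiply with int32 accumulation that the AVX-512 VNNI instructions perform in hardware. It is a conceptual model only, not Intel's implementation; real frameworks reach the instruction through optimized libraries.

```python
# Conceptual NumPy sketch of the int8 dot product with int32 accumulation
# that Intel Deep Learning Boost (AVX-512 VNNI) executes in hardware.
# Illustrative only; values and sizes are arbitrary.
import numpy as np

def vnni_style_dot(activations_u8: np.ndarray, weights_s8: np.ndarray) -> np.int32:
    """Multiply uint8 activations by int8 weights, accumulating in int32."""
    assert activations_u8.dtype == np.uint8 and weights_s8.dtype == np.int8
    # Widen to int32 before multiplying so products cannot overflow,
    # mirroring the hardware's 32-bit accumulators.
    products = activations_u8.astype(np.int32) * weights_s8.astype(np.int32)
    return np.int32(products.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, size=64, dtype=np.uint8)     # quantized activations
    w = rng.integers(-128, 128, size=64, dtype=np.int8)   # quantized weights
    print(vnni_style_dot(a, w))
```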
Supermicro Steps up to HPC & AI Workloads at ISC 2018
In this video from ISC 2018, Perry Hayes and Martin Galle from Supermicro describe the company’s latest innovations for HPC and AI workloads. “Supermicro delivers the industry’s fastest, most powerful selection of HPC solutions offering even higher density compute clusters to deliver maximum parallel computing performance for any science and engineering, simulation, modeling, or analytics applications,” said Charles Liang, president and CEO of Supermicro.
Video: How Intel will deliver on the Promise of AI
In this video, Naveen Rao keynotes the Intel AI DevCon event in San Francisco. “One of the important updates we’re discussing today is optimizations to Intel Xeon Scalable processors. These optimizations deliver significant performance improvements on both training and inference as compared to previous generations, which is beneficial to the many companies that want to use existing infrastructure they already own to achieve the related TCO benefits along their first steps toward AI.”
Intel HPC Technology: Fueling Discovery and Insight with a Common Foundation
To remain competitive, companies, academic institutions, and government agencies must tap the data available to them to empower scientific breakthroughs and drive greater business agility. This guest post explores how Intel’s scalable and efficient HPC technology portfolio accelerates today’s diverse workloads.
Intel FPGAs Go Mainstream for Enterprise Workloads
Today Intel announced that top-tier OEMs are adopting Intel field programmable gate array (FPGA) acceleration in their server lineups. This is the first major use of reprogrammable silicon chips to help speed up mainstream applications in the modern data center. “We are at the horizon of a new era of data center computing as Dell EMC and Fujitsu put the power and flexibility of Intel FPGAs in mainstream server products,” said Reynette Au, vice president of marketing for the Intel Programmable Solutions Group. “We’re enabling our customers and partners to create a rich set of high-performance solutions at scale by delivering the benefits of hardware performance, all in a software development environment.”
Cray Rolls Out New Artificial Intelligence Offerings
Today Cray announced it is adding new options to its line of CS-Storm GPU-accelerated servers, as well as improved fast-start AI configurations that make it easier for organizations to get started with AI proof-of-concept projects and move from pilot to production. “As companies approach AI projects, choices in system size and configuration play a crucial role,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. “Our customers look to Cray Accel AI offerings to leverage our supercomputing expertise, technologies and best practices. Whether an organization wants a starter system for model development and testing, or a complete system for data preparation, model development, training, validation and inference, Cray Accel AI configurations provide customers a complete supercomputer system.”
New BOXX Deep Learning Workstation has 4 NVIDIA GPUs and an 18-Core Xeon Processor
Today BOXX Technologies announced the new APEXX W3 compact workstation featuring an Intel Xeon W processor, four dual slot NVIDIA GPUs, and other innovative features for accelerating HPC applications. “Available with an Intel Xeon W CPU (up to 18 cores) in a compact chassis, the remarkably quiet APEXX W3 is ideal for data scientists, enabling deep learning development at the user’s deskside. Capable of supporting up to four NVIDIA Quadro GV100 graphics cards, the workstation helps users rapidly iterate and test code prior to large-scale DL deployments while also being ideal for GPU-accelerated rendering. At GTC, APEXX W3 will demonstrate V-Ray rendering with NVIDIA OptiX AI-accelerated denoiser technology.”
One Stop Systems Launches Rack Scale GPU Accelerator System
Today One Stop Systems expanded its line of rack scale NVIDIA GPU accelerator products with the introduction of GPUltima-CI. “The GPUltima-CI power-optimized rack can be configured with up to 32 dual Intel Xeon Scalable Architecture compute nodes, 64 network adapters, 48 NVIDIA Volta GPUs, and 32 NVMe drives on a 128Gb PCIe switched fabric, and can support tens of thousands of composable server configurations per rack. Using one or many racks, the OSS solution contains the necessary resources to compose any combination of GPU, NIC and storage resources as may be required in today’s mixed workload data center.”