Making AI Accessible to Any Size Enterprise

In this sponsored post, our friends at Lenovo and NetApp have teamed up with NVIDIA to discuss how the companies are helping to drive Artificial Intelligence (AI) into smaller organizations and, hopefully, seed that creative garden. Experience tells us that there is a relationship between organizational size and technology adoption: larger, more resource-rich enterprises generally adopt new technologies first, while smaller, more resource-constrained organizations follow afterward (provided the smaller organization isn't in the technology business).

Dell, NetApp, IBM Lead Coldago Research File Storage Map for 2020

Coldago Research, a market research and analysis firm, today released the 2020 edition of its file storage map (see below), in which 31 vendors were examined and seven emerged as leaders: Dell, DDN, IBM, NetApp, Pure Storage, Qumulo and VAST Data. The report studied the market and vendors since Coldago's previous edition 12 months […]

NetApp Deploys Iguazio’s Data Science Platform for Optimized Storage Management

The service was previously built on Hadoop, and NetApp said it was also looking to modernize the infrastructure "to reduce the complexities of deploying new AI services and the costs of running large-scale analytics. In addition, the shift was needed to enable real-time predictive AI, and to abstract deployment, allowing the technology to run on multi-cloud or on premises seamlessly."

UKRI Awards ARCHER2 Supercomputer Services Contract

UKRI has awarded contracts to run elements of the next national supercomputer, ARCHER2, which will represent a significant step forward in capability for the UK’s science community. ARCHER2 is provided by UKRI, EPCC, Cray (an HPE company) and the University of Edinburgh. “ARCHER2 will be a Cray Shasta system with an estimated peak performance of 28 PFLOP/s. The machine will have 5,848 compute nodes, each with dual AMD EPYC Zen2 (Rome) 64 core CPUs at 2.2GHz, giving 748,544 cores in total and 1.57 PBytes of total system memory.”
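The quoted node, socket, and core counts are internally consistent; a minimal sanity-check sketch using only the figures from the announcement above (the per-node memory figure is derived here, not stated in the quote):

```python
# Sanity-check the quoted ARCHER2 compute figures (from the announcement above).
nodes = 5848                 # compute nodes
cpus_per_node = 2            # dual AMD EPYC Zen2 (Rome) sockets per node
cores_per_cpu = 64           # 64-core parts

total_cores = nodes * cpus_per_node * cores_per_cpu
print(total_cores)           # 748544, matching the quoted 748,544 cores

# 1.57 PB of total system memory spread evenly across the nodes
# works out to roughly 268 GB (i.e. 256 GiB) per node -- a derived
# estimate, not a figure from the announcement.
per_node_mem_gb = 1.57e15 / nodes / 1e9
print(round(per_node_mem_gb))  # ~268
```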

Simula Research Lab to Manage Heterogeneous HPC Platform with Bright Computing

Today, Bright Computing announced that Simula Research Laboratory has chosen Bright Cluster Manager to manage its multi-architecture, multi-OS HPC environment. “After a careful evaluation, Simula chose Bright Cluster Manager to provide comprehensive management of eX³, enabling the organization to administer its HPC platform as a single entity; provisioning the hardware, operating systems, and workload managers from a unified interface. Further, the intuitive Bright management console will allow Simula to see and respond to what’s happening in their cluster anywhere, at any time.”

LANL’s EMC3 Consortium Enjoys Rapid Growth in Its First Year

Just over a year after Los Alamos National Laboratory launched the Efficient Mission Centric Computing Consortium (EMC3), 15 companies, universities and federal organizations are now working together to explore new ways to make extreme-scale computers more efficient. “In the first year of EMC3 we have already seen efficiency improvements to HPC in a number of areas, including the world’s first NVMe-based hardware-accelerated compressed parallel filesystem, in-situ analysis enabled on network adapters for a real simulation code, identifying issues with file system metadata performance in the Linux Kernel, record-setting in situ simulation output indexing, demonstrating file-system metadata indexing, and more.”

NetApp Looks to BeeGFS for High-Speed Storage

ThinkParQ is expanding the global reach of BeeGFS by partnering with NetApp. The new partnership will provide an easy-to-deploy, cost-effective, and easy-to-manage high-performance turnkey solution that incorporates NetApp’s E-Series storage (including the new EF600 system) powered by BeeGFS, with enhanced technical support from NetApp. NetApp E-Series storage with BeeGFS will accelerate workloads and provide customers with consistent, near-real-time access to their data whilst lowering TCO.

NetApp EF600 Storage Array Speeds HPC and Analytics

Today NetApp announced the NetApp EF600 storage array. The EF600 is an end-to-end NVMe midrange array that accelerates access to data and empowers companies to rapidly develop new insights for performance-sensitive workloads. “The storage industry is currently transitioning from the SAS to the NVMe protocol, which significantly increases the speed of access to data,” said Tim Stammers, senior analyst, 451 Research. “But conventional storage systems do not fully exploit NVMe performance, because of latencies imposed by their main controllers. NetApp’s E-Series systems were designed to address this architectural issue and are already used widely in performance-sensitive applications. The EF600 sets a new level of performance for the E-Series by introducing end-to-end support for NVMe, and should be considered by IT organizations looking for high-speed storage to serve analytics and other data-intensive applications.”

Report: Machine Learning Workloads Bolstering HPC Market

Intersect360 Research has completed its market sizing and new five-year forecast for the High Performance Computing industry. “AI workloads are sweeping through all types of enterprises and research, which is affecting both the HPC and Hyperscale markets. In HPC, we’re seeing not only a measurable effect on top-line budget expectations, but also different technology choices being made to accommodate machine learning workloads.”

Ohio Supercomputer Center Hosts Statewide Users Group

On April 19, researchers gathered at the Ohio Supercomputer Center for the Statewide Users Group (SUG) spring conference to collaborate and share ideas with peers and OSC staff. “SUG encompasses all OSC clients and receives direction from the SUG executive committee, a volunteer group composed of the Ohio university faculty who provide OSC’s leadership with program and policy advice and direction to ensure a productive environment for research.”