Datacenter service providers currently face a confluence of challenges that require them to adapt and modernize to meet the upcoming requirements of their principal customers. These include the growth and proliferation of new compute-intensive and storage-intensive workloads and applications, such as those that leverage AI. They also include applications whose viability and success depend on […]
Powering Innovation: Private AI Infrastructure in the Enterprise
AI is rapidly transforming industries, becoming a critical driver of innovation. As AI’s influence expands, organizations are increasingly turning to private AI solutions to maintain control over their data, ensure regulatory compliance, and customize AI models to meet specific needs. According to IDC’s Spotlight report Powering Innovation: Private AI Infrastructure in the Enterprise, experienced organizations […]
Why IT Must Have an Influential Role in Strategic Decisions About Sustainability
In today’s climate-aware era, sustainability has become an increasing focus for business decision-makers. With technology serving as the catalyst for growth in many organizations, IT leaders play a crucial role in decisions that will positively impact future generations. In this whitepaper, written by International Data Corporation (IDC), the premier global provider of market intelligence, […]
It’s Time to Resolve the Root Cause of Congestion
Today, every high-performance computing (HPC) workload running globally faces the same crippling issue: congestion in the network.
Congestion can delay workload completion times for crucial scientific and enterprise workloads, making HPC systems unpredictable and leaving high-cost cluster resources waiting for delayed data to arrive. Despite various brute-force attempts to resolve the congestion issue, the problem has persisted. Until now.
In this paper, Matthew Williams, CTO at Rockport Networks, explains how recent innovations in networking technologies have led to a new network architecture that targets the root causes of HPC network congestion, specifically:
– Why today’s network architectures are not a sustainable approach to HPC workloads
– How HPC workload congestion and latency issues are directly tied to the network architecture
– Why a direct interconnect network architecture minimizes congestion and tail latency
Azure HBv3 VMs and Excelero NVMesh Performance Results
Azure offers Virtual Machines (VMs) with local NVMe drives that deliver a tremendous amount of performance. These local NVMe drives are ephemeral, so if the VM fails or is deallocated, the data on the drives is lost. Excelero NVMesh provides a means of protecting and sharing data on these drives, making their performance readily available without sacrificing data durability. This eBook from Microsoft Azure and AMD, in coordination with Excelero, provides in-depth technical information about the performance and scalability of volumes created on Azure HBv3 VMs with this software-defined storage layer.
HPE Reference Architecture for SAS 9.4 on HPE Superdome Flex 280 and HPE Primera Storage
This Reference Architecture highlights the key findings and demonstrated scalability from running SAS® 9.4 with the Mixed Analytics Workload on the HPE Superdome Flex 280 Server and HPE Primera Storage. The results show that this combination delivers up to 20 GB/s of sustained throughput, as much as a 2x performance improvement over testing on the previous server and storage generation.
Things to Know When Assessing, Piloting, and Deploying GPUs
In this insideHPC Guide, our friends over at WEKA explain that when organizations decide to move existing or new applications to a GPU-accelerated system, there are many items to consider: assessing the new environment’s required components, implementing a pilot program to learn about the system’s future performance, and planning for eventual scaling to production levels.
Simplifying Persistent Container Storage for the Open Hybrid Cloud
This ESG Technical Validation documents remote testing of Red Hat OpenShift Container Storage with a focus on the ease of use and breadth of data services. Containers have become an important part of data center modernization. They simplify building, packaging, and deploying applications, and are hardware agnostic and designed for agility—they can run on physical, virtual, or cloud infrastructure and can be moved around as needed.
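To make the idea of persistent container storage concrete, here is a minimal sketch of a PersistentVolumeClaim against OpenShift Container Storage. The claim name and size are illustrative, and the storage class name is an assumption that varies by deployment; check `oc get storageclass` for the classes actually available in your cluster.

```yaml
# Hypothetical PersistentVolumeClaim against OpenShift Container Storage.
# The storageClassName is an assumption; confirm it with `oc get storageclass`.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce        # volume mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi        # illustrative capacity request
  storageClassName: ocs-storagecluster-ceph-rbd
```

A pod that mounts `demo-data` keeps its data across restarts and rescheduling, which is what makes stateful containerized applications practical on shared infrastructure.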
Massive Scalable Cloud Storage for Cloud Native Applications
In this comprehensive technology white paper, written by Evaluator Group, Inc. on behalf of Lenovo, we delve into OpenShift, a key component of Red Hat’s portfolio of products designed for cloud native applications. It is built on top of Kubernetes, along with numerous other open source components, to deliver a consistent developer and operator platform that can run across a hybrid environment and scale to meet the demands of enterprises. Red Hat uses the open source Ceph storage technology to provide the data plane for its OpenShift environment.
insideHPC Guide to HPC/AI for Energy
In this technology guide, we take a deep dive into how the team of Dell Technologies and AMD is working to provide solutions for a wide array of needs for more strategic cultivation of oil and gas energy reserves. We start with a series of compelling use-case examples, then introduce a number of important pain points solved with HPC and AI. We continue with specific solutions for the energy industry from Dell and AMD, then look at a case study examining how geophysical services and equipment company CGG successfully deployed HPC technology for competitive advantage. Finally, we leave you with a short list of valuable resources available from Dell to help guide you along the path with HPC and AI.