HPE Reference Architecture for SAS 9.4 on HPE Superdome Flex 280 and HPE Primera Storage

This Reference Architecture highlights the key findings and demonstrated scalability of SAS® 9.4 running the Mixed Analytics Workload on the HPE Superdome Flex 280 Server and HPE Primera Storage. The results demonstrate that this combination delivers up to 20 GB/s of sustained throughput, up to a 2x performance improvement over the previous generation of server and storage tested.

How to Integrate GPUs into your Business Analytics Ecosystem

This whitepaper discusses how GPU technology can augment data analytics performance, enabling data warehouses and other solutions to better respond to new, yet common, database limitations that result from growing data set sizes, increasing user concurrency and demand, and the increased use of interactive analytics. The way in which the analytics market has evolved […]
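
As a concrete, hedged illustration of the kind of workload the paper has in mind, the short Python sketch below runs a pandas-style group-by aggregation on the GPU with the open source RAPIDS cuDF library; cuDF, the file name, and the column names are our example choices, not anything specified in the whitepaper.

```python
# Hypothetical illustration: a pandas-style group-by aggregation executed on
# the GPU with RAPIDS cuDF instead of on the CPU with pandas.
import cudf  # GPU DataFrame library from the RAPIDS project

# Load a (hypothetical) events table straight into GPU memory.
df = cudf.read_csv("events.csv")  # assumed columns: user_id, latency_ms

# Interactive-analytics style aggregation; the work runs on the GPU, which
# is where large data sets and high query concurrency start to pay off.
summary = df.groupby("user_id")["latency_ms"].agg(["mean", "max"])
print(summary.head())
```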

Things to Know When Assessing, Piloting, and Deploying GPUs

In this insideHPC Guide, our friends over at WEKA suggest that when organizations decide to move existing or new applications to a GPU-influenced system, there are many items to consider: assessing the new environment's required components, implementing a pilot program to learn about the system's future performance (a minimal example of such a check is sketched below), and planning for eventual scaling to production levels.
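
As one hedged sketch of what a first pilot measurement might look like (our example using PyTorch, not a WEKA-prescribed procedure), the snippet below confirms a GPU is visible and times a single representative kernel:

```python
# Hypothetical pilot check: verify a CUDA GPU is visible and time one
# representative dense kernel. Assumes PyTorch with CUDA support installed.
import time
import torch

assert torch.cuda.is_available(), "No CUDA GPU visible in this environment"

x = torch.randn(4096, 4096, device="cuda")
torch.cuda.synchronize()              # finish setup work before timing
start = time.perf_counter()
y = x @ x                             # representative matrix-multiply kernel
torch.cuda.synchronize()              # wait for the GPU to finish
elapsed = time.perf_counter() - start
print(f"4096x4096 matmul took {elapsed:.4f}s on {torch.cuda.get_device_name(0)}")
```

Numbers from a toy kernel like this only bound expectations; a real pilot would replay the application's own workload.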

Simplifying Persistent Container Storage for the Open Hybrid Cloud

This ESG Technical Validation documents remote testing of Red Hat OpenShift Container Storage, with a focus on ease of use and the breadth of its data services. Containers have become an important part of data center modernization: they simplify building, packaging, and deploying applications, and they are hardware agnostic and designed for agility, able to run on physical, virtual, or cloud infrastructure and to be moved around as needed.

Massive Scalable Cloud Storage for Cloud Native Applications

In this comprehensive technology white paper, written by Evaluator Group, Inc. on behalf of Lenovo, we delve into OpenShift, a key component of Red Hat's portfolio of products designed for cloud native applications. It is built on top of Kubernetes, along with numerous other open source components, to deliver a consistent developer and operator platform that can run across a hybrid environment and scale to meet the demands of enterprises. Red Hat uses the Ceph open source storage technology to provide the data plane for its OpenShift environment.
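
To make the storage side tangible, here is a hedged sketch that uses the official Kubernetes Python client to request a Ceph-backed volume through a storage class; the namespace and the storage class name are assumptions for illustration (class names vary by deployment) rather than details taken from the paper.

```python
# Hypothetical sketch: request Ceph-backed persistent storage on an
# OpenShift/Kubernetes cluster via the official client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
core_v1 = client.CoreV1Api()

# A PersistentVolumeClaim asking the cluster for 10 GiB of block storage.
# "ocs-storagecluster-ceph-rbd" is an assumed class name; run
# `kubectl get storageclass` to see what your own cluster offers.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ocs-storagecluster-ceph-rbd",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```

Because the claim names only a storage class, the same manifest is portable to any cluster where Ceph (or another provisioner) backs that class, which is the consistency point the paper emphasizes.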

insideHPC Guide to HPC/AI for Energy

In this technology guide, we take a deep dive into how the team of Dell Technologies and AMD is working to provide solutions for a wide array of needs in the more strategic development of oil and gas energy reserves. We'll start with a series of compelling use-case examples, and then introduce a number of important pain points solved with HPC and AI. We'll continue with some specific solutions for the energy industry from Dell and AMD. Then we'll take a look at a case study examining how geophysical services and equipment company CGG successfully deployed HPC technology for competitive advantage. Finally, we'll leave you with a short list of valuable resources available from Dell to help guide you along the path with HPC and AI.

The Race for a Unified Analytics Warehouse

This white paper from our friends over at Vertica argues that the race for a unified analytics warehouse is on. The data warehouse has been around for almost three decades. Shortly after big data platforms were introduced in the late 2000s, there was talk that the data warehouse was dead, but it never went away. When big data platform vendors realized that the data warehouse was here to stay, they started building databases on top of their file systems and conceptualizing a data lake that would replace the data warehouse. It never did.

insideHPC Special Report: Accelerate WRF Performance – Expedite Predictions with In-Depth Workload Characterization Knowledge

A popular application for weather and climate simulation is the Weather Research and Forecasting (WRF) model. This white paper discusses how QCT works with leading research and commercial organizations to lower the total cost of ownership by supplying highly tuned applications that are optimized for leading-edge infrastructure.

Panasas PanFS 8: Architectural Overview

The PanFS® parallel file system delivers the highest performance among competitive HPC storage systems at any capacity, takes the complexity and unreliability of typical high-performance computing (HPC) storage systems off your hands, and does so using commodity hardware at competitive price points. In this white paper, we're going to take a "breadth-first" tour of the PanFS architecture, looking at its key components and then diving deep into the main benefits.

Paradigm Change: Reinventing HPC Architectures with In-Package Optical I/O

In this white paper, our friends over at Ayar Labs discuss an important paradigm change: reinventing HPC architectures with in-package optical I/O. The introduction of in-package optical I/O technology helps HPC centers accelerate the slope of compute progress needed to tackle ever-growing scientific problem sizes and HPC/AI convergence. Ayar Labs expects its technology not only to extend the traditional type of architecture and put the HPC industry back on track, but also to create an inflection point that fundamentally changes the slope of the compute performance efficiency curve. The key will be enabling converged HPC/AI centers to build systems with disaggregated CPUs, GPUs, FPGAs, and custom ASICs interconnected on equal footing.