Elbencho – A New Storage Benchmark for AI

Germany, Feb 03, 2021 — Elbencho, a new open-source storage benchmark tool, is now available to help organizations that demand high performance evaluate modern storage systems, optionally including GPUs in the storage access path. Elbencho is available for download at https://github.com/breuner/elbencho. Traditionally, storage system vendors published numbers primarily based on simple […]

The Dell Technologies HPC Community Interviews: From Narrow to General AI – Decoding the Brain to Train Neural Networks

HPC veteran Luke Wilson lives at the forefront of AI research. In this interview, he talks about research whose goal is to move computer intelligence from “narrow” (one task at a time) to “general” (more than one task simultaneously), a key to which is “context neuron switching.” One research strategy involves brain decoding by reverse-mapping brain activity using “functional MRI activation maps,” said Wilson, who is chief data scientist and distinguished engineer at Dell’s HPC & AI Innovation Lab. Of the research conducted with McGill University, the Montreal Neurological Institute and Intel (using Dell’s Intel Xeon-powered Zenith cluster), he said: “What we’re trying to do is take that image of an activated brain and infer, using a neural network, what the patient was being asked to do.”
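
The decoding step Wilson describes, inferring which task a subject was performing from an image of brain activation, can be framed as an image-classification problem. The following is a minimal, hypothetical sketch rather than the actual Dell/McGill/Intel model; the 64x64 input size, single channel, and eight candidate tasks are assumptions made purely for illustration.

import torch
import torch.nn as nn

class TaskDecoder(nn.Module):
    """Toy CNN that maps an fMRI activation-map image to a predicted task label."""
    def __init__(self, num_tasks: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_tasks)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Toy usage: random tensors stand in for preprocessed activation maps.
model = TaskDecoder()
maps = torch.randn(4, 1, 64, 64)        # batch of 4 hypothetical activation maps
predicted = model(maps).argmax(dim=1)   # most likely task index for each map
print(predicted)

In practice such a model would be trained on labeled (activation map, task) pairs with a standard cross-entropy loss; the actual research may use very different architectures and preprocessing.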

OEMs Join Nvidia-Certified Program – Systems Pre-tested for AI Workloads

Dell Technologies, GIGABYTE, Hewlett Packard Enterprise (HPE), Inspur and Supermicro are among 11 systems makers engaged in an Nvidia certification program, announced this morning, designed to test hardware across a range of AI and data analytics workloads, including jobs that require multiple compute nodes and tasks that only need part of the power of one GPU. […]

12th JLESC Workshop to Be Held Feb. 24-26

The 12th Joint Laboratory for Extreme Scale Computing (JLESC) Workshop will be held virtually from Wednesday, Feb. 24 to Friday, Feb. 26. It will bring together researchers in high performance computing from the JLESC partners INRIA, the University of Illinois, Argonne National Laboratory, Barcelona Supercomputing Center, Jülich Supercomputing Centre, RIKEN R-CCS and The University of Tennessee to explore […]

Modern HPC and Big Data Design Strategies for Data Centers – Part 3

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan has a wide range of bare-bones server and storage hardware solutions available for organizations and enterprise customers.

Modern HPC and Big Data Design Strategies for Data Centers – Part 2

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan has a wide range of bare-bones server and storage hardware solutions available for organizations and enterprise customers.

Modern HPC and Big Data Design Strategies for Data Centers

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan has a wide range of bare-bones server and storage hardware solutions available for organizations and enterprise customers.

QPM Addresses Medical Life Sciences Challenges

In this sponsored post from our friends over at Quanta Cloud Technology (QCT), learn how QCT introduces QCT Platform on Demand (QCT POD), a converged framework with a flexible infrastructure for customers running different workloads. Under this concept, QCT developed QCT POD for Medical (QPM), an on-premises, rack-level system built from common building blocks and designed to provide greater flexibility and scalability, meeting the demands of different medical workloads that use HPC and deep learning (DL) technologies, including Next Generation Sequencing (NGS), Molecular Dynamics (MD), and Medical Image Recognition.

Modern HPC and Big Data Design Strategies for Data Centers

This insideHPC Special Research Report provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan has a wide range of bare-bones server and storage hardware solutions available for organizations and enterprise customers.

University of Stuttgart’s Hawk HPC System to Go CPU-GPU for Deep Learning Workloads

Add the High Performance Computing Center at the University of Stuttgart (HLRS) to the list of supercomputing organizations moving from CPU-only to CPU-GPU architectures. HLRS announced this morning that it will add Nvidia graphics processing units to its Hawk supercomputer, a Hewlett Packard Enterprise Apollo system installed last February. One of Europe’s most powerful HPC systems, […]