insideHPC Guide to HPC Fusion Computing Model – A Reference Architecture for Liberating Data (Part 2)

This insideHPC technology guide, “insideHPC Guide to HPC Fusion Computing Model – A Reference Architecture for Liberating Data,” discusses how organizations need to adopt a Fusion Computing Model to meet the needs of processing, analyzing, and storing data so that it is no longer static. Fusion computing provides a reference architecture with multiple configurations. “We went back to evaluate the first principles of why we store and move data. The Fusion Computing Model looks at a broader integration of capabilities to put agility back at the data center model.”

insideHPC Guide to HPC Fusion Computing Model – A Reference Architecture for Liberating Data

This insideHPC technology guide discusses how organizations need to adopt a Fusion Computing Model to meet the needs of processing, analyzing, and storing data so that it is no longer static. This guide (i) provides an overview of the Fusion Computing Model; (ii) describes how Seagate Technology PLC (Seagate) and Intel Corporation technologies can meet fusion […]

IBM Releases AI Toolkit for Deep Learning Uncertainties

Deep learning is smart — show off smart. It loves connecting dots no one else can see and being the smartest one in the room. But that’s when deep learning can go wrong – when it thinks it knows everything. What deep learning needs is a touch of humility, to not just be smart but […]
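
The excerpt above is only a teaser, but the underlying idea is concrete: a model should report how confident it is, not just a prediction. As a hedged illustration (not IBM’s toolkit API), the sketch below uses Monte Carlo dropout in PyTorch, with an invented SmallNet and random inputs, to produce an uncertainty estimate alongside each prediction.

    # A minimal sketch of one common way to expose a deep model's uncertainty:
    # Monte Carlo dropout. Generic illustration only; not IBM's toolkit API.
    import torch
    import torch.nn as nn

    class SmallNet(nn.Module):
        def __init__(self, in_dim=16, hidden=64, classes=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p=0.2),
                nn.Linear(hidden, classes),
            )

        def forward(self, x):
            return self.net(x)

    def predict_with_uncertainty(model, x, passes=50):
        """Keep dropout active at inference time and average stochastic passes."""
        model.train()  # leaves dropout layers active
        with torch.no_grad():
            probs = torch.stack(
                [torch.softmax(model(x), dim=-1) for _ in range(passes)]
            )
        mean = probs.mean(dim=0)   # averaged prediction
        std = probs.std(dim=0)     # spread across passes ~ model's uncertainty
        return mean, std

    if __name__ == "__main__":
        model = SmallNet()
        x = torch.randn(4, 16)     # hypothetical inputs
        mean, std = predict_with_uncertainty(model, x)
        print("predicted class:", mean.argmax(dim=-1).tolist())
        print("uncertainty (max per-class std):", std.max(dim=-1).values.tolist())

Averaging many stochastic forward passes gives the prediction; the spread across those passes is a rough signal of when the model “thinks it knows” more than it should.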

5 Considerations When Building an AI / GPU Cluster

AI continues to change the way many organizations conduct their work and research. Deep learning applications are constantly evolving, and organizations are adapting to new technologies to improve their performance and capabilities. Companies that fail to adapt to these emerging technologies run the risk of falling behind the competition. At PSSC Labs we want to make sure that doesn’t happen to you. There is a lot going on in the world of AI, and even more to think about when building a GPU-heavy AI server or cluster system. This article outlines five essential elements of an AI/GPU computing environment.

MIT: Researchers’ Algorithm Designs Soft Robots that Sense

CAMBRIDGE, MA — March 22, 2021 — There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their […]

Elbencho – A New Storage Benchmark for AI

Germany, Feb 03, 2021 — Elbencho, a new open-source storage benchmark tool, is now available to help organizations that demand high performance evaluate the performance of modern storage systems, optionally including GPUs in the storage access path. Elbencho is available for download at https://github.com/breuner/elbencho. Traditionally, storage system vendors published numbers primarily based on simple […]

The Dell Technologies HPC Community Interviews: From Narrow to General AI – Decoding the Brain to Train Neural Networks

HPC veteran Luke Wilson lives at the forefront of AI research. In this interview he talks about research whose goal is to move computer intelligence from “narrow” (one task at a time) to “general” (more than one task simultaneously), a key to which is “context neuron switching.” One research strategy involves brain decoding by reverse mapping brain activity using “functional MRI activation maps,” said Wilson, who is chief data scientist and distinguished engineer at Dell’s HPC & AI Innovation Lab. Of the research conducted with McGill University, the Montreal Neurological Institute and Intel (using Dell’s Intel Xeon-powered Zenith cluster), he said: “What we’re trying to do is take that image of an activated brain and infer, using a neural network, what the patient was being asked to do.”
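
As a hedged illustration of the approach Wilson describes, the sketch below trains a tiny 3D convolutional network (in PyTorch, on random stand-in volumes and invented task labels) to predict which task a subject was performing from an fMRI activation map. It is a toy under stated assumptions, not the Dell, McGill, or MNI code.

    # Hypothetical sketch: predict which task a subject was performing from an
    # fMRI activation map. Shapes, labels, and data are invented stand-ins.
    import torch
    import torch.nn as nn

    TASKS = ["rest", "motor", "language"]   # hypothetical task labels

    class ActivationMapClassifier(nn.Module):
        """Tiny 3D CNN over a (1, 32, 32, 32) activation volume."""
        def __init__(self, n_tasks=len(TASKS)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            )
            self.head = nn.Linear(16 * 8 * 8 * 8, n_tasks)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    if __name__ == "__main__":
        model = ActivationMapClassifier()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # Random tensors stand in for preprocessed activation maps and labels.
        maps = torch.randn(8, 1, 32, 32, 32)
        labels = torch.randint(0, len(TASKS), (8,))

        for step in range(5):                # a few toy training steps
            logits = model(maps)
            loss = loss_fn(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        predicted = logits.argmax(dim=-1)
        print([TASKS[i] for i in predicted.tolist()])

In practice the activation maps would come from preprocessed fMRI data and the label set from the experiment’s task battery; the point here is only the shape of the decoding problem: volume in, task label out.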

OEMs Join Nvidia-Certified Program – Systems Pre-tested for AI Workloads

Dell Technologies, GIGABYTE, Hewlett Packard Enterprise (HPE), Inspur and Supermicro are among 11 systems makers engaged in an Nvidia certification program, announced this morning, designed to test hardware across a range of AI and data analytics workloads, including jobs that require multiple compute nodes and tasks that only need part of the power of one GPU. […]

12th JLESC Workshop to Be Held Feb. 24-26

The 12th annual Joint Laboratory for Extreme Scale Computing (JLESC) workshop will be held virtually from Wednesday, Feb. 24 to Friday, Feb. 26. It will bring together researchers in high performance computing from the JLESC partners INRIA, the University of Illinois, Argonne National Laboratory, Barcelona Supercomputing Center, Jülich Supercomputing Centre, RIKEN R-CCS and The University of Tennessee to explore […]