

Radio Free HPC: Liqid Gets Hot, NSF Billion for AI

We have our full staff for this show – the first time in a long time. We start out with introductions: Henry still lives in a survivalist compound in Las Cruces, NM, Jessi is still on crutches, and Shahin is still living in smoky Silicon Valley. Download the MP3. Jumping right into our topic: DOD is getting two […]

Get Your HPC Cluster Productive Faster

In this sponsored post from our friends over at Quanta Cloud Technology (QCT), we see that by simplifying the deployment process from weeks or longer to days and providing pre-built software packages, organizations can become productive in a much shorter time. Resources can then go toward more valuable services that enable more research, rather than toward bringing up an HPC cluster. By using the services QCT offers, organizations can achieve a better return on investment (ROI) from their HPC systems.

MemVerge Says Memory Machine Offers Big Memory That’s DRAM-Fast and Highly Available

Big Memory software specialist MemVerge today announced general availability of its Memory Machine software, which, used with Intel Optane persistent memory, “fundamentally changes in-memory computing infrastructure,” according to the company. MemVerge said the software delivers the benefits of high-performance DRAM along with lower-cost, persistent memory, providing “the industry’s first software-defined pools of DRAM and persistent memory that […]

The Race for a Unified Analytics Warehouse

This white paper, “The Race for a Unified Analytics Warehouse,” from our friends over at Vertica, argues that the race for a unified analytics warehouse is on. The data warehouse has been around for almost three decades. Shortly after big data platforms were introduced in the late 2000s, there was talk that the data warehouse was dead—but it never went away. When big data platform vendors realized that the data warehouse was here to stay, they started building databases on top of their file systems and conceptualizing a data lake that would replace the data warehouse. It never did.

ScienceLogic Named Top AIOps Provider by EMA

Reston, VA – Sept. 10, 2020 – ScienceLogic, a provider of monitoring solutions for multi-cloud management and hybrid IT infrastructure, has been recognized for AIOps leadership in Enterprise Management Associates’ (EMA) AIOps Radar Report. From among 17 AIOps vendors, EMA ranked ScienceLogic the No. 1 “Value Leader” in Incident, Performance, & Availability Management for Product Strength and “Strong Value” […]

Teradata Expands Data Science Collaboration Capabilities

Cloud data analytics specialist Teradata has added collaborative features to its Vantage platform designed to reduce the friction between data scientists, business analysts, data engineers and business managers – some of whom may use different tools and languages. Enhancements include expanded native support for R and Python, with the ability to call more Vantage-native analytic […]

Quantum Makes LTO-9 Tape Drives Available for Scalar Tape Libraries

SAN JOSE — Sept. 9, 2020 — Quantum Corp. (NASDAQ: QMCO), a global leader in unstructured data and video solutions, today announced that LTO Ultrium format generation 9 technology will be available in its Scalar i6 and Scalar i6000 tape libraries, and StorNext AEL archive systems beginning in December 2020. By combining the high capacity of LTO-9 tape technology with Quantum Scalar tape libraries, […]

Video: GigaIO on Optimizing Compute Resources for ML, HPDA and other Advanced Workloads

In this interview, GigaIO CEO Alan Benjamin talks about system performance problems and wasted compute resources when implementing ML, HPDA and other high-demand workloads that involve large data volumes. At issue, Benjamin explains, is today’s rack architecture, which is decades old and unsuited for the combinations of CPUs, GPUs and other accelerators needed for advanced computing strategies. The answer: composable disaggregated infrastructure.

Composable Computing at SDSC

In this Q&A, SDSC Chief Data Science Officer Ilkay Altintas explains the rationale for composable systems and the approach taken with the new Expanse supercomputer. With Expanse, built by Dell Technologies, the San Diego Supercomputer Center (SDSC) is pioneering composable HPC systems that enable the dynamic allocation of resources tailored to individual workloads.

DOD Inks $32M HPC Deal with Liqid; Forms AI Partnership with DOE, Microsoft

The Department of Defense has made HPC news twice in the last few days – in one, the Army will spend $32 million on supercomputing technology from composable infrastructure vendor Liqid; in the other, DOD will partner with the Department of Energy and Microsoft to develop AI algorithms to support natural disaster first responders. In […]