The Dell Technologies HPC Community Interviews: Dell Strategist Jay Boisseau on the Convergence of AI and HPC – Or Is It?

Industry veteran Jay Boisseau – former director of the Texas Advanced Computing Center (TACC) and now AI & HPC technology strategist at Dell, among other career stops – is an HPC enthusiast with an outsized personality, a natural-born tech evangelist. Talk with him for 10 minutes and you want to go build a new high-speed interconnect that puts a dent in the universe. In this interview, he talks about his role at Dell aligning customers’ workload objectives with technology solutions, along with his background in HPC going back to the early 1990s.

Oqton Inc. Raises $40+M Series A for Industry 4.0 and AI Manufacturing Platform

Ghent, Belgium / San Francisco – January 15, 2021 – Oqton, Inc., the U.S.- and Belgium-based software company specialising in AI-based manufacturing, today announces it has raised over $40M in a Series A financing round, led by Fortino Capital, a B2B software investor, by PMV, the regional Flemish investment fund, and by Sandvik, a global […]

Rescale Named to Y Combinator’s Top Companies List

San Francisco – January 14, 2021 – Rescale, the hybrid HPC cloud platform for intelligent computing for digital R&D, was recently named to Y Combinator’s Top Companies list, reaffirming its standing among the accelerator’s top startups. “We’re honored to be recognized by Y Combinator on their Top Companies list for the third year in a row,” said Joris […]

GigaIO and Microchip Power Native PCI Express Network Fabric for Composable Disaggregated Infrastructure

Carlsbad, CA – January 17, 2021 – GigaIO, maker of data center network architecture and connectivity solutions, has announced a collaboration with Microchip Technology Inc. to power GigaIO’s FabreX, the native PCI Express (PCIe) Gen4 network fabric, which supports GPUDirect RDMA (GDR), MPI, TCP/IP and NVMe-oF. FabreX technology revolutionizes rack-scale architectures, enabling software-defined, dynamically reconfigurable systems, […]

Things to Know When Assessing, Piloting, and Deploying GPUs

In this insideHPC Guide, our friends over at WEKA suggest that when organizations decide to move existing or new applications to a GPU-accelerated system, there are many items to consider: assessing the new environment’s required components, running a pilot program to learn how the system is likely to perform, and planning for eventual scaling to production levels. The first of those steps is sketched below.
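By way of illustration (this is not from the WEKA guide itself), here is a minimal Python check of the kind one might run while assessing a GPU environment, assuming PyTorch is installed; the function name is hypothetical:

```python
# Hypothetical assessment-stage check: confirm the GPU environment is usable
# before committing workloads to it. Assumes PyTorch is installed.
import torch

def assess_gpu_environment():
    """Report basic facts about the GPUs visible to this node."""
    if not torch.cuda.is_available():
        print("No CUDA-capable GPU detected; check drivers and toolkit first.")
        return
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        mem_gb = props.total_memory / 1024**3
        print(f"GPU {i}: {props.name}, {mem_gb:.1f} GiB, "
              f"compute capability {props.major}.{props.minor}")

if __name__ == "__main__":
    assess_gpu_environment()
```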

The Graphcore Second Generation IPU

Our friends over at Graphcore, the U.K.-based startup that launched the Intelligence Processing Unit (IPU) for AI acceleration in 2018, have released a new whitepaper introducing the IPU-Machine. This second-generation platform offers greater processing power, more memory and built-in scalability for handling extremely large parallel-processing workloads. The paper explores the new platform and assesses its strengths and weaknesses against the growing cadre of potential competitors.

Modern HPC and Big Data Design Strategies for Data Centers – Part 3

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting these new workload-processing needs. Tyan offers a wide range of bare-bones server and storage hardware solutions for organizations and enterprise customers.

Radio Free HPC: Digital D-Day – SolarWinds Hack

Our main topic for this episode is the SolarWinds hack. This is the worst digital hacking incident to date and will have repercussions for many years. As many as 18,000 SolarWinds customers may have downloaded the compromised update and opened themselves up to digital mayhem and thievery. This includes many government accounts, some of the […]

Workload Portability Enabled by a Modern Storage Platform

In this sponsored post, Shailesh Manjrekar, Head of AI and Strategic Alliances, WekaIO, explores what is meant by “data portability” and why it’s important. In a customer pipeline, the customer context could be a software-defined car, an IoT edge point, a drone, a smart home, a 5G tower, and so on. In essence, we’re describing an AI pipeline that runs over an edge, over a core, and over a cloud, which gives the pipeline three high-level components, as sketched below.
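For illustration only, here is a minimal Python sketch of such an edge/core/cloud pipeline; the stage functions and data shapes are hypothetical, not WekaIO’s API:

```python
# Hypothetical three-stage AI pipeline: edge -> core -> cloud.
# Illustrative only; stage names and logic are assumptions, not WekaIO APIs.

def edge_stage(raw_samples):
    """Edge: filter/compress raw sensor data near its source."""
    return [s for s in raw_samples if s["quality"] > 0.5]

def core_stage(samples):
    """Core: aggregate and label data for training."""
    return [{"features": s["data"], "label": s.get("label", "unlabeled")}
            for s in samples]

def cloud_stage(dataset):
    """Cloud: train or batch-score on the aggregated dataset."""
    print(f"Training on {len(dataset)} samples in the cloud...")

# Data portability means the same data can flow through all three stages
# without per-stage reformatting.
raw = [{"data": [1, 2, 3], "quality": 0.9}, {"data": [4, 5], "quality": 0.2}]
cloud_stage(core_stage(edge_stage(raw)))
```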

The Dell Technologies HPC Community Interviews: BioTeam’s Ari Berman Talks HPC-Driven Life Sciences Research

Both Dr. Ari Berman and the consulting company of which he is CEO, BioTeam, stand at the crossroads of scientific research and HPC. As the company says of itself: “BioTeam is primarily a group of scientists who were forced to learn IT, software development and high performance computing to get their research done.” Why “forced”? […]