Quantum in the News: IBM Touts ‘Entanglement Forging’ Simulation; Multiverse Launches Quantum-based Stock Valuations

Quantum is in the news this week, with IBM today announcing new research on its “entanglement forging” simulation method and Multiverse Computing launching a quantum-based method for financial institutions to calculate stock valuations. At IBM, “entanglement forging” creates a “remarkably accurate” simulation of a water molecule using half as many qubits on IBM’s 27-qubit Falcon […]

Exascale: Rumors Circulate HPC Community Regarding Frontier’s Status

By now you may have expected a triumphant announcement from the U.S. Department of Energy that the Frontier supercomputer, slated to be installed by the end of 2021 as the first U.S. exascale-class system, has been stood up with all systems go. But as of now, DOE (whose Oak Ridge National Laboratory will house Frontier) […]

Double-precision CPUs vs. Single-precision GPUs; HPL vs. HPL-AI HPC Benchmarks; Traditional vs. AI Supercomputers

If you’ve wondered why GPUs are faster than CPUs, in part it’s because GPUs are asked to do less – or, to be more precise, to be less precise. Next question: So if GPUs are faster than CPUs, why aren’t GPUs the mainstream, baseline processor used in HPC server clusters? Again, in part it gets […]
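The precision trade-off the teaser alludes to is easy to see on a CPU as well. A minimal sketch using NumPy (an illustration only, not tied to any specific benchmark mentioned above): single-precision (FP32) arithmetic, the kind GPUs historically favored, accumulates more rounding error than the double-precision (FP64) arithmetic traditional HPC codes rely on.

```python
import numpy as np

# Ten million copies of 0.1, stored at two different precisions.
# FP32 uses half the bytes of FP64, which is part of why
# lower-precision hardware can move and process data faster.
n = 10_000_000
x32 = np.full(n, 0.1, dtype=np.float32)  # single precision
x64 = np.full(n, 0.1, dtype=np.float64)  # double precision

exact = n * 0.1  # the mathematically exact sum: 1,000,000

err32 = abs(float(x32.sum()) - exact)
err64 = abs(float(x64.sum()) - exact)

print(f"FP32 bytes/element: {x32.itemsize}, sum error: {err32}")
print(f"FP64 bytes/element: {x64.itemsize}, sum error: {err64}")
```

Because 0.1 cannot be represented exactly in binary, each FP32 element carries roughly a hundred million times more representation error than its FP64 counterpart, and the summed error differs accordingly. Benchmarks like HPL (FP64) and HPL-AI (mixed precision) exist precisely to measure performance under these two regimes.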

Intel Changes Leadership, Structure of Data Platforms Group

Intel CEO Pat Gelsinger has announced the addition of two new technology leaders to the company’s executive team, as well as several changes to Intel business units. Current Intel executives Sandra Rivera and Raja Koduri will take on senior leadership roles, and technology industry veterans Nick McKeown and Greg Lavender will join the company. Navin […]

Supermicro Delivers World Record Performance

Supermicro’s latest range of H12 Generation A+ Systems and Building Block Solutions®, optimized for AMD EPYC™ processors, offers new levels of application-optimized performance per watt and per dollar, delivering outstanding core density, superior memory bandwidth, and unparalleled I/O capacity.

Gelsinger Speaks: Intel’s New CEO Debuts Today – What Will He Say?

Speculation abounds about Pat Gelsinger’s first public appearance as CEO of Intel at a webinar (5 pm Eastern Time) today that will capture close attention from a host of the company’s core audiences: customers, business partners, employees, industry and financial analysts – and the HPC community. The webinar, confidently called “Intel Unleashed: Engineering the Future,” […]

Things to Know When Assessing, Piloting, and Deploying GPUs

In this insideHPC Guide, our friends over at WEKA suggest that when organizations decide to move existing applications or new applications to a GPU-influenced system there are many items to consider, such as assessing the new environment’s required components, implementing a pilot program to learn about the system’s future performance, and considering eventual scaling to production levels.

The Graphcore Second Generation IPU

Our friends over at Graphcore, the U.K.-based startup that launched the Intelligence Processing Unit (IPU) for AI acceleration in 2018, have released a new whitepaper introducing the IPU-Machine. This second-generation platform has greater processing power, more memory and built-in scalability for handling extremely large parallel processing workloads. This paper explores the new platform and assesses its strengths and weaknesses compared to the growing cadre of potential competitors.

Modern HPC and Big Data Design Strategies for Data Centers – Part 3

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting the new workload processing needs. Tyan has a wide range of bare bones server and storage hardware solutions available for organizations and enterprise customers.

Workload Portability Enabled by a Modern Storage Platform

In this sponsored post, Shailesh Manjrekar, Head of AI and Strategic Alliances, WekaIO, explores what is meant by “data portability,” and why it’s important. Looking at a customer pipeline, the customer context could be a software-defined car, any IoT edge point, a drone, a smart home, a 5G tower, etc. In essence, we’re describing an AI pipeline that runs over an edge, a core, and a cloud. Therefore, we have three high-level components for this pipeline.