
NVIDIA in Partnership to Build Taiwan’s 1st HPC System for Medical Research

NVIDIA and Taiwan-based Asustek Computer have announced a partnership with Taiwan’s National Health Research Institutes (NHRI) to develop Taiwan’s first AI biomedical research supercomputer. According to an article in Digitimes Asia, NHRI will provide health research data while Asustek and NVIDIA will deliver AI cloud technology and high-performance computing servers. Digitimes Asia reported that Asustek […]

ISC 2022: Registration Open, 3,000 On-site Attendees Expected, Updated COVID-19 Rules

The International Supercomputing Conference, ISC 2022, is now open for registration, and conference organizers report that exhibit space for the event — to be held at the Congress Center Hamburg, Germany, from Sunday, May 29 to Thursday, June 2 — has been expanded to accommodate all 125 companies and organizations reserving booth space in the […]

Oak Ridge: Frontier Exascale to Deliver ‘Full User Operations’ on Jan. 1, 2023; ‘Crusher’ Test System Now Running Code

“Crusher,” a partial build of the planned 100+-cabinet Frontier supercomputer at the U.S. Department of Energy’s Oak Ridge National Laboratory, is now running principal scientific codes at the Oak Ridge Leadership Computing Facility. The Crusher test system consists of 1.5 cabinets powered by 3rd Gen AMD EPYC CPUs and AMD Instinct MI250X GPU accelerators. […]

Report: In Wake of NVIDIA’s Failed Acquisition Bid, Goldman to Take Arm Public with $60B Valuation

In the end, Arm Ltd. may generate more cash by going public than it would have if the attempted NVIDIA acquisition had gone through. A report from the Reuters news service stated that SoftBank Group, the owner of Arm Ltd., will select Goldman Sachs “as the lead underwriter on the initial public offering of Arm […]

Classiq Collaborates with NVIDIA on Quantum

TEL AVIV, Israel — Classiq, provider of a platform for creating quantum software, today announced a collaboration with NVIDIA to bring large-scale quantum circuits to customers. Now, businesses and other organizations can prepare for and explore the benefits of larger quantum circuits before the hardware is available. “Working with Classiq allows customers to expedite the […]

Hyperion Research: 2021 HPC Market Growth Up 13.8% Over Previous Year

HPC industry analyst firm Hyperion Research today announced that overall spending for HPC grew from $30.6 billion (USD) in 2020 to $34.8 billion (USD) in 2021, counting both on-premises and cloud spending, an increase of 13.8 percent. Total on-prem HPC server sales for 2021 reached $14.8 billion (USD), reflecting strong 9.1 percent growth over the […]

@HPCpodcast: Dan Reed on the Challenges to U.S. Global Supercomputing Competitiveness

In a recently published paper, “Reinventing High Performance Computing: Challenges and Opportunities,” three HPC luminaries have started an important discussion about the future of HPC and its impact on American competitiveness. In this episode of the @HPCpodcast, we talk with one of the authors, Dan Reed of the University of Utah, about the challenges facing the United States as it strives to compete globally in high-end supercomputing.

Alphabet Spinoff: AI and Quantum Startup Sandbox Launches

Palo Alto, CA, March 22, 2022 – Sandbox AQ, an enterprise SaaS company delivering solutions that leverage quantum tech and AI, officially launched today and announced its investors, board chair, partners, advisors and initial customers. AQ stands for AI and Quantum, two key tools Sandbox uses to address pressing global challenges. Founded by serial entrepreneur […]

Why HPC Clusters Require Ultra-Low Latency Network Monitoring

High performance computing (HPC) requires an extremely high-powered network with ultra-low latency to move large files between HPC nodes quickly. IT and network operations (NetOps) teams in industries such as financial services, oil and gas, animation/3D rendering and pharmaceutical research need to monitor their networks in exacting detail to ensure they can support HPC workloads. But monitoring latency and other metrics at HPC-class performance levels creates a new set of challenges, including monitoring packets at 40Gbps and 100Gbps speeds, measuring latency at millisecond and nanosecond intervals, and detecting minuscule “microbursts” of traffic before they cause performance issues.
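To make the microburst point concrete, here is a minimal sketch of the underlying idea, not any vendor's monitoring API: a link that looks lightly loaded when averaged over a second can still saturate for a few hundred microseconds. Given byte counts sampled at a fine, fixed interval, the sketch flags intervals whose instantaneous rate approaches line rate. The function name, the 80 percent threshold, and the sample values are all illustrative assumptions.

```python
def detect_microbursts(byte_counts, interval_us, line_rate_gbps, threshold=0.8):
    """Flag sampling intervals whose instantaneous rate exceeds
    threshold * nominal line rate.

    byte_counts    -- bytes observed in each fixed-length interval
    interval_us    -- interval length in microseconds (fine-grained,
                      e.g. 100 us; a 1 s average would hide the burst)
    line_rate_gbps -- nominal link speed, e.g. 40 or 100
    threshold      -- fraction of line rate treated as a burst (illustrative)

    Returns a list of (interval_index, rate_in_bps) tuples.
    """
    line_rate_bps = line_rate_gbps * 1e9
    bursts = []
    for i, nbytes in enumerate(byte_counts):
        # Convert the per-interval byte count to an instantaneous bit rate.
        rate_bps = (nbytes * 8) / (interval_us * 1e-6)
        if rate_bps >= threshold * line_rate_bps:
            bursts.append((i, rate_bps))
    return bursts


# Hypothetical 100 us samples on a 40 Gbps link: background traffic of
# ~4 Gbps with one interval spiking to 36 Gbps. Averaged over the whole
# millisecond this is only ~7 Gbps, yet one interval is near saturation.
samples = [50_000] * 5 + [450_000] + [50_000] * 4
print(detect_microbursts(samples, 100, 40))
```

Averaging the same counts over the full window would report well under a quarter of line rate, which is why monitoring at millisecond or finer granularity, as the paragraph above describes, is what makes microbursts visible at all.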

Exascale: Preparing PETSc/TAO Software for Scientific Applications

In this episode of the Let’s Talk Exascale podcast, produced by DOE’s Exascale Computing Project, the topic is PETSc—the Portable, Extensible Toolkit for Scientific Computation. It’s a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. A team within ECP is preparing PETSc/TAO for exascale […]