Cambridge Quantum Releases Quantum NLP Toolkit and Library

CAMBRIDGE, UK — Cambridge Quantum today announced the release of what it said is the world’s first toolkit and library for Quantum Natural Language Processing (QNLP). The toolkit is called lambeq, named after the late mathematician and linguist Joachim Lambek. CQ said lambeq converts sentences into a quantum circuit and is designed to accelerate the development […]

Quantum Accelerator Duality Announces First Corporate Supporters

Oct. 13, 2021 — Duality, the nation’s first accelerator focused on supporting quantum science and technology companies, has announced that Amazon Web Services (AWS) is among its first corporate supporters, along with Caruso Ventures, Lathrop GPM LLP, McDonnell Boehnen Hulbert & Berghoff (MBHB), Silicon Valley Bank, and Toptica Photonics. These supporters will back its inaugural cohort of six startups and help fuel quantum innovation in […]

Eni Upgrades HPE HPC Infrastructure via GreenLake

Hewlett Packard Enterprise (NYSE: HPE) today announced the upgrade of the supercomputer system of Eni, the Italian multinational supermajor energy company. The upgrade of the company’s supercomputer, HPC4, will be delivered as a service through the HPE GreenLake edge-to-cloud platform and is intended to increase performance and double storage capacity to improve the accuracy of image-intensive modeling […]

HPC: Stop Scaling the Hard Way

…today’s situation is clear: HPC is struggling with reliability at scale. Well over 10 years ago, Google proved that commodity hardware was both cheaper and more effective for hyperscale processing when controlled by software-defined systems, yet the HPC market persists with its old-school, hardware-based paradigm. Perhaps this is due to prevailing industry momentum, or to working within the collective comfort zone of established practices. Either way, hardware-centric approaches to storage resiliency need to go.

Hailo Claims Record AI Chip Venture Round with $136M Series C 

Artificial intelligence chipmaker Hailo today announced it has raised $136 million in a Series C funding round led by Poalim Equity and Gil Agmon. The company said the round brings Hailo’s total funding to $224 million and will be used to further develop the Hailo-8 AI Processor for Edge Devices and for expansion into new and […]

IBM and Deloitte Launch Offering for AI in Hybrid Cloud Environments

NEW YORK AND ARMONK, N.Y., Oct. 11, 2021 – IBM (NYSE: IBM) and Deloitte today announced a new offering—DAPPER, an AI-enabled managed analytics solution. The solution reinforces the two organizations’ 21-year global alliance—which helps organizations accelerate the adoption of hybrid cloud and AI across the enterprise—and 10 years of experience implementing the Deloitte Analytics Platform. DAPPER’s end-to-end […]

Exxact Corporation Releases NVIDIA HGX A100-powered Servers for AI and HPC

FREMONT, Calif., Oct. 6, 2021 — Exxact Corporation, a provider of high-performance computing (HPC), artificial intelligence (AI), and data center solutions, announced that it is now offering a new line of TENSOREX servers featuring the NVIDIA HGX A100 platform. This new line of GPU-accelerated systems allows researchers and scientists to combine simulation, data analytics, and AI […]

3rd Annual HPC-AI Advisory Council and STFC DiRAC Conference, Oct. 13-14, to Explore the UK’s Science Superpower Agenda

SUNNYVALE, Calif. — The HPC-AI Advisory Council, a for-community-benefit organization, in collaboration with the UK’s Science & Technology Facilities Council (STFC) DiRAC Facility, has announced that the 2021 UK Conference will take place virtually on 13 and 14 October. Hosted from the BST time zone (UTC+1), this second all-digital edition presents a condensed agenda in two 3.5-hour sessions from […]

Exascale Hardware Evaluation: Workflow Analysis for Supercomputer Procurements

It is well known in the high-performance computing (HPC) community that many (perhaps most) HPC workloads exhibit dynamic performance envelopes that can stress the memory, compute, network, and storage capabilities of modern supercomputers. Optimizing these workloads to run efficiently on existing hardware is challenging, but quantifying their performance envelopes to extrapolate how they will perform on new system architectures is even more challenging, albeit essential. This predictive analysis is beneficial because it helps each data center’s supercomputer procurement team identify the new machines and system architectures that will deliver the most performance for its production workloads. However, once a supercomputer is installed, configured, made available to users, and benchmarked, it is too late to consider fundamental architectural changes.
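To make the idea of extrapolating a workload's performance envelope concrete, the sketch below shows one minimal, hypothetical approach in Python: a roofline-style estimate that bounds attainable performance on a candidate system by its peak compute rate and memory bandwidth, given a workload's measured arithmetic intensity. The function name, workload intensity, and system figures are illustrative assumptions, not the article's methodology or measurements of any real machine.

    # Minimal, hypothetical sketch: roofline-style extrapolation of a workload's
    # attainable performance on candidate systems. All numbers are placeholders.

    def roofline_estimate(arith_intensity_flops_per_byte: float,
                          peak_tflops: float,
                          mem_bandwidth_tbs: float) -> float:
        """Return attainable TFLOP/s: the lower of the compute and bandwidth roofs."""
        bandwidth_bound = arith_intensity_flops_per_byte * mem_bandwidth_tbs
        return min(peak_tflops, bandwidth_bound)

    # Assumed workload profile (FLOPs per byte moved, e.g. from profiling runs)
    # and two hypothetical candidate system configurations.
    workload_intensity = 2.5
    candidates = {
        "system_a": {"peak_tflops": 40.0, "mem_bandwidth_tbs": 1.6},
        "system_b": {"peak_tflops": 60.0, "mem_bandwidth_tbs": 1.2},
    }

    for name, cfg in candidates.items():
        estimate = roofline_estimate(workload_intensity, **cfg)
        print(f"{name}: ~{estimate:.1f} TFLOP/s attainable for this workload")

In this toy comparison, the memory-bound workload favors the system with higher bandwidth despite its lower peak compute, which is the kind of insight a procurement team would want before, not after, a machine is installed.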

Univ. of Nebraska Seeks $50M Fed Funding for HPC Center Expansion

The University of Nebraska–Lincoln is seeking $50 million in federal dollars via the American Rescue Plan to fund supercomputing expansion at the university’s Holland Computing Center. According to an article on its Nebraska Today news site, the university’s proposals seek $75 million in total — $50 million for the computing center and $25 million for […]