Stony Brook Univ. to Deploy HPE Supercomputer Powered by Intel
New York’s Stony Brook University has announced it will soon deploy an Intel-powered HPE supercomputer for science and engineering research across multidisciplinary fields, including engineering, physics, the social sciences and bioscience. The system is expected to be in production this summer and in operation sometime during the first semester of the 2023-24 academic year. It is built on HPE ProLiant DL360 Gen11 servers for scientific workloads involving modeling, simulation, AI and analytics. Stony Brook said HPE designed the solution with 4th Gen Intel Xeon Scalable processors, featuring the Intel Xeon CPU Max Series for greater memory bandwidth [READ MORE…]
Featured Stories
TSMC Opens Backend Fab 6 for Expansion of 3DFabric System Integration
HSINCHU, Taiwan, R.O.C., Jun. 8, 2023—TSMC (TWSE: 2330, NYSE: TSM) today announced the opening of its Advanced Backend Fab 6, the company’s first all-in-one automated advanced packaging and testing fab, realizing 3DFabric integration of front-end and back-end process and testing services. The fab is prepared for mass production of TSMC-SoIC (System on Integrated Chips) process technology. Advanced Backend Fab 6 enables TSMC to flexibly allocate capacity for TSMC 3DFabric [READ MORE…]
Quantum Brilliance Releases Open-Source Software for Miniature Quantum Computers
SYDNEY, June 8, 2023 — Quantum Brilliance, developer of miniaturised, room-temperature quantum computing products, today announced the release of Qristal SDK, an open-source software development kit for researching applications that integrate the company’s portable, diamond-based quantum accelerators. Previously in beta, the Quantum Brilliance Qristal SDK is now available to develop and test novel quantum algorithms for applications designed for quantum accelerators rather than quantum mainframes. Use cases include classical-quantum hybrid [READ MORE…]
IBM to Open European Quantum Data Center in 2024
IBM today announced plans to open its first Europe-based quantum data center next year in Ehningen, Germany, providing access to quantum computing for companies, research institutions and government agencies. The data center is expected to offer multiple IBM quantum systems, each with utility-scale quantum processors, i.e., those of more than 100 qubits, according to the company. The data center will be located at an IBM facility in Ehningen and [READ MORE…]
@HPCpodcast: A Look Back at ISC 2023 with a Look Ahead to the Uncertain Future of Leadership-Class Supercomputing
In this Lenovo-sponsored episode of the @HPCpodcast, Shahin and Doug discuss the recent annual ISC 2023 HPC confab in Hamburg, a conference that this year showed growth in attendees and number of exhibitors, if not a return to pre-pandemic totals. Among the topics we cover: quantum computing, more discussion of the TOP500 ranking of the world’s most powerful supercomputers, the rapid growth of HPC in Europe, and whether we live [READ MORE…]
Featured Resource

Sponsored by: Google Cloud
Rewiring the Customer Experience across Asia Pacific with Data and AI
Data success depends on a clearly articulated strategy with defined objectives, data prioritization, and the right analytical tools. With this in place, data projects can secure the impact companies are looking for. After identifying safety as a core mission, Japanese automotive manufacturer Subaru is pursuing a goal of zero fatal traffic accidents by 2030. It […]
HPC Newsline
- Applause Announces Generative AI Training, Testing and Validation through Crowdtesting Services
- Monster API Says Its Platform Cuts AI Development Costs Up to 90%
- Multicore Processor Design Pioneer Prof. Kunle Olukotun Receives ACM-IEEE CS Eckert-Mauchly Award
- Stony Brook Univ. to Deploy HPE Supercomputer Powered by Intel
- For DoD Only: AWS Announces Snowblade Edge Device
- DDN Customer CINECA Recognized for Bandwidth Score on IO500 List for Production Systems
- HPE Announces GreenLake Sustainability Dashboard for Carbon Footprint Reduction
- Verne Global Receives $100M Loan from Digital 9
- Report: CoreWeave Wins Microsoft Deal for GPU Cloud Services Worth Billions
- RKVST Granted Patent Enabling ‘Practical Scale for Blockchain’
- Power Grid Modeling Tool Launched on Frontier Exascale Supercomputer
- Photonics Compute and Interconnect Startup Lightmatter Raises $154M Series C Funding, Targets HPC and AI Workloads
- LLNL’s Lori Diachin Named Director of Exascale Computing Project
- GPU Cloud Provider CoreWeave Secures $200M Series B Extension
- PCI-SIG Certifies Achronix VectorPath Accelerator Card for PCIe Gen5 x16 @ 32 GT/s
- Alces Flight and Deep Green Partner on UK HPC Carbon Output
- HSBC and Quantinuum Explore Quantum Computing in Financial Services
- AlixPartners and NAX Group Partner on AI and Corporate Data Sets
- ArangoDB Announces Release of ArangoDB 3.11 for Search, Graph and Analytics
- Sponsored by Huawei: NWP Development with Nearly 10x Higher Performance Provided by Huawei OceanStor Pacific Scale-Out Storage
- ASCR: Exascale to Burst Bubbles that Block Carbon Capture
- IBM Launches $100M Partnership with Tokyo and Chicago Universities to Develop 100,000-Qubit Quantum-Centric Supercomputer
- CoolIT Systems Introduces Direct Liquid Cooling-Enabled Rear Door Heat Exchanger
- At ISC 2023: Hyperion Reports HPC Industry Grew 4% in 2022; AI to Drive Stronger Growth Next and Following Years
- NVIDIA Teams with Microsoft on Enterprise-Ready Generative AI
- Ayar Labs Adds $25M to its Series C
- Dell and NVIDIA in Generative AI Initiative
- AMD-Powered LUMI Supercomputer: In the Vanguard of HPC Performance and Energy Efficiency
- @HPCpodcast: A Breakdown of the ‘Treasure Trove’ TOP500 List
- New TOP500 HPC List: Frontier Extends Lead with Performance Upgrade
Industry Perspectives
…today’s situation is clear: HPC is struggling with reliability at scale. Well over 10 years ago, Google proved that commodity hardware was both cheaper and more effective for hyperscale processing when controlled by software-defined systems, yet the HPC market persists with its old-school, hardware-based paradigm. Perhaps this is due to prevailing industry momentum or working within the collective comfort zone of established practices. Either way, hardware-centric approaches to storage resiliency need to go.
New, Open DPC++ Extensions Complement SYCL and C++
In this guest article, our friends at Intel discuss how accelerated computing has diversified over the past several years given advances in CPU, GPU, FPGA, and AI technologies. This innovation drives the need for an open and cross-platform language that allows developers to realize the potential of new hardware, minimizes development cost and complexity, and maximizes reuse of their software investments.
Featured from insideBIGDATA
- Why FinOps Needs DataOps Observability: In this special guest feature, Chris Santiago, Vice President/Solutions Engineering, Unravel Data, talks about controlling cloud spend through three phases of the FinOps lifecycle.
News from insideBIGDATA
- The Science and Practical Applications of Word Embeddings
- Small Business Owners Beginning to Turn to AI for Help with Everyday Tasks
- TOP 10 insideBIGDATA Articles for May 2023
- Busting Data Observability Myths
- The Importance of Data Quality in Benefits
- insideBIGDATA Latest News – 6/6/2023
- Heard on the Street – 6/5/2023
Editor’s Choice
@HPCpodcast: Silicon Photonics – Columbia Prof. Keren Bergman on the Why, How and When of a Technology that Could Transform HPC
Silicon photonics has the potential to transform HPC: it’s a dual-threat interconnect technology that could – if and when it is wrestled into commercial, cost-effective form – move data within chips and systems much faster than conventional, copper-based interconnects while also delivering far greater energy efficiency. Venture-backed start-ups and established tech companies (HPE, NVIDIA, AMD and Intel, to name four) have mounted significant R&D efforts. In this episode of the @HPCpodcast, Shahin and Doug spoke with a leading silicon photonics expert, Keren Bergman, Columbia University’s Charles Batchelor Professor of Electrical Engineering, Faculty Director of the Columbia Nano Initiative, and Principal [READ MORE…]
Azure, AMD and the Power of Cloud-based HPC for Sustainability R&D Projects
[SPONSORED GUEST ARTICLE] Sustainability – both in the way it operates and in its support for the development of sustainable technologies and products – is a theme that permeates the Microsoft Azure public cloud platform and its end-user community. Azure, in combination with advanced and ultra-efficient CPUs from AMD and other HPC-class technologies, is a hothouse for sustainability R&D projects ranging from electric vehicles to wind turbine design. Before we look in detail at an example of those projects, let’s start with Azure’s operational efficiencies….
Frontier Pushes Boundaries: 86% of Nodes Engaged on Reactor Simulation Runs
Details have trickled out of the Oak Ridge Leadership Computing Facility (OLCF) indicating progress in preparing Frontier, the exascale-class supercomputer ranked the world’s most powerful system, for full user operations. Earlier this week, the Exascale Computing Project released an article on its web site entitled “Predicting the Future of Fission Power” discussing the ExaSMR (Exa for exascale; SMR for small modular reactors) toolkit for running nuclear reactor design simulations on Frontier. Toward the end of the article, we learn that ExaSMR performed simulations on 8,192 of Frontier’s 9,472 nodes, involving more than 250 billion neutron histories per iteration, according to [READ MORE…]
Conventional Wisdom Watch: Matsuoka & Co. Take on 12 Myths of HPC
A group of HPC thinkers, including the estimable Satoshi Matsuoka of the RIKEN Center for Computational Science in Japan, have come together to challenge common lines of thought they say have become, to varying degrees, accepted wisdom in HPC. In a paper entitled “Myths and Legends of High-Performance Computing,” appearing this week on the arXiv site, Matsuoka and four colleagues (three from the RIKEN Center – see author list below) offer opinions and analysis on such issues as quantum replacing classical HPC, the zettascale timeline, disaggregated computing, domain-specific languages (DSLs) vs. Fortran and cloud subsuming HPC, among other topics. “We [READ MORE…]
SC22: CXL 3.0, the Future of HPC Interconnects and Frontier vs. Fugaku
HPC luminary Jack Dongarra’s fascinating comments at SC22 on the low efficiency of leadership-class supercomputers highlighted by the latest High Performance Conjugate Gradients (HPCG) benchmark results will, I believe, influence the next generation of supercomputer architectures to optimize for sparse matrix computations. The upcoming technology that will help address this problem is CXL. Next generation architectures will use CXL 3.0 switches to connect processing nodes, pooled memory and I/O resources into very large, coherent fabrics within a rack, and use Ethernet between racks. I call this a “Petalith” architecture (explanation below), and I think CXL will play a significant and growing [READ MORE…]