Verifying the Universe with Exascale Supercomputers

The ExaSky project, one of the critical Earth and space science applications within the US Department of Energy’s (DOE’s) Exascale Computing Project (ECP), is preparing to use the nation’s forthcoming exascale supercomputers. Exascale machines will enable the ExaSky team to verify the gravitational influences, gas dynamics, and astrophysical inputs used to model the universe at unprecedented fidelity, and to address forthcoming challenge problems that predict and replicate high-accuracy sky survey data.

Atos Acquires HPC Cloud Platform Provider Nimbix

Last month, Agnès Boudot, SVP, head of HPC & Quantum at Atos, told us — without sharing details — that the company’s global strategy includes expansion into the U.S. market. At least part of that strategy was revealed today with the news that Atos has acquired long-time high-performance computing cloud platform provider Nimbix. Established in 2010 […]

PsiQuantum Closes $450M Venture Round

PALO ALTO — PsiQuantum has raised $450 million in Series D funding to build what the company said will be the world’s first commercially viable quantum computer. The funding round was led by BlackRock, along with participation from insiders including Baillie Gifford and M12 – Microsoft’s venture fund – and new investors including Blackbird Ventures […]

Intel Releases Foundational Technology Roadmap, Launches New Naming Structure for Process Nodes

Intel Corporation today announced a detailed process and packaging technology roadmap of “foundational innovations” for products through 2025 and beyond. In addition to announcing RibbonFET, its first new transistor architecture in more than 10 years, and PowerVia, a backside power delivery method, the company highlighted its planned adoption of next-generation extreme ultraviolet lithography (EUV), referred […]

Enhancing Security with High Performance AI Capability Deployed at the Rugged Edge

In this sponsored post from One Stop Systems, we see that whether surviving in a fast-moving battlefield situation, protecting sensitive industrial or transportation hub assets, or ensuring uninterrupted operation of critical national infrastructure, intelligent long-range surveillance is critical. The ability to provide 24/7 remote long-range threat detection and situational awareness, coupled with human-machine control, allows for the fast and appropriate threat response that is fundamental to addressing these security imperatives.

NERSC Honors 8 Early Career Scientists with Annual HPC Achievement Awards

The National Energy Research Scientific Computing Center (NERSC) recently announced the recipients of its annual High Performance Computing Achievement Awards, recognizing eight early-career scientists who have made significant contributions to scientific computation using NERSC resources. The NERSC awards pay tribute to the accomplishments of young researchers in scientific fields supported by the U.S. Department of […]

Spend Less on HPC/AI Storage (and More on CPU/GPU Compute)

[SPONSORED POST] In this whitepaper courtesy of HPE, you’ll learn about three approaches that can help you feed your CPU- and GPU-accelerated compute nodes without I/O bottlenecks while creating efficiencies in Gartner’s Run category. HPE, the market share leader in HPC servers, anticipated the convergence of classic modeling and simulation with AI methods such as machine learning and deep learning, and now offers a new portfolio of parallel HPC/AI storage systems purpose-engineered to address all of these challenges in a cost-effective way.

D-Wave, Los Alamos Isolate Emergent Magnetic Monopoles Using Quantum-Annealing Computer

Using a D-Wave quantum-annealing computer as a testbed, scientists at Los Alamos National Laboratory have shown that it is possible to isolate so-called emergent magnetic monopoles, a class of quasiparticles, creating a new approach to developing “materials by design.” “We wanted to study emergent magnetic monopoles by exploiting the collective dynamics of qubits,” said Cristiano […]

HPC in the News: Data Management Automation and Faster Processor Gates; Intel and TSMC in Arizona, Europe

The everlasting rat’s nest that is scientific computing data management, the perennial quest for more processing power, and investments in new fabs for advanced chips are the HPC topics in the news this week. Taking on data management at the upper reaches of data-intensive workloads is Harvard University Associate Professor Stratos Idreos, who two years […]

LLVM Holds the Keys to Exascale Supercomputing

The recent proliferation of new hardware technologies has galvanized the high-performance computing (HPC) community and made it possible to deliver the nation’s forthcoming exascale-capable supercomputers and data centers. It has also made LLVM-based compiler technology the de facto gatekeeper to these new systems. LLVM, an open-source collection of compiler and toolchain technologies, serves as a test bed for proposed parallelization extensions (e.g., the interoperability directive in OpenMP 5.1) and as a vehicle for production-quality parallel compiler implementations. Johannes Doerfert, a researcher at Argonne National Laboratory, notes that “LLVM is a vehicle to provide performant implementations of OpenMP….
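
To make the idea of directive-driven parallelization concrete, here is a minimal, illustrative sketch (not taken from the article; the saxpy name and values are hypothetical) of an OpenMP parallel loop that LLVM’s clang compiles with its -fopenmp flag:

    // saxpy.c -- minimal OpenMP example; clang lowers the pragma
    // through LLVM's OpenMP runtime (libomp).
    #include <stdio.h>

    void saxpy(int n, float a, const float *x, float *y) {
        // Distribute loop iterations across host threads.
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void) {
        enum { N = 1000000 };
        static float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(N, 2.0f, x, y);
        printf("y[0] = %f\n", y[0]);   // expect 4.000000
        return 0;
    }

Built with clang -O2 -fopenmp saxpy.c -o saxpy, the same directive-driven model extends to GPU offload via OpenMP target constructs, which is where LLVM’s role as gatekeeper to exascale hardware comes into play.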