insideHPC Special Report: Citizens Benefit from Public/Private Partnerships – Part 3

This special report, sponsored by Dell Technologies, looks at how, now more than ever, agencies at all levels of government are teaming with private information technology (IT) organizations to leverage AI and HPC, creating and implementing solutions that not only increase safety for all but also provide a more streamlined, modern experience for citizens.

The Hyperion-insideHPC Interviews: Rich Brueckner Talks with Jack Collins about the Emergence of ‘HPC Everywhere’ and Its Impact on Science

Jack Collins, a long-time fixture in the scientific supercomputing community, has seen it all in HPC, from the days when his input/output device for storing integrals was nine-track tape to today's 750-GPU monster systems, so he has a full appreciation of how far HPC has come. At the same time, he's concerned about the power of HPC when turned to dark purposes, such as deepfakes. "People used to say, 'Seeing is believing,'" he told the late Rich Brueckner. "I can hack a video and make it look like anything in an afternoon. That's potentially society-altering. We have to be very careful with that."

Let’s Talk Exascale: Forecasting Water Resources and Severe Weather with Greater Confidence

In this episode of ECP's podcast, Let's Talk Exascale, Mark Taylor of Sandia National Laboratories, principal investigator of the E3SM-MMF project, talks about using exascale supercomputers for severe weather and water resource forecasting. E3SM-MMF, a sub-project within the US Department of Energy's (DOE's) Exascale Computing Project (ECP), is working to improve the ability to simulate the water cycle and the processes around precipitation.

Quantum Superiority: How Far Away?

Some technologies, it’s said, are “always 10 years away” – we hear this in reference to autonomous vehicles and quantum computing. Of course, how far away we think they are has a lot to do with how they’re defined. Semi-autonomous cars are here today and becoming smarter with each new model year. As for quantum […]

Intel, NSF Name Winners of Wireless Machine Learning Research Funding

Intel and the National Science Foundation (NSF), joint funders of the Machine Learning for Wireless Networking Systems (MLWiNS) program, today announced the recipients of awards for research projects into ultra-dense wireless systems that meet the throughput, latency and reliability requirements of future applications – including distributed machine learning computations over wireless edge networks. Here are the […]

SeRC Turns to oneAPI Multi-Chip Programming Model for Accelerated Research

At ISC 2020 Digital, the Swedish e-Science Research Center (SeRC), Stockholm, announced plans for its researchers to use Intel's oneAPI unified programming model for massive simulations powered by CPUs and GPUs. The center said it chose oneAPI, designed to span CPUs, GPUs, FPGAs and other architectures and silicon, to accelerate compute for research using the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics software, developed by SeRC and first released in 1991.

ISC 2020 Student Cluster Competition: The Winner Is….

Students from the University of Science and Technology of China (USTC) won first place in this year's annual Student Cluster Competition at the ISC 2020 Digital conference. This year's competition focused on the global fight against COVID-19 by including applications that advance education and applied learning in support of accelerating bioscience research and discovery. The teams, totaling 80 students, were tasked with testing several applications used by scientists and researchers in the search for a cure.

Empowering Edge Cloud in the 5G & IoT Hyper-Connected Era

It is well documented that the amount of data being produced daily is growing at astronomical rates. IDC has estimated that by 2025, 175 zettabytes of data will be created each year, with growth continuing beyond that. The data will take both structured and unstructured forms, and there will be major logistical challenges in moving it from the devices that create it to where it is acted upon and decisions are made.

Car as ‘Computing Device’: Mercedes-Benz and Nvidia Team to Build Software-defined Vehicles for 2024

Nvidia and Mercedes-Benz today said they plan to create an in-vehicle computing system and AI infrastructure for 2024 Mercedes-Benz vehicles equipped with “upgradable automated driving functions.” The resulting cars and trucks will be capable of automated address-to-address driving of regular routes, such as commutes and repeat deliveries, according to the companies.

The Hyperion-insideHPC Interviews: Rich Brueckner Talks with Paul Muzio about His Hopes, Concerns for HPC and AI

Industry luminary Paul Muzio, holder of prominent positions in academia and private industry over a multi-decade career in HPC, is bullish on supercomputing – and deeply concerned. In this video, Muzio spoke with the late Rich Brueckner about the past, present and future of supercomputing. He sees a future in which compute power is […]