Sponsored by: Lenovo – Lenovo HPC Powers SPEChpc™ 2021 with AMD 3rd Generation EPYC™ Processors
As a leader in high performance computing, Lenovo continually supports the Standard Performance Evaluation Corporation (SPEC) benchmarks, which help customers make better-informed decisions for their HPC workloads. SPEChpc™ 2021 is a newly released benchmark suite from SPEC that produces industry-standard benchmarks for the newest generation of computer systems. What separates SPEChpc™ 2021 from SPEC CPU® 2017, SPEC MPI® 2007 and the other SPEC benchmark suites is that SPEChpc™ 2021 is a one-of-a-kind benchmark suite that uses real-world applications supporting “multiple programming models and offloading” to evaluate the performance of state-of-the-art heterogeneous HPC systems.
Featured Stories
HPC-AI Chips in the News: NVIDIA, AMD Ensnared in US-China Trade War; Arm Sues Qualcomm
NVIDIA and AMD, makers of advanced GPUs used in HPC-AI workloads, became embroiled this week in the deteriorating relations and ongoing trade war between the US and the People’s Republic of China. Yesterday, NVIDIA said it has been prohibited by the US government from selling to the PRC its A100 Tensor Core GPU, on the market since 2020, as well as its forthcoming H100 Tensor Core GPU, scheduled for availability [READ MORE…]
OLCF’s Doug Kothe on Pushing Frontier Across the Exascale Line and the Future of Leadership Supercomputers
Everyone involved in the Frontier supercomputer project got a taste of what a moonshot is like. Granted, lives were not on the line with Frontier as they were when Armstrong and Aldrin went to the moon in 1969. But in other ways there are parallels between the space mission and standing up Frontier, the world’s first exascale HPC system. Both were decade-plus-long efforts involving thousands of people across the public [READ MORE…]
Los Alamos, PNNL, Univ. of New Mexico Researchers to Lead $70M DOE HPC Climate Model Projects
The U.S. Department of Energy (DOE) today announced $70 million in funding for seven projects intended to improve climate prediction and aid in the fight against climate change. The research will be used to accelerate development of DOE’s Energy Exascale Earth System Model (E3SM), enabling scientific discovery through collaborations between climate scientists, computer scientists and applied mathematicians. The projects will be led by researchers at DOE’s Los Alamos National Laboratory [READ MORE…]
Sponsored by: PNY Technologies – Accelerating the Modern Data Center – Gear Up for AI
Modern applications are transforming every business. From AI for better customer engagement, to data analytics for forecasting, to advanced visualization for product innovation, the need for accelerated computing is rapidly increasing. But enterprises face challenges with using existing infrastructure to power these applications.
Featured Resource
Sponsored by: VMware – Virtualizing HPC Throughput Computing Environments
This pioneering study focuses primarily on the virtual performance of throughput workloads. Download the new white paper from VMware that explores the possibilities of virtualizing HPC throughput computing environments.
HPC Newsline
- DOE: $23.9M for Data Management and Scientific Data Visualization Research
- SiMa.ai Ships ML SoC Platform for Embedded Edge Applications
- UC San Diego Students Team for SC22 HPC Student Cluster Competition
- Sponsored by: Lenovo – Lenovo HPC Powers SPEChpc™ 2021 with AMD 3rd Generation EPYC™ Processors
- NVIDIA and Dell Technologies Launch Data Center Solution for Zero-Trust Security and AI
- AMD: Pensando DPUs Enable Accelerated Data Centers with VMware vSphere 8
- ALCF to Offer 8-Week Intro to AI-driven Science on Supercomputers — Student Training Series
- Fujitsu, Riken Partner to Deliver Quantum Computing in Japan Next Year
- You’re Invited! Take the insideHPC Reader Survey
- Seeking a Piece of $50B CHIPS Act Funds? Commerce Department Launches ‘CHIPS.gov’
- Multiverse Computing Releases New Version of Singularity SDK for Portfolio Optimization with Quantum
- SC22: COVID-19 Vaccination No Longer Required for Attendance
- HighPoint Expands NVMe Solution with Gen3 and Gen4 Host Connectivity Adapter Series
- Sponsored by: Silicon Mechanics – Improving AI Inference Performance with GPU Acceleration in Aerospace and Defense
- Untether AI Unveils At-Memory Compute Architecture at Hot Chips
- Ansys and AMD Team on Simulation of Large Structural Mechanical Models
- Los Alamos Claims Quantum Machine Learning Breakthrough: Training with Small Amounts of Data
- Sylabs and Anchore Collaborate to Bring SBOM Support for Singularity Containers
- Photonics Company Lightmatter Names Google TPU Engineer Richard Ho VP of Hardware Engineering
- ALCF to Hold Annual Simulation, Data and Learning Workshop Oct. 4-6
- Quantum Company Q-CTRL Names Alex Shih Head of Product
- HPC4EI to Celebrate National Manufacturing Day Oct. 7 with Virtual Event
- LLNL and Korea Institute of Science and Technology to Collaborate
- Frontier Exascale Unveiling: ‘Breathtaking… a Huge Leap Forward for Science, for Our Country’
- Rice Team Wins $1.5M NSF Award for Biomolecule-based Data Storage
- 10th Annual MVAPICH User Group (MUG) Meeting August 22-24, 2022 Columbus, OH
- OpenSSF Day to Be Held Sept. 13 at Open Source Summit Europe
- Multiverse Computing and IQM Quantum Partner on Application-Specific Processors
- IonQ Aria Available on Azure Quantum Platform
- Multiverse Computing and IKERLAN Detect Defects in Manufacturing with Quantum Computing Vision
Industry Perspectives
…today’s situation is clear: HPC is struggling with reliability at scale. Well over 10 years ago, Google proved that commodity hardware was both cheaper and more effective for hyperscale processing when controlled by software-defined systems, yet the HPC market persists with its old-school, hardware-based paradigm. Perhaps this is due to prevailing industry momentum or working within the collective comfort zone of established practices. Either way, hardware-centric approaches to storage resiliency need to go.
New, Open DPC++ Extensions Complement SYCL and C++
In this guest article, our friends at Intel discuss how accelerated computing has diversified over the past several years given advances in CPU, GPU, FPGA, and AI technologies. This innovation drives the need for an open and cross-platform language that allows developers to realize the potential of new hardware, minimizes development cost and complexity, and maximizes reuse of their software investments.
Featured from insideBIGDATA
- Research Highlights: Interactive Continual Learning for Robots: A Neuromorphic Approach – In this regular column we take a look at highlights for breaking research topics of the day in the areas of big data, data science, machine learning, AI and deep learning. For data scientists, it’s important to keep connected with the research arm of the field in order to understand where the technology is headed. […]
News from insideBIGDATA
- AMAX Launches GPU Servers Powered by Intel’s Newest Data Center GPU Flex Series for AI, Gaming, & Media Streaming
- Research Highlights: Interactive Continual Learning for Robots: A Neuromorphic Approach
- Introduction to Quantum
- insideBIGDATA Latest News – 9/1/2022
- The Missing Puzzle Piece of the Modern-Day Enterprise: Responsible AI
- SiMa.ai Ships Purpose-built Machine Learning SoC Platform to Customers for Embedded Edge Applications
- Heard on the Street – 8/30/2022
Editor’s Choice
Frontier Named No. 1 Supercomputer on TOP500 List and ‘First True Exascale Machine’
Hamburg — This morning, AMD’s long comeback from trampled HPC also-ran – a comeback that began in 2017 when company executives told skeptical press and industry analysts to expect price/performance chip superiority over Intel – reached a high point (not to say an end point) with the news that the U.S. Department of Energy’s Frontier supercomputer, an HPE-Cray EX system powered by AMD CPUs and GPUs, has not only been named the world’s most powerful supercomputer, it also is the first system to exceed the exascale (10^18 calculations/second) milestone. This may not come as a surprise to many in the [READ MORE…]
Chip Geopolitics: If China Invades, Make Taiwan ‘Unwantable’ by Destroying TSMC, Military Paper Suggests
US military planners are taking notice of a suggestion by two military scholars calling for the destruction of semiconductor foundry company Taiwan Semiconductor Manufacturing Co. (TSMC), whose fabs produce advanced microprocessors used in HPC and AI, in the event China invades the island nation. A news story in today’s edition of Data Center Times cites the Nikkei Asia news service and a paper in the U.S. Army War College’s scholarly journal, Parameters, discussing the possibility of Taiwan adopting “a scorched earth policy” and wiping out its own semiconductor foundries in the wake of any Chinese invasion as a deterrent, U.S. [READ MORE…]
How Machine Learning Is Revolutionizing HPC Simulations
Physics-based simulations, that staple of traditional HPC, may be evolving toward an emerging, AI-based technique that could radically accelerate simulation runs while cutting costs. Called “surrogate machine learning models,” the topic was a focal point of a keynote on Tuesday at the International Conference on Parallel Processing by Argonne National Lab’s Rick Stevens. Stevens, ANL’s associate laboratory director for computing, environment and life sciences, said early work in “surrogates,” as the technique is called, shows speed-ups of tens of thousands of times (and more) and could “potentially replace simulations.” Surrogates can be looked at as an end-around to two big problems [READ MORE…]
Double-precision CPUs vs. Single-precision GPUs; HPL vs. HPL-AI HPC Benchmarks; Traditional vs. AI Supercomputers
If you’ve wondered why GPUs are faster than CPUs, in part it’s because GPUs are asked to do less – or, to be more precise, to be less precise. Next question: So if GPUs are faster than CPUs, why aren’t GPUs the mainstream, baseline processor used in HPC server clusters? Again, in part it gets back to precision. In many workload types, particularly traditional HPC workloads, GPUs aren’t precise enough. Final question: So if GPUs and AI are inextricably linked, particularly for training machine learning models, and if GPUs are less precise than CPUs, does that mean AI is imprecise? [READ MORE…]
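The precision trade-off described above can be seen directly in a few lines of NumPy. This is an illustrative sketch, not from the article: float32 (the “single precision” common in GPU-oriented and AI workloads) carries roughly 7 decimal digits of significand, while float64 (the “double precision” of traditional HPC) carries roughly 16, so a small value added to a large one can simply vanish in single precision.

```python
import numpy as np

# Adding 1.0 to 1e8: in float32 the spacing between representable
# numbers near 1e8 is larger than 1.0, so the addition is absorbed.
s32 = np.float32(1.0e8) + np.float32(1.0)
print(s32 == np.float32(1.0e8))   # True: the 1.0 was lost

# In float64 the same sum is represented exactly.
s64 = np.float64(1.0e8) + np.float64(1.0)
print(s64 == np.float64(1.0e8))   # False: the 1.0 survives
```

For error-tolerant workloads such as neural-network training, losing those low-order bits rarely matters; for many traditional HPC simulations, where small residuals accumulate over billions of time steps, it can, which is part of why benchmarks like HPL (double precision) and HPL-AI (mixed precision) measure such different things.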
6,000 GPUs: Perlmutter to Deliver 4 Exaflops, Top Spot in AI Supercomputing
The U.S. National Energy Research Scientific Computing Center today unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed precision performance. Perlmutter is based on the HPE Cray Shasta platform, including Slingshot interconnect, a heterogeneous system with both GPU-accelerated and CPU-only nodes. The system is being installed in two phases – today’s unveiling is Phase 1, which includes the system’s GPU-accelerated nodes and scratch file system. Phase 2 will add CPU-only nodes later in 2021. “That makes Perlmutter the fastest system on the planet on the 16- and 32-bit [READ MORE…]