In HPC news from CEA in France, the EoCoE (Energy Oriented Centre of Excellence) project officially launched earlier this month. Pronounced “Echo,” EoCoE has a mission to create a new, long-lasting, and sustainable community around computational energy science.
The Open Compute Project got a major endorsement in the HPC space with news of NNSA’s pending deployment of Tundra clusters from Penguin Computing. To learn more, we caught up with Dan Dowling, Penguin’s VP of Engineering Services.
Today the National Nuclear Security Administration (NNSA) announced a contract with Penguin Computing for a set of large-scale Open Compute HPC clusters. With 7 to 9 petaflops of aggregate peak performance, the systems will be installed as part of NNSA’s tri-laboratory Commodity Technology Systems program. Scheduled for installation starting next year, the systems will bolster computing for national security at Los Alamos, Sandia, and Lawrence Livermore national laboratories.
Pacific Northwest National Laboratory has opened CENATE, the Center for Advanced Technology Evaluation, a first-of-its-kind computing proving ground. Designed to shape future extreme-scale computing systems, CENATE evaluations will mostly concern processors, memory, networks, storage, input/output, and the physical aspects of certain systems, such as sizing and thermal effects.
PEZY Computing of Japan has earned the top three rankings on the Green500 list, using 3M Fluorinert electronic liquid in an immersion cooling system built by ExaScaler Inc. The Green500 is a biannual ranking of the world’s most energy-efficient supercomputers. The triple win signals growing progress in and adoption of immersion cooling with engineered dielectric fluids, and its potential to transform the high performance computing (HPC) industry with step-change improvements in energy efficiency and compute performance.
Today E4 Computer Engineering announced the results of tests carried out independently on a GPU cluster provided to EnginSoft Italy, a premier global consulting firm in the field of Simulation Based Engineering Science (SBES).
A team at Oak Ridge has developed a set of automated calibration techniques for tuning residential and commercial building energy-efficiency software models to match measured data. Their open source Autotune code is now available on GitHub.
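To illustrate the idea behind this kind of automated calibration, here is a minimal sketch: tune a simulation’s input parameters until its output matches measured data. The “building model” below is a toy stand-in function and the random-search loop is only one of many possible search strategies; Autotune itself drives full building energy models with far more sophisticated optimization.

```python
# Minimal sketch of automated model calibration: search for the model
# parameters that minimize error against measured data.
# NOTE: simulate() is a hypothetical toy model, not Autotune's actual code.
import random

def simulate(insulation_r, infiltration):
    """Toy stand-in for a building energy model: twelve monthly kWh values."""
    return [100.0 / insulation_r + 40.0 * infiltration + m for m in range(12)]

def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# Pretend these readings came from utility bills or sub-metering.
measured = simulate(3.2, 0.5)

random.seed(0)
best, best_err = None, float("inf")
for _ in range(5000):  # naive random search over the parameter space
    candidate = (random.uniform(1.0, 10.0), random.uniform(0.1, 1.0))
    err = rmse(simulate(*candidate), measured)
    if err < best_err:
        best, best_err = candidate, err

print("best parameters:", best, "RMSE:", best_err)
```

In practice, a real calibration run would replace the random search with an evolutionary or gradient-free optimizer and the toy function with a full simulation engine, but the structure — propose parameters, simulate, score against measurements, keep the best — is the same.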
“Supercomputing should be available for everyone who wants it. With that mission in mind, a team of engineers created Parallella, an 18-core supercomputer that’s a little bigger than a credit card. Parallella is open source hardware; the circuit diagrams are on GitHub and the machine runs Linux. Icing on the cake: Parallella is the most energy efficient computer on the planet, and you can buy one for a hundred bucks. Why does parallel computing matter? How can developers use parallel computing to deliver better results for clients? Let’s explore these questions together.”
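As a first answer to those questions, here is a small, hedged example of the core idea: split an embarrassingly parallel job across several workers so the cores do the work concurrently. It uses plain Python multiprocessing rather than the Parallella’s Epiphany cores, but the principle of dividing work among many small processors is the same.

```python
# Divide a prime-counting job into chunks and farm them out to a pool
# of worker processes. The per-chunk work is independent, so it scales
# naturally with the number of workers.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Four non-overlapping ranges covering 0..100,000.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with Pool(4) as pool:  # four worker processes, one chunk each
        total = sum(pool.map(count_primes, chunks))
    print("primes below 100,000:", total)
```

For a client-facing workload, the same pattern applies: identify the independent units of work (images to resize, records to score, simulations to run), map them across workers, and reduce the partial results.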
In the first of what will likely be a series of announcements from the Hot Chips conference this week, Phytium Technologies revealed details of its 64-core Mars ARMv8 processor.
“SUPER builds on past successes and now includes research into performance auto-tuning, energy efficiency, resilience, multi-objective optimization, and end-to-end tool integration. Leading the project dovetails neatly with Oliker’s research interests, which include optimization of scientific methods on emerging multi-core systems, ultra-efficient designs of domain-optimized computational platforms and performance evaluation of extreme-scale applications on leading supercomputers.”