TACC’s Frontera Supports Investigation of Subatomic Protons – ‘the Origin of the Mass of Objects’
A team of researchers is using the Frontera supercomputer at the Texas Advanced Computing Center (TACC) to crack open the proton, a fundamental building block of the atomic nucleus that is used, among other ways, as a medical probe in magnetic resonance imaging. Frontera, the world’s fifth-ranked HPC system on the Top500 list and the […]
How Ceph powers exciting research with Open Source
“As researchers seek scalable, high-performance methods for storing data, Ceph is a powerful technology that needs to be at the top of their list. Ceph is an open-source software-defined storage platform. While it’s not often in the spotlight, it’s working hard behind the scenes, playing a crucial role in enabling ambitious, world-renowned projects such as CERN’s particle physics research, ImmunityBio’s cancer research, The Human Brain Project, the MeerKAT radio telescope, and more.”
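For a sense of how applications actually talk to Ceph, here is a minimal sketch using the librados Python bindings (the rados package). The config file path and the pool name research-data are assumptions for illustration, and the pool is assumed to already exist.

```python
# Minimal sketch: store and read back one object with Ceph's librados
# Python bindings. Paths and pool name are illustrative assumptions.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('research-data')   # pool must already exist
    try:
        ioctx.write_full('sample-object', b'detector readout, run 42')
        data = ioctx.read('sample-object')
        print(data)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

Block and file access work the same way underneath: RBD and CephFS are layered on the same RADOS object store that this snippet writes to.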
Video: An Update on HPC at CSCS
Thomas Schulthess from CSCS gave this talk at the HPC User Forum. “CSCS has a strong track record in supporting the processing, analysis and storage of scientific data, and is investing heavily in new tools and computing systems to support data science applications. For more than a decade, CSCS has been involved in the analysis of the many petabytes of data produced by scientific instruments such as the Large Hadron Collider (LHC) at CERN. Supporting scientists in extracting knowledge from structured and unstructured data is a key priority for CSCS.”
FPGAs and the Road to Reprogrammable HPC
In this special guest feature from Scientific Computing World, Robert Roe writes that FPGAs provide an early insight into possible architectural specialization options for HPC and machine learning. “Architectural specialization is one option to continue to improve performance beyond the limits imposed by the slowdown in Moore’s Law. Using application-specific hardware to accelerate an application, or part of one, allows the use of hardware that can be much more efficient, both in terms of power usage and performance.”
Converging Workflows Pushing Converged Software onto HPC Platforms
Are we witnessing the convergence of HPC, big data analytics, and AI? Once, these were separate domains, each with its own system architecture and software stack, but the data deluge is driving them together. Traditional big-science HPC is looking more like big data analytics and AI, while analytics and AI are taking on the flavor of HPC.
In a boon for HPC, Founding Members Sign SKA Observatory Treaty
Earlier this week, countries involved in the Square Kilometre Array (SKA) Project came together in Rome to sign an international treaty establishing the intergovernmental organization that will oversee the delivery of the world’s largest radio telescope. “Two of the world’s fastest supercomputers will be needed to process the unprecedented amounts of data emanating from the telescopes, with some 600 petabytes expected to be stored and distributed worldwide to the science community every year, or the equivalent of over half a million laptops’ worth of data.”
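A quick back-of-the-envelope check of that figure, assuming roughly 1 TB of storage per laptop (the only assumption here):

```python
# Rough sanity check of the SKA data-volume claim.
PB_PER_YEAR = 600
TB_PER_LAPTOP = 1.0                       # assumed average laptop drive size
SECONDS_PER_YEAR = 365.25 * 24 * 3600

laptops = PB_PER_YEAR * 1000 / TB_PER_LAPTOP            # 1 PB = 1000 TB
sustained_gb_s = PB_PER_YEAR * 1e6 / SECONDS_PER_YEAR   # 1 PB = 1e6 GB

print(f"{laptops:,.0f} laptops' worth of data per year")              # 600,000
print(f"~{sustained_gb_s:.0f} GB/s sustained, averaged over a year")  # ~19 GB/s
```

At 1 TB per laptop, 600 PB a year is indeed about 600,000 laptops, and distributing it amounts to roughly 19 GB/s of sustained throughput averaged over the year.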
Argonne Looks to Singularity for HPC Code Portability
Over at Argonne, Nils Heinonen writes that researchers are using the open-source Singularity framework as a kind of Rosetta Stone for running supercomputing code almost anywhere. “Once a containerized workflow is defined, its image can be snapshotted, archived, and preserved for future use. The snapshot itself represents a boon for scientific provenance by detailing the exact conditions under which given data were generated: in theory, by providing the machine, the software stack, and the parameters, one’s work can be completely reproduced.”
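To make the “containerized workflow” concrete, here is a minimal, hypothetical Singularity definition file. The base image, package list, and analysis.py entry point are placeholders rather than Argonne’s actual stack, but the resulting .sif image is the kind of archivable snapshot the article describes.

```
# analysis.def -- a minimal, hypothetical Singularity definition file.
# Build: singularity build analysis.sif analysis.def   (root or --fakeroot)
# Run:   singularity run analysis.sif --input data.h5

Bootstrap: docker
From: ubuntu:18.04

%files
    analysis.py /opt/analysis.py

%post
    # Pin the software stack inside the image so the snapshot records
    # the exact conditions under which the data were generated.
    apt-get update && apt-get install -y python3 python3-numpy

%environment
    export LC_ALL=C

%runscript
    exec python3 /opt/analysis.py "$@"
```

Archiving the definition file alongside the built image preserves both the recipe and the frozen environment it produced.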
Fast Simulation with Generative Adversarial Networks
In this video from the Intel User Forum at SC18, Dr. Sofia Vallecorsa from CERN openlab presents: Fast Simulation with Generative Adversarial Networks. “This talk presents an approach based on generative adversarial networks (GANs), trained across multiple nodes using the TensorFlow deep learning framework with Uber Engineering’s Horovod communication library. Preliminary results on the scaling of training time demonstrate how HPC centers could be used to globally optimize AI-based models to meet a growing community need.”
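The talk concerns CERN’s GAN models specifically, but the multi-node training pattern it names, TensorFlow plus Horovod, is generic. The sketch below uses a tiny stand-in Keras model rather than the actual GAN to show how Horovod distributes training across workers.

```python
# Minimal sketch of data-parallel Keras/TensorFlow training with Horovod.
# The model and data below are stand-ins, not CERN's GAN.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()                                   # one process per worker rank

# Pin each process to its own GPU, if any are visible.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(64,)),
    tf.keras.layers.Dense(1),
])

# Scale the learning rate with the number of workers and wrap the optimizer
# so gradients are averaged across workers with allreduce at every step.
opt = tf.keras.optimizers.Adam(1e-3 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
model.compile(optimizer=opt, loss='mse')

callbacks = [
    # Ensure every worker starts from identical initial weights.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

x = tf.random.normal((1024, 64))
y = tf.random.normal((1024, 1))
model.fit(x, y, batch_size=64, epochs=1,
          callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```

Launched with horovodrun -np 4 python train.py, the same script runs on one node or many; relative to single-process training, only the learning-rate scaling, the optimizer wrapper, and the broadcast callback change.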
Micron Joins CERN openlab
Last week at SC18, Micron announced that the company has joined CERN openlab, a unique public-private partnership, by signing a three-year agreement. Under the agreement, Micron will provide CERN with advanced next-generation memory solutions to further machine learning capabilities for high-energy physics experiments at the laboratory. Micron’s memory solutions, which incorporate neural network capabilities, will be tested in the data-acquisition systems of experiments at CERN.
Video: How AI is Helping Scientists with the Large Hadron Collider
In this video from SC18 in Dallas, Dr. Sofia Vallecorsa from CERN openlab describes how AI is being used in the design of experiments for the Large Hadron Collider. “An award-winning effort at CERN has demonstrated potential to significantly change how the physics-based modeling and simulation communities view machine learning. The CERN team demonstrated that AI-based models have the potential to act as orders-of-magnitude-faster replacements for computationally expensive tasks in simulation, while maintaining a remarkable level of accuracy.”