The Intel Omni-Path Architecture (Intel® OPA) whitepaper walks through the many improvements that Intel OPA technology brings to the HPC community. In particular, HPC readers will appreciate how collective operations can be optimized based on message size, collective communicator size, and topology using the point-to-point send and receive primitives.
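To make the idea concrete, here is a minimal sketch (not from the whitepaper) of how one classic optimization works: a broadcast collective decomposed into a binomial tree of point-to-point sends, so that the message reaches all N ranks in ceil(log2(N)) rounds instead of N-1 sequential sends from the root. The function below only computes the send schedule; the function name and structure are illustrative, not part of any Intel OPA API.

```python
def binomial_broadcast_schedule(size, root=0):
    """Round-by-round send schedule for a binomial-tree broadcast
    over `size` ranks, built purely from point-to-point sends.

    Returns a list of rounds; each round is a list of
    (sender, receiver) pairs. After round k, 2**(k+1) ranks hold
    the message, so the broadcast finishes in ceil(log2(size))
    rounds -- the kind of topology-aware schedule a collective
    implementation layers on top of send/receive primitives.
    """
    schedule = []
    distance = 1
    while distance < size:
        round_sends = []
        # Every rank that already holds the message (relative ranks
        # 0 .. distance-1) forwards it `distance` ranks away.
        for r in range(distance):
            partner = r + distance
            if partner < size:
                sender = (root + r) % size
                receiver = (root + partner) % size
                round_sends.append((sender, receiver))
        schedule.append(round_sends)
        distance *= 2
    return schedule

if __name__ == "__main__":
    for k, sends in enumerate(binomial_broadcast_schedule(8)):
        print(f"round {k}: {sends}")
```

For 8 ranks this yields three rounds -- (0,1), then (0,2),(1,3), then (0,4),(1,5),(2,6),(3,7) -- which is why message size matters: for small messages the log-depth tree wins, while large messages often favor scatter-plus-allgather schedules instead.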
“Researchers at the U.S. Department of Energy’s Argonne National Laboratory will be testing the limits of computing horsepower this year with a new simulation project from the Virtual Engine Research Institute and Fuels Initiative (VERIFI) that will harness 60 million computer core hours to dispel those uncertainties and pave the way to more effective engine simulations.”
Many universities, private research labs, and government research agencies have begun using High Performance Computing (HPC) servers, compute accelerators, and flash storage arrays to accelerate a wide array of research across disciplines in math, science, and engineering. These labs utilize GPUs for parallel processing and flash memory for storing large datasets. Many universities have HPC labs where students and researchers share resources to analyze and store vast amounts of data more quickly.
Companies already using High Performance Computing (HPC) with a Lustre file system for simulations, such as those in the financial, oil and gas, and manufacturing sectors, want to convert some of their HPC cycles to Big Data analytics. This puts Lustre at the core of the convergence of Big Data and HPC.
Dr. Eng Lim Goh from SGI discusses important trends in HPC including pending changes coming to processors/accelerators, memory hierarchy, and interconnects. “SGI, the trusted leader in high performance computing, is focused on helping customers solve their most demanding business and technology challenges by delivering technical computing, Big Data analytics, cloud computing, and petascale storage solutions that accelerate time to discovery, innovation, and profitability.”
“Univa is a workload optimization company. Our core product, Grid Engine software, creates a single virtual high throughput, high performance and hyper-scale compute pool out of distributed data center resources. Our customers efficiently run large quantities of mission-critical compute-intensive applications faster with lower overall costs.”
Today AMD unveiled innovation in heterogeneous HPC at the Centre of New Technologies at the University of Warsaw. In a new cluster deployment called Orion, the Next Generation Sequencing Centre in Warsaw is powering bioinformatics research with 1.5 petaFLOPS of AMD FirePro S9150 server GPU performance.
At the Centre for High-Performance Computing (CHPC) in South Africa, the mission is to enable cutting-edge research by supporting the highest levels of HPC available. That means ensuring that researchers – who often have little experience with computers, let alone HPC systems – can get their work done without the HPC getting in the way.