
HPC Connects: The Search for Elusive Proteins that perform Gene Editing

In this video from the SC17 HPC Connects series, David Paez-Espino from the Joint Genome Institute describes how researchers are using supercomputing to search for elusive proteins that perform gene editing. “This revolutionary work requires petaflops of computing power to sift through billions of DNA sequences in the JGI data portals to identify proteins like Cas9 that, combined with Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR), can ‘edit’ a genome.”

Building Fast Data Compression Code with Intel Integrated Performance Primitives (Intel IPP) 2018

Intel® Integrated Performance Primitives (Intel IPP) is a highly optimized, production-ready library for lossless data compression/decompression, as well as image, signal, and data processing and cryptography applications. Intel IPP includes more than 2,500 image processing, 1,300 signal processing, 500 computer vision, and 300 cryptography optimized functions for creating digital media, enterprise data, embedded, communications, and scientific, technical, and security applications.
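Intel also distributes IPP-accelerated patches for standard codecs such as zlib, so applications can often gain the speedup without changing their compression API. The lossless round-trip that these libraries accelerate can be sketched with Python's standard zlib module (a conceptual illustration only; this is not the Intel IPP API):

```python
import zlib

# Sample payload: repetitive data compresses well under DEFLATE.
payload = b"HPC news and analysis. " * 100

compressed = zlib.compress(payload, 9)   # level 9 = best compression
restored = zlib.decompress(compressed)

# Lossless: decompression recovers the input byte-for-byte.
assert restored == payload
print(f"{len(payload)} bytes -> {len(compressed)} bytes")
```

The same DEFLATE round-trip is what an IPP-patched zlib performs, just with hand-optimized kernels underneath.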

Cavium ThunderX Cluster to Crunch Big Data at University of Michigan

Today Cavium announced a new partnership that will position the University of Michigan as a leader in data-intensive scientific research by creating a powerful Big Data computing cluster using dual-socket servers powered by Cavium’s ThunderX ARMv8-A workload-optimized processors. The cluster consists of 40 servers, each containing 96 ARMv8 cores and 512 GB of RAM.
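From those per-node figures, the cluster's aggregate capacity is simple arithmetic (a quick sketch based on the numbers in the announcement):

```python
nodes = 40             # dual-socket ThunderX servers
cores_per_node = 96    # ARMv8 cores per server
ram_gb_per_node = 512  # GB of RAM per server

total_cores = nodes * cores_per_node           # 3,840 cores in aggregate
total_ram_tb = nodes * ram_gb_per_node / 1024  # 20 TB of RAM in aggregate

print(f"{total_cores} cores, {total_ram_tb:.0f} TB RAM")
```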

Podcast: Optimizing Cosmos Code on Intel Xeon Phi

In this TACC podcast, Cosmos code developer Chris Fragile joins host Jorge Salazar for a discussion on how researchers are using supercomputers to simulate the inner workings of black holes. “For this simulation, the manycore architecture of KNL presents new challenges for researchers trying to get the best compute performance. This is a computer chip that has lots of cores compared to some of the other chips one might have interacted with on other systems,” McDougall explained. “More attention needs to be paid to the design of software to run effectively on those types of chips.”
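The point about manycore design is general: on a chip like the Xeon Phi (KNL), software must expose enough independent work to keep dozens of cores busy at once. The idea can be sketched with a simple domain decomposition across worker processes (an illustration of the principle only; Cosmos itself is not a Python code, and a real KNL port also involves vectorization and memory tuning):

```python
import math
from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    """Each worker sums an independent chunk of the series."""
    lo, hi = bounds
    return sum(1.0 / (k * k) for k in range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    workers = cpu_count()
    step = n // workers
    # Split [1, n] into one contiguous chunk per core.
    chunks = [(i * step + 1, (i + 1) * step + 1) for i in range(workers)]
    with Pool(processes=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # The independent partial sums combine to approximate pi^2/6.
    print(f"{workers} workers: {total:.6f} vs pi^2/6 = {math.pi ** 2 / 6:.6f}")
```

The key property is that each chunk is computed with no communication between workers, which is exactly the kind of decomposition that scales on a 68-plus-core KNL.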

Video: 25 Years of Supercomputing at Oak Ridge

“Since its early days, the OLCF has consistently delivered supercomputers of unprecedented capability to the scientific community on behalf of DOE—contributing to a rapid evolution in scientific computing that has produced a millionfold increase in computing power. This rise has included the launch of the first teraflop system for open science, the science community’s first petaflop system, and two top-ranked machines on the TOP500 list. The next chapter in the OLCF’s legacy is set to begin with the deployment of Summit, a pre-exascale system capable of more than five times the performance of Titan.”

Advanced Clustering Technologies to build “Pistol Pete” Supercomputer at Oklahoma State

Today Oklahoma State University announced that Advanced Clustering Technologies has been contracted to build and install its newest supercomputer to support a broad range of Science, Technology, Engineering and Mathematics (STEM) disciplines. “The new supercomputer, which will be named after the university’s mascot, Pistol Pete, will serve as a campus-wide shared resource, available at no charge to all OSU faculty, staff, postdocs, graduate students and undergraduates, as well as to researchers and educators across Oklahoma.”

Cray Deploys Pair of Supercomputers in Canada for Weather Forecasting

Today Shared Services Canada (SSC) dedicated a pair of Cray supercomputers in Quebec. The new HPC systems will be used by the Environment and Climate Change Canada (ECCC) to improve the accuracy and timeliness of weather warnings and forecasts. “Accurate and timely weather forecasting helps us protect our homes and businesses in the face of extreme storms and tornadoes, which are getting worse due to climate change. By supporting quality weather forecasts and warnings, the new High Performance Computers will help protect Canadians for years to come.”

High School Team to Compete in SC17 Student Cluster Competition

For the first time, a team of high schoolers will compete in the Student Cluster Competition next week at SC17 in Denver. The team hails from Harrison High School in West Lafayette, Indiana. “In this real-time, non-stop, 48-hour challenge, teams of undergraduate and/or high school students assemble a small cluster on the exhibit floor and race to complete a real-world workload across a series of applications and impress HPC industry judges.”

Video: System Interconnects for HPC

In this video from the 2017 Argonne Training Program on Extreme-Scale Computing, Pavan Balaji from Argonne presents an overview of system interconnects for HPC. “The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides two intensive weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

Pagoda Project Rolls Out First Software Libraries for Exascale

The Pagoda Project—a three-year Exascale Computing Project software development program based at Lawrence Berkeley National Laboratory—has successfully reached a major milestone: making its open source software libraries publicly available as of September 30, 2017. “Our job is to ensure that the exascale applications reach key performance parameters defined by the DOE,” said project lead Scott Baden.