
Job of the Week: HPC System Administrator at D.E. Shaw Research

D.E. Shaw Research is seeking an HPC System Administrator in our Job of the Week. “Exceptional sysadmins sought to manage systems, storage, and network infrastructure for a New York–based interdisciplinary research group. Ideal candidates should have strong fundamental knowledge of Linux concepts such as file systems, networking, and processes in addition to practical experience administering Linux systems. Relevant areas of expertise might include large-installation systems administration experience and strong programming and scripting ability, but specific knowledge of and level of experience in any of these areas is less critical than exceptional intellectual ability.”

Making Python Fly: Accelerate Performance Without Recoding

Developers are increasingly besieged by the big data deluge. Intel Distribution for Python uses tried-and-true libraries like the Intel Math Kernel Library (Intel MKL) and the Intel Data Analytics Acceleration Library to make Python code scream right out of the box – no recoding required. Intel highlights some of the benefits dev teams can expect in this sponsored post.
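The “no recoding required” claim works because the accelerated distribution swaps MKL-backed BLAS in underneath NumPy, so existing array code speeds up unchanged. A minimal sketch of the kind of code that benefits (the matrix size and timing here are illustrative, not figures from the post):

```python
import time
import numpy as np

# Unchanged NumPy code: under the Intel Distribution for Python, this
# matrix multiply is dispatched to MKL's optimized BLAS automatically,
# with no changes to the source.
n = 500
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b  # routed to whatever BLAS dgemm the NumPy build provides
elapsed = time.perf_counter() - t0

print(c.shape, f"{elapsed:.3f}s")
```

Running the same script under stock NumPy and under the Intel distribution is the usual way to see the difference; the code itself stays identical.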

Dr. Lin Gan Reflects on the SC19 Theme: HPC is Now

In this special guest feature from the SC19 Blog, Charity Plata from Brookhaven National Lab catches up with Dr. Lin Gan from Tsinghua University, whose outstanding work in HPC has been recognized with a number of awards including the Gordon Bell Prize. As a highly awarded young researcher who already has been acknowledged for “outstanding, influential, and potentially long-lasting contributions” in HPC, Gan shares his thoughts on future supercomputers and what it means to say, “HPC Is Now.”

Evolving NASA’s Data and Information Systems for Earth Science

Rahul Ramachandran from NASA gave this talk at the HPC User Forum. “NASA’s Earth Science Division (ESD) missions help us to understand our planet’s interconnected systems, from a global scale down to minute processes. ESD delivers the technology, expertise and global observations that help us to map the myriad connections between our planet’s vital processes and the effects of ongoing natural and human-caused changes.”

Video: Managing large-scale cosmology simulations with Parsl and Singularity

Rick Wagner from Globus gave this talk at the Singularity User Group “We package the imSim software inside a Singularity container so that it can be developed independently, packaged to include all dependencies, trivially scaled across thousands of computing nodes, and seamlessly moved between computing systems. To date, the simulation workflow has consumed more than 30M core hours using 4K nodes (256K cores) on Argonne’s Theta supercomputer and 2K nodes (128K cores) on NERSC’s Cori supercomputer.”
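The packaging pattern Wagner describes — all dependencies baked into one image that moves unmodified between Theta and Cori — is what a Singularity definition file provides. The imSim recipe itself isn't shown in the talk excerpt, so this is a hypothetical minimal sketch of the approach (the base image and package name are placeholders):

```
Bootstrap: docker
From: python:3.8-slim

%post
    # Install the simulation code and every dependency inside the
    # image, so the container is self-contained and portable across
    # computing systems. "imsim" is a placeholder package name.
    pip install imsim

%runscript
    # Running the container executes the simulation entry point.
    exec python -m imsim "$@"
```

Built once with `singularity build imsim.sif imsim.def`, the same image can then be launched by a workflow manager such as Parsl on thousands of nodes, which is how the talk describes scaling the workflow.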

Video: The Human Side of AI

In this video from the GPU Technology Conference, Dan Olds from OrionX discusses the human impact of AI with Greg Schmidt from HPE. The industry buzz about artificial intelligence and deep learning typically focuses on hardware, software, frameworks, performance, and the lofty business plans that will be enabled by this new technology. What we don’t […]

Sign up for ISC STEM Student Day

Young people looking to further their careers in HPC are encouraged to sign up for the ISC STEM Student Day program. As part of the ISC High Performance Conference coming to Frankfurt in June, this program offers undergraduate and graduate students an early insight into the field of high performance computing as well as an opportunity to meet the important players in the sector.

Adaptive Deep Reuse Technique cuts AI Training Time by more than 60 Percent

North Carolina State University researchers have developed a technique that reduces training time for deep learning networks by more than 60 percent without sacrificing accuracy, accelerating the development of new artificial intelligence applications. “One of the biggest challenges facing the development of new AI tools is the amount of time and computing power it takes to train deep learning networks to identify and respond to the data patterns that are relevant to their applications. We’ve come up with a way to expedite that process, which we call Adaptive Deep Reuse. We have demonstrated that it can reduce training times by up to 69 percent without accuracy loss.”

Spectra Logic and Arcitecta team up for Genomics Data Management

Spectra Logic is teaming up with Arcitecta to tackle the massive datasets used in life sciences. The two companies will showcase their joint solutions at the BioIT World conference this week in Boston. “Addressing the needs of the life sciences market with reliable data storage lies at the heart of the Spectra and Arcitecta relationship,” said Spectra CTO Matt Starr. “This joint solution enables customers to better manage their data and metadata by optimizing multiple storage targets, retrieving data efficiently and tracking content and resources.”

Jack Dongarra Named a Foreign Fellow of the Royal Society

Jack Dongarra from the University of Tennessee has been named a Foreign Fellow of the Royal Society, joining previously inducted icons of science such as Isaac Newton, Charles Darwin, Albert Einstein, and Stephen Hawking. “This honor is both humbling because of others who have been so recognized and gratifying for the acknowledgement of the research and work I have done,” Dongarra said. “I’m deeply grateful for this recognition.”