
Sunita Chandrasekaran Receives NSF Grant to Create Powerful Software Framework

Over at the University of Delaware, Julie Stewart writes that assistant professor Sunita Chandrasekaran has received an NSF grant to develop frameworks that adapt code for GPU supercomputers. She is working with complex dependency patterns known as wavefronts, which are commonly found in scientific codes used to analyze the flow of neutrons in a nuclear reactor, extract patterns from biomedical data, or predict atmospheric patterns.
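To make the term concrete: in a wavefront computation each cell depends on its already-computed neighbors, so cells along the same anti-diagonal are independent and can run in parallel (one GPU thread each). The sketch below is a minimal serial illustration of that dependency pattern, not Chandrasekaran's actual framework; the boundary values and update rule are made up for the example.

```python
import numpy as np

def wavefront_sweep(n):
    """Fill an n x n grid where each interior cell depends on its
    north and west neighbors -- the classic wavefront pattern.
    Cells on the same anti-diagonal (constant i + j) are mutually
    independent, which is what a GPU framework can exploit."""
    grid = np.zeros((n, n))
    grid[0, :] = 1.0  # illustrative boundary condition
    grid[:, 0] = 1.0
    # Sweep the anti-diagonals in order: d = i + j
    for d in range(2, 2 * n - 1):
        for i in range(max(1, d - n + 1), min(d, n)):
            j = d - i
            if j < 1 or j >= n:
                continue
            # Each cell reads only cells from earlier diagonals.
            grid[i, j] = 0.5 * (grid[i - 1, j] + grid[i, j - 1])
    return grid

result = wavefront_sweep(4)
```

On a GPU, the inner loop over `i` is the parallel dimension; the outer loop over diagonals `d` remains sequential because each diagonal depends on the previous one.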

Big Data over Big Distance: Zettar Moves a Petabyte over 5000 Miles in 29 Hours

Today AIC announced a world record in data transfer: one petabyte of encrypted data moved in 29 hours, with data-integrity checksums unconditionally enabled, over a distance of 5,000 miles. The average transfer rate was 75 Gbps, or 94% of the available 80 Gbps of bandwidth. “Even with massive amounts of data, this test confirmed once more that it’s completely feasible to carry out long-distance, fully encrypted and checksummed data transfer at nearly line rate, over a shared and production network.”
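A quick back-of-the-envelope check of the reported figures, using the decimal convention of 10^15 bytes per petabyte; the small gap versus the quoted 75 Gbps and 94% comes from rounding in the announced petabyte size and duration.

```python
# Sanity-check the announced transfer rate: 1 PB in 29 hours on an 80 Gbps link.
petabyte_bits = 1e15 * 8           # 1 PB in bits (decimal convention)
seconds = 29 * 3600                # 29 hours
rate_gbps = petabyte_bits / seconds / 1e9
utilization = rate_gbps / 80       # fraction of the 80 Gbps link

print(f"{rate_gbps:.1f} Gbps, {utilization:.0%} of the link")
```

This lands within a couple of percent of the announced numbers, which is the point: the transfer ran at nearly the line rate of the link.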

Using AI to Automatically Diagnose Alzheimer’s Disease

Researchers from Stanford University have developed a deep learning-based system that can automatically detect Alzheimer’s disease and its biomarkers from MRIs with 94 percent accuracy. “Our method uses minimal preprocessing of MRIs (imposing minimum preprocessing artifacts) and utilizes a simple data augmentation strategy of downsampled MR images for training purposes,” the researchers stated in their paper.
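To illustrate what downsampling augmentation means in practice, here is a minimal sketch: generating lower-resolution copies of an image slice by block-averaging so the training set sees the same anatomy at multiple resolutions. The function names, block-averaging method, and resolution factors are assumptions for illustration; the paper's exact pipeline may differ.

```python
import numpy as np

def downsample(img, factor):
    """Downsample a 2D image by block-averaging (illustrative only)."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    # Crop to a multiple of the factor, then average factor x factor blocks.
    return img[:h2 * factor, :w2 * factor].reshape(
        h2, factor, w2, factor).mean(axis=(1, 3))

def augment(img, factors=(1, 2, 4)):
    """Produce multiple resolutions of one MRI slice for training."""
    return [downsample(img, f) if f > 1 else img for f in factors]

slice_ = np.random.rand(64, 64)   # stand-in for one MRI slice
views = augment(slice_)
# shapes: (64, 64), (32, 32), (16, 16)
```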

300K-Core SuperMUC-NG System Launches at LRZ in Germany

LRZ in Germany dedicated its new SuperMUC-NG (“next generation”) supercomputer last week in Munich. Built by Lenovo, the massive system uses innovative hot-water cooling to achieve unprecedented computational power for large-scale scientific and engineering simulations.

Why Rich Brueckner @insideHPC is Making a Documentary about Dogs and the People Who Love Them

I would like to interrupt whatever it is that normally happens here to make an announcement about a personal project that does not involve HPC. “I, Richard Brueckner, am making a film and I want to share the details with all my readers who happen to be dog lovers.”

Argonne is Supercomputing Big Data from the Large Hadron Collider

Over at Argonne, Madeleine O’Keefe writes that the Lab is supporting CERN researchers working to interpret Big Data from the Large Hadron Collider (LHC), the world’s largest particle accelerator. The LHC is expected to output 50 petabytes of data this year alone, an amount equivalent to nearly 15 million high-definition movies—so enormous that analyzing it all poses a serious challenge to researchers.
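The movie comparison checks out with simple arithmetic, assuming a ballpark figure of roughly 3.3 GB per high-definition movie (an assumption for this check, not a number from the article):

```python
# Sanity-check the comparison: 50 PB expressed in HD movies.
total_bytes = 50e15    # 50 petabytes (decimal convention)
movie_bytes = 3.3e9    # assumed ~3.3 GB per HD movie
movies = total_bytes / movie_bytes

print(f"{movies / 1e6:.1f} million movies")
```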

HPC Server Market Jumps 27.6% in 2Q 2018

Hyperion Research reports that worldwide factory revenue for the high-performance computing (HPC) technical server market jumped 27.6% to $3.7 billion in the second quarter of 2018 (2Q18), up from $2.9 billion in the same period of 2017, according to the newly released Hyperion Research Worldwide High-Performance Technical Server QView. Sequentially, second-quarter HPC server revenue grew 16.7% over the $3.2 billion figure from the first quarter of 2018.

Microsoft is Making Progress on Quantum Computing as a Service

Just one year after launch, Microsoft is touting its progress toward building Quantum Computing as a Service. “The Microsoft Quantum Development Kit is the fastest path to quantum development. Available for Linux, macOS, and Windows, you are now just a few steps away from accessing quantum simulators locally or in Azure.”

Debugging for Success and Accelerated Platform Bring-Up

Debugging can prove a substantial challenge, even for experienced engineers. In this video, Soflen Shih, a technical consulting engineer at Intel, discusses the benefits of Intel System Studio and how its built-in functionality can make the debugging process much easier. “Intel System Studio contains libraries, performance analyzers, and compilers as well as profiling and debugging tools. The combination of these elements provides a complete developer solution to assist with platform bring-up, power optimization, thermal tuning, and system performance profiling.”

Survey Foretells Explosive Growth in Machine Learning Projects Over Next Two Years

Over at the Univa Blog, Gary Tyreman writes that the company sponsored an industry-wide survey to better understand the key challenges currently preventing HPC users from moving their machine learning (ML) projects into production. “Our goal is to use this data to help guide our customers and recommend the right set of tools and migration options needed to accelerate value in machine learning.”