Today Mellanox announced that the company's interconnect technology accelerates the world's fastest supercomputer at the supercomputing center in Wuxi, China. The new number one supercomputer delivers 93 Petaflops, roughly three times the performance of the previous top system, connecting nearly 41,000 nodes and more than ten million CPU cores. The offloading architecture of the Mellanox interconnect solution is key to its world-leading performance, scalability, and efficiency, connecting the highest number of nodes and CPU cores within a single supercomputer.
Pressure from management for cost containment is answered by improving software maintenance procedures and automating many of the repetitive activities that have been handled manually. This lowers Total Cost of Ownership (TCO), boosts IT productivity, and increases return on investment (ROI).
A newly released report commissioned by the National Science Foundation (NSF) and conducted by the National Academies of Sciences, Engineering, and Medicine examines priorities and associated trade-offs for advanced computing investments and strategy. “We are very pleased with the National Academies’ report and are enthusiastic about its helpful observations and recommendations,” said Irene Qualters, NSF Advanced Cyberinfrastructure Division Director. “The report has had a wide range of thoughtful community input and review from leaders in our field. Its timing and content give substance and urgency to NSF’s role and plans in the National Strategic Computing Initiative.”
Today the Information Technology and Innovation Foundation (ITIF) published a new report that urges U.S. policymakers to take decisive steps to ensure the United States continues to be a world leader in high-performance computing. “While America is still the world leader, other nations are gaining on us, so the U.S. cannot afford to rest on its laurels. It is important for policymakers to build on efforts the Obama administration has undertaken to ensure the U.S. does not get outpaced.”
Many HPC applications began as single-processor (single-core) programs. If these applications take too long on a single core or need more memory than is available, they must be modified to run on scalable systems. Fortunately, many of the most important (and most used) HPC applications are already available for scalable systems. Some applications do not require large numbers of cores for effective performance, while others are highly scalable. Here is how to better understand your HPC application needs.
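One way to reason about whether an application will benefit from more cores is Amdahl's law, which bounds speedup by the fraction of runtime that remains serial. The sketch below is illustrative; the 5% serial fraction is an assumed value, not a measurement of any particular application.

```python
# Hypothetical illustration of Amdahl's law: why some applications
# stop benefiting from additional cores. The serial fraction used
# here (5%) is an assumption for demonstration only.

def amdahl_speedup(serial_fraction, cores):
    """Maximum speedup on `cores` cores when `serial_fraction`
    of the single-core runtime cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even on 4096 cores, a 5%-serial code tops out below a 20x speedup:
for n in (1, 16, 256, 4096):
    print(f"{n:>5} cores -> speedup {amdahl_speedup(0.05, n):.1f}")
```

The takeaway matches the point above: an application's scalability depends on its own structure, not just on how many cores the system offers.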
Over at the Dell HPC Community Blog, Ashish Kumar Singh, Mayura Deshmukh, and Neha Kashyap discuss the performance characterization of Intel Broadwell processors with the High Performance LINPACK (HPL) and STREAM benchmarks. “The performance of all Broadwell processors used for this study is higher for both the HPL and STREAM benchmarks. There is a ~12% increase in measured memory bandwidth for Broadwell processors compared to Haswell processors, and Broadwell processors measure better power efficiency than Haswell processors. In conclusion, Broadwell processors may fulfill the demand for more compute power in HPC applications.”
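For readers unfamiliar with STREAM, its core measurement is sustainable memory bandwidth via simple array kernels such as the "triad" (a = b + scalar * c). The sketch below shows the idea in plain Python; the array size is arbitrary, and a pure-Python loop will report far lower bandwidth than the official C benchmark, so treat the number as illustrative only.

```python
# Minimal sketch of the STREAM "triad" kernel, the pattern STREAM
# uses to measure sustainable memory bandwidth. Not a substitute
# for the official benchmark; array size is an arbitrary choice.
import array
import time

N = 1_000_000
scalar = 3.0
b = array.array('d', [1.0] * N)
c = array.array('d', [2.0] * N)
a = array.array('d', [0.0] * N)

start = time.perf_counter()
for i in range(N):
    a[i] = b[i] + scalar * c[i]  # triad: one write, two reads per element
elapsed = time.perf_counter() - start

# Triad touches three arrays of 8-byte doubles per pass.
gbytes = 3 * N * 8 / 1e9
print(f"Triad bandwidth: {gbytes / elapsed:.3f} GB/s")
```

The ~12% memory-bandwidth gain cited above is the kind of difference this triad measurement surfaces when run (in its official C form) on Haswell versus Broadwell systems.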
Today Intersect360 Research published a new research report on the Hyperscale market. “This report provides definitions, segmentations, and dynamics of the hyperscale market and describes its scope, the end-user applications it touches, and the market drivers and dampers for future growth. It is the foundational report for the Intersect360 Research hyperscale market advisory service.”
“The findings of a recent IDC study on the cybersecurity practices of U.S. businesses reveal a wide spectrum of attitudes and approaches to the growing challenge of keeping corporate data safe. While the minority of cybersecurity ‘best practitioners’ set an admirable example, the study findings indicate that most U.S. companies today are underprepared to deal effectively with potential security breaches from outside or inside their firewalls.”
Over at the Dell HPC Blog, Olumide Olusanya and Munira Hussain have posted an interesting comparison of FDR and EDR InfiniBand. “In the first post, we shared OSU Micro-Benchmarks (latency and bandwidth) and HPL performance between FDR and EDR InfiniBand. In this part, we will further compare performance using additional real-world applications such as ANSYS Fluent, WRF, and the NAS Parallel Benchmarks. In both blogs, we have shown several micro-benchmark and real-world application results to compare FDR with EDR InfiniBand.”
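A back-of-envelope model helps explain why the FDR/EDR gap shows up in bandwidth-heavy tests more than in small-message latency tests. FDR signals at 56 Gb/s and EDR at 100 Gb/s per 4x link; the 1 µs latency figure below is an assumed round number for illustration, not a measured value from the Dell study.

```python
# Simple transfer-time model: time ~= latency + size / bandwidth.
# Link rates are the nominal FDR (56 Gb/s) and EDR (100 Gb/s) 4x
# signaling rates; the 1 us latency is an assumed placeholder.

def transfer_time_us(size_bytes, latency_us, gbits_per_s):
    """Modeled one-way message time in microseconds."""
    return latency_us + (size_bytes * 8) / (gbits_per_s * 1e3)

for size in (8, 4096, 1 << 20):
    fdr = transfer_time_us(size, 1.0, 56.0)
    edr = transfer_time_us(size, 1.0, 100.0)
    print(f"{size:>8} B   FDR {fdr:8.2f} us   EDR {edr:8.2f} us")
```

For 8-byte messages the two links are nearly indistinguishable (latency dominates), while at 1 MiB the model shows EDR transferring in roughly 56% of FDR's time, which is why application-level gains depend on how message-size-bound each code is.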
Today Intersect360 Research released its eighth 2015 Site Budget Allocation Map, a look at how HPC sites divide and spend their budgets.