“To be successful in high-performance computing (HPC) today, it is no longer enough to sell good hardware: vendors need to develop an ‘ecosystem’ in which other hardware companies use their products and components; in which system administrators are familiar with their processors and architectures; and in which developers are trained and eager to write code both for the efficient use of the system and for end-user applications. No one company, not even Intel or IBM, can achieve all of this by itself anymore.”
Intel is seeking an HPC Solutions Architect in our Job of the Week.
The fifth Irish Supercomputer List was released today with a full ranking of the nation’s HPC systems. Launched in November 2013 to raise the profile of High Performance Computing in Ireland and abroad, the list is updated twice annually with a continuous open call for participation from users and maintainers of Irish HPC installations.
“Modal is a cosmological statistical analysis package that can be optimized to take advantage of a high number of cores. The inner product computations with Modal can be run on the Intel Xeon Phi coprocessor. As a base, the entire simulation took about 6 hours on the Intel Xeon processor. Since the inner calculations are independent of each other, the workload lends itself to the Intel Xeon Phi coprocessor.”
“In July, Intel announced plans for the HPC Scalable System Framework – a design foundation enabling a wide range of highly workload-optimized solutions. This talk will delve into aspects of the framework and highlight the relationship and benefits to application development and execution.”
The democratization of HPC got a major boost last year with the announcement of an NSF award to the Pittsburgh Supercomputing Center. The $9.65 million grant for the development of Bridges, a new supercomputer designed to serve a wide variety of scientists, will open the door to users who have not had access to HPC until now. “Bridges is designed to close three important gaps: bringing HPC to new communities, merging HPC with Big Data, and integrating national cyberinfrastructure with campus resources. To do that, we developed a unique architecture featuring Hewlett Packard Enterprise (HPE) large-memory servers including HPE Integrity Superdome X, HPE ProLiant DL580, and HPE Apollo 2000. Bridges is interconnected by Intel Omni-Path Architecture fabric, deployed in a custom topology for Bridges’ anticipated workloads.”
“Modern systems will continue to grow in scale, and applications must evolve to fully exploit the performance of these systems. While today’s HPC developers are aware of code modernization, many are not yet taking full advantage of the environment and hardware capabilities available to them. Intel is committed to helping the HPC community develop modern code that can fully leverage today’s hardware and carry forward to the future. This requires a multi-year effort complete with all the necessary training, tools and support. The customer training we provide and the initiatives and programs we have launched and will continue to create all support that effort.”
Today Russia’s RSC Group announced that Team TUMuch Phun from the Technical University of Munich (TUM) won the Highest Linpack Award in the SC15 Student Cluster Competition. The enthusiastic students achieved 7.1 Teraflops on the Linpack benchmark using an RSC PetaStream cluster with computing nodes based on Intel Xeon Phi. The TUM team took third place overall among the nine teams that competed in the SCC at SC15, as the only European representative in the challenge.
Software for data analysis, system management, and debugging was among the innovations on display at SC15 last week. In addition to the software, novel and improved hardware was also on display, together with an impressive array of European research and development initiatives leading up to exascale computing.
Asetek showcased its full range of RackCDU hot water liquid cooling systems for HPC data centers at SC15 in Austin. On display were early-adopting OEMs such as CIARA, Cray, Fujitsu, Format and Penguin. HPC installations from around the world incorporating Asetek RackCDU D2C (Direct-to-Chip) technology were also featured. In addition, liquid cooling solutions for both current and future high-wattage CPUs and GPUs from Intel, Nvidia and OpenPower were on display.