“In the case of the Intel Xeon Phi coprocessor, although 60 cores are commonly used for computation, there is another core that is available but not traditionally used as part of a simulation. Experiments using the 61st core for actual computation while running a reverse Monte Carlo ray tracing application for the modeling of radiative heat transfer demonstrated that the use of another core improved performance, and that oversubscribing the coprocessor’s operating-system thread did not degrade performance.”
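The oversubscription experiment can be sketched in miniature. The following is a hypothetical Python sketch, not the study's code: it partitions a fixed ray budget across a configurable number of workers, including an "oversubscribed" count of one worker more than the visible core count. The `trace_batch` kernel is a placeholder for the real ray-tracing work.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def trace_batch(n_rays):
    # placeholder for a reverse Monte Carlo ray-tracing kernel;
    # it just reports how many rays it handled so totals can be checked
    return n_rays

def run(total_rays, n_workers):
    # split rays evenly across workers; the last worker takes the remainder
    base = total_rays // n_workers
    batches = [base] * n_workers
    batches[-1] += total_rays - base * n_workers
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(trace_batch, batches))

cores = os.cpu_count() or 60
# "oversubscribed" run: one worker more than the core count,
# analogous to also putting the Xeon Phi's 61st (OS) core to work
total = run(1_000_000, cores + 1)
```

On the coprocessor itself this mapping would be done with OpenMP threads pinned to cores rather than a Python thread pool; the sketch only shows the partitioning logic.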
In this Chip Chat podcast, Bill Mannel, Vice President and General Manager for HPC and Big Data at Hewlett Packard Enterprise (HPE), describes the High Performance Computing Alliance between HPE and Intel. He highlights how the two companies are developing innovative solutions based on Intel Scalable System Framework (Intel SSF) and are working to enhance HPC solutions while engaging customers directly in centers of excellence (COEs) located in Grenoble, France, and Houston, Texas. Bill also emphasizes how HPE compute solutions are experiencing incredible momentum in the government, commercial, and academic market verticals, and that HPE is seeing excellent results from the integration of HPE Apollo products and Intel HPC technology.
Today Intel Corporation announced that it has completed the acquisition of Altera, a leading provider of field-programmable gate array (FPGA) technology. The acquisition complements Intel’s leading-edge product portfolio and enables new classes of products in the high-growth data center and Internet of Things (IoT) market segments.
In this podcast, the Radio Free HPC team makes their tech predictions for 2016. Will secure firmware be the key differentiator for HPC vendors? Will this be the year of FPGAs? And could we see a 100 Petaflop machine on the TOP500 before the year ends?
“Developers of modern HPC applications face a challenge when scaling out their hybrid (MPI/OpenMP) applications. As cluster sizes continue to grow, the amount of analysis data collected can easily become overwhelming when going from 10s to 1000s of ranks, and it’s tough to identify which are the key metrics to track. There is a need for a lightweight tool that aggregates the performance data in a simple and intuitive way, provides advice on next optimization steps, and homes in on performance issues. We’ll discuss a brand new tool that helps quickly gather and analyze statistics at up to 100,000 ranks. We’ll give examples of the type of pertinent information collected at high core counts, including memory and counter usage, MPI and OpenMP imbalance analysis, and total communication vs. computation time. We’ll work through analyzing an application and effective ways to manage the data.”
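The kind of aggregate figure such a tool reports can be illustrated in a few lines. This is a hypothetical sketch (not the tool itself) of a standard load-imbalance metric, (max − mean) / max, computed over per-rank compute times:

```python
def imbalance(times):
    # fraction of the slowest rank's time lost to load imbalance:
    # (max - mean) / max; 0.0 means perfectly balanced
    mx = max(times)
    return (mx - sum(times) / len(times)) / mx

# hypothetical per-rank compute times (seconds) for an 8-rank run;
# rank 7 is a straggler that the metric should flag
times = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 2.0]
print(f"imbalance: {imbalance(times):.1%}")
```

At scale, a real tool would compute this with a reduction over ranks (e.g., `MPI_Reduce` of max and sum) rather than gathering every rank's timeline, which is what keeps it lightweight.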
“SeqAn (www.seqan.de) is an open-source C++ template library (BSD license) that implements many efficient and generic data structures and algorithms for Next-Generation Sequencing (NGS) analysis. It contains gapped k-mer indices, enhanced suffix arrays (ESA), and a (bidirectional) FM-index, as well as algorithms for fast and accurate alignment and read mapping. Based on those data types and fast I/O routines, users can easily develop tools that are extremely efficient and easy to maintain. Beyond multicore, the research team at Freie Universität Berlin has started adding generic support for accelerators such as the Intel Xeon Phi in a new IPCC. In this talk we will introduce SeqAn and its generic design, describe successful applications that use SeqAn, and describe how SeqAn will incorporate SIMD and multicore parallelism for its core data structures, using the pairwise alignment module as an example.”
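Pairwise alignment, the module named above as the parallelization example, reduces to a dynamic program. Here is a minimal Python sketch of Needleman-Wunsch global alignment scoring, purely illustrative: SeqAn's C++ templates are far more efficient and also recover the alignment itself, not just the score.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    # prev[j] holds the best score of aligning a[:i-1] with b[:j]
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]  # aligning a[:i] against an empty prefix of b
        for j, cb in enumerate(b, 1):
            s = match if ca == cb else mismatch
            cur.append(max(prev[j - 1] + s,   # substitute / match
                           prev[j] + gap,     # gap in b
                           cur[j - 1] + gap)) # gap in a
        prev = cur
    return prev[-1]

score = nw_score("GATTACA", "GCATGCU")
```

The inner loop's three-way max over adjacent cells is exactly the dependency pattern that SIMD vectorization (along anti-diagonals or across many sequence pairs) exploits.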
“In this presentation, we will discuss several important goals and requirements of portable standards in the context of OpenMP. We will also encourage audience participation as we discuss and formulate the current state of the art in this area and our hopes and goals for the future. We will start by describing the current and next-generation architectures at NERSC and OLCF and explain how the differences require different general programming paradigms to facilitate high-performance implementations.”
An interesting use of HPC technologies is in the area of understanding the propagation of radio frequency energy in an outdoor environment. “Applications of this type need to be completed in seconds to minutes to be useful. Since the tracing of each ray is independent of another ray, this type of application can be distributed easily among the many cores of the Intel Xeon Phi coprocessor.”
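Because each ray is independent, the whole computation is a plain parallel map with no communication between workers. A hypothetical Python sketch (the `trace_ray` kernel is a placeholder; a real one would accumulate reflections and attenuation along the ray path):

```python
from concurrent.futures import ThreadPoolExecutor

def trace_ray(ray_id):
    # placeholder for one ray's RF propagation computation;
    # here we just derive a launch angle from the ray id
    return ray_id % 360

def trace_scene(n_rays, n_workers=60):
    # rays share no state, so a parallel map over ray ids suffices --
    # this is what makes the workload spread easily over many cores
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(trace_ray, range(n_rays)))
```

On the Xeon Phi the same structure would be an OpenMP parallel-for over rays; the thread pool here only stands in for that mapping.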
Prof. Kai Li from Princeton presented this talk at the Intel HPC Developer Conference at SC15. “Full correlation matrix analysis (FCMA) is an unbiased approach for exhaustively studying interactions among brain regions in functional magnetic resonance imaging (fMRI) data from human participants. In order to answer neuroscientific questions efficiently, we are developing a closed-loop analysis system with FCMA on a cluster of nodes with Intel Xeon Phi coprocessors. In this talk, we will discuss our current results and future plans.”
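FCMA's core kernel, correlating every voxel's time series with every other voxel's, can be sketched with NumPy. This is a toy stand-in for the optimized Xeon Phi implementation; the data and its shape are synthetic:

```python
import numpy as np

# synthetic stand-in for fMRI data: 20 voxels x 50 time points
rng = np.random.default_rng(seed=0)
data = rng.standard_normal((20, 50))

# full correlation matrix: Pearson correlation of every voxel pair,
# one row of time-series samples per voxel
corr = np.corrcoef(data)  # shape (20, 20), symmetric, unit diagonal
```

The "exhaustive" part is what makes FCMA expensive: real datasets have tens of thousands of voxels, so the matrix has hundreds of millions of entries, which is why a coprocessor cluster is used.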
“To be successful in high-performance computing (HPC) today, it is no longer enough to sell good hardware: vendors need to develop an ‘ecosystem’ in which other hardware companies use their products and components; in which system administrators are familiar with their processors and architectures; and in which developers are trained and eager to write code both for the efficient use of the system and for end-user applications. No one company, not even Intel or IBM, can achieve all of this by itself anymore.”