“This is an exciting time in high performance computing,” said Prof. Simon McIntosh-Smith, leader of the project and Professor of High Performance Computing at the University of Bristol. “Scientists have a growing choice of potential computer architectures, including new 64-bit ARM CPUs, graphics processors, and many-core CPUs from Intel. Choosing the best architecture for an application can be a difficult task, so the new Isambard GW4 Tier 2 HPC service aims to provide access to a wide range of the most promising emerging architectures, all using the same software stack.”
Today the Mont-Blanc European project announced it has selected Cavium’s ThunderX2 ARM server processor to power its new HPC prototype. The new Mont-Blanc prototype will be built by Atos, the coordinator of phase 3 of Mont-Blanc, using its Bull expertise and products. The platform will leverage the infrastructure of the Bull sequana pre-exascale supercomputer range for network, management, cooling, and power. Atos and Cavium have signed a collaboration agreement to develop this new platform, making Mont-Blanc an alpha site for ThunderX2.
A new supercomputer has been deployed at the Jülich Supercomputing Center (JSC) in Germany. Called QPACE3, the new 447 Teraflop machine is named for “QCD Parallel Computing on the Cell.” “QPACE3 is being used by the University of Regensburg for a joint research project with the University of Wuppertal and the Jülich Supercomputing Center for numerical simulations of quantum chromodynamics (QCD), which is one of the fundamental theories of elementary particle physics. Such simulations serve, among other things, to understand the state of the universe shortly after the Big Bang, for which very high computing power is required.”
In this video from SC16, Dr. Eng Lim Goh from HPE/SGI discusses new trends in HPC Energy Efficiency and Deep Learning. “SGI’s leadership in data analytics derives from deep expertise in High Performance Computing and over two decades delivering many of the world’s fastest supercomputers. Leveraging this experience and SGI’s innovative shared and distributed memory computing solutions for data analytics enables organizations to achieve greater insight, accelerate innovation, and gain competitive advantage.”
SC16 returns to Salt Lake City on Nov. 13-18. The six-day supercomputing event features internationally known expert speakers, cutting-edge workshops and sessions, a non-stop student competition, the world’s largest supercomputing exhibition, panel discussions and much more. “No other annual event better showcases the revolutionary advances and possibilities of high performance computing than the annual ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis. From the impact of HPC on the future of medicine, to its transformative power in developing countries and “smart cities,” SC is the premier venue for presenting leading-edge HPC research.”
Datacenters designed for High Performance Computing (HPC) applications are more difficult to design and construct than those built for more basic enterprise applications. Organizations creating these datacenters need to be aware of, and design for, systems that are expected to run at or near maximum performance for the lifecycle of the servers.
Over at the Parallella Blog, Andreas Olofsson from Adapteva writes that the company has reached an important milestone on its next-generation Epiphany-V chip. “Thanks to a generous grant from DARPA, we just taped out a 16nm chip with 1024 64-bit processor cores. To give a comparison, our 4.5B transistor chip is smaller than Apple’s latest A10 chip and has 256 times as many processors. The chip offers an 80x processor density advantage over high performance chips from Intel and Nvidia.”
Ozalp Babaoglu from the University of Bologna presented this Google Talk. “At exascale, failures and errors will be frequent, with many instances occurring daily. This fact places resilience squarely as another major roadblock to sustainability. In this talk, I will argue that large computer systems, including exascale HPC systems, will ultimately be operated based on predictive computational models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing “nuts-and-bolts” operations.”