ICL in Knoxville is the Newest Intel Parallel Computing Center (IPCC)

The Innovative Computing Laboratory (ICL) at the University of Tennessee, Knoxville has been named the newest Intel Parallel Computing Center (IPCC).

Record STAC Performance with Haswell EP and Xeon Phi

STAC, the Securities Technology Analysis Center, has published new STAC-A2 benchmark results showing record performance on Intel Haswell EP processors coupled with the Intel Xeon Phi coprocessor.

Announcing the Cray XC40 Supercomputer with DataWarp Technology

Burst Buffers are here! Today Cray announced the launch of the Cray XC40 supercomputer and the Cray CS400 cluster supercomputer – the next-generation models of the company’s high-end supercomputing systems and cluster solutions. Based on the new Intel Xeon processor E5-2600 v3 product family, formerly code-named “Haswell,” the new systems deliver a 2x performance improvement over the previous Cray XC and Cray CS systems.

Podcast: The Return of the Intel Parallel Universe Computing Challenge

In this Chip Chat podcast, Mike Bernhardt, Community Evangelist for HPC and Technical Computing at Intel, discusses the importance of code modernization as HPC moves to multi- and many-core systems. According to Bernhardt, markets as diverse as oil and gas, financial services, and health and life sciences can see dramatic performance improvements in their code through parallelization.
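
To make the idea concrete, here is a minimal, hypothetical C sketch of the kind of modernization Bernhardt describes: a serial reduction loop parallelized with a single OpenMP pragma so the same source can use every core of a Xeon or Xeon Phi. The loop body, problem size, and build flags are invented for the example.

    #include <stdio.h>
    #include <math.h>

    #define N 10000000  /* illustrative problem size */

    int main(void) {
        static double data[N];
        double sum = 0.0;

        /* One pragma modernizes the serial loop: each thread handles a
         * slice of the iterations and the reduction clause combines the
         * partial sums. Compiled without OpenMP, the pragma is ignored
         * and the loop still runs correctly in serial. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++) {
            data[i] = sin(i * 0.001);
            sum += data[i];
        }

        printf("sum = %f\n", sum);
        return 0;
    }

Built with, e.g., gcc -O2 -fopenmp example.c -lm, the loop spreads across however many threads the runtime provides; without -fopenmp it simply runs serially.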

Benchmarking Intel Haswell vs. Xeon Phi on the Libor Finance Code

Over at the Xcelerit Blog, Jörg Lotze benchmarks Intel’s new Haswell (Xeon E5 v3 series) against the company’s flagship Xeon Phi coprocessor using a popular computational finance code. As the test application, he uses a Monte Carlo simulation that prices a portfolio of LIBOR swaptions. “The Xeon Phi accelerator wins the race clearly for double precision, reaching around 1.8x speedup vs. the Haswell CPU. However, this drops to 1.2x in single precision. The main reason is that the single precision version requires only half the memory and hence makes better use of the cache.”
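
For a feel of how such a benchmark is structured, the deliberately simplified C sketch below averages a caplet-style payoff over lognormal forward-rate paths. The dynamics, payoff, and parameter values are toy stand-ins chosen for illustration, not the LIBOR market model code Xcelerit actually benchmarks.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Box-Muller: turn two uniform draws into one standard normal. */
    static double randn_draw(void) {
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
    }

    int main(void) {
        const int paths = 100000;              /* number of Monte Carlo paths */
        const double L0 = 0.05, vol = 0.2;     /* toy initial rate and volatility */
        const double T = 1.0, strike = 0.05;   /* maturity and strike */
        double payoff_sum = 0.0;

        for (int p = 0; p < paths; p++) {
            /* Evolve one forward rate under lognormal dynamics. */
            double L = L0 * exp(-0.5 * vol * vol * T + vol * sqrt(T) * randn_draw());
            /* Caplet-style payoff; a real swaption aggregates many rates. */
            payoff_sum += fmax(L - strike, 0.0);
        }

        printf("estimated payoff: %f\n", payoff_sum / paths);
        return 0;
    }

Production implementations vectorize and thread the path loop, which is where Haswell’s AVX2 units and the Phi’s wide SIMD lanes earn the speedups Lotze measures.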

Advancing Science in Alternative Energy & Bioengineering with Many-Core Processors

“We believe that the Intel path forward for HPC architecture and software offers a solution that allows for a simpler code base and reduced software effort. Optimizations for GPUs do not necessarily improve performance on CPUs.”

ZIH in Dresden is the Latest Intel Parallel Computing Center

The Center for Information Services and High Performance Computing (ZIH) at TU Dresden has been established as an Intel Parallel Computing Center (IPCC).

Mark Seager on Why the Best is Yet to Come for HPC

Mark Seager, CTO of Technical Computing Ecosystem at Intel

“The single most important truth about high-performance computing (HPC) over the next decade is that it will have a more profound societal impact with each passing year. The issues that HPC systems address are among the most important facing humanity: disease research and medical treatment; climate modelling; energy discovery; nutrition; new product design; and national security. In short, the pace of change and of enhancements in HPC performance – and its positive impact on our lives – will only grow.”

Overview of the MVAPICH Project: Status and Roadmap

DK Panda

“Dr. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand, High-Speed Ethernet and RDMA over Converged Enhanced Ethernet (RoCE). The MVAPICH2 (High Performance MPI over InfiniBand, iWARP and RoCE) and MVAPICH2-X software libraries, developed by his research group, are currently being used by more than 2,150 organizations worldwide (in 72 countries).”
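
As a minimal usage sketch (assuming a working MVAPICH2 installation), the standard C program below is representative of what such deployments build and run. Nothing in the source is MVAPICH2-specific; support for InfiniBand, iWARP, and RoCE lives entirely in the library beneath the MPI interface.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
        printf("hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Compiled with the mpicc wrapper that MVAPICH2 ships (mpicc hello.c -o hello) and launched with, e.g., mpirun -np 4 ./hello, the same source runs over whichever interconnect the library was configured for.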

Interview: Behind the Scenes at the DEEP Project

Estela Suarez, Project Manager of DEEP & DEEP-ER at Jülich

“In terms of the hardware, one of the biggest successes surely was to make the Intel Xeon Phi boot via the Extoll network. This might not sound so special, but for the DEEP project it is – because this basically is the essential milestone for proving our architectural concept: The Cluster-Booster approach. In traditional heterogeneous architectures the accelerators cannot boot without a host CPU. Our aim was to develop a cluster – made up of usual CPUs – and a booster – made up of accelerators – that can both act autonomously while being interconnected via two networks.”