Interview with Ivan Girotto of the Irish Centre for High-End Computing.


When you think of “green” you might think of sustainability and energy initiatives, and you might also think of Ireland. But until now, not many of us would associate HPC or exascale with Ireland.

The world-class researchers at the Irish national supercomputing centre aim to change that.

We caught up with Ivan Girotto of the ICHEC research team at ISC’11 and are pleased to bring you this profile interview.

The Exascale Report: Can you give us some background and an overview of the ICHEC?

Girotto: The scientific modeling community in Ireland experienced tremendous growth from 2003 to 2005, creating greater demand for high-end computing facilities and technology. Recognizing that Irish scientists needed access to advanced computational facilities in order to be internationally competitive in research, Science Foundation Ireland (SFI) and the Higher Education Authority (HEA) funded the establishment of the Irish Centre for High-End Computing (ICHEC) in mid-2005.

In the 6 years since its creation, ICHEC has grown steadily from a handful of staff in 2005 to a current establishment of 24, recruited from around the world. Today, ICHEC provides resources to roughly 300 researchers from research institutions throughout the country.

ICHEC operates two main HPC systems. The larger is an SGI Altix ICE 8200EX with around 4,000 cores; the second is a small fat-node cluster with approximately 500 cores. The ICHEC staff is tasked with providing computational resources that are reliable, well configured and fit for purpose for the range of codes used within the Irish research community.

In addition to these conventional clusters, ICHEC is about to open a moderately sized GPU computing production machine (c. 50 GPUs). This will complete our GPU development infrastructure and support our strong involvement in GPU computing research, for which ICHEC was recognized by NVIDIA as a CUDA Research Center in June 2010.

More important to ICHEC’s mission than the equipment, however, are the software development expertise and skill sets our staff bring to the table. “Research enablement” is the phrase we use to describe helping Irish researchers use High Performance Computing (HPC) in the most effective way.

Members of the ICHEC staff are regularly embedded as partners within individual research groups, providing direct support in developing code and workflows to tackle challenging problems at the frontiers of science. Since the centre’s creation, this partnership model has demonstrated that it can move Irish researchers from relatively small cluster environments to world-class infrastructures.

ICHEC supports users from across the full spectrum of disciplines, from linguistics to number theory, materials science to astrophysics, and computational chemistry to bioinformatics, among others.

ICHEC has an ambitious vision to be among the leading supercomputer centres of Europe, particularly in terms of the quality of its hands-on and mentoring support for the research community. ICHEC has already demonstrated that it can support Irish researchers in becoming world leaders. For example, we mentored one of our users to port and run his code across approximately 300,000 cores of the BlueGene/P system in Juelich.

TER: How do you anticipate ICHEC changing over the next several years as organizations start to expect more capability and more capacity in anticipation of moving to exascale? What role will ICHEC play in the development of exascale, from testing hardware, interconnects, storage appliances and so on, to the development of software and applications?

Girotto: Exascale is not a target for Ireland at present. Ireland’s size and the limited funding available mean we cannot be involved in building the very largest Tier-0 HPC infrastructure. However, the ICHEC staff, working in collaboration with a number of Irish researchers, have shown that in some cases locality is no longer a limiting issue. Our researchers, with the support of ICHEC staff members, have successfully run scientific simulations on both European (PRACE) and US DOE (INCITE) world-class capability systems.

As far as exascale is concerned, and indeed as far as current HPC infrastructures are concerned, we believe the main challenge lies in developing software applications that can sustain performance on these highly parallel systems. The trend toward massive parallelism appears inescapable, and highly parallel software is a clear requirement for achieving reasonable efficiency, as projected in the majority of technology roadmaps for the next generation of supercomputers. This will dramatically affect how scientists and HPC experts must work together in the design of efficient, fit-for-purpose algorithms and the development of applications. With this concept of cooperative software development and engineering, we at ICHEC believe that research enablement and education will continue to be our main contribution and focus in the years ahead as platforms move from petascale to exascale infrastructures.

TER: On the particular topic of “co-design”: what are your thoughts on co-design as it applies to reaching exascale by the end of the decade? Is co-design really going to work? When commercial companies and research organizations need to compete with each other for revenue and funding, and want to closely guard their own intellectual property, what incentive do they have to take part in such a collaborative effort?

Girotto: Co-design is essential to exascale computing because no single company, no matter how large, has the ability to address the needs of all users for all exascale computational problems. An excellent example of co-design is the NVIDIA/Mellanox collaboration on GPUDirect. As we are all aware, data movement is expensive both in power (watts consumed) and in performance (increased time to solution). With GPUDirect, MPI communication throughput has been increased simply by sharing a buffer between the two vendors’ drivers. While simple in concept and clearly beneficial, it took extensive vendor cooperation to make it happen. As our colleague Rob Farber (an ICHEC visiting scientist) reports in his upcoming textbook “CUDA Application Design and Development”, this has yielded a 30% increase in MPI throughput, which is significant for MPI applications and exascale computation. The end result is that both companies benefit from the collaboration, as each gets to sell more product as a result of the increased performance, and the community at large benefits from the improvement.
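The pattern this enables can be shown with a minimal sketch, assuming an MPI library built with CUDA support: a device buffer is handed directly to MPI instead of being staged through a host copy first. The buffer size, ranks and tag below are illustrative, not taken from any ICHEC code.

/* Minimal sketch, assuming a CUDA-aware MPI build: the device pointer goes
 * straight to MPI_Send/MPI_Recv, and the MPI/driver stack handles the data
 * movement. Without this support, each transfer would first require a
 * cudaMemcpy to a host buffer. Sizes, ranks and the tag are illustrative. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                     /* 1M doubles, illustrative */
    double *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(double));
    cudaMemset(d_buf, 0, n * sizeof(double));  /* initialize device buffer */

    if (rank == 0) {
        /* Device buffer passed directly to MPI: no explicit host staging. */
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}

The same calls with host buffers work on any MPI; what the GPUDirect-style driver cooperation removes is the extra copy on the path from GPU memory to the network.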

Gains in performance such as these could allow us to remove some of the barriers we currently face as we try to scale up tightly coupled, MPI-based GPU-enabled codes. Indeed, we can expect a major effect on a code such as Quantum Espresso, which ICHEC is currently porting to GPUs.

Another excellent example, covered in Farber’s Scientific Computing article “Competing with C++ and Java”, is the PGI CUDA x86 compiler. This product enables transparent compilation of CUDA to x86, a transparency that can only be achieved through close collaboration and the sharing of proprietary information between the two companies. The benefit is that the CUDA development platform is now positioned so companies can use it for *all* application development; CUDA is no longer a niche language for GPU development alone. Both companies benefit from the cooperation as they both extend their reach in the market. Exascale users benefit because CUDA has been proven to support strong scaling to more than a million concurrent threads of execution.
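As a rough illustration, not taken from the article, the kind of CUDA source in question is ordinary data-parallel code such as the SAXPY kernel below, launched over millions of GPU threads; the same source is what a CUDA-to-x86 compiler would retarget to the CPU. All names and sizes here are illustrative.

/* Illustrative CUDA SAXPY: one thread per element, millions of threads in
 * flight on a GPU. Names and problem size are examples only. */
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 21;                   /* 2M elements -> 2M threads */
    size_t bytes = n * sizeof(float);
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);  /* blocks of 256 threads */
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);            /* expect 5.0 */
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}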

For related stories, visit The Exascale Report Archives.