Workhorse Conte Cluster at Purdue goes to Pasture


Over at Purdue, Adrienne Miller writes that the university's powerful Conte supercomputer, which was retired on August 1 after five years of service, was crucial to the development of powerful nanotechnology tools.

"Conte was simply essential for all this development," says Tillmann Kubis, a research assistant professor of electrical and computer engineering who led the development of the most recent iteration of the tool, known as NEMO5. The software, which consists of more than 700,000 lines of code, has been commercialized through a partnership with Silvaco, Inc. Kubis estimates that more than 80 percent of NEMO5 was developed on Conte.

When it was built in 2013, Conte succeeded its predecessor Carter as the fastest supercomputer dedicated solely to researchers on a single campus, outside the national supercomputing centers. It debuted at number 28 on the Top500 list of the world's most powerful supercomputers and remained on the list for its entire lifespan, clocking in at number 282 on the June 2018 ranking.

Using Conte, a Purdue research group developed the Nanoelectronics Modeling (NEMO) suite, a set of simulation tools optimized for high-performance computing. NEMO is especially useful for research in solid-state electronics and heat transport, and is used by both academic researchers and semiconductor companies, among other things to develop the next generation of more powerful electronics.

Conte was built in partnership with HP, Intel and Mellanox using technology that was, at the time, ahead of even the leading edge. It achieved a top processing speed of more than 1.3 petaflops and a sustained, measured maximum speed of more than 943 teraflops. A person doing one calculation per second would need tens of millions of years to do what Conte could do in a single second. The supercomputer was named after Samuel Conte, who helped establish Purdue's first-in-the-nation computer science department.
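That "tens of millions of years" comparison is easy to sanity-check from the figures in the article. A minimal back-of-the-envelope sketch, assuming a human working at exactly one calculation per second against Conte's sustained speed of about 943 teraflops:

```python
# Back-of-the-envelope check of the human-vs-Conte comparison.
# Assumption: a person performs 1 calculation per second, nonstop.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 seconds in a year
sustained_flops = 943e12                # ~943 teraflops (article's sustained figure)

# Years a human would need to match one second of Conte's sustained output
years = sustained_flops / SECONDS_PER_YEAR
print(f"~{years / 1e6:.0f} million years")
```

At the sustained rate this works out to roughly 30 million years, and at the 1.3-petaflop peak, closer to 40 million, consistent with the article's "tens of millions of years."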

Conte had a worldwide impact, serving as the computational backend for NanoHUB, a nanotechnology research platform led by Purdue’s Network for Computational Nanotechnology (NCN) that’s helped 13,000 users perform computer simulations. “All those simulations were run on Conte,” says Gerhard Klimeck, professor of electrical and computer engineering and director of the NCN.

NanoHUB's user-friendly, web-based simulation tools have put a supercomputer's powerful parallel computing capabilities in the hands of researchers and students around the planet who wouldn't otherwise have access to such computational power.

Conte was the sixth supercomputer built as part of Purdue’s Community Cluster Program, an innovative shared computing infrastructure that lets faculty access more computational resources than they could afford on their own, without having to maintain the machines. Purdue will build its eleventh community cluster in as many years later this year.

Klimeck cites the hands-off maintenance and instant access to the machines as reasons why he chooses to use the community clusters, which are operated for faculty partners by ITaP Research Computing. “It’s always available – we don’t have to write a grant to get access to it,” he says.
