In an invited talk in one of the committee rooms of the House of Lords on 18 April, Professor Meuer gave British Peers, and luminaries of the UK computer community, a tour-de-force presentation on the development of supercomputing from the Cray 1 in the 1970s to the advent of exascale.
He emphasised that the demands on high-performance computing are changing and that data crunching is becoming as important a topic as number crunching. However, he said, the conventional tools for assessing the performance of supercomputers – in particular the Linpack benchmark upon which the Top500 listing is based – may not necessarily be the most appropriate measures in such data analysis applications. He stressed that alternative metrics, including Jack Dongarra’s HPC Challenge benchmarks and the Graph500 initiative, were important in assessing machines for specific purposes.
The value of the Top500 benchmark is that it has been applied consistently over a period of nearly 20 years (celebrations of the 20th anniversary will take place in Salt Lake City in November this year). When plotted on a logarithmic scale, the increase in supercomputing power over that period has been a remarkably straight line and he saw no reason to doubt that the trend would continue into the future.
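A straight line on a logarithmic scale corresponds to a constant annual growth factor. As a minimal sketch of that point, the snippet below derives the implied growth rate from two illustrative endpoints (the figures used here, roughly 60 gigaflops for the number-one system in 1993 and roughly 16 petaflops in 2012, are assumptions for the purpose of the example, not figures quoted in the talk):

```python
import math

# Assumed, illustrative endpoints for the fastest Top500 system:
# ~59.7 gigaflops in 1993, ~16.3 petaflops in 2012.
perf_1993 = 59.7e9
perf_2012 = 16.3e15
years = 2012 - 1993

# A straight line on a log plot means performance grows by a
# constant factor each year; solve for that factor.
annual_growth = (perf_2012 / perf_1993) ** (1 / years)

# Equivalent doubling time in months.
doubling_months = 12 * math.log(2) / math.log(annual_growth)

print(f"annual growth factor: {annual_growth:.2f}")
print(f"doubling time: {doubling_months:.1f} months")
```

With these assumed endpoints the calculation gives a growth factor of nearly two per year, i.e. a doubling time of roughly a year, which is what the straight line on the Top500 log plot represents.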
The consistency of the growth in compute power over the period is all the more remarkable given that the underlying technologies have changed significantly in that time, he pointed out. ‘For me, the first real supercomputer was the Cray 1 vector supercomputer in 1976,’ he said. But the technology has since shifted to massively parallel architectures, to more conventional processor chips and, most recently, to designs incorporating GPU-type chips.
Professor Meuer recalled that the Cray 2 was the most powerful supercomputer in the world in 1986. Its price tag of $22M was so high that when one was purchased for Stuttgart, the deal was allegedly signed only after ‘a candlelit dinner’ between the Minister-President of Baden-Württemberg and the then CEO of Cray Research, John Rollwagen. For comparison, Professor Meuer said, the Apple iPad2 in 2011 had two-thirds of the processing power of the Cray 2 at a price tag of only $500 – a reduction in price by a factor of 44,000.
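The price comparison is simple arithmetic, and can be checked directly (the $22M and $500 figures are those quoted in the talk):

```python
# Prices quoted in the talk.
cray2_price = 22e6   # Cray 2, 1986
ipad2_price = 500.0  # Apple iPad2, 2011

# Raw reduction in price, as quoted: a factor of 44,000.
price_ratio = cray2_price / ipad2_price
print(f"price reduction factor: {price_ratio:,.0f}")
```

Note that this is the ratio of raw prices; since the iPad2 delivers only two-thirds of the Cray 2's processing power, the improvement in price per unit of performance is somewhat smaller, though still enormous.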
He raised the radical question of whether we need new computer architectures to cope with ‘Big Data’. In traditional computational sciences, he said, the problems fit into memory; the methods require high-precision arithmetic; and the computation is based on static data. Recently, interest has grown in data-intensive sciences, where the problems do not fit into memory; variable-precision or integer-based arithmetic is required; and the computations are based on dynamic data structures. Such problems arise from experiments such as the Large Hadron Collider at CERN, the European Laboratory for Particle Physics, where the task is the analysis (data mining) of raw data from high-throughput instruments.
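The defining constraint of such workloads is that the data cannot all be held in memory at once, so it must be processed as it streams past. As a minimal sketch of this idea (the function and the simulated chunked input are illustrative, not drawn from the talk), the example below computes an aggregate over data delivered in chunks, so that no more than one chunk is ever resident in memory:

```python
def streaming_mean(chunks):
    """Compute the mean of a dataset delivered chunk by chunk,
    so the full dataset never has to fit into memory."""
    total, count = 0.0, 0
    for chunk in chunks:
        total += sum(chunk)
        count += len(chunk)
    return total / count

# Simulated high-throughput instrument output: ten chunks of
# 1,000 values each, produced lazily by a generator.
chunks = (
    [float(i) for i in range(start, start + 1000)]
    for start in range(0, 10000, 1000)
)
result = streaming_mean(chunks)
print(result)  # mean of 0..9999, i.e. 4999.5
```

The same one-pass, bounded-memory pattern underlies practical data-mining pipelines for instruments such as those at the LHC, although real systems of course operate at vastly greater scale and in parallel.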
Looking to the future, Professor Meuer reminded his audience of the perennial problem that to increase the number of transistors per chip, the transistors must become smaller and smaller, so the manufacturing process must be able to define ever-smaller feature sizes year after year. He conceded that the ultimate limits of conventional silicon technology would be reached within the next few decades. Perhaps, he speculated, it would soon be time to turn to more exotic technologies, such as quantum computing. He concluded by citing Mark B. Ketchen, manager of the physics of information group at IBM’s Thomas J. Watson Research Center in Yorktown Heights, New York, on quantum computing: ‘In the past, people have said, “maybe it’s 50 years away, it’s a dream, maybe it’ll happen sometime”. I used to think it was 50. Now I’m thinking like it’s 15 or a little more. It’s within reach. It’s within our lifetime. It’s going to happen.’