Over at The Exascale Report, Bill Gropp from NCSA writes that to make Exascale a reality, we need to stop talking just about Exascale.
We can start by moving away from a focus on FLOPS (especially as measured by benchmarks that we know are misleading but that the greater computer science community thinks we still take seriously) and focus on solving the hardest, toughest, most challenging computational problems. This also provides the best guidance and rationale for the development of the new technologies needed to realize the much faster machines we all believe are essential. Yes, not having such a simple metric as ExaFLOPS makes it harder to quantify the goals, but we all know that an effective HPC system can’t be described by a single number.
Over at Datacenter Knowledge, Intel’s Winston Saunders looks at how the most recent Top500 and Green500 machines stack up in terms of Exascalar, the “logarithmic distance” to 10^18 FLOPS in a 20 megawatt power envelope.
The November 2012 Exascalar (Performance-Efficiency Scalar) Top 10 list is shown below. The biggest change is at the top of the list: the new DOE/SC/Oak Ridge National Laboratory system posts a best-ever Exascalar of 2.22. Since Exascalar is logarithmic, this equates to about a factor of 166 from the Exascalar goals in efficiency and performance. The peak Exascalar in June 2012 was 2.26, so the new result represents roughly a 10 percent improvement over the June 2012 list.
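The “factor of 166” follows directly from the logarithmic scale. A minimal sketch of the arithmetic (a simplified single-axis reading of Exascalar as a log10 distance; Saunders’ full definition combines the performance and efficiency axes):

```python
# Exascalar is a logarithmic distance to the Exascale goal
# (10^18 FLOPS in a ~20 MW envelope). A score of 2.22 means the
# system is about 10**2.22 ~ 166x away from that goal.
def factor_from_goal(exascalar):
    """Convert a log10-scaled Exascalar score to a linear factor."""
    return 10 ** exascalar

nov_2012 = 2.22   # top system, November 2012 list
jun_2012 = 2.26   # top system, June 2012 list

print(round(factor_from_goal(nov_2012)))   # ~166x from the goal

# The 0.04 drop in Exascalar between lists is itself ~10% in linear terms:
print(round((factor_from_goal(jun_2012 - nov_2012) - 1) * 100))  # ~10 (%)
```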
The Business Standard reports that Nvidia is collaborating with the Indian Institute of Technology Delhi (IIT Delhi) to reach its goal of achieving Exascale computing by 2017. The new Exascale Research Lab (ERL) will provide advanced ongoing research, testing, and technology development in a variety of areas including processor architecture, circuits, memory architecture, high-speed signaling, programming models, algorithms, and applications.
“Nvidia and IIT Delhi share the common vision of developing technologies that boost computing performance to exascale levels in order to help find solutions to next-generation problems,” said Dr Subodh Kumar, Professor, Department of Computer Science & Engineering at IIT Delhi. “Working with NVIDIA presents significant opportunities for innovation. The pool of talent available at our institute coupled with the access to the latest GPU technology is a promising prospect that will surely propel our race to creating radical, ground-breaking technologies.”
Over at ExtremeTech, Joel Hruska writes that Sandia National Laboratories has launched a new program to speed development of an exascale-capable operating system. The eXascale Programming Environment and System Software (XPRESS) project will receive $2.3 million per year for the next three years from the Department of Energy.
The project’s director, Ron Brightwell, notes that the operating systems and message-passing programs currently used on modern supercomputers are 15-20 years old and were written in an era when individual nodes with hundreds of processors weren’t even on the drawing board. XPRESS, he states, “aims to provide a system software foundation designed to maximize the performance and scalability of future large-scale parallel computers.”
Dan Reed from the University of Iowa writes that the design of Exascale systems needs to take into account Little’s Law, a simple yet subtle formula that relates performance (throughput), delay (response time), and the number of interacting units (customers).
Little’s Law also says there is no free lunch. Optimizing for high system utilization leads to large wait times for customers, as anyone who has waited on a busy telephone reservation line knows all too well. Conversely, ensuring short waiting lines requires system utilization to be low. Simply put, one must choose between high system efficiency and short customer wait times. One cannot (in general) have both.
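The tradeoff can be made concrete with Little’s Law (L = λW) and the textbook M/M/1 queue result for waiting time; the numbers below are illustrative, not from Reed’s article:

```python
# Little's Law: L = lambda * W
# (average number in system = arrival rate * average time in system)
arrival_rate = 2.0       # customers per minute
time_in_system = 3.0     # minutes per customer (wait + service)
avg_in_system = arrival_rate * time_in_system
print(avg_in_system)     # 6.0 customers in the system on average

# The "no free lunch": for an M/M/1 queue with service rate mu,
# W = 1 / (mu - lambda), so time in system explodes as utilization
# rho = lambda / mu approaches 1.
mu = 10.0                # service rate, customers per minute
for lam in (5.0, 9.0, 9.9):
    rho = lam / mu
    W = 1.0 / (mu - lam)
    print(f"utilization {rho:.0%}: avg time in system {W:.2f} min")
```

At 50 percent utilization the average time in system is 0.20 minutes; at 99 percent it is 10 minutes, fifty times worse, which is exactly the high-efficiency-versus-short-wait tension described above.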
Dense integration of optical circuits capable of transmitting and receiving at high data rates will solve the limitations of congested data traffic in current interconnects. IBM’s CMOS nanophotonics technology has demonstrated transceivers that exceed a 25 Gbps data rate. In addition, the technology is capable of feeding a number of parallel optical data streams into a single fiber by utilizing compact on-chip wavelength-division multiplexing devices. The ability to multiplex large data streams at high data rates will allow future scaling of optical communications capable of delivering terabytes of data between distant parts of computer systems.
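The scaling argument behind wavelength-division multiplexing is simple arithmetic. The channel counts below are hypothetical (the article does not give a figure); they just show how many 25 Gbps streams on one fiber add up:

```python
# Back-of-the-envelope WDM scaling: N wavelength channels per fiber,
# each carrying a >= 25 Gbps stream. Channel counts are illustrative.
per_channel_gbps = 25

for channels in (8, 16, 32, 64):
    total_gbps = channels * per_channel_gbps
    total_gb_per_s = total_gbps / 8   # convert gigabits/s to gigabytes/s
    print(f"{channels:2d} channels -> {total_gbps:5d} Gbps "
          f"({total_gb_per_s:.0f} GB/s per fiber)")
```

Even a modest 64-channel fiber at 25 Gbps per channel reaches 1.6 Tbps (200 GB/s), which is how a handful of fibers can move terabytes between distant parts of a system.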
Over at The Exascale Report, Allinea CTO David Lecomber writes that new, innovative approaches could lead to the extreme-scale development tools needed for Exascale machines.
A challenge for the future is to ensure that tool performance is maintained. The extra nodes are probably not the greatest challenge: the tree architecture in Allinea DDT can handle that. The primary concern will be to ensure that the step-change in chip-level parallelism is handled well by the tools and that will lead to interesting questions for chip and device vendors and for the operating systems developers as well as tool vendors.
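Why a tree architecture handles the extra nodes: aggregating state from N nodes up a tree takes on the order of log N rounds rather than N sequential messages to a single frontend. A minimal sketch of that idea (illustrative only, not Allinea DDT’s actual implementation):

```python
def tree_aggregate(values, merge):
    """Pairwise tree reduction: combines N leaf values in roughly
    log2(N) rounds rather than N-1 sequential steps."""
    rounds = 0
    while len(values) > 1:
        # Each round, adjacent pairs merge in parallel across the tree.
        values = [merge(values[i], values[i + 1]) if i + 1 < len(values)
                  else values[i]
                  for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

# e.g. summing a per-node status count from 1024 debugger daemons
statuses = [1] * 1024
total, rounds = tree_aggregate(statuses, lambda a, b: a + b)
print(total, rounds)   # 1024 reached in 10 rounds (log2(1024) = 10)
```

Doubling the node count adds only one round, which is why extra nodes are “probably not the greatest challenge” compared with the step-change in on-chip parallelism.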
Over at The Exascale Report, John Barr writes that the European Commission has been funding research activities that expand the use of Petascale facilities by researchers across Europe and lay some of the groundwork for the move towards Exascale. However, the European Exascale strategy is evolving, with a much stronger focus on support for industry – both ISVs and the European HPC supply side industry – appearing in recent initiatives.
The Communication proposed that a world class HPC infrastructure should be provided for European academic and industrial users (with a focus on support for SMEs), and that Europe’s position as a supplier of HPC technologies should be strengthened. The governance for this activity should cover both industry (through an industry led technology platform) and science (through PRACE). It is proposed that the annual funding for European HPC R&D be increased from €630 million in 2009 to €1.2 billion to make this activity competitive at a global level. The additional €600 million would come from national budgets, the Commission and industrial users. Half of the new funding would be to fund the procurement of HPC systems and testbeds, with the remainder split evenly between training and software.
John Shalf, one of our celebrated Rock Stars of HPC, has been appointed CTO at NERSC. Shalf will also continue to serve in his current role as head of the Computer and Data Sciences Department in Berkeley Lab’s Computational Research Division (CRD).
NERSC is the primary HPC facility for scientific research sponsored by the DOE’s Office of Science. As Chief Technology Officer, Shalf will help NERSC develop a plan to achieve exascale performance.
“A key goal of DOE’s exascale program is to develop high performance scientific computers that deliver a thousand times the performance of today’s most powerful computers at all scales, while using less than twice the power, by the end of the next decade. The demands of energy efficiency are driving deep changes that will change the way we do computing at all scales, not just exascale. NERSC will take an active role to work with industry as a public/private partnership to guide HPC designs and bring the DOE user community along in this time of great transition.”
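The efficiency implied by that goal can be sketched in two lines: an exaflop within the commonly cited ~20 MW envelope requires about 50 gigaflops per watt. The 2012-era comparison figures below are approximate (roughly Titan-class) and are an assumption for illustration:

```python
# Implied energy efficiency of the commonly cited Exascale target:
# 10^18 FLOPS within roughly a 20 MW power envelope.
target_flops = 1e18
power_watts = 20e6

gflops_per_watt = target_flops / power_watts / 1e9
print(gflops_per_watt)   # 50.0 GFLOPS per watt required

# For comparison, a 2012-era ~17.6 PFLOPS system drawing ~8.2 MW
# (approximate Titan-class figures) delivers about 2.1 GFLOPS/W,
# so the goal demands roughly a 20-25x efficiency improvement.
today_gflops_per_watt = 17.6e15 / 8.2e6 / 1e9
print(round(gflops_per_watt / today_gflops_per_watt, 1))
```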
The DOE has yet to design or develop actual exascale systems, but Herrod remains confident that future investments could make them a reality. He is less sure of the systems’ importance to Congress, as lawmakers have yet to decide on funding. That said, extreme science is not waiting. Scientists and researchers have, in some cases, reached the limits of what petascale can provide and stand at a crossroads waiting for exascale and funding to come together. As Thomas Sterling, associate director of the Center for Research in Extreme Scale Technologies (CREST) at Indiana University asked at the end of 2011, “Can the US influence exascale direction and maintain a strategic lead in its deployment?” We have arrived at the end of 2012, and the answer remains to be seen.
It’s a good read. And while future U.S. policy and science budgets are up in the air these days, the resolve to reach this next leap in computation remains as strong as ever. Read the Full Story.
Over at the ISC Blog, HLRS Director Michael Resch concurs with NCSA’s Bill Kramer on the pitfalls of the TOP500, but he says the List is not the one to blame.
Centers will have to change. Continuing to make the mistakes that Bill has kindly pointed out will make users go away and hence in the long term will severely harm those centers which still buy systems for a high TOP500 ranking. However, those who keep improving the services for their users and keep working on workflows and applications will have a really good chance to survive. Knowing that Bill Kramer’s NCSA is a strong supporter of such a user driven approach and that countries like China, Russia, South Korea, Taiwan and Singapore follow in this trail I look forward to exciting times for HPC.
DDN has launched a $100,000 prize to recognize scientific breakthroughs enabled by high performance computing with major research institutions pledging their support. The global program focuses on scientific advancement and insight through the exploration of novel approaches to Big Data analysis, Exascale processing, cloud computing, memory-class storage and other emerging developments in HPC.
“For more than 12 years, DDN’s leading-edge storage solutions for content-intensive computing have helped universities around the world redefine the boundaries of science and research,” said Alex Bouzari, CEO and cofounder, DDN. “With WARP, we will accelerate the incredible changes of the Big Data era by bringing together the best minds in our industry, and also by providing needed support to the next generation of groundbreaking researchers.”
In conjunction with the WARP program, the company also announced $100,000 in annual prizes to recognize emerging scientific breakthroughs enabled by technology, including a $75,000 first prize and a $25,000 second prize each year. A WARP board of advisors will be established to influence the direction of the WARP program and to select the annual WARP Prize winners. Read the Full Story.
In this video from SC12, Paul Kinyon from the SGI product management team describes how the company is working with partners like Altair to solve customers’ toughest computational challenges. The company is looking at a range of technologies that could enable Exascale computing capabilities at a practical level of power consumption.