Nervousness about the commitment of the US Government and Congress to this sort of long-term investment in technology could be seen at the official SC12 press conference, where Dona Crawford, associate director for Computation at Lawrence Livermore National Laboratory, announced that the US Council on Competitiveness is to be funded to the tune of nearly one million dollars over the next three years to develop recommendations to Congress on extreme computing. Why should this be necessary, journalists at the press conference asked, when the Department of Energy had already sent Congress its plan, back in February, to build an exascale machine before the end of the decade?
The Exascale Applications and Software Conference (EASC2013) has issued its Call for Abstracts. The conference will take place April 9-11, 2013, in Edinburgh, Scotland, and is organised by EPCC at the University of Edinburgh in association with the CRESTA, NAIS and Nu-FuSE projects.
The aim of this conference is to bring together all of the stakeholders involved in solving the software challenges of the exascale – from application developers, through numerical library experts, programming model developers and integrators, to tool designers.
Invited speakers include:
Satoshi Matsuoka, Tokyo Institute of Technology
Vladimir Voevodin, Research Computing Center, Moscow State University
Bill Tang, Princeton Plasma Physics Laboratory
George Mozdzynski, ECMWF – The European Centre for Medium-Range Weather Forecasts
Peter Coveney, Centre for Computational Science at University College London
Jack Dongarra, Electrical Engineering and Computer Science Department, University of Tennessee
Submissions are due Dec. 10, 2012. Read the Full Story.
In this video from SC12, Neil Levine from Inktank describes the company’s efforts to commercialize and support the Ceph open source file system. With high reliability and nearly unlimited scalability, Ceph has great potential for Big Data applications and as an enabling technology for Exascale computing.
In this video from SC12, Norm Morse from OpenSFS and Hugo Falter from the European EOFS organization get together over a couple of beers to discuss the latest developments in the Lustre community on the road to Exascale.
"The group’s initial focus is the Lustre parallel file system, which supports many of the requirements of leadership-class HPC simulation environments, has a diverse development community and is open-source software."
In related news, LUG 2013 will be held in San Diego April 15-17, 2013.
Over at Computerworld, Patrick Thibodeau writes that China has impressed analysts with its rocket-speed commitment to HPC. And with budget cuts threatening to stall next-generation supercomputer development in the U.S., one has to wonder if the Chinese will beat us to Exascale with home-grown technologies.
For its Tianhe-1A system, China turned to U.S. chips — Intel’s Xeon processors — but used a China-developed interconnect. With its Sunway BlueLight supercomputer, China used its own chip, the ShenWei SW1600 microprocessor, but with InfiniBand interconnects. "You can see what they’re doing," said Argonne National Laboratory’s Pete Beckman, explaining that China’s developers reduce risk by mixing and matching standard technologies with homegrown approaches. "Now, you can see what’s going to happen," said Beckman. "You take your homegrown CPU, the homegrown network, and you put them together and you have a machine that from soup to nuts is a technical achievement for China and is really competitive."
This week at SC12, DDN announced a $100 million investment in its research and development efforts, specifically directed at resolving key challenges to achieving Exascale levels of performance in scientific computing. The new investment represents a substantial percentage of DDN’s engineering resources and will be directed toward technology challenges that become critical at Exascale proportions, including: I/O Acceleration; Converged Infrastructure; Information Value Extraction; and Energy and Data Center Efficiency.
"Data-intensive computing impacts individuals, organizations, industries and governments by enabling the creation of valuable information based on massive volumes of highly complex data," said Alex Bouzari, CEO and co-founder, DDN. "Significant investment is required to allow researchers to address challenges such as the design of new materials needed for better electric car batteries, the improvement of multi-physics models for more accurate severe weather modeling, and the development of high-resolution cosmological simulations to help understand dark matter and the universe around us. With today’s announcement, DDN is establishing a clear direction for our Exascale computing agenda and reaffirming DDN’s continued central role in the future of supercomputing."
DDN disclosed that the investments will center around the following critical technologies for Exascale computing:
I/O Acceleration: New file system, middleware and storage tiering methods will be required to eliminate scalability barriers associated with conventional methods of file, object and database access in order to achieve 1,000x scalability, TB/s performance and million-way application CPU parallelism (see the back-of-envelope sketch after this list).
Converged Infrastructure: The convergence of computing, storage and networking technologies will give rise to intelligent and accelerated data storage infrastructures which can co-locate pre-processing and post-processing routines natively within the storage infrastructure to enable applications to access data with increased acuity.
Information Value Extraction: Leveraging converged infrastructures, DDN R&D efforts will support the development of scalable data analytics environments to extract actionable insights from vast volumes of unstructured data.
Energy and Data Center Efficiency: With the emergence of storage-class memory and software tools, infrastructures can be built with fewer components compared to today’s disk-based technologies. These initiatives will not only significantly reduce hardware acquisition costs but also make data centers much more space and power efficient by reducing storage footprint by more than 75%.
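To put those I/O Acceleration targets in perspective, here is a back-of-envelope sketch of defensive checkpoint I/O at exascale. All of the numbers are illustrative assumptions, not DDN figures:

```python
# Back-of-envelope: why TB/s-class storage matters at million-way parallelism.
# All numbers are illustrative assumptions, not vendor figures.

cores = 1_000_000      # million-way application parallelism
gb_per_core = 1        # assumed checkpoint state per core, in GB
tb_per_s = 1           # assumed aggregate file system bandwidth, in TB/s

checkpoint_tb = cores * gb_per_core / 1_000       # total checkpoint size in TB
drain_minutes = checkpoint_tb / tb_per_s / 60     # time to write one checkpoint

print(f"One checkpoint: {checkpoint_tb:,.0f} TB, "
      f"written in {drain_minutes:.0f} minutes at {tb_per_s} TB/s")
# -> One checkpoint: 1,000 TB, written in 17 minutes at 1 TB/s
```

Even at a full terabyte per second, a single defensive checkpoint would tie up the machine for many minutes, which suggests why tiering and middleware, not just raw disk bandwidth, dominate the list above.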
This is exciting news for the HPC community looking to see more vendors step up to the plate for the incredibly daunting goal of Exascale computing within this decade. I can tell you that the DDN booth was packed with people all week at SC12. Read the Full Story.
At the SC12 conference, Rogue Wave Software announced that TotalView has achieved a significant debugging milestone during testing conducted as part of its strategic scalability initiative, demonstrating the capability to debug a parallel job running on 786,432 processor cores. The tests were conducted on Sequoia, Lawrence Livermore National Laboratory’s (LLNL) IBM Blue Gene/Q supercomputer.
"We are actively working to increase the capabilities of our scientific codes to scale and take advantage of the phenomenal power of Sequoia," stated Scott Futral, LLNL group leader for Development Environment. "As part of this effort, we are looking for ways to get more on-node parallelism from existing codes and architecting our new codes to support the even more massive degrees of parallelism that we know will be needed in the future. Rogue Wave’s dedication to pushing for ever-increasing scales with its TotalView debugger and the recent tests give us reason to be confident that TotalView will continue to be a critical development tool as we reach higher and higher scales with our own codes."
Rogue Wave will announce the results of a second set of tests, which demonstrate successful debugging of an even higher number of threads, on Thursday, November 15 at 12:00 pm MST. Visit Rogue Wave at SC12 booth #3418 to participate in a competition to correctly guess the number of threads TotalView debugged. Read the Full Story.
The InfiniBand Trade Association (IBTA) and the OpenFabrics Alliance (OFA) are sharing a booth at SC12 this year and co-hosting a panel session on future I/O architectures.
I/O is a significant factor in enabling performance and scalability for HPC and Big Data analysis. The panel session, "Exascale and Big Data I/O," will be moderated by Bill Boas of System Fabric Works and will discuss Tier 1 OEM and end customer/user requirements for future I/O architectures, standards and protocols, and whether they should be open or proprietary. Panelists will include top industry technologists such as Larry Kaplan, I/O architect at Cray; Sorin Faibish, chief scientist, Fast Data Group at EMC; Ronald Luijten, data motion architect at IBM Zurich Research; Michael Kagan, co-founder and chief technology officer at Mellanox Technologies; Manoj Wadekar, chief scientist at QLogic; and Peter Braam, storage software fellow at Xyratex.
The panel session will take place on Wednesday, Nov. 14 at 1:30 p.m. in Room 355-BC. Visit IBTA and OFA at SC12 booth #3630 for more information or read the Full Story.
"We are pleased to collaborate again with HPC China and to hold our fourth high-performance computing education and outreach workshop in China as part of HPC China’s overall conference program," said Gilad Shainer, chairman of the HPC Advisory Council. "The HPC Advisory Council’s worldwide workshops have become world-renowned as an excellent educational opportunity for HPC and data center IT professionals who are looking to deploy or provide additional enhancements and functionality to their advanced high-performance solutions."
Over at The Exascale Report, Mike Bernhardt caught up with SC12 Technical Program Chair Rajeev Thakur for a summary of the many exascale topic discussions included in this year’s program.
The upcoming SC12 conference in Salt Lake City, Nov. 10–16, offers the HPC community the opportunity to participate in numerous events related to exascale. Listed below are no fewer than 75 events that explicitly mention exascale or extreme scale in the title or abstract! In addition, the SC12 program includes many other events that implicitly relate to exascale. The SC12 exhibit hall will also showcase numerous companies and research organizations promoting their latest products and technologies, many of which will be important in the development of future extreme-scale systems. Finally, a satellite event, not part of the official SC12 program, will offer a workshop organized by funding agencies from several countries on early results of the G8 exascale projects.
"Industry has a dual role in high-end computing: firstly, supplying systems, technologies and software services for HPC; and secondly, using HPC to innovate in products, processes and services. Both are important in making Europe more competitive. Especially for SMEs, access to HPC, modelling, simulation, product prototyping services and consulting is important to remain competitive. This Action Plan advocates for a dual approach: strengthening both the industrial demand and supply of HPC."
Over at The Exascale Report, Mike Bernhardt has posted an interesting interview with John Gustafson, newly appointed Senior Fellow and Chief Product Architect for AMD’s Graphics Business Unit.
"If you can solve the performance-per-watt problem, then the next challenges are building a good software model, followed by resilience and reliability, all areas where AMD will undoubtedly make key contributions. I’m questioning that we should just be pumping out ten to the eighteenth double precision operations all the time as our goal, because in most areas of computing, there is a way to improve the quality of the computation and the validity of the results, and we need to be taking a much harder look at ‘what the heck are we computing anyway?’ and not just blindly apply finer meshes and more time steps. It’s time to think about the physics really hard and ask what is the problem we are trying to solve."
Power is a major challenge standing in the way of Exascale computing. While the target is to consume 20 MW or less for an exascale machine, current technology trends will not take us there by 2018. In this podcast, the Radio Free HPC team discusses why this is such a tough challenge, where such a system might need to be hosted, and the types of infrastructure that will need to be considered. Along the way, you’ll hear scary "power" music and figure out how this all relates to Mad Max, lasers, unicorns, and Planet of the Apes.
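For a sense of the gap, here is a quick back-of-envelope calculation of the efficiency the 20 MW target implies, compared against LLNL’s Sequoia using its November 2012 Top500 figures:

```python
# Back-of-envelope: the exascale power wall.
target_flops = 1e18      # goal: 1 exaflop/s sustained
power_budget_w = 20e6    # DOE target: 20 MW

required = target_flops / power_budget_w / 1e9    # in GFLOPS per watt
print(f"Required efficiency: {required:.0f} GFLOPS/W")    # 50 GFLOPS/W

# LLNL's Sequoia (Nov. 2012 Top500): ~16.3 petaflops Linpack at ~7.9 MW.
sequoia = 16.3e15 / 7.9e6 / 1e9
print(f"Sequoia today: ~{sequoia:.1f} GFLOPS/W ({required / sequoia:.0f}x short)")
# -> Sequoia today: ~2.1 GFLOPS/W (24x short)
```

Closing a twenty-plus-fold efficiency gap in roughly six years is exactly why the team is skeptical that current technology trends will get us there by 2018.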
Over at The Exascale Report, Mike Bernhardt writes that supercomputing has a lot at stake with the 2012 U.S. Presidential election.
"Politicians on both sides of the fence say they support HPC, but it appears to be nothing more than lip service. We are just not seeing the commitment to the kind of long term strategy needed to drive exascale – or for that matter, even advanced HPC research. We are all at the mercy of what the hardware vendors can build – what they believe will drive product sales in the near term. HPC innovation and exascale development require strong, unified Federal funding, and this goes hand in hand with economic recovery. And, without adequate funding and a strong commitment to technology leadership, I fear the U.S. may not even finish in this world-changing race."