The SSC advises the PRACE Council on all scientific and technical matters. In particular, it oversees the peer review process for the allocation of PRACE resources. The most important objective of the SSC at this stage is to gain the confidence of the scientific community that PRACE resources are being allocated openly and fairly, and operated appropriately, to support the best science. That way, PRACE will attract the best science. This pursuit of scientific excellence above all else will be key to our success, both to justify continued investment by the PRACE partners and to ensure Europe remains competitive in exploiting strategically vital HPC technologies.
At insideHPC, we believe that PRACE is an organization to watch. Funded by the European Union, PRACE is the Partnership for Advanced Computing in Europe, providing member countries with world-class systems for science to strengthen the region’s scientific and industrial competitiveness.
You can subscribe to the newsletter on the PRACE homepage.
According to this story by Steve Lohr at the New York Times, a blue-ribbon advisory group report made to the White House last week said that research funding might be better deployed elsewhere than towards an international speed race based on a machine’s performance on a particular number-calculating benchmark such as LINPACK.
In presenting the report last Thursday, David E. Shaw, chief scientist at the investment and technology firm that bears his name, and a member of the advisory group, observed that gaining the top spot on the annual ranking of supercomputers is “an arms race that is very expensive and may not be a good use of funds.”
I think it is interesting to note that the President’s Council of Advisors on Science and Technology report makes no mention of Exascale programs, which may be beyond its scope. It does seem to have a bone to pick with the TOP500, though:
If Top500 rankings can no longer be viewed as a definitive measure of a country’s high performance computing capabilities, what goals should our nation be setting for fundamental research in HPC systems, and what criteria should be used in allocating funding for such research? Given the natural inclination to quantify the relative performance of competitors in any race, there is a temptation to replace the traditional FLOPS-based metric with another fixed, purely quantitative metric (or perhaps two or three such metrics) that policymakers can use on an ongoing basis to rank America’s competitive position in HPC relative to those of other countries. This approach, however, is subject to several pitfalls that could both impair our ability to maintain our historical leadership in the field of high-performance computing and increase the level of expenditures required to even remain competitive.
First, it is no longer feasible to capture what is important about high-performance computing as a whole using one (or even a small number of) fixed, quantitative metrics, as a result of:
the progressive broadening of our nation’s requirements in the area of high-performance computing;
the consequent “splintering” of the set of computational tasks required to satisfy these requirements;
a wide range of substantial advances in the various technologies available to perform such computational tasks;
significant changes in the “bottlenecks” and “rate-limiting steps” that constrain many high-performance applications as a result of different rates of improvement in different technological parameters.
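That last point about shifting bottlenecks rewards a concrete illustration. The standard way to reason about it is the roofline model: a kernel’s attainable performance is capped either by the machine’s compute ceiling or by memory bandwidth times the kernel’s arithmetic intensity (flops per byte moved). Here is a minimal sketch in Python, with hypothetical machine constants of my own choosing, showing why a LINPACK-style FLOPS rating flatters compute-bound dense linear algebra while saying little about bandwidth-bound, data-intensive codes:

# Roofline model sketch: attainable performance is the lesser of the
# compute ceiling and (bandwidth x arithmetic intensity).
# The machine numbers below are hypothetical, chosen for illustration.

PEAK_GFLOPS = 500.0   # hypothetical peak floating-point rate (GFLOPS)
PEAK_BW_GBS = 50.0    # hypothetical peak memory bandwidth (GB/s)

def attainable_gflops(intensity):
    """intensity: flops performed per byte moved from memory."""
    return min(PEAK_GFLOPS, PEAK_BW_GBS * intensity)

# Dense linear algebra (LINPACK-like) reuses data heavily, so it runs
# near the compute ceiling and makes the FLOPS metric look definitive.
print(attainable_gflops(64.0))   # 500.0 -> compute-bound

# Sparse and data-intensive kernels move many bytes per flop, so the
# same machine delivers only a small fraction of its rated FLOPS.
print(attainable_gflops(0.25))   # 12.5 -> bandwidth-bound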
The rest of the 120-page report is an interesting read so far, but there is a lot to go through, so I promise to follow up with some armchair analysis.
This week the U.S. Congress passed bipartisan legislation to create jobs and maintain America’s economic leadership by increasing investment in science education, advanced research, and manufacturing innovation. It is expected that the President will sign the America COMPETES Reauthorization Act into law.
“Recent months have brought further confirmation that America is at risk of losing its competitive edge,” said Congressman Dan Lipinski, a former professor and one of the few members of Congress trained as an engineer. “New international test results show American students continue to lag behind their overseas peers, and a follow-up to the influential Rising Above the Gathering Storm report showed other countries continue to catch up and threaten to overtake the U.S. in key measures of innovation. The investments the COMPETES reauthorization makes in American innovation are critical to reversing this trend and putting our country on a path to job creation and long-term economic growth. I am also pleased that, as I fought for over many months, the bill takes numerous steps to promote manufacturing innovation, including through the adoption of ideas from my National Manufacturing Strategy Act.”
Everything is getting bigger in Texas. The University of Texas System Board of Regents has unanimously approved $23 million for improvements that will increase connectivity and computer capacity for all 15 University of Texas institutions, support research projects, and foster stronger collaborations among scientists in Texas and around the world.
“Discoveries at the leading edge of science require increasingly powerful computational technologies and are increasingly collaborative,” said TACC Director Jay Boisseau. “This project will greatly enhance the ability of researchers at UT System institutions to address the most challenging computational problems, and to work together to make breakthrough discoveries.”
The improvements will allow institutions to conduct projects using shared data storage. This will enable researchers from different sites to access a single data source, aiding collaboration. A UT Data Repository prototype will be developed to provide disk storage and data collection management software for open science and clinical research data.
With this solicitation, the NSF requests proposals from organizations willing to serve as HPC Resource Providers within Extreme Digital (XD), the successor to TeraGrid, and who propose to acquire and deploy new, innovative petascale HPC systems and services. Competitive HPC systems will expand the range of data-intensive, computationally challenging science and engineering applications that can be tackled with XD HPC services and will efficiently provide a high degree of stability and usability by January 2013.
Argonne National Laboratory Director Eric D. Isaacs blogs that the United States cannot afford to take a back seat in computer technology to the Chinese, or to anyone else. He contends that the nation that leads the world in HPC will have an enormous competitive advantage in every sector and will attract the best scientific and engineering talent from around the world.
We need to make sure that American researchers and engineers have access to the supercomputers and other technological tools they need to help solve the great scientific, energy, environment, and security challenges of our time. We also need to make sure that our laboratories are equipped with cutting-edge facilities that will draw talented young scientists from around the world.
Isaacs goes on to say that the new number one Chinese supercomputer was built largely from American-designed components, but the country is already at work on a 1-petaflop supercomputer made from Chinese parts.
America needs a substantial, long-term national investment to speed our journey down the road to exascale computing – a road that leads to economic growth, international competitiveness and national security. Without that commitment, the American supercomputers of the future may be labeled, “Made in China.”
Think of it as a time machine. What happens in high-performance computing then happens in high-performance technical servers, and finally your laptop. We’re looking at that big change and saying what we need is a real organized effort on the hardware, software and applications to tackle this. It can’t just be one of those. In the past, the vendors have designed a new system and then in some sense it comes out, and users look at it and ask: “How do I port my code to this?” What we’re looking at is improving that model to “co-design” — a notion that comes from the embedded computing space, where the users of the system, the hardware architects and the software people all get together and make trade-offs on what the best optimized supercomputer will look like to answer science questions.
At insideHPC, we strongly believe that the United States needs a national Exascale initiative put in place as soon as possible. The task is on a scale of difficulty equivalent to what it took to put a man on the moon in the sixties. Can we do it in this decade? Maybe, but the Chinese, the Europeans, and a host of other geographies are investing heavily in Exascale. They are committed, organized, and moving forward. If we as a nation don’t get on the stick, we’ll be watching from the sidelines wondering how we ever brought those astronauts safely back.
In this video, Mike Bernhardt from The Exascale Report interviews IDC’s Earl Joseph and Steve Conway about their recent study: A Strategic Agenda for European Leadership in Supercomputing – HPC 2020. The report is available as a free download.
In their recent report entitled “A Strategic Agenda for European Leadership in Supercomputing: HPC 2020,” IDC said that it expects the DOE to seek $5 billion in funding for a set of Exascale computers.
Within the DOE, both the Office of Science and the National Nuclear Security Administration have begun initiatives focused on exascale supercomputing in this decade. IDC believes that the DOE will seek in excess of $5 billion (€3.75 billion) to develop multiple exascale computers.
Could there be a National Exascale Initiative in our future? The recent announcement of the 2.5 Petaflop Tianhe-1A supercomputer in China set off a media frenzy in the mainstream press last week, and nothing moves legislation like fear. At the same time, the IDC report is recommending that Europe step up its HPC investments in a big way.
The interesting thing to me is that there is already a host of Exascale projects going on worldwide today, as cited in the same report. And this is all for a computational capability that is eight or more years away. Many are saying the required advancements in power efficiency will never happen, but I believe that in the end it will all come down to finding someone willing to write the check.
A tip of the hat goes to Timothy Prickett Morgan at the Register for pointing us to this story.
The PRACE BoF at SC10 takes place on Wednesday 17 November from 12:15-1:30 pm in room 389. The session will present the latest news about the PRACE Research Infrastructure, covering the current status and future plans, the results expected from the EC-funded implementation projects, the integration of services currently provided by DEISA within the European HPC ecosystem, and collaboration opportunities for academia and industry.
PRACE, the Partnership for Advanced Computing in Europe, is a unique persistent pan-European Research Infrastructure (RI) for HPC.
The PRACE RI is governed by an international non-profit association with its seat in Brussels. Twenty countries are presently members of the association ‘Partnership for Advanced Computing in Europe AISBL’.
The first production system, a 1 Petaflop/s IBM BlueGene/P (Jugene) at FZJ (Forschungszentrum Jülich) is available for European scientists. Find more information about how to apply for resources at: www.prace-project.eu/hpc-access
Access to the RI is open to researchers from recognized European academic institutions and industry for projects of merit based on peer-review governed by a PRACE Scientific Council.
PRACE will provide a permanent pan-European HPC service consisting of various leading systems (Tier-0) forming the top level of the European HPC ecosystem.
The capabilities of the PRACE RI, initially at the Petascale level, are expected to grow to the Exascale level within a decade.
PRACE also shares a booth with DEISA in the SC10 exhibition hall, booth #4021. The word is they are doing daily prize drawings.
IDC has published a new report for the European Commission that recommends an unprecedented supercomputing strategy for Europe. To help Europe achieve worldwide scientific and industrial leadership, the report recommends that the EC step up investments in HPC resources between now and the year 2020.
Looking ahead to what will be needed for success in 2020, Earl Joseph, IDC program vice president for HPC, noted that: “The cost and complexity of the next generation of HPC systems mean that Europe must be selective in its investments. The proposed strategy exploits Europe’s existing strengths, including advanced software development, and could help make Europe the world leader in areas that will be crucial for global economic competitiveness in the 21st century.”
The report further recommends substantially greater investment in HPC facilities and the technical skills and training needed to take advantage of them. The cost of developing Exascale HPC systems is such that no single European country can afford to compete on its own with the U.S. and others. European cooperation is vital, the report says.
China is hard at work on a “national processor” that could enable the Asian country to build a homegrown supercomputer to rank near the top of the TOP500 list of the world’s fastest machines. Details of the chip were presented by lead architect Weiwu Hu at the recent Hot Chips conference at Stanford University.
Hu, lead architect of the Godson project, said via e-mail that China’s Dawning 6000 supercomputer, originally slated for completion in mid-2010, will instead debut in 2011, using the Godson 3B. Industry analyst Tom Halfhill calculates that the Dawning supercomputer will use CPUs that are slower than the fastest Intel chips. However, it could still rank on the Top 500 list of the 500 fastest supercomputers in the world, a significant coup for China’s fledgling electronics industry. “Just getting into the Top 500 with a native processor is a worthy accomplishment,” says Halfhill.
The Godson processor uses a special MIPS instruction set enhanced by engineers at China’s Institute of Computing Technology to include 300 additional instructions devoted to vector processing. When completed, the Dawning 6000 could be the first MIPS-based supercomputer on the Top 500 list since 2004.
There are a number of great sources out there for news on Cloud Computing, and I particularly like what Nicole Hemsoth is doing over at HPC in the Cloud. So when I read about a new blog on Government Clouds I immediately pedalled over to check it out.
GovCloudTalk editor Tim Harder of EMC states his mission simply:
Looking at the trends in data storage, it’s becoming very plain to see that data is quickly outpacing the amount of storage that organizations have in their data centers. Every day, the amount of information increases, and we’ve reached a point where implementing a cloud infrastructure to meet these demands is an integral part of the organization’s health and success. This is especially true in the public sector, where an exponentially increasing amount of records, information and data needs to be shared, stored and analyzed for agencies and government entities that are spread out across wide areas. I launched GovCloudTalk to provide IT leaders and decision makers in the federal government with a forum where they can find useful information about the cloud and answers to their agencies’ cloud computing questions.
In a time when President Obama is pushing to consolidate government datacenters in a big way, the need for this kind of focused content is clear. The next datacenter affected might be your own, so let’s hope that informed decisions rule the day.
Keep up the good work, Tim. Starting a blog may be easy, but let me tell you that keeping it going can be a bear.
The EE Times is reporting this week that Wei-wu Hu, a professor at Beijing’s Institute of Computing Technology, gave a talk about the present and future of China’s homegrown chip at the annual Hot Chips conference. Hu’s paper focused on the high-end Godson 3B, an eight-core design built in a 65-nm STMicroelectronics process. The chip, which taped out in May and will be in silicon in September, measures 300 mm² and delivers 128 gigaflops, Hu said.
The heart of the chip is the 64-bit, MIPS-compatible GS464V core, which sports a superscalar out-of-order pipeline capable of retiring four instructions per clock cycle. It also supports 200 instructions for emulating the Intel x86 architecture.
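To make the emulation point concrete, the mechanism those extra instructions support is binary translation: guest x86 operations are rewritten as sequences of native instructions. The toy Python sketch below illustrates the idea only; the mnemonics and the mapping table are invented for illustration and are not Godson’s actual instruction set.

# Toy binary translator: expand each guest x86 operation into native
# instructions via a lookup table. Purely illustrative; real translators
# such as QEMU also handle registers, flags, and memory semantics.

X86_TO_NATIVE = {
    "add eax, ebx": ["add $r1, $r1, $r2"],      # simple 1:1 mapping
    "push eax":     ["addiu $sp, $sp, -4",      # one guest op can expand
                     "sw $r1, 0($sp)"],         # to several native ops
}

def translate(guest_ops):
    """Return the native instruction sequence for a list of x86 ops."""
    native = []
    for op in guest_ops:
        native.extend(X86_TO_NATIVE[op])
    return native

print(translate(["push eax", "add eax, ebx"]))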
The “V” in the core’s name indicates the latest twist in the Godson design, extensions for vector processing.
The core extends its previous 64-bit floating point unit with a 256-bit SIMD vector unit including eight 64-bit MACs. Engineers also created a unique interface to feed the chip with pre-formatted data.
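Those eight MACs per core square neatly with the quoted 128-gigaflop figure. A quick back-of-the-envelope check, assuming the eight cores reported for the 3B and a roughly 1 GHz clock (the clock rate is my assumption, not from the article):

# Peak-FLOPS sanity check for the Godson 3B, under stated assumptions.
cores = 8              # reported core count for the 3B
macs_per_core = 8      # eight 64-bit MACs in each 256-bit vector unit
flops_per_mac = 2      # a multiply-accumulate counts as two flops
clock_ghz = 1.0        # assumed clock rate

peak_gflops = cores * macs_per_core * flops_per_mac * clock_ghz
print(peak_gflops)     # 128.0, matching the quoted 128 gigaflops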
Although the 3B is still in testing, its designers have high hopes for a supercomputing home run.
Hu showed several board-level examples of designs that will use the 3B in servers or as nodes in massively parallel supercomputing clusters. Earlier this year Shenzhen-based computer maker Dawning Information Industry Co. Ltd. created a petaflops system based on Intel and Nvidia processors and said its next generation will use the 16-core Godson 3C.
Hu suggested some of the Godson designs could hit performance levels of multiple petaflops—potentially putting China’s designers in the number one slot on the list of the world’s Top 500 supercomputers for the first time.
More in the article, which is a good read for catching up on the high end of China’s indigenous technology initiative.