More than 100 exascale experts will gather in Barcelona this week for the workshop on Big Data and Extreme-scale Computing (BDEC). With a packed agenda featuring such notable speakers as Irene Qualters (NSF), William Harrod (DOE), Satoshi Matsuoka (Tokyo Institute of Technology), and Marek Michalewicz (A*STAR), the event will explore algorithms, computer system architecture, operating systems, workflow middleware, compilers, libraries, languages, and applications.
In the past five years, the United States, the European Union, and Japan have each moved aggressively to develop their own plans for achieving exascale computing in the next decade. Such concerted planning by the traditional leaders of HPC speaks eloquently both to the substantial rewards that await the success of such efforts and to the unprecedented technical obstacles that apparently block the path to get there.

These exascale initiatives have understandably focused on the big hardware and software architecture challenges: an exponential increase in parallelism, energy efficiency as a first-class design constraint, and heterogeneity in several different dimensions. But the emergence, during the same time frame, of the phenomenon of Big Data in a wide variety of scientific fields represents not so much a new obstacle as a kind of tectonic eruption that is transforming the entire research landscape on which all plans for exascale computing must play out.

The BDEC workshop series is premised on the idea that we must begin to systematically map out and account for the ways in which the major issues associated with Big Data intersect with, impinge upon, and potentially change the national (and international) plans now being laid for achieving exascale computing. The goal is to help the international community build a partnership that can provide the next generation of HPC software to support big data and extreme-scale computing for scientific discovery.