In this special guest feature from the Print’nFly Guide to SC15 in Austin, Peter ffoulkes from OrionX looks at how HPC Transforms. “If we thought the last five years were disruptive, we may not have seen anything yet, and in many ways the HPC community will continue to lead that transformation, even if it does not always receive recognition for that leadership. The general enterprise market shift towards a data-centric focus, based upon “big-data”, the impending deluge of sensor data from “The Internet of Things”, and real-time analytics using in-memory databases could be the best thing that has happened to the HPC community in decades.”
In this special Halloween podcast, the Radio Free HPC team shares their biggest fears for HPC. From cybersecurity to a lack of funding for important research, what’s the worst that could possibly happen?
In this special guest feature, Tom Wilkie from Scientific Computing World reports that the European Commission is funding research projects and centers of excellence as part of its strategy to coordinate European HPC efforts. In October, the EC made a series of announcements on how it is going to invest some of the €700 million allocated to its Public-Private Partnership on high performance computing.
A new EU research project called “ExaHyPE” is developing open-source software for exascale-class supercomputers. As an international project coordinated at TUM in Munich, ExaHyPE (“An Exascale Hyperbolic PDE Engine”) seeks to develop novel software, initially for simulations in geophysics and astrophysics.
In HPC news from CEA in France, the EoCoE (Energy-oriented Centre of Excellence) project officially launched earlier this month. Pronounced "Echo," EoCoE has a mission to create a new, long-lasting, and sustainable community around computational energy science.
The HPC Advisory Council has published the Agenda for their China Conference. The event takes place Nov. 9 in Wuxi, China.
In this special guest feature, Linda Barney writes that researchers at the University of Cambridge are using an Intel Xeon Phi coprocessor-based supercomputer from SGI to accelerate discovery efforts. “We have managed to modernize and optimize the main workhorse code used in the research, so it now runs in 1/100 to 1/1000 of the original runtime. This allows us to tackle problems which would have taken unfeasibly long to solve. Secondly, it has opened windows for previously unthinkable research, namely using the MODAL code in cosmological parameter search: this is a problem which is constantly being solved in an iterative process, but adding the MODAL results to the process has only become possible with the improved performance.”
“Argonne National Laboratory is one of the labs helping to lead the exascale push for the nation with the DOE. We lead in a number of areas with software and storage systems and applied math. And we’re really focusing our expertise on those new ideas, those novel new things that will allow us to leapfrog the standard slow evolution of technology and get something further out ahead, three years, five years out ahead. And that’s where our research is focused.”
Sometimes the inbox for HPC news fills up faster than we can handle. In an effort to keep up, we’ve compiled noteworthy news into a Jeopardy-style Speed Round that phrases topics in the form of a question.
“We are excited that the H2020 SAGE Project gives us the opportunity to research and move HPC storage into the Exascale age,” said Ken Claffey, vice president and general manager, Seagate HPC systems business. “Seagate will contribute its unique skills and device technology to address the convergence of Exascale and Big Data, with an excellent selection of participants each bringing their own capabilities together to build the future of storage on an unprecedented scale.”