
European ExaNeSt Project to Pave the Way to Exascale

Today a European consortium announced a step toward Exascale computing with the ExaNeSt project. Funded by the Horizon 2020 initiative, ExaNeSt plans to build its first straw-man prototype in 2016. The consortium consists of twelve partners, each with expertise in a core technology needed to reach Exascale. ExaNeSt takes an integrated approach, co-designing the hardware and software so that the prototype can run real-life evaluations and mature into a scalable platform in the years ahead.

Bright Computing Receives Horizon 2020 Grant for Advancing System Management Technology

Today Bright Computing announced it has been awarded a grant of more than 1.5 million euros by the European Commission under its Horizon 2020 program. The grant will fund the Bright Beyond HPC program, which focuses on enhancing and scaling Bright’s industry-leading management platform for advanced IT infrastructure, including high performance computing clusters, big data clusters, and OpenStack-based private clouds.

Princeton Plasma Physics Lab Wins 80 Million Processor Hours on Titan Supercomputer

The U.S. Department of Energy has awarded a total of 80 million processor hours on the Titan supercomputer to an astrophysical project based at the DOE’s Princeton Plasma Physics Laboratory (PPPL). The grants will enable researchers to study the dynamics of magnetic fields in the high-energy-density plasmas that lasers create. Such plasmas can closely approximate those found in some astrophysical objects.

Long Live the King – The Complicated Business of Upgrading Legacy HPC Systems

“Upgrading legacy HPC systems relies as much on the requirements of the user base as it does on the budget of the institution buying the system. There is a gamut of technology and deployment methods to choose from, and the picture is further complicated by infrastructure such as cooling equipment, storage, and networking, all of which must fit into the available space. However, in most cases it is the requirements of the codes and applications being run on the system that ultimately define the choice of architecture when upgrading a legacy system. In the most extreme cases, these requirements can restrict the available technology, effectively locking an HPC center into a single vendor or platform, or can rule out new architectures because of the added complexity of code modernization and of porting existing codes to new technology platforms.”

Time-lapse Video: Edison Supercomputer Moves to Berkeley

In this video, engineers move the NERSC Edison supercomputer from Oakland to Berkeley. The week-long move is condensed into 41 seconds of time-lapse footage shot over the entire process. Edison is a Cray XC30 with a peak performance of 2.57 petaflops, 133,824 compute cores, 357 terabytes of memory, and 7.56 petabytes of disk.

ECMWF to Upgrade Cray XC Supercomputers for Weather Forecasting

Today Cray announced a $36 million contract to upgrade and expand the Cray XC supercomputers and Cray Sonexion storage system at the European Centre for Medium-Range Weather Forecasts (ECMWF). When the project is complete, the enhanced systems will allow the world-class numerical weather prediction and research center to continue driving improvements in its highly complex models, providing more accurate weather forecasts.

Allinea Scalable Profiler Speeds Application Readiness for Summit Supercomputer at Oak Ridge

Today Allinea announced that Oak Ridge National Laboratory has deployed its code performance profiler, Allinea MAP, at scale on the Titan supercomputer. Allinea MAP enables developers of software for supercomputers of all sizes to produce faster code, and its deployment on Titan will help developers use the system’s 299,008 CPU cores and 18,688 GPUs more efficiently. Software teams at Oak Ridge are also preparing for the arrival of the next-generation Summit supercomputer, a pre-Exascale system expected to be capable of over 150 petaflops in 2018.
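For readers unfamiliar with the tool, MAP is a wrapper around the normal MPI launch line rather than a source-level instrumentation step. A minimal sketch of a non-interactive profiling run follows; the --profile flag and mpirun launcher reflect typical Allinea MAP usage, and the application name is hypothetical:

    map --profile mpirun -n 8 ./my_mpi_app

Such a run writes a .map results file that can be opened afterward in the MAP interface to see where compute, MPI, and I/O time is being spent.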

Job of the Week: HPC Compiler & Tools Engineer at LLNL

Lawrence Livermore National Lab is seeking an HPC Compiler & Tools Engineer in our Job of the Week. “As a member of the Development Environment Group in the Livermore Computing (LC) supercomputing center, you will work as a software developer specializing in compilers and application development tools supporting High Performance Computing (HPC). You will work with scientific computing teams, the open source software community, and HPC vendor partners on the development of enabling technologies for the state-of-the-art platforms currently in use and under procurement.”

Seagate to Power CEA Supercomputing Data Management Infrastructure

Today Seagate announced that the French Alternative Energies and Atomic Energy Commission (CEA) has selected the Seagate ClusterStor L300 for its GS1K HPC storage needs. GS1K is the next-generation supercomputing data management infrastructure for CEA’s Military Applications Division.

Brookhaven Lab Expands Computational Science Initiative

Today the Brookhaven National Laboratory announced that it has expanded its Computational Science Initiative (CSI). The programs within this initiative leverage computational science, computer science, and mathematics expertise and investments across multiple research areas at the Laboratory, including the flagship facilities that attract thousands of scientific users each year. The expansion further establishes Brookhaven as a leader in tackling the “big data” challenges at experimental facilities and in expanding the frontiers of scientific discovery.