The summer of 2016 will bring a raft of summer schools and other initiatives to train more people in high-performance computing, including a program specifically aimed at increasing the representation of ethnic minorities among HPC specialists. But interested students need to get their applications in now.
In this video, researchers describe the Jetstream project at Indiana University. Jetstream is a user-friendly cloud environment designed to give researchers access to interactive computing and data analysis resources on demand, whenever and wherever they want to analyze their data. It will provide a library of virtual machines designed for discipline-specific scientific analysis. Software creators and researchers will also be able to create their own customized virtual machines, or their own private computing systems, within Jetstream.
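Jetstream is built on the OpenStack cloud platform, so one illustrative possibility is launching a customized virtual machine programmatically. The sketch below uses the openstacksdk Python library; the cloud name, image, and flavor are hypothetical placeholders, not actual Jetstream resources.

```python
# Hypothetical sketch: booting a customized analysis VM on an
# OpenStack-based cloud such as Jetstream, using openstacksdk.
# The cloud name, image, and flavor are illustrative placeholders.
import openstack

# Credentials are read from a clouds.yaml entry named "jetstream".
conn = openstack.connect(cloud="jetstream")

# Boot a VM from a discipline-specific image in the library,
# blocking until the instance is ACTIVE.
server = conn.create_server(
    name="my-analysis-vm",
    image="genomics-analysis",   # placeholder image name
    flavor="m1.medium",          # placeholder instance size
    wait=True,
)
print(f"{server.name} is {server.status}")
```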
In this podcast, the Radio Free HPC team looks at the Top Technology Stories for High Performance Computing in 2015. “From 3D XPoint memory to Co-Design Architecture and NVM Express, these new approaches are poised to have a significant impact on supercomputing in the near future.” We also take a look at the most-shared stories from 2015.
A major new collaborative project is set to transform the UK pharmaceutical industry by enabling the manufacturing processes of the innovative medicines of the future to be designed digitally. The STFC Hartree Centre is a partner in the £20.4m ADDoPT (Advanced Digital Design of Pharmaceutical Therapeutics) project, which involves the major pharmaceutical companies Pfizer, GSK, AstraZeneca, and Bristol-Myers Squibb.
As multi-socket and then multi-core systems became the standard, the Message Passing Interface (MPI) emerged as one of the most popular programming models for applications that run in parallel across many sockets and cores. Shared memory programming interfaces, such as OpenMP, have allowed developers to take advantage of the shared memory within each individual server, while MPI handles communication between servers. However, this hybrid approach means using two different programming models at the same time. The MPI 3.0 standard addresses this with a new MPI interprocess shared memory extension (MPI SHM).
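To make the idea concrete, here is a minimal sketch of MPI SHM using the mpi4py Python bindings. The one-double-per-rank layout and the script name in the run command are our own illustrative choices; the allocation pattern follows the standard MPI-3 shared-window idiom, in which rank 0 allocates one contiguous node-wide window that every rank on the node can read and write directly.

```python
# Minimal sketch of MPI-3 shared memory (MPI SHM) with mpi4py:
# ranks on the same node load/store a shared window directly,
# without message passing. Run with e.g. `mpiexec -n 4 python shm.py`.
from mpi4py import MPI
import numpy as np

# Group the ranks that can actually share physical memory (one node).
node = MPI.COMM_WORLD.Split_type(MPI.COMM_TYPE_SHARED)
rank = node.Get_rank()
size = node.Get_size()

# Rank 0 allocates one double per node-local rank in a single
# contiguous shared window; the other ranks allocate nothing.
itemsize = MPI.DOUBLE.Get_size()
nbytes = size * itemsize if rank == 0 else 0
win = MPI.Win.Allocate_shared(nbytes, itemsize, comm=node)

# Every rank queries rank 0's segment and maps it as a NumPy array.
buf, _ = win.Shared_query(0)
slots = np.ndarray(buffer=buf, dtype="d", shape=(size,))

slots[rank] = float(rank)   # a direct store, not an MPI_Put
node.Barrier()              # order the writes before anyone reads

print(f"node rank {rank} sees {slots}")

win.Free()
node.Free()
```

The appeal is that ranks sharing a node exchange data through plain loads and stores, while the application keeps a single MPI programming model for communication between nodes.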
Today Compute Canada and the Canadian Association of Research Libraries (CARL) announced a collaboration to build a scalable national platform for research data management and discovery. The partnership joins information management expertise from the CARL Portage Network with information technology expertise from Compute Canada to develop services that researchers need to respond to the demands of data-intensive research and to comply with funding bodies’ data sharing policies.
In this week’s Industry Perspective, Katie Garrison of One Stop Systems explains how GPUltima allows HPC professionals to create a highly dense compute platform that delivers a petaflop of performance at greatly reduced cost and space requirements, with the compute power needed to quickly process the data generated by intensive applications.
The Call for Submissions is open for the upcoming GPU Programming Hackathon at the University of Delaware (UDEL). The event takes place May 2-6, 2016, at UDEL in Newark, Delaware.
The U.S. Department of Energy has awarded a total of 80 million processor hours on the Titan supercomputer to an astrophysics project based at the DOE’s Princeton Plasma Physics Laboratory (PPPL). The grants will enable researchers to study the dynamics of magnetic fields in the high-energy-density plasmas that lasers create. Such plasmas can closely approximate those that occur in some astrophysical objects.
“Upgrading legacy HPC systems relies as much on the requirements of the user base as it does on the budget of the institution buying the system. There is a gamut of technology and deployment methods to choose from, and the picture is further complicated by infrastructure such as cooling equipment, storage, networking – all of which must fit into the available space. However, in most cases it is the requirements of the codes and applications being run on the system that ultimately define the choice of architecture when upgrading a legacy system. In the most extreme cases, these requirements can restrict the available options, effectively locking an HPC center into a single technology, or ruling out new architectures because of the added complexity of code modernization and of porting existing codes to new technology platforms.”