XSEDE is now accepting 2016 Research Allocation Requests for the Bridges supercomputer. Available starting in January 2016 at the Pittsburgh Supercomputing Center, Bridges represents a new concept in high performance computing: a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users.
“We are excited that the H2020 SAGE Project gives us the opportunity to research and move HPC storage into the Exascale age,” said Ken Claffey, vice president and general manager, Seagate HPC systems business. “Seagate will contribute its unique skills and device technology to address the convergence of Exascale and Big Data, with an excellent selection of participants each bringing their own capabilities together to build the future of storage on an unprecedented scale.”
Today the San Diego Supercomputer Center (SDSC) announced significant upgrades to its cloud-based storage system, adding a new range of computing services designed to support scientific researchers, especially those whose large data requirements preclude commercial cloud use or who need to collaborate with cloud engineers to build cloud-based services.
DK Panda from Ohio State University presented this talk at the HPC Advisory Council Spain Conference. “Dr. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand and 10-40GE/iWARP. His research group is currently collaborating with National Laboratories and leading InfiniBand and 10-40GE/iWARP companies on designing various subsystems of next generation high-end systems.”
Jim Ganthier from Dell presented this talk at the HPC User Forum. “Dell HPC solutions are deployed across the globe as the computational foundation for industrial, academic and governmental research critical to scientific advancement and economic and global competitiveness. With the richness of the Dell enterprise portfolio, HPC customers are increasingly relying on Dell HPC experts to provide integrated, turnkey solutions and services resulting in enhanced performance, reliability and simplicity.”
As an open source tool designed to navigate large amounts of data, Hadoop continues to find new uses in HPC. Managing a Hadoop cluster is different from managing an HPC cluster, however. It requires mastering some new concepts, but the hardware is basically the same, and many Hadoop clusters now include GPUs to facilitate deep learning.
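The programming model behind Hadoop is simpler than the cluster management around it. The map/shuffle/reduce pattern can be sketched with plain Python, in the spirit of Hadoop Streaming's stdin/stdout contract. This is a minimal illustrative word-count sketch, not Hadoop's actual API; the `mapper` and `reducer` function names are hypothetical:

```python
import sys
from itertools import groupby

def mapper(lines):
    # Map phase: emit a (word, 1) pair for every word, the way a
    # Hadoop Streaming mapper would write key/value lines to stdout.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    # Hadoop's shuffle phase delivers pairs grouped by key; sorting
    # here stands in for that step. Sum the counts per word.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Read text from stdin, count words, print results.
    for word, count in reducer(mapper(sys.stdin)):
        print(word, count)
```

In a real Hadoop deployment the mapper and reducer run as separate processes on many nodes, with the framework handling the sort and the data movement between them; the sketch above collapses that into one process to show the data flow.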
Toni Cortés from the Barcelona Supercomputing Center presented this talk at the HPC Advisory Council Spain Conference. “BSC is the National Supercomputing Facility in Spain and was officially constituted in April 2005. BSC-CNS manages MareNostrum, one of the most powerful supercomputers in Europe, located at the Torre Girona chapel. The mission of BSC-CNS is to investigate, develop and manage information technology in order to facilitate scientific progress.”
In this video, with transcript, from the 2015 HPC User Forum in Broomfield, Bob Sorensen from IDC moderates a panel discussion on the National Strategic Computing Initiative (NSCI). “Established by an Executive Order by President Obama, the National Strategic Computing Initiative has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation’s Grand Challenges.”
In this video from the 2015 HPC User Forum, Will Koella from the Department of Defense discusses the National Strategic Computing Initiative (NSCI). Established by an Executive Order by President Obama, NSCI has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation’s Grand Challenges.