Evolution of NASA Earth Science Data Systems in the Era of Big Data


Christopher Lynnes from NASA presented this talk at the HPC User Forum. “The Earth Observing System Data and Information System is a key core capability in NASA’s Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA’s Earth science data from various sources, including satellites, aircraft, field measurements, and other programs.”

Scientific Cloud Computing Lags Behind the Enterprise


“In business and commercial computing, momentum towards cloud and big data has already built up to the point where it is unstoppable. In technical computing, the growth of the Internet of Things is pressing towards convergence of technologies, but obstacles remain, in that HPC and big data have evolved different hardware and software systems, while OpenStack, the open source cloud computing platform, does not work well with HPC.”

Submit Your 2016 Research Allocation Requests for the Bridges Supercomputer


XSEDE is now accepting 2016 Research Allocation Requests for the Bridges supercomputer. Available starting in January 2016 at the Pittsburgh Supercomputing Center, Bridges represents a new concept in high performance computing: a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users.

SDSC Steps up with Upgraded Cloud and Storage Services

The reliable and scalable architecture of the SDSC Cloud was designed for researchers and departments as a low-cost, efficient alternative to public cloud service providers. Image: Kevin Coakley, SDSC

Today the San Diego Supercomputer Center (SDSC) announced significant upgrades to its cloud-based storage system, including a new range of computing services designed to support researchers, especially those whose large data requirements preclude the use of commercial clouds, or who require collaboration with cloud engineers to build cloud-based services.

Video: Dell’s New HPC Vision, Strategy, and Plans

Jim Ganthier, Dell

Jim Ganthier from Dell presented this talk at the HPC User Forum. “Dell HPC solutions are deployed across the globe as the computational foundation for industrial, academic and governmental research critical to scientific advancement and economic and global competitiveness. With the richness of the Dell enterprise portfolio, HPC customers are increasingly relying on Dell HPC experts to provide integrated, turnkey solutions and services resulting in enhanced performance, reliability and simplicity.”

Planning for the Convergence of HPC and Big Data


As an open source tool designed to navigate large amounts of data, Hadoop continues to find new uses in HPC. Managing a Hadoop cluster is different from managing an HPC cluster, however: it requires mastering some new concepts, although the hardware is basically the same, and many Hadoop clusters now include GPUs to facilitate deep learning.
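The conceptual shift is easiest to see in a small example. Below is a minimal word-count sketch in the MapReduce style that Hadoop Streaming can run; the file name, input path, and invocation are illustrative assumptions, not details from the article.

    #!/usr/bin/env python3
    # wordcount.py: hypothetical mapper/reducer pair in the MapReduce style.
    # Local dry run:
    #   cat input.txt | python3 wordcount.py map | sort | python3 wordcount.py reduce
    import sys
    from itertools import groupby

    def mapper(stream):
        # Emit one "word<TAB>1" pair per token; the framework shuffles and sorts by key.
        for line in stream:
            for word in line.split():
                print(f"{word}\t1")

    def reducer(stream):
        # Input arrives sorted by key, so the counts for each word are contiguous.
        pairs = (line.rstrip("\n").split("\t", 1) for line in stream)
        for word, group in groupby(pairs, key=lambda kv: kv[0]):
            print(f"{word}\t{sum(int(count) for _, count in group)}")

    if __name__ == "__main__":
        {"map": mapper, "reduce": reducer}[sys.argv[1]](sys.stdin)

The same two entry points can be handed to a Hadoop cluster through the Streaming jar's -mapper and -reducer options, which is where job scheduling and data placement begin to differ from a conventional HPC batch system.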

BSC and Integrating Persistent Data and Parallel Programming Models


Toni Cortés from the Barcelona Supercomputing Center presented this talk at the HPC Advisory Council Spain Conference. “BSC is the National Supercomputing Facility in Spain and was officially constituted in April 2005. BSC-CNS manages MareNostrum, one of the most powerful supercomputers in Europe, located at the Torre Girona chapel. The mission of BSC-CNS is to investigate, develop and manage information technology in order to facilitate scientific progress.”

ISC Cloud & Big Data: From Banking to Personalized Medicine


In this special guest feature, Tom Wilkie from Scientific Computing World looks at some issues of life and death that will be discussed at the upcoming ISC Cloud and Big Data conference in Frankfurt.

Video: Scalable High Performance Systems

Alexandru Iosup

In this video, Alexandru Iosup from TU Delft presents: Scalable High Performance Systems. “During this masterclass, Alexandru discussed several steps towards addressing interesting new challenges that emerge in the operation of the datacenters that form the infrastructure of cloud services, and in supporting the dynamic workloads of demanding users. If we succeed, we may not only enable the advent of big science and engineering, and the almost complete automation of many large-scale processes, but also reduce the ecological footprint of datacenters and the entire ICT industry.”

Alan Turing Institute Hits the Ground Running for HPC & Data Science

Professor Andrew Blake, Director, Alan Turing Institute

The Alan Turing Institute is the UK’s national institute for data science. It has marked its first few days of operations with the announcement of its new director, the confirmation of £10 million of research funding from Lloyd’s Register Foundation, a research partnership with GCHQ, collaboration with the EPSRC and Cray, and the commencement of its first research activities.