In this video, Meg Whitman and Martin Fink of HP discuss The Machine, a computing architecture vision of the future. Currently in development at HP Labs, The Machine uses clusters of special-purpose cores, photonic links, and memristors to implement a unified memory that’s as fast as RAM yet stores data permanently, like a flash drive.
“We really need to re-look at the requirements that will lead us all the way up to being able to support exascale deployments. One of these absolute requirements is CPU fabric integration, because the performance that’s needed, the density, and the power are all areas that have to be vastly improved to support exascale deployments.”
The central element of the HPC Platform is the HBP Supercomputer, the project’s main production system located at Jülich Supercomputing Centre. Over the next decade, the HBP Supercomputer will be built in stages to arrive at the exascale capability needed for cellular simulations of the complete human brain.
“Our annual ‘Budget Map’ report series looks at the relative spending across all of the products, components, and services that make up the HPC market. With six years of end-user data, we get a strong grip on where the money is flowing, whether it’s on big items like clusters and storage or on topical things like power consumption, programming, or compute cycles in the public cloud. We also get a sense of the future budget outlook and how the market is likely to evolve.”
In this panel discussion from LUG 2014, Lustre users predict 2020 HPC platform architectures and their impact on storage. “What will the future of HPC storage look like in the National Labs? This panel discussion suggests that storage will be vectoring off in some very new and interesting directions.”