Today Eurotech announced the installation of the DEEP “Booster”, a tightly coupled cluster of manycore coprocessors, at Jülich Supercomputing Centre.
In this special guest feature, John Kirkley writes that Intel is using its new Omni-Path Architecture as a foundation for supercomputing systems that will scale to 200 Petaflops and beyond. “With its ability to scale to tens and eventually hundreds of thousands of nodes, the Intel Omni-Path Architecture is designed for tomorrow’s HPC workloads. The platform has its sights set squarely on Exascale performance while supporting more modest, but still demanding, future HPC implementations.”
Today IBM Research announced that, working with alliance partners at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering, it has produced the semiconductor industry’s first 7nm node test chips with functional transistors. According to IBM, the breakthrough underscores the company’s continued leadership and long-term commitment to semiconductor technology research.
Over at Science Advances, a newly published paper describes a high-efficiency computing paradigm called memcomputing. Modeled after the human brain, a memprocessor processes and stores information within the same units by means of their mutual interactions. Now, researchers have built a working prototype.
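The defining idea, that a single element both computes and stores, can be illustrated with a toy model. The sketch below is a hypothetical memristor-like element (not the memprocessor circuit from the paper): its conductance shapes its output, and the very signals it processes update that conductance, so processing and storage coincide in one unit.

```python
# Toy model of a memristor-like element: a hypothetical sketch of the
# memcomputing idea (compute and store in the same unit). This is NOT the
# memprocessor circuit described in the paper.

class MemElement:
    """The element's conductance is simultaneously its memory and the
    quantity that shapes its output: processing and storage coincide."""

    def __init__(self, g=0.5, rate=0.1):
        self.g = g          # conductance: the stored state
        self.rate = rate    # how strongly signals reshape the state

    def apply(self, v):
        i = self.g * v      # compute: output current depends on stored state
        # store: the same input signal updates the state, clipped to [0, 1]
        self.g = min(1.0, max(0.0, self.g + self.rate * v))
        return i

elem = MemElement()
for v in (1.0, 1.0, -0.5):
    out = elem.apply(v)
    print(f"input {v:+.1f} -> output {out:+.3f}, state g = {elem.g:.3f}")
```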
“Early in February, Barcelona Supercomputing Center (BSC) successfully deployed the Mont-Blanc prototype. After three years of intensive research effort, the team installed a two-rack prototype which is now available to the Mont-Blanc consortium partners. This has been a formidable challenge, as this is the first time that a large HPC system based on mobile embedded technology has been deployed and made fully operational for a scientific community composed of scientists from six of the most important research centers in Europe.”
“In Deep Learning, what we do is try to minimize the amount of hand engineering and get the neural nets to learn, more or less, everything. Instead of programming computers to do particular tasks, you program the computer to know how to learn. And then you can give it any old task, and the more data and the more computation you provide, the better it will get.”
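A minimal sketch of “programming the computer to know how to learn”: one generic gradient-descent loop, written once, fits whatever input/output pairs it is given, with nothing task-specific hand-engineered. This is a toy NumPy illustration under those assumptions, not the speaker’s actual system.

```python
# Toy illustration of learning instead of programming: a generic training
# loop that fits whatever (X, y) pairs it receives. Assumes only NumPy.
import numpy as np

def learn(X, y, steps=5000, lr=0.5):
    """Fit a tiny one-hidden-layer network; the code never changes per task,
    only the data does."""
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(X.shape[1], 8))     # input -> hidden weights
    W2 = rng.normal(size=(8, 1))              # hidden -> output weights
    for _ in range(steps):
        h = np.tanh(X @ W1)                   # forward pass, hidden layer
        p = 1 / (1 + np.exp(-(h @ W2)))       # sigmoid output in (0, 1)
        d_out = (p - y) / len(X)              # cross-entropy gradient at output
        d_h = (d_out @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
        W2 -= lr * (h.T @ d_out)              # gradient-descent updates
        W1 -= lr * (X.T @ d_h)
    return lambda Z: 1 / (1 + np.exp(-(np.tanh(Z @ W1) @ W2)))

# The same learner handles different tasks as the data changes:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
xor = learn(X, np.array([[0], [1], [1], [0]], dtype=float))
print(np.round(xor(X).ravel(), 2))            # should approximate [0, 1, 1, 0]
```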
“This radical new approach will fuse memory and storage, flatten data hierarchies, bring processing closer to data, embed security throughout the hardware and software stacks and enable management of the system at scale. Learn more by joining a panel of senior HP Labs researchers working on The Machine as they offer a closer look at what it takes to make it happen.”