Today DDN announced that Yahoo! JAPAN has deployed an active archive system jointly developed by DDN and IBM Japan. The new system allows Yahoo! JAPAN to cache dozens of petabytes of data from its OpenStack Swift storage solution in a Japan-based data center, and transfer data to a U.S.-based data center at a rate of 50 TB per day – thus enabling energy cost savings of 74 percent due to lower energy rates in the United States versus Japan, while ensuring fast data access regardless of location.
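As a back-of-the-envelope check on that transfer figure, the sketch below computes the sustained network bandwidth implied by 50 TB per day (assuming decimal terabytes, i.e. 1 TB = 10^12 bytes; the article does not specify the unit convention):

```python
def required_gbps(tb_per_day: float) -> float:
    """Sustained throughput in gigabits per second for a daily transfer volume.

    Assumes decimal TB (10**12 bytes) and a full 86,400-second day.
    """
    bytes_per_day = tb_per_day * 10**12
    bits_per_second = bytes_per_day * 8 / 86_400
    return bits_per_second / 10**9

print(f"{required_gbps(50):.2f} Gbps")
```

Moving 50 TB per day works out to roughly 4.6 Gbps sustained, which is well within the reach of a dedicated trans-Pacific link.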
“Individual institutions or organizations will have opportunities to deploy storage locally and can federate their local repository into the national system,” says Dr. Greg Newby, Compute Canada’s Chief Technology Officer. “This provides enhanced privacy and sharing capabilities on a robust, country-wide solution with improved data security and back-up. This is a great solution to address the data explosion we are currently experiencing in Canada and globally.”
Today the Barcelona Supercomputing Center announced plans for MareNostrum 4, a 13.7 Petaflop supercomputer that will be 12.4 times more powerful than the current MareNostrum 3 system. In a contract valued at almost €30 million, IBM will integrate its own technologies alongside those of Lenovo, Intel, and Fujitsu into a single machine.
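The two figures quoted above are mutually consistent, as a quick division shows: 13.7 petaflops at a 12.4x speedup implies the current MareNostrum 3 delivers about 1.1 petaflops, in line with its published rating.

```python
# Sanity check of the article's figures: implied MareNostrum 3 performance.
mn4_pflops = 13.7   # announced MareNostrum 4 performance
speedup = 12.4      # announced improvement over MareNostrum 3

mn3_pflops = mn4_pflops / speedup
print(f"Implied MareNostrum 3 performance: {mn3_pflops:.2f} petaflops")
```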
“Most of the IT innovation that is happening today is with a cloud-first model,” said Joris Poort, co-founder and CEO of Rescale. “We’re building a platform that can satisfy and accelerate the ideas of the world’s top scientists and thinkers. From automotive design to drug discovery and even actual rocket science, we’re empowering our customers as leaders in their respective fields, to accomplish more and innovate faster.”
The new TOP500 list is out, and Radio Free HPC is here podcasting the scoop in their own special way. With two new systems in the TOP10, there are many different perspectives to share. “The Cori supercomputer, a Cray XC40 system installed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), slipped into the number 5 slot with a Linpack rating of 14.0 petaflops. Right behind it at number 6 is the new Oakforest-PACS supercomputer, a Fujitsu PRIMERGY CX1640 M1 cluster, which recorded a Linpack mark of 13.6 petaflops.”
“Real-time analytics and Big Data environments are extremely demanding, and the network is critical in linking together the extra high performance IBM POWER based servers and Tencent Cloud’s massive amounts of data,” said Amir Prescher, Sr. Vice President, Business Development, at Mellanox Technologies. “Tencent Cloud developed an optimized hardware/software platform to achieve new computing records, showing that Mellanox’s 100Gb/s Ethernet technology can deliver total infrastructure efficiency and improve application performance, making them ideal for Big Data applications.”
Today IBM announced new hybrid cloud all-flash storage solutions developed to modernize and transform storage deployments, providing a strong bridge to the development of cognitive applications. These new solutions and software allow clients to store their valuable data where it makes the best business sense.
Martina Naughton presented this talk at the HPC Advisory Council Spain Conference. “IBM has a strong tradition of research collaboration with academia. We go beyond the boundaries of our IBM labs to work with colleagues in universities around the world to address global grand challenge problems. We also foster collaborative research related to transformation and innovation of businesses and governments, relationships through fellowships, grants, and funding for programs of shared interest.”
Designed specifically with researchers in mind, the Birmingham Environment for Academic Research (BEAR) Cloud will augment an already rich set of IT services at the University of Birmingham and will be used by academics across all disciplines, from Medicine to Archaeology, and Physics to Theology. “We are very proud of the new system, but building a research cloud isn’t easy,” said Simon Thompson, Research Computing Infrastructure Architect in IT Services at the University of Birmingham. “We challenged a range of carefully selected partners to provide the underlying technology.”
In this podcast, the Radio Free HPC team looks at the new OpenCAPI interconnect standard. “Released this week by the newly formed OpenCAPI Consortium, OpenCAPI provides an open, high-speed pathway for different types of technology – advanced memory, accelerators, networking and storage – to more tightly integrate their functions within servers. This data-centric approach to server design, which puts the compute power closer to the data, removes inefficiencies in traditional system architectures to help eliminate system bottlenecks and can significantly improve server performance.”