Today DDN announced that Yahoo! JAPAN has deployed an active archive system jointly developed by DDN and IBM Japan. The new system allows Yahoo! JAPAN to cache dozens of petabytes of data from its OpenStack Swift storage solution in a Japan-based data center and transfer data to a U.S.-based data center at a rate of 50 TB per day – enabling energy cost savings of 74 percent, thanks to lower energy rates in the United States versus Japan, while ensuring fast data access regardless of location.
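For a sense of scale, the quoted 50 TB per day can be converted to a sustained network bandwidth. This is a rough back-of-the-envelope sketch, assuming decimal units (1 TB = 10^12 bytes) and ignoring protocol overhead, neither of which is specified in the announcement:

```python
# Back-of-the-envelope: sustained bandwidth implied by 50 TB/day.
# Assumes decimal terabytes (1 TB = 10**12 bytes); link provisioning
# and transfer overheads are not described in the announcement.
TB = 10**12                  # bytes
seconds_per_day = 24 * 3600  # 86,400

bytes_per_day = 50 * TB
gbps = bytes_per_day * 8 / seconds_per_day / 10**9  # gigabits per second

print(f"{gbps:.2f} Gbit/s sustained")  # roughly 4.6 Gbit/s
```

In other words, replicating at that volume requires a continuous trans-Pacific throughput on the order of a few gigabits per second.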
“Individual institutions or organizations will have opportunities to deploy storage locally and can federate their local repository into the national system,” says Dr. Greg Newby, Compute Canada’s Chief Technology Officer. “This provides enhanced privacy and sharing capabilities on a robust, country-wide solution with improved data security and backup. This is a great solution to address the data explosion we are currently experiencing in Canada and globally.”
“The Cobham Technical Services Opera software is helping Tokamak Energy to reduce the very high costs associated with prototyping a new fusion power plant concept,” said Paul Noonan, R&D Projects Director for ST40. “After we have built our new prototype, we hope to have assembled some profoundly exciting experimental and theoretical evidence of the viability of producing fusion power from compact, high field, spherical tokamaks.”
Using a unique computational approach to rapidly sample proteins in their natural state of gyrating, bobbing, and weaving, a research team from UC San Diego and Monash University in Australia has identified promising drug leads that may selectively combat heart disease, from arrhythmias to cardiac failure.
“With demand for graduates with AI skills booming, we’ve released the NVIDIA Deep Learning Teaching Kit to help educators give their students hands-on experience with GPU-accelerated computing. The kit — co-developed with deep-learning pioneer Yann LeCun, and largely based on his deep learning course at New York University — was announced Monday at the NIPS machine learning conference in Barcelona. Thanks to the rapid development of NVIDIA GPUs, training deep neural networks is more efficient than ever in terms of both time and resource cost. The result is an AI boom that has given machines the ability to perceive — and understand — the world around us in ways that mimic, and even surpass, our own.”
Today Mellanox announced that NIH, the U.S. National Institutes of Health’s Center for Information Technology, has selected Mellanox 100G EDR InfiniBand solutions to accelerate Biowulf, the largest data center at NIH. The project is a result of a collaborative effort between Mellanox, CSRA, Inc., DDN, and Hewlett Packard Enterprise. “The Biowulf cluster is NIH’s core HPC facility, with more than 55,000 cores. More than 600 users from 24 NIH institutes and centers will leverage the new supercomputer to enhance their computationally intensive research.”
“The complexity and high costs of architecting and maintaining streaming analytics solutions often make it difficult to get new projects off the ground. That’s part of the reason Kx, a leading provider of high-volume, high-performance databases and real-time analytics solutions, is always interested in exploring how new technologies may help it push streaming analytics performance and efficiency boundaries. The Intel Xeon Phi processor is a case in point. At SC16 in Salt Lake City, Kx used a 1.2-billion-record database of New York City taxi cab ride data to demonstrate what the Intel Xeon Phi processor could mean to distributed big data processing. And the potential cost/performance implications were quite promising.”
ANSYS, HLRS and Cray have pushed the boundaries of supercomputing by scaling ANSYS software to 172,032 cores on the Cray XC40 supercomputer, hosted at HLRS, running at 82 percent efficiency. This is nearly a 5x increase over the record set two years ago, when Fluent was scaled to 36,000 cores. “This record-setting scaling of ANSYS software on the Cray XC40 supercomputer at HLRS proves that close collaborations with customers and partners can produce exceptional results for running complex simulations,” said Fred Kohout, senior vice president and chief marketing officer at ANSYS.
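The two headline numbers can be checked with the standard definition of parallel efficiency (speedup divided by core count). This is an illustrative sketch, assuming that textbook definition; the announcement does not spell out its baseline:

```python
# Parallel-scaling arithmetic for the figures in the announcement.
# Assumes the textbook definition efficiency = speedup / core_count;
# the actual baseline used by ANSYS/HLRS/Cray is not stated.
cores = 172_032
efficiency = 0.82

# Implied speedup relative to an ideal serial baseline
speedup = efficiency * cores
print(f"implied speedup: {speedup:,.0f}x on {cores:,} cores")

# Ratio of the new core count to the earlier 36,000-core record
print(f"core-count increase: {cores / 36_000:.1f}x")  # ~4.8x, i.e. "nearly 5x"
```

The 4.8x ratio confirms the "nearly a 5x increase" claim refers to core count, not to raw performance.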
How is Hewlett Packard Enterprise reinventing the fundamental architecture on which all computers have been built for the past 60 years? In this video, HPE describes the evolution of The Machine research project – one of the largest and most complex research projects in the company’s history – and how HPE demonstrated the world’s first Memory-Driven Computing architecture.
“Storage performance has been one of the biggest challenges in developing supercomputers. To meet the demands for storage performance, IME was introduced to the Oakforest-PACS on a massive scale, the first such introduction in the world,” said Osamu Tatebe, lead, public relations, JCAHPC / professor, Center for Computational Sciences, University of Tsukuba. “We are very pleased that we could achieve effective I/O performance exceeding 1 TB per second when tens of thousands of processes write to the same file. With this new storage technology, we believe that we will be able to contribute to society with the further development of computational science, big data analysis and machine learning.”