“This project will make a substantial contribution to advancing wind energy,” said Steve Hammond, NREL’s Director of Computational Science and the principal investigator on the project. “It will advance our fundamental understanding of the complex flow physics of whole wind plants, which will help further reduce the cost of electricity derived from wind energy.”
Engineers behind the Hikari HVDC power-feeding system predict it will save 15 percent in energy compared to conventional AC-based systems. “The 380-volt design reduces the number of power conversions when compared to AC voltage systems,” said James Stark, director of Engineering and Construction at the Electronic Environments Corporation (EEC), a Division of NTT FACILITIES. “What’s interesting about that,” Stark added, “is the computers themselves – the supercomputer, the blade servers, cooling units, and lighting – are really all designed to run on DC voltage. By supplying 380 volts DC to Hikari instead of having an AC supply with conversion steps, it just makes a lot more sense. That’s really the largest technical innovation.”
Loyola University Maryland has been awarded a $280,120 grant from the National Science Foundation (NSF) to build an HPC cluster that will greatly expand research opportunities for faculty and students across disciplines.
“The drive towards Exascale computing requires cooling the next generation of extremely hot CPUs, while staying within a manageable power envelope,” said Bob Bolz, HPC and Data Center Business Development at Aquila. “Liquid cooling holds the key. Aquarius is designed from the ground up to meet reliability and the feature-specific demands of high performance and high density computing. Our design goal was to reduce the cost of cooling server resources to well under 5% of overall data center usage.”
Vectorization and threading are critical to exploiting innovative hardware products such as the Intel Xeon Phi processor. Using tools early in the design and development process to identify where vectorization can be applied or improved will lead to increased performance of the overall application. Modern tools can determine what might be blocking compiler vectorization and estimate the potential gain from the work involved.
Gary Paek from Intel presented this talk at the HPC User Forum in Austin. “Traditional high performance computing is hitting a performance wall. With data volumes exploding and workloads becoming increasingly complex, the need for a breakthrough in HPC performance is clear. Intel Scalable System Framework provides that breakthrough. Designed to scale from small clusters to the world’s largest supercomputers, Intel SSF provides scalability and balance for both compute- and data-intensive applications, as well as machine learning and visualization. The design moves everything closer to the processor to improve bandwidth, reduce latency, and allow you to spend more time processing and less time waiting.”
PRACE has announced the winners of its 13th Call for Proposals for PRACE Project Access. Selected proposals will receive allocations to the following PRACE HPC resources: Marconi and MareNostrum.
“SGI and Bright Computing have been working together for the last year to provide our joint customers with enterprise level clustered infrastructure management software for production supercomputing,” said Gabriel Broner, vice president and general manager of HPC, SGI. “By partnering with Bright Computing, our customers are able to select the cluster management tool that best suits their needs.”
Today the Energy Department’s Advanced Manufacturing Office announced up to $3 million in available funding for manufacturers to use high-performance computing resources at the Department’s national laboratories to tackle major manufacturing challenges. The High Performance Computing for Manufacturing (HPC4Mfg) program enables innovation in U.S. manufacturing through the adoption of high performance computing (HPC) to advance applied science and technology in manufacturing, with an aim of increasing energy efficiency, advancing clean energy technology, and reducing energy’s impact on the environment.
Nvidia’s GPU platforms have been widely used on the training side of the Deep Learning equation for some time now. Today the company announced a new Pascal-based GPU tailor-made for the inferencing side of Deep Learning workloads. “With the Tesla P100 and now Tesla P4 and P40, NVIDIA offers the only end-to-end deep learning platform for the data center, unlocking the enormous power of AI for a broad range of industries,” said Ian Buck, general manager of accelerated computing at NVIDIA.