NOAA and its partners have developed a new forecasting tool to simulate how water moves throughout the nation’s rivers and streams, paving the way for the biggest improvement in flood forecasting the country has ever seen. Launched today and run on NOAA’s powerful new Cray XC40 supercomputer, the National Water Model uses data from more than 8,000 U.S. Geological Survey gauges to simulate conditions for 2.7 million locations in the contiguous United States. The model generates hourly forecasts for the entire river network. Previously, NOAA was only able to forecast streamflow for 4,000 locations every few hours.
Today SC16 announced that the conference will feature 38 high-quality workshops that complement the overall Technical Program, deepen coverage of their subject areas, and extend the conference’s impact by providing greater depth of focus.
Today the U.S. Department of Energy announced that it will invest $16 million over the next four years to accelerate the design of new materials through the use of supercomputers. “Our simulations will rely on current petascale and future exascale capabilities at DOE supercomputing centers. To validate the predictions about material behavior, we’ll conduct experiments and use the facilities of the Advanced Photon Source, Spallation Neutron Source and the Nanoscale Science Research Centers.”
“Few fields are moving faster right now than deep learning,” writes Buck. “Today’s neural networks are 6x deeper and more powerful than just a few years ago. There are new techniques in multi-GPU scaling that offer even faster training performance. In addition, our architecture and software have improved neural network training time by over 10x in a year by moving from Kepler to Maxwell to today’s latest Pascal-based systems, like the DGX-1 with eight Tesla P100 GPUs. So it’s understandable that newcomers to the field may not be aware of all the developments that have been taking place in both hardware and software.”
Today Cycle Computing announced its continued involvement in optimizing research spearheaded by NASA’s Center for Climate Simulation (NCCS) and the University of Minnesota. Currently, a biomass measurement effort is underway in a coast-to-coast band of Sub-Saharan Africa. The study covers more than 10 million square kilometers of Africa’s trees, a swath of acreage bigger than the entirety […]
“In order to address data intensive workloads in need of higher performance for storage, TYAN takes full advantage of Intel NVMe technology to highlight hybrid storage configurations. TYAN server solutions with NVMe support can not only boost storage performance over the PCIe interface but also provide storage flexibility for customers through scale-out architecture,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation’s TYAN Business Unit.
LANL reports that a moment of inspiration during a wiring diagram review has saved more than $2 million in material and labor costs for the lab’s Trinity supercomputer.
In this Intel Chip Chat Podcast, Nidhi Chappell, the Director of Machine Learning Strategy at Intel, discusses the company’s planned acquisition of Nervana Systems to further drive Intel’s capabilities in the artificial intelligence (AI) field. “We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry standard frameworks. Nervana’s Engine and silicon expertise will advance Intel’s AI portfolio and enhance the deep learning performance and TCO of our Intel Xeon and Intel Xeon Phi processors.”
“I am honored to have been asked to drive NCSA’s continuing mission as a world-class, integrative center for transdisciplinary convergent research, education, and innovation,” said Gropp. “Embracing advanced computing and domain collaborations across the University of Illinois at Urbana-Champaign campus and ensuring scientific communities have access to advanced digital resources will be at the heart of these efforts.”
Researchers at the University of Oxford have achieved a quantum logic gate with record-breaking 99.9% precision, reaching the benchmark that theory requires for building a quantum computer. “An analogy from conventional computing hardware would be that we have finally worked out how to build a transistor with good enough performance to make logic circuits, but the technology for wiring thousands of those transistors together to build an electronic computer is still in its infancy.”
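A back-of-envelope sketch (not from the Oxford team, and using a deliberately simplified independent-error model) shows why a fidelity near 99.9% matters: with per-gate fidelity f, the chance that an N-gate circuit runs without any gate error falls off roughly as f to the power N.

```python
def circuit_success_probability(fidelity: float, n_gates: int) -> float:
    """Probability that all n_gates succeed, assuming each gate fails
    independently with probability (1 - fidelity). This is a toy model;
    real fault-tolerance thresholds depend on the error-correction scheme."""
    return fidelity ** n_gates

# At 99% fidelity, a 1,000-gate circuit almost never finishes error-free;
# at 99.9%, it still succeeds roughly a third of the time.
print(circuit_success_probability(0.99, 1000))   # on the order of 1e-5
print(circuit_success_probability(0.999, 1000))  # roughly 0.37
```

The jump from 99% to 99.9% is therefore not a marginal gain: it moves thousand-gate circuits from essentially always failing to frequently succeeding, which is why the result is described as crossing a benchmark rather than merely improving a number.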