In this video, Dr. Carl J. Williams, Deputy Director of the Physical Measurement Laboratory at the National Institute of Standards and Technology within the United States Department of Commerce, reviews the National Strategic Computing Initiative. Issued by Executive Order, the initiative aims to maximize benefits of high-performance computing research, development and deployment.
“Over two days we’ll delve into a wide range of interests and best practices – in applications, tools, and techniques – and share new insights on the trends, technologies and collaborative partnerships that foster this robust ecosystem. Designed to be highly interactive, the open forum will feature industry notables in keynotes, technical sessions, workshops and tutorials. These highly regarded subject matter experts (SMEs) will share their works and wisdom covering everything from established HPC disciplines to emerging usage models, from old-school architectures and breakthrough applications to pioneering research and provocative results. Plus a healthy smattering of conversation and controversy on endeavors in Exascale, Big Data, Artificial Intelligence, Machine Learning and much, much more!”
“I’m pleased to have the opportunity to lead this important Council,” said Dr. J. Michael McQuade of United Technologies Corporation, who will serve as the first Chair of the ECP Industry Council. “Exascale level computing will help industry address ever more complex, competitively important problems, ones which are beyond the reach of today’s leading edge computing systems. We compete globally for scientific, technological and engineering innovations. Maintaining our lead at the highest level of computational capability is essential for our continued success.”
HPC4Mfg will host its first annual High Performance Computing for Manufacturing Industry Engagement Day on March 2-3 in San Diego. With a theme of “Spurring Innovation in U.S. Manufacturing Through Advanced Computing,” the conference will bring together representatives from U.S. manufacturing, national laboratories, universities, and consortiums to discuss recent advancements in manufacturing realized through the application of HPC, and how leveraging HPC expertise through public-private partnerships has lowered the risk of adoption.
Argonne has selected 10 computational science and engineering research projects for its Aurora Early Science Program starting this month. Aurora, a massively parallel, manycore Intel-Cray supercomputer, will be ALCF’s next leadership-class computing resource and is expected to arrive in 2018. The Early Science Program helps lay the path for hundreds of other users by doing actual science, using real scientific applications, to ready a future machine. “As with any bleeding edge resource, there’s testing and debugging that has to be done,” said ALCF Director of Science Katherine Riley.
“The National Renewable Energy Laboratory (NREL), located at the foothills of the Rocky Mountains in Golden, Colorado, is the nation’s primary laboratory for research and development of renewable energy and energy efficiency technologies. NREL is continuing an active research and development program for modeling of wind farm interactions and mesoscale dynamics within the National Wind Technology Center. This R&D program has an opening for one full-time engineer in wind farm modeling and mesoscale research.”
Ian Foster and other researchers in CODAR are working to overcome the gap between computation speed and the limitations in the speed and capacity of storage by developing smarter, more selective ways of reducing data without losing important information. “Exascale systems will be 50 times faster than existing systems, but it would be too expensive to build out storage that would be 50 times faster as well,” said Foster. “This means we no longer have the option to write out more data and store all of it. And if we can’t change that, then something else needs to change.”
“This talk reports efforts on refactoring and optimizing the climate and weather forecasting programs – CAM and WRF – on Sunway TaihuLight. To map the large code base to the millions of cores on the Sunway system, OpenACC-based refactoring was taken as the major approach, with source-to-source translator tools applied to exploit the most suitable parallelism for the CPE cluster and to fit the intermediate variable into the limited on-chip fast buffer.”
In this special guest feature, James Reinders looks at Intel Xeon Phi processors from a programmer’s perspective. “How does a programmer think of Intel Xeon Phi processors? In this brief article, I will convey how I, as a programmer, think of them. In subsequent articles, I will dive a bit more into details of various programming modes, and techniques employed for some key applications. In this article, I will endeavor to not stray into deep details – but rather offer an approachable perspective on how to think about programming for Intel Xeon Phi processors.”
Cheyenne is a new 5.34-petaflops, high-performance computer built for NCAR by SGI. Cheyenne will be a critical tool for researchers across the country studying climate change, severe weather, geomagnetic storms, seismic activity, air quality, wildfires, and other important geoscience topics. In this video, Brian Vanderwende from UCAR describes typical workflows in the NCAR/CISL Cheyenne HPC environment as well as performance […]