Weather Prediction and the Scalability Challenge

“Weather prediction using high performance computing relies on having physically based models of the atmosphere that can deliver forecasts well in advance of the weather actually happening. ECMWF has embarked on a scalability program together with the NWP and climate modeling community in Europe. The talk will give an overview of the principles underlying numerical weather prediction as well as a description of the HPC related challenges that are facing the NWP and climate modeling communities today.”

Disruptive Opportunities and a Path to Exascale: A Conversation with HPC Visionary Alan Gara of Intel

“We want to encourage and support that collaborative behavior in whatever way we can, because there are a multitude of problems in government agencies and commercial entities that seem to have high performance computing solutions. Think of bringing together the tremendous computational expertise you find from the DOE labs with the problems that someone like the National Institutes of Health is trying to solve. You couple those two together and you really can create something amazing that will affect all our lives. We want to broaden their exposure to the possibilities of HPC and help that along. It’s important, and it will allow all of us in HPC to more broadly impact the world with the large systems as well as the more moderate-scale systems.”

Satoshi Matsuoka Presents: The Inevitable End of Moore’s Law

“The promising new parameter in place of the transistor count is the perceived increase in the capacity and bandwidth of storage, driven by device, architectural, as well as packaging innovations: DRAM-alternative Non-Volatile Memory (NVM) devices, 3-D memory and logic stacking evolving from VIAs to direct silicon stacking, as well as next-generation terabit optics and networks. The overall effect of this is that the trend of increasing computational intensity as advocated today will no longer result in a performance increase; rather, exploiting the memory and bandwidth capacities will be the right methodology.”
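The relationship Matsuoka points to can be illustrated with the standard roofline model, where attainable performance is the minimum of the compute peak and the product of memory bandwidth and arithmetic intensity. The sketch below is an illustration only, not from the talk; the peak FLOP rate and bandwidth figures are hypothetical example values.

```python
# Roofline-style sketch: attainable performance is capped either by peak
# compute or by memory bandwidth times arithmetic intensity. The hardware
# numbers below are hypothetical, for illustration only.

PEAK_FLOPS = 10e12       # 10 TFLOP/s peak compute (assumed)
MEM_BANDWIDTH = 500e9    # 500 GB/s memory bandwidth (assumed)

def attainable_flops(intensity_flops_per_byte: float) -> float:
    """Roofline bound: min(peak compute, bandwidth * arithmetic intensity)."""
    return min(PEAK_FLOPS, MEM_BANDWIDTH * intensity_flops_per_byte)

for intensity in (1, 5, 20, 50, 100):
    gflops = attainable_flops(intensity) / 1e9
    print(f"intensity {intensity:>3} flop/byte -> {gflops:8.0f} GFLOP/s")

# Past the ridge point (PEAK_FLOPS / MEM_BANDWIDTH = 20 flop/byte here),
# raising arithmetic intensity buys nothing: the compute roof is the limit.
# If the compute roof stagnates while memory improves, as the talk argues,
# gains come from exploiting bandwidth and capacity, not more intensity.
```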

Podcast: TACC Powers Zika Hackathon to Fight Disease

In this TACC podcast, Ari Kahn from the Texas Advanced Computing Center and Eddie Garcia from Cloudera describe a recent hackathon in Austin designed to tackle data challenges in the fight against the Zika virus. TACC provided time on the Wrangler data-intensive supercomputer as a virtual workspace for the Zika hackers.

Larry Smarr Presents: Using Supercomputers to Reveal your Inner Microbiome

“I have been collecting massive amounts of data from my own body over the last ten years, which reveals detailed examples of the episodic evolution of this coupled immune-microbial system. An elaborate software pipeline, running on high performance computers, reveals the details of the microbial ecology and its genetic components. A variety of data science techniques are used to pull biomedical insights from this large data set. We can look forward to revolutionary changes in medical practice over the next decade.”

Video: Best Practices in HPC Software Development

“Scientific code developers have increasingly been adopting software processes derived from the mainstream (non-scientific) community. Software practices are typically adopted when continuing without them becomes impractical. However, many software best practices need modification and/or customization, partly because the codes are used for research and exploration, and partly because of the combined funding and sociological challenges. This presentation will describe the lifecycle of scientific software and important ways in which it differs from other software development. We will provide a compilation of software engineering best practices that have generally been found to be useful by science communities, and we will provide guidelines for adoption of practices based on the size and the scope of the project.”
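One practice that typically survives this customization is tolerance-based regression testing, since research codes rarely produce bitwise-identical outputs across platforms. The following is a minimal sketch of that idea, not taken from the presentation; the trapezoidal-rule integrator is a hypothetical stand-in for real simulation code.

```python
# Minimal sketch of a tolerance-based regression test, a common best
# practice for scientific codes whose floating-point results are only
# reproducible to within a tolerance.
import math

def integrate_trapezoid(f, a: float, b: float, n: int = 1000) -> float:
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def test_integrate_sine():
    # The integral of sin(x) over [0, pi] is exactly 2; accept small
    # discretization error rather than demanding exact equality.
    result = integrate_trapezoid(math.sin, 0.0, math.pi)
    assert abs(result - 2.0) < 1e-5

if __name__ == "__main__":
    test_integrate_sine()
    print("regression test passed")
```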

Slimming Down Supercomputer Power Bills

The era of wringing more performance out of supercomputers simply by feeding them more power is over. Next-generation machines, such as the Summit supercomputer due to arrive at Oak Ridge National Laboratory in the next couple of years, must instead give scientists a sleek, efficient partner for making new discoveries. “If necessity is the mother of invention, we’ll have some inventions happening soon,” says Susan Coghlan, deputy division director of the Argonne Leadership Computing Facility.

Ryft: Bringing High Performance Analytics to Every Enterprise

Pat McGarry from Ryft presented this talk at the HPC User Forum in Tucson. “Years in the making, the Ryft ONE combines two proven innovations in hardware and software to optimize compute, storage and I/O performance: the Ryft Hybrid FPGA/x86 Compute Platform, which leverages a massively parallel bitwise computing architecture, and the Ryft Algorithmic Primitives (RAP) Library.”

Video: Lustre Update from Seagate Technologies

Peter Bojanic presented this talk at LUG 2016 in Portland. “At LUG 2016, Seagate announced it will incorporate Intel Enterprise Edition for Lustre (IEEL), a big data software platform, into its market-leading ClusterStor storage architecture for high-performance computing. The move will strengthen Seagate’s HPC data storage product line and provide customers with an additional choice of Lustre parallel file systems to help drive advancements in the HPC and big data market.”

Internet2 Celebrates 20 Years by Recognizing Contributions to Research

Internet2 celebrated its 20th anniversary this week at the 2016 Internet2 Global Summit in Chicago. As the operator of the nation’s largest and fastest coast-to-coast research and education infrastructure, Internet2 recognized outstanding service and contributions to its community with a set of awards. “2016 marks the 20th anniversary of the founding of Internet2, and the 2016 Global Summit is an opportune time to showcase the Internet2 community’s rich history and bright future—focused on the transformation of research and education in the next 20 years and beyond.”