“Computers are becoming an ever cheaper, more powerful tool that professionals cannot ignore. Computer simulation reproduces the behavior of natural and man-made systems to help us understand, predict, and communicate. In this series kick-off, we will show you how computer simulation is used by LLNL scientists on the world’s fastest computers. We will also show you how you can get started doing your own computer simulations with free, open-source tools for class projects or just for fun.”
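To give a taste of how simple such a starter project can be, here is a minimal sketch of a physics simulation in plain Python: a thrown ball with air drag, stepped forward with the explicit Euler method. All parameters (mass, drag coefficient, time step, launch velocity) are illustrative assumptions, not values from the series.

    # Minimal simulation sketch: a thrown ball with linear air drag,
    # integrated with the explicit Euler method.
    g = 9.81      # gravitational acceleration, m/s^2
    k = 0.1       # linear drag coefficient, kg/s (assumed)
    m = 0.145     # mass, kg (roughly a baseball; assumed)
    dt = 0.001    # time step, s (assumed)

    x, y = 0.0, 0.0       # initial position, m
    vx, vy = 30.0, 30.0   # initial velocity, m/s (assumed)
    t = 0.0

    while y >= 0.0:
        ax = -(k / m) * vx        # drag decelerates horizontal motion
        ay = -g - (k / m) * vy    # gravity plus drag vertically
        x, y = x + vx * dt, y + vy * dt
        vx, vy = vx + ax * dt, vy + ay * dt
        t += dt

    print(f"Range: {x:.1f} m after {t:.2f} s of flight")

Shrinking dt (or swapping in a higher-order integrator) trades speed for accuracy, which is the same tension that drives simulation work at every scale.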
Many initially thought that liquids and servers should never mix, but what if the server cooling is done in a completely controlled and secured environment? Liquid submersion cooling has the potential to revolutionize the design, construction, and energy consumption of data centers around the world.
“The main topics for our April 7-9 meeting in Santa Fe are industrial partnerships with large HPC centers and how they’re working, with perspectives from the U.S., France and the UK. We’ll also take another hard look at what’s happening with processors, coprocessors and accelerators and at potential disruptive technologies, as well as zeroing in on the HPC storage market and trends and the CORAL procurement that involves Oak Ridge, Argonne and Livermore.”
“Systems like Argonne’s Mira, an IBM Blue Gene/Q system with nearly a million cores, can enable breakthroughs in science, but to use them productively requires expertise in computer architectures, parallel programming, mathematical software, data management and analysis, performance analysis tools, software engineering, and so on. Our training program exposes the participants to all those topics and provides hands-on exercises for experimenting with most of them.”
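As a concrete illustration of the kind of hands-on parallel-programming exercise such a training program might include (this sketch is a generic example, not taken from the program itself), here is the classic estimate of pi via the midpoint rule, with the work split across MPI ranks using mpi4py:

    # Estimate pi by distributing the midpoint-rule sum for the
    # integral of 4/(1+x^2) over [0,1] across MPI ranks.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    n = 10_000_000    # total number of intervals (assumed)
    h = 1.0 / n

    # each rank sums a strided subset of the terms
    local = 0.0
    for i in range(rank, n, size):
        xm = h * (i + 0.5)
        local += 4.0 / (1.0 + xm * xm)
    local *= h

    # combine the partial sums on rank 0
    pi = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"pi ~= {pi:.10f} using {size} ranks")

Run it with, for example, mpiexec -n 4 python pi.py; the same decomposition idea scales from a laptop to a machine like Mira.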
Fred Streitz from Lawrence Livermore National Lab presented this talk at the Stanford HPC Conference. “The HPC Innovation Center bridges American industry’s growing need for advanced solutions to complex challenges in research and development with LLNL’s forefront supercomputing and disruptive scientific capabilities.”
In this whitepaper from Adaptive Computing, we learn about the new concept of Big Workflow and how it directly addresses the needs of critical, data-intensive applications. By creating more intelligence around data control, Big Workflow gives big data, HPC, and cloud environments a way to interoperate, and to do so dynamically based on what applications are running.
Ramesh Balakrishnan from Argonne presented this talk at the Stanford HPC Conference. “The main scientific challenge in fluid dynamics remains that of gaining better insight into the physics of turbulence and its role in the transfer of momentum, heat, and mass in engineering applications, which include the aerodynamics of high-lift devices, chemically reacting flows in combustion systems such as combustors of jet engines, and the aeroacoustics of low- and high-speed flows.”
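For readers who want the equations behind that challenge: the momentum transfer Balakrishnan describes is governed by the Navier-Stokes equations, which for an incompressible fluid read, in standard notation (this context is added here, not taken from the talk):

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
      = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u},
    \qquad \nabla \cdot \mathbf{u} = 0,

where \mathbf{u} is the velocity field, p the pressure, \rho the density, and \nu the kinematic viscosity. Turbulence is hard precisely because the cost of resolving every eddy these equations admit grows rapidly with Reynolds number, which is why direct simulation of engineering flows demands leadership-class machines.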