“With NVIDIA GPU technology on IBM Cloud, we are one step closer to offering supercomputing performance on a pay-as-you-go basis, which makes this new approach to tackling big data problems accessible to customers of all sizes,” says Jerry Gutierrez, HPC leader for SoftLayer, an IBM Company. “We’re at an inflection point in our industry, where GPU technology is opening the door for the next wave of breakthroughs across multiple industries.”
“I have been collecting massive amounts of data from my own body over the last ten years, which reveals detailed examples of the episodic evolution of this coupled immune-microbial system. An elaborate software pipeline, running on high performance computers, reveals the details of the microbial ecology and its genetic components. A variety of data science techniques are used to pull biomedical insights from this large data set. We can look forward to revolutionary changes in medical practice over the next decade.”
Simulations run on the Piz Daint supercomputer have revealed a large magma reservoir directly beneath Ulleung, a small South Korean island. Researchers expect no harm to humans, but the origin of the magma pool remains unclear.
“Enlisting the help of World Community Grid volunteers will enable us to computationally evaluate over 20 million compounds in just the initial phase and potentially up to 90 million compounds in future phases,” said Carolina Horta Andrade, Ph.D., adjunct professor at the Federal University of Goiás in Brazil and the lead researcher on the OpenZika project. “Running the OpenZika project on World Community Grid will allow us to greatly expand the scale of our project, and it will accelerate the rate at which we can obtain the results toward an antiviral drug for the Zika virus.”
Today XSEDE announced that Dr. Pamela McCauley has been named a plenary speaker for the XSEDE16 conference. In her plenary address, McCauley will discuss the impact of innovation on individuals, nations, and global society.
“Scientific code developers have increasingly been adopting software processes derived from the mainstream (non-scientific) community. Software practices are typically adopted when continuing without them becomes impractical. However, many software best practices need modification and/or customization, partly because the codes are used for research and exploration, and partly because of the combined funding and sociological challenges. This presentation will describe the lifecycle of scientific software and important ways in which it differs from other software development. We will provide a compilation of software engineering best practices that have generally been found to be useful by science communities, and we will provide guidelines for adoption of practices based on the size and the scope of the project.”
Steve Oberlin, chief technology officer for accelerated computing at NVIDIA, will give two NCSA 30th Anniversary Featured Lectures on May 26. The morning talk is tailored for NCSA staff, Computer Science, and Electrical and Computer Engineering students and faculty. The second talk is open to the public.
Any performance improvements that could be wrung out of supercomputers simply by adding more power have long since been exhausted. The next generation of machines, such as the Summit supercomputer slated to arrive at Oak Ridge National Laboratory in the next couple of years, demands new approaches that will give scientists a sleek, efficient partner in making new discoveries. “If necessity is the mother of invention, we’ll have some inventions happening soon,” says Susan Coghlan, deputy division director of the Argonne Leadership Computing Facility.
D-Wave Systems will host a three-hour seminar on Quantum Computing at ISC 2016. Designed to teach HPC users more about quantum computing and how it might be applied to their most complex computing problems, the no-cost event takes place June 20 at the Frankfurt Marriott.
Getting started with HPC can be a challenge for SMEs, but managing a cluster doesn’t have to be a struggle. IBM’s Platform Computing group has been helping users stand up and run clusters efficiently for years. Now, with the recently announced IBM Platform LSF Suites for Workgroups and HPC, the company has made it easier than ever to kick the tires on High Performance Computing. “So basically, we would give you all the tools that would allow you to easily migrate from a loose collection of workstations to a small cluster environment. And we would handle the bare metal provisioning and then installing the software that you need to really manage your workload.”