Today Nimbix announced plans to use Infomart Dallas to support the infrastructure needs of its upgraded HPC cloud platform, JARVICE 2.0. Infomart Dallas provides Nimbix with a high-density data center complete with industry-leading connectivity options, access to the core Dallas/Fort Worth (DFW) network hub, and the low-cost power it needs to deliver its innovative HPC solutions.
There are just two weeks left to submit papers to PASC16. The event takes place June 8-10, 2016 in Lausanne, Switzerland.
In this Chicago Tonight video, Katrin Heitmann from Argonne National Lab describes one of the most complex simulations of the evolution of the universe ever created. “What we want to do now with these simulations is exactly create this universe in our lab. So we build this model and we put it on a computer and evolve it forward, and now we have created a universe that we can look at and compare it to the real data.”
In this video, Dr. Michael Karasick from IBM moderates a panel discussion on Machine Learning. “The success of cognitive computing will not be measured by Turing tests or a computer’s ability to mimic humans. It will be measured in more practical ways, like return on investment, new market opportunities, diseases cured and lives saved.”
Rob Futrick presented this talk at SC15. “Cycle Computing’s CycleCloud software suite is the leading cloud orchestration, provisioning, and data management platform for Big Compute, Big Data, and large technical computing applications running on any public, private, or internal environment. For years, customers in Life Sciences, Manufacturing, Financial Services, and other Engineering and Research areas have used CycleCloud software to manage some of the world’s largest production cloud deployments.”
In this video from the Cycle Computing HPC in the Cloud Educational Series, Jeff Layton, HPC Principal Architect at Amazon Web Services, explains concepts and options around using storage in the AWS Cloud.
In this video from SC15, Dell’s Onur Celebioglu discusses why HPC is now important to a broader group of use cases. He also provides an overview of HPC for research, life sciences and manufacturing. Participants learned more about why HPC, big data and cloud are converging, and how Dell solves challenges in our HPC engineering lab and through collaborative work with other leading technology partners and research institutions.
Ruud van der Pas from Oracle presented this talk at OpenMPcon. “Unfortunately it is a very widespread myth that OpenMP Does Not Scale – a myth we intend to dispel in this talk. Every parallel system has its strengths and weaknesses. This is true for clustered systems, but also for shared memory parallel computers. While nobody in their right mind would consider sending one zillion single byte messages to a single node in a cluster, people do the equivalent in OpenMP and then blame the programming model. Also, shared memory parallel systems have some specific features that one needs to be aware of. Few do though. In this talk we use real-life case studies based on actual applications to show why an application did not scale and what was done to change this. More often than not, a relatively simple modification, or even a system level setting, makes all the difference.”
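The “zillion single byte messages” analogy maps naturally onto a common OpenMP anti-pattern: opening a fresh parallel region on every iteration of an outer loop, paying the fork/join cost thousands of times over. A minimal sketch of the mistake and the simple fix the talk alludes to (a hypothetical example for illustration, not code from the presentation):

```c
#include <stdlib.h>

/* Anti-pattern: the parallel region sits inside the outer loop,
   so the runtime forks and joins a team of threads n times --
   the shared-memory equivalent of a zillion tiny messages. */
void scale_bad(double *a, int n, double s) {
    for (int i = 0; i < n; i++) {
        #pragma omp parallel for   /* n separate fork/join cycles */
        for (int j = 0; j < n; j++)
            a[i * n + j] *= s;
    }
}

/* The "relatively simple modification": hoist the parallel region
   to the outer loop, so threads are forked once and each gets a
   large contiguous chunk of work. */
void scale_good(double *a, int n, double s) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            a[i * n + j] *= s;
}
```

Both versions compute the same result; the difference is purely in parallel overhead, which is exactly why the programming model, rather than the program, often takes the blame.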
Steve Cooper from One Stop Systems presented this talk at SC15. “The OSS GPUltima is the densest and most cost-effective petaflop solution for scalable data center infrastructures,” said Steve Cooper, OSS CEO. “Supporting up to 128 NVIDIA Tesla K80 GPU accelerators, the OSS GPUltima shows tremendous performance gains in many applications like oil and gas exploration, financial calculations, and medical devices. Providing ample cooling and power to accommodate this many high-end cards, it surpasses other devices in performance. In addition, the OSS GPUltima is a preconfigured rack of GPUs, servers and interconnections already integrated and the whole unit is tested and ready to add application software.”
“Developers of modern HPC applications face a challenge when scaling out their hybrid (MPI/OpenMP) applications. As cluster sizes continue to grow, the amount of analysis data collected can easily become overwhelming when going from 10s to 1000s of ranks and it’s tough to identify which are the key metrics to track. There is a need for a lightweight tool that aggregates the performance data in a simple and intuitive way, provides advice on next optimization steps, and hones in on performance issues. We’ll discuss a brand new tool that helps quickly gather and analyze statistics up to 100,000 ranks. We’ll give examples of the type of pertinent information collected at high core counts, including memory and counter usage, MPI and OpenMP imbalance analysis, and total communication vs. computation time. We’ll work through analyzing an application and effective ways to manage the data.”
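The abstract doesn’t name the tool’s internal formulas, but imbalance analysis of the kind described usually reduces per-rank timings to a single aggregate rather than shipping 100,000 raw profiles around. A hypothetical sketch of one widely used metric (an assumption for illustration, not the tool’s actual method):

```c
/* Load-imbalance metric over per-rank compute times:
 *
 *   imbalance = (t_max - t_avg) / t_max
 *
 * 0.0 means all ranks took the same time (perfectly balanced);
 * values approaching 1.0 mean a single slow rank dominates and
 * the others sit idle at the next synchronization point. */
double imbalance(const double *t, int nranks) {
    double tmax = t[0], tsum = 0.0;
    for (int r = 0; r < nranks; r++) {
        if (t[r] > tmax)
            tmax = t[r];
        tsum += t[r];
    }
    double tavg = tsum / nranks;
    return (tmax - tavg) / tmax;
}
```

A summary number like this is what makes the “lightweight” aggregation feasible at scale: each rank contributes one timing, and the reduction highlights where a closer look is warranted.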