Attendees of SC16 who are interested in open source data management will have plenty of opportunities to learn about the integrated Rule-Oriented Data System (iRODS) and the new iRODS 4.2, which will be released just in time for the conference.
“The results of DDN’s annual HPC Trends Survey reflect very accurately what HPC end users tell us and what we are seeing in their data center infrastructures. The use of private and hybrid clouds continues to grow although most HPC organizations are not storing as large a percentage of their data in public clouds as they anticipated even a year ago. Performance remains the top challenge, especially when handling mixed I/O workloads and resolving I/O bottlenecks.”
“When we started the marketplace three years ago, the service we offered was all manual,” explained Wolfgang Gentzsch, who founded UberCloud together with Burak Yenier in 2012. “But even then we were developing our HPC software container technology based on Docker, which today provides a fully automated software packaging and porting environment and allows users to access their engineering workflow within seconds, at their fingertips, in any cloud. They don’t have to learn how to handle a new cloud user platform, because the software container provides a look and feel identical to the engineer’s desktop.”
Engineers and scientists can now access HPC resources more easily thanks to the Fortissimo Marketplace, a new platform for brokering high-performance computing (HPC) services. The cloud-based marketplace offers small manufacturing businesses fast and convenient access to supercomputing services.
Professor Mark Parsons, project coordinator for the Fortissimo project, stated: “We know that companies that use high-performance computing and high-performance data analytics really seek clear economic and business benefits from doing so. However, we also know that far too few companies actually use these technologies.”
Gary Grider from LANL presented this talk at the Storage Developer Conference. “MarFS is a Near-POSIX File System using cloud storage for data and many POSIX file systems for metadata. Extreme HPC environments require that MarFS scale POSIX namespace metadata to trillions of files, and billions of files in a single directory, while storing the data in efficient massively parallel ways in industry standard erasure protected cloud style object stores.”
“To reinforce and continue our pioneering work on fog computing, which started in 2008, we pursue synergies between leading technology companies and the academic and scientific communities,” said Mario Nemirovsky, Network Processors Manager at BSC. “By collaborating with the OpenFog Consortium, we will be able to contribute to the consolidation of an IoT platform for interoperability across consumers, business, industry and research. We are looking forward to a constructive and fruitful collaboration with all OpenFog members.”
Today Microsoft released an updated version of Microsoft Cognitive Toolkit, a system for deep learning that is used to speed advances in areas such as speech and image recognition and search relevance on CPUs and Nvidia GPUs. “We’ve taken it from a research tool to something that works in a production setting,” said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of Microsoft Cognitive Toolkit.
The Dell HPC Community at SC16 has posted its meeting agenda. “Blair Bethwaite from Monash University will present OpenStack for HPC at Monash. After that, Josh Simons from VMware will describe the latest technologies in HPC virtualization.” The event takes place Saturday, Nov. 12 at the Radisson Hotel in Salt Lake City.
Designed specifically with researchers in mind, the Birmingham Environment for Academic Research (BEAR) Cloud will augment an already rich set of IT services at the University of Birmingham and will be used by academics across all disciplines, from Medicine to Archaeology, and Physics to Theology. “We are very proud of the new system, but building a research cloud isn’t easy,” said Simon Thompson, Research Computing Infrastructure Architect in IT Services at the University of Birmingham. “We challenged a range of carefully selected partners to provide the underlying technology.”
In this video from the Microsoft Ignite Conference, Tejas Karmarkar describes how to run your HPC Simulations on Microsoft Azure – with UberCloud container technology. “High performance computing applications are some of the most challenging to run in the cloud due to requirements that can include fast processors, low-latency networking, parallel file systems, GPUs, and Linux. We show you how to run these engineering, research and scientific workloads in Microsoft Azure with performance equivalent to on-premises. We use customer case studies to illustrate the basic architecture and alternatives to help you get started with HPC in Azure.”