The OpenFog Consortium was founded over one year ago to accelerate adoption of fog computing through an open, interoperable architecture. The newly published OpenFog Reference Architecture is a high-level framework that will lead to industry standards for fog computing. The OpenFog Consortium is collaborating with standards development organizations such as IEEE to generate rigorous user, functional and architectural requirements, plus detailed application programming interfaces (APIs) and performance metrics to guide the implementation of interoperable designs.
Today Cycle Computing announced that the HyperXite team is using CycleCloud software to manage Hyperloop simulations using ANSYS Fluent on the Azure Cloud. “Our mission is to optimize and economize the transportation of the future, and Cycle Computing has made that endeavor so much easier,” said Nima Mohseni, Simulation Lead, HyperXite. “We absolutely require a solution that can compress and condense our timeline while providing the powerful computational results we require. Thank you to Cycle Computing for making a significant difference in our ability to complete our work.”
“Explore how Singularity liberates non-privileged users and host resources (such as interconnects, resource managers, file systems, accelerators …), allowing users to take full control to set up and run in their native environments. This talk explores how Singularity combines software packaging models with minimalistic containers to create very lightweight application bundles which can be executed completely contained within their own environment, or used to interact directly with the host file systems at native speeds. A Singularity application bundle can be as simple as a single binary application or as complicated as an entire workflow, and is as flexible as you need.”
“Available on GitHub as Open Source, the Batch Shipyard toolkit enables easy deployment of batch-style Dockerized workloads to Azure Batch compute pools. Azure Batch enables you to run parallel jobs in the cloud without having to manage the infrastructure. It’s ideal for parametric sweeps, Deep Learning training with NVIDIA GPUs, and simulations using MPI and InfiniBand.”
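Batch Shipyard drives Azure Batch through declarative configuration files rather than hand-managed infrastructure. As a rough illustration only (the exact file names and schema should be checked against the Batch Shipyard documentation, and the job id, container image, and command below are hypothetical), a jobs configuration describing a Dockerized MPI-style task looks something like:

```json
{
  "job_specifications": [
    {
      "id": "solver-sweep",
      "tasks": [
        {
          "image": "myregistry/mysolver:latest",
          "command": "mpirun -np 16 /opt/solver/run.sh"
        }
      ]
    }
  ]
}
```

Pools and jobs are then managed with the toolkit's CLI (commands along the lines of `shipyard pool add` and `shipyard jobs add`, per the project README), leaving compute-pool provisioning and container placement to Batch Shipyard.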
“Our collaboration with Cycle Computing enables the ANSYS Enterprise Cloud to meet the elastic capacity and security requirements of enterprise customers,” said Ray Milhem, vice president, Enterprise Solutions and Cloud, ANSYS. “CycleCloud has run some of the largest Cloud Big Compute and Cloud HPC projects in the world, and we are excited to bring their associated, proven software capability to our global customers with the ANSYS Enterprise Cloud.”
“Over two days we’ll delve into a wide range of interests and best practices – in applications, tools and techniques – and share new insights on the trends, technologies and collaborative partnerships that foster this robust ecosystem. Designed to be highly interactive, the open forum will feature industry notables in keynotes, technical sessions, workshops and tutorials. These highly regarded subject matter experts (SMEs) will share their work and wisdom covering everything from established HPC disciplines to emerging usage models, from old-school architectures and breakthrough applications to pioneering research and provocative results. Plus a healthy smattering of conversation and controversy on endeavors in Exascale, Big Data, Artificial Intelligence, Machine Learning and much, much more!”
Today Univa announced the general availability of its Unisight v4.1 product, providing simple and extensible metric collection for all types of Univa Grid Engine data, including NVIDIA GPUs and software licenses. Unisight v4.1 is a comprehensive monitoring and reporting tool that gives Grid Engine cluster admins the ability to measure resource utilization and use facts to plan additional server and application purchases. “With Unisight v4.1, organizations take an important step toward improving data center automation choices by understanding infrastructure utilization and workflow,” said Fritz Ferstl, CTO and Business Development, EMEA at Univa. “With built-in reports, customers can monitor resource usage – including software licenses – to obtain the deep insights required to make informed long-term IT strategy and budget decisions, from server architecture to memory requirements.”
“Billed as an exposition into ‘The Future of Cloud HPC Simulation,’ the event brought together experts in high-performance computing and simulation, cloud computing technologists, startup founders, and VC investors across the technology landscape. In addition to product demonstrations with Rescale engineers, including the popular Deep Learning workshop led by Mark Whitney, Rescale Director of Algorithms, booths featuring ANSYS, Microsoft Azure, Data Collective, and Microsoft Ventures offered interactive sessions for attendees.”
Dr. Umit Catalyurek from Georgia Institute of Technology presented this talk as part of the USC Big Data to Knowledge series. “This lecture will be a brief crash course on computer architecture, high performance computing and parallel computing. We will, again very briefly, discuss how to classify computer architectures and applications, and what to look for in applications to achieve the best performance on different architectures.”
In this podcast, the Radio Free HPC team speaks to our special guest for the week: Binnie Coppersmith, also known as Henry’s Mom. It’s Binnie’s 80th birthday, and Dan wants to know once and for all if Henry is an alien, or at least why he is the way he is. After that, we look at why the UberCloud has received $1.7 Million in Pre-A Series funding. It’s great news for HPC in the Cloud.