In this video from SC16, Yugendra Guvvala, VP of Technology at RAID Inc., describes the company’s new Dashboard software. The Dashboard provides a single pane of glass to manage your high performance Lustre storage pools. “Scaling to tens of petabytes and thousands of clients – considered a best filesystem for storage by many – Lustre is a high performance storage architecture for clusters. The central component of this architecture is the Lustre shared file system, which is currently available for Linux, providing a POSIX-compliant UNIX file system interface. RAID, Inc. offers custom Lustre solutions with installation & 24/7 support.”
OpenACC is a directive-based programming model that gives C/C++ and Fortran programmers the ability to write parallel programs simply by augmenting their code with pragmas. Pragmas are advisory messages that expose optimization, parallelization, and accelerator offload opportunities to the compiler so it can generate efficient parallel code for a variety of target architectures, including AMD and NVIDIA GPUs as well as ARM, x86, Intel Xeon Phi, and IBM POWER processors.
This version of COMPSs, available today, is the latest result of the team’s work in recent years on providing a set of tools that help developers program and execute their applications efficiently on distributed computing infrastructures such as clusters, grids, and clouds. COMPSs is a task-based programming model known for notably improving the performance of large-scale applications by automatically parallelizing their execution.
Today Appentra Solutions announced that the company will participate in the Emerging Technologies Showcase at SC16. As an HPC startup, Appentra was selected for its Parallware technology, an LLVM-based tool that assists in the parallelization of scientific codes with OpenMP and OpenACC. “The new Parallware Trainer is a great tool for providing support to parallel programmers in their daily work,” said Xavier Martorell, Parallel Programming Models Group Manager at Barcelona Supercomputing Center.
Over at the ARM Connected Community, Darren Cepulis writes that the popular chip platform is now part of the OpenHPC community. As one of a series of strategic moves, the effort should help bolster ARM as a platform for high performance computing.
In this special guest feature, Bill Mannel from Hewlett Packard Enterprise writes that the upcoming Intel HPC Developer Conference in Salt Lake City is a great opportunity to learn about code modernization for the next generation of high performance computing applications. “As computing systems grow increasingly complex and new architecture designs become mainstream, training developers to write code which runs on future HPC systems will require a collaborative environment and the expertise of the best and brightest in the industry.”
The OpenACC standards group today announced several major milestones, including the addition of a new member, the National Supercomputing Center in Wuxi; the adoption of OpenACC by several major HPC applications; support for new target platforms; and expanded implementations.
“OpenStack promises to be a standard platform for creating a private cloud but it can be very difficult to configure,” said Dan Kuczkowski, Senior Vice President of Worldwide Sales at Bright Computing. “We are very pleased that Stony Brook, a longtime Bright customer, trusted in the Bright platform for HPC cluster management and decided to adopt Bright OpenStack as their private cloud standard.”
Are supercomputers practical for Deep Learning applications? Over at the Allinea Blog, Mark O’Connor writes that a recent experiment with machine learning optimization on the Archer supercomputer shows that relatively simple models run at sufficiently large scale can readily outperform more complex but less scalable models. “In the open science world, anyone running an HPC cluster can expect to see a surge in the number of people wanting to run deep learning workloads over the coming months.”
Today Microsoft released an updated version of Microsoft Cognitive Toolkit, a system for deep learning that is used to speed advances in areas such as speech and image recognition and search relevance on CPUs and Nvidia GPUs. “We’ve taken it from a research tool to something that works in a production setting,” said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of Microsoft Cognitive Toolkit.