Video: Accelerating Cognitive Workloads with Machine Learning

In this video, Ruchir Puri, an IBM Fellow at the IBM Thomas J. Watson Research Center, talks about building large-scale big data systems and delivering real-time solutions, such as using machine learning to predict drug reactions. “There is a need for systems that provide greater speed to insight — for data and analytics workloads to help businesses and organizations make sense of the data, to outthink competitors as we usher in a new era of Cognitive Computing.”

Video: Microsoft Azure for Engineering Analysis and Simulation

Tejas Karmarkar from Microsoft presented this talk at SC15. “Azure provides on-demand compute resources that enable you to run large parallel and batch compute jobs in the cloud. Extend your on-premises HPC cluster to the cloud when you need more capacity, or run work entirely in Azure. Scale easily and take advantage of advanced networking features such as RDMA to run true HPC applications using MPI to get the results you want, when you need them.”
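The talk centers on running MPI workloads on Azure's RDMA-capable instances. As a rough, generic illustration (nothing below is Azure-specific), a minimal MPI program of the kind one would scale out on such a cluster looks like this:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Report which node each rank landed on. */
        char name[MPI_MAX_PROCESSOR_NAME];
        int len;
        MPI_Get_processor_name(name, &len);
        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with, say, mpirun -np 16, each rank prints the host it runs on, which is a quick sanity check that a cloud cluster is actually wired together before submitting real work.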

Video: Vectorization Advisor in Action for Computer-Aided Formulation

In this video from the Intel HPC Developer Conference at SC15, Kevin O’Leary from Intel presents: Vectorization Advisor in Action for Computer-Aided Formulation. “The talk will focus on a step-by-step walkthrough of optimizations for an industry code by using the new Vectorization Advisor (as part of Intel® Advisor XE 2016). Using this tool, HPC experts at UK Daresbury Lab were able to spot new SIMD modernization and optimization opportunities in the DL_MESO application – an industry engine currently used by “computer-aided formulation” companies like Unilever.”
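For readers unfamiliar with the kind of opportunity such a tool surfaces, here is a small sketch; the kernel and names are illustrative, not taken from DL_MESO:

    #include <stddef.h>

    /* A streaming kernel the compiler can turn into SIMD instructions
     * once aliasing is ruled out. */
    void scale_add(float *restrict out, const float *restrict a,
                   const float *restrict b, float s, size_t n) {
        /* 'restrict' promises the arrays do not overlap; the pragma
         * (OpenMP 4.0) makes the vectorization request explicit. */
        #pragma omp simd
        for (size_t i = 0; i < n; i++)
            out[i] = a[i] * s + b[i];
    }

A vectorization report, or a tool like the Advisor, confirms whether the loop was vectorized and, if not, points at the dependency or memory-access pattern that blocked it.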

Video: An Overview of Supercomputing

“Computers are an invaluable tool for most scientific fields. They are used to process measurement data and to build simulation models of, for example, the climate or the universe. Brian Vinter talks about what makes a computer a supercomputer, and why it is so hard to build and program supercomputers.”

Accelerating Machine Learning with Open Source Warp-CTC

Today Baidu’s Silicon Valley AI Lab (SVAIL) released Warp-CTC open source software for the machine learning community. Warp-CTC is an implementation of the CTC algorithm for CPUs and NVIDIA GPUs. “According to SVAIL, Warp-CTC is 10-400x faster than current implementations. It makes end-to-end deep learning easier and faster so researchers can make progress more rapidly.”
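For context on what CTC itself computes, below is the textbook forward (alpha) recursion for one sequence in plain C. This is a sketch of the math only, not Warp-CTC's API, and it works in probability space for clarity where a production implementation would use log space:

    #include <math.h>
    #include <stdlib.h>

    /* probs[t*K + k]: per-frame class probabilities, class 0 = blank.
     * labels: the target sequence of length L (no blanks).
     * Returns the negative log-likelihood of the labeling. */
    double ctc_loss(const double *probs, int T, int K,
                    const int *labels, int L) {
        int S = 2 * L + 1;            /* labels with blanks interleaved */
        int *ext = malloc(S * sizeof(int));
        for (int i = 0; i < L; i++) { ext[2*i] = 0; ext[2*i+1] = labels[i]; }
        ext[S - 1] = 0;

        double *alpha = calloc((size_t)T * S, sizeof(double));
        alpha[0] = probs[ext[0]];                 /* start with blank */
        if (S > 1) alpha[1] = probs[ext[1]];      /* or first label   */

        for (int t = 1; t < T; t++)
            for (int s = 0; s < S; s++) {
                double a = alpha[(t-1)*S + s];
                if (s > 0) a += alpha[(t-1)*S + s - 1];
                /* Skipping a blank is allowed only between two
                 * different labels. */
                if (s > 1 && ext[s] != 0 && ext[s] != ext[s-2])
                    a += alpha[(t-1)*S + s - 2];
                alpha[t*S + s] = a * probs[t*K + ext[s]];
            }

        double p = alpha[(T-1)*S + S - 1];
        if (S > 1) p += alpha[(T-1)*S + S - 2];
        free(ext); free(alpha);
        return -log(p);
    }

The speedups cited above come from running this kind of recurrence, batched over many sequences and label positions, on highly parallel hardware.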

Video: Bridges Supercomputer to be a Flexible Resource for Data Analytics

In this video, Nick Nystrom from PSC describes the new Bridges supercomputer. Bridges sports a unique architecture featuring Hewlett Packard Enterprise (HPE) large-memory servers, including the HPE Integrity Superdome X, HPE ProLiant DL580, and HPE Apollo 2000. The system is interconnected by an Intel Omni-Path Architecture fabric deployed in a custom topology for its anticipated workloads.

Video: A Brief Introduction to OpenFabrics

Sean Hefty from Intel presented this talk at the Intel HPC Developer Conference at SC15. “OpenFabrics Interfaces (OFI) is a framework focused on exporting fabric communication services to applications. OFI is best described as a collection of libraries and applications used to export fabric services. The key components of OFI are: application interfaces, provider libraries, kernel services, daemons, and test applications. Libfabric is a core component of OFI. It is the library that defines and exports the user-space API of OFI, and is typically the only software that applications deal with directly. It works in conjunction with provider libraries, which are often integrated directly into libfabric.”
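As a small, concrete illustration of that user-space API, the sketch below uses fi_getinfo(), libfabric's standard discovery call, to list the fabric providers available on a system (error handling kept minimal):

    #include <stdio.h>
    #include <rdma/fabric.h>

    int main(void) {
        struct fi_info *info, *cur;

        /* Ask libfabric for every provider/fabric it can offer. */
        int ret = fi_getinfo(FI_VERSION(1, 1), NULL, NULL, 0, NULL, &info);
        if (ret) {
            fprintf(stderr, "fi_getinfo failed: %d\n", ret);
            return 1;
        }
        for (cur = info; cur; cur = cur->next)
            printf("provider: %s, fabric: %s\n",
                   cur->fabric_attr->prov_name, cur->fabric_attr->name);

        fi_freeinfo(info);
        return 0;
    }

Linked against -lfabric, this prints one line per available provider, which is typically the first step an application takes before opening domains and endpoints.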

Video: Intel Black Belt Discussion on HPC Code Modernization

In this video from the Intel HPC Developer Conference at SC15, James Reinders hosts an Intel Black Belt discussion on Code Modernization. “Modern high performance computers are built with a combination of resources including: multi-core processors, many core processors, large caches, high speed memory, high bandwidth inter-processor communications fabric, and high speed I/O capabilities. High performance software needs to be designed to take full advantage of this wealth of resources. Whether re-architecting and/or tuning existing applications for maximum performance or architecting new applications for existing or future machines, it is critical to be aware of the interplay between programming models and the efficient use of these resources. Consider this a starting point for information regarding Code Modernization. When it comes to performance, your code matters!”
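As one concrete instance of the modernization theme, the sketch below layers two of those resources, cores and SIMD units, on a single loop nest; the kernel is illustrative only:

    #include <stddef.h>

    /* A common modernization step: coarse-grained threading across
     * cores combined with fine-grained SIMD within each core, rather
     * than relying on a single level of parallelism. */
    void saxpy_2d(float *restrict y, const float *restrict x,
                  float a, size_t rows, size_t cols) {
        #pragma omp parallel for       /* spread rows across cores  */
        for (size_t i = 0; i < rows; i++) {
            #pragma omp simd           /* vectorize within each row */
            for (size_t j = 0; j < cols; j++)
                y[i * cols + j] += a * x[i * cols + j];
        }
    }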

Video: Overcoming Storage Roadblocks in HPC Clouds

“The virtually infinite scale of cloud compute resources is now within easy reach from either existing network-attached or object-based storage. No longer is the location of your storage a roadblock to reaping the ease, timeliness, and cost savings offered by cloud compute services. Avere’s Enterprise Cloud Bursting solution utilizes the Virtual FXT Edge filer (vFXT), which puts high-performance, scalable NAS where you need it to enable massive compute on demand for enterprise apps, with simple installation and zero hardware maintenance. Avere makes your NAS data accessible to cloud compute without experiencing latency or requiring that your data be moved to the cloud.”

Video: Intersect360 Research Describes HPC Market Trends at SC15

In this video from the Dell booth at SC15, Addison Snell from Intersect360 Research discusses why HPC is now important to a broader group of use cases, digging deep into overviews of HPC for research, life sciences, and manufacturing. He also looks at why HPC, Big Data, and Cloud are converging.