IBM Machine Learning Platform Comes to the Private Cloud

“Machine Learning and deep learning represent new frontiers in analytics. These technologies will be foundational to automating insight at the scale of the world’s critical systems and cloud services,” said Rob Thomas, General Manager, IBM Analytics. “IBM Machine Learning was designed leveraging our core Watson technologies to accelerate the adoption of machine learning where the majority of corporate data resides. As clients see business returns on private cloud, they will expand for hybrid and public cloud implementations.”

Defining AI, Machine Learning, and Deep Learning

“In this guide, we take a high-level view of AI and deep learning in terms of how it’s being used and what technological advances have made it possible. We also explain the difference between AI, machine learning and deep learning, and examine the intersection of AI and HPC. We also present the results of a recent insideBIGDATA survey to see how well these new technologies are being received. Finally, we take a look at a number of high-profile use case examples showing the effective use of AI in a variety of problem domains.”

Podcast: Democratizing Education for the Next Wave of AI

“Coursera has named Intel as one of its first corporate content partners. Together, Coursera and Intel will develop and distribute courses to democratize access to artificial intelligence and machine learning. In this interview, Ibrahim talks about her own history and Coursera’s, reports on Coursera’s progress delivering education at massive scale, and discusses Coursera and Intel’s unique partnership for AI.”

OpenFog Consortium Publishes Reference Architecture

The OpenFog Consortium was founded over one year ago to accelerate adoption of fog computing through an open, interoperable architecture. The newly published OpenFog Reference Architecture is a high-level framework that will lead to industry standards for fog computing. The OpenFog Consortium is collaborating with standards development organizations such as IEEE to generate rigorous user, functional and architectural requirements, plus detailed application program interfaces (APIs) and performance metrics to guide the implementation of interoperable designs.

Video: Computing of the Future

Jeffrey Welser from IBM Research Almaden presented this talk at the Stanford HPC Conference. “Whether exploring new technical capabilities, collaborating on ethical practices or applying Watson technology to cancer research, financial decision-making, oil exploration or educational toys, IBM Research is shaping the future of AI.”

Job of the Week: Senior HPC Systems Administrator at Purdue

Purdue University is seeking a Senior HPC Systems Administrator in our Job of the Week. “In this role, you will assist world-renowned researchers in advancing science. Additionally, as Senior HPC Systems Administrator, you will be responsible for large sections of Purdue’s innovative computational research environment and help set the direction of future research systems. This role requires an individual to work closely with researchers, systems administrators, and developers throughout the University and partner institutions to develop high-impact projects and computational systems.”

Supercomputing the Hyperloop on Azure

Today Cycle Computing announced that the HyperXite team is using CycleCloud software to manage Hyperloop simulations using ANSYS Fluent on the Azure Cloud. “Our mission is to optimize and economize the transportation of the future, and Cycle Computing has made that endeavor so much easier,” said Nima Mohseni, Simulation Lead, HyperXite. “We absolutely require a solution that can compress and condense our timeline while providing the powerful computational results we require. Thank you to Cycle Computing for making a significant difference in our ability to complete our work.”

Huawei: A Fresh Look at High Performance Computing

Francis Lam from Huawei presented this talk at the Stanford HPC Conference. “High performance computing is rapidly finding new uses in many applications and businesses, enabling the creation of disruptive products and services. Huawei, a global leader in information and communication technologies, brings a broad spectrum of innovative solutions to HPC. This talk examines Huawei’s world-class HPC solutions and explores creative new ways to solve HPC problems.”

Supercomputing Transportation System Data using TACC’s Rustler

Over at TACC, Faith Singer-Villalobos writes that researchers are using the Rustler supercomputer to tackle Big Data from self-driving connected vehicles (CVs). “The volume and complexity of CV data are tremendous and present a big data challenge for the transportation research community,” said Natalia Ruiz-Juri, a research associate with The University of Texas at Austin’s Center for Transportation Research. While there is uncertainty in the characteristics of the data that will eventually be available, the ability to efficiently explore existing datasets is paramount.

Overcoming the Learning Curve of New Processor Architectures

High-performance computing (HPC) tools are helping financial firms survive and thrive in this highly demanding and data-intensive industry. As financial models grow in complexity and ever-greater volumes of data must be processed and analyzed daily, firms are increasingly turning to HPC solutions to capture the performance gains of the latest technologies. Suresh Aswani, Senior Manager, Solutions Marketing, at Hewlett Packard Enterprise, shares how to overcome the learning curve of new processor architectures.