Why HPC is no longer just for government labs and academic institutions

In this special guest feature, Trish Damkroger from Intel writes that only HPC can handle the pending wave of Big Data coming to data centers from Smart Cities.

Trish Damkroger is Vice President, Data Center Group and General Manager, Extreme Computing Organization at Intel.

In my conversations with members of the HPC community worldwide, I’m constantly inspired by novel HPC applications, including those that pair Artificial Intelligence (AI) with high-performance data analytics to run the most complex workloads and process massive amounts of data. These examples reemphasize the need for serious computing capabilities in modern and future AI/HPC applications that can harness this wealth of data to address societal issues. This is particularly clear in the case of Internet of Things (IoT) enabled smart cities, which could derive major benefits from HPC and HPC-like capabilities.

Data Opportunities and Challenges

I recently spoke with my colleague Sameer Sharma, Intel’s Global General Manager for IoT (New Markets, Smart Cities, and Intelligent Transportation), about his organization’s efforts to enable the future of smart cities — those which are incorporating insights derived from IoT devices and other sensors into the governance of shared spaces such as roads, waterways, airports, seaports, sporting venues, and universities. To guide their work, Sameer’s team took a step back and talked to people around the globe about challenges for urban life. Three key areas emerged:

  • Safety—People want to be safe and also feel safe.
  • Mobility—People need to get from point A to point B expediently and efficiently.
  • Sustainability—People want to manage the city’s environmental impact.

Sameer’s team worked with Harbor Research, a well-known research firm in the smart cities space, to better understand the data generation aspects of smart cities—in other words, what data is generated and how it is generated. The results were staggering: the analysis found that smart cities will produce a total of approximately 16.5 zettabytes of data in 2020 alone. To put this in context, a zettabyte is one trillion gigabytes, or roughly one gigabyte for every 11 oz. cup of coffee that would fit inside the Great Wall of China.

While each individual city’s data footprint depends on its unique mix of smart applications, it is safe to say that every smart city will have an enormous volume of data to contend with. In the future, many cities may turn to AI algorithms to tap into this data to manage city operations. However, the scale of the data and the need for rapid insights will make HPC-level computing resources a requirement for making the most of this opportunity.
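As a rough illustration of that scale, here is a back-of-envelope sketch in Python that spreads the 16.5-zettabyte projection evenly across the roughly 1,100 cities of more than 500,000 people cited later in this piece. The even split is purely an assumption for illustration; real footprints will vary widely with each city’s mix of deployments.

```python
# Back-of-envelope scale estimate. Assumption (illustrative only): the
# 16.5 ZB projected for 2020 is spread evenly across ~1,100 large
# cities; actual per-city footprints vary widely.

ZETTABYTE = 10**21                   # bytes: one trillion gigabytes
total_2020 = 16.5 * ZETTABYTE
cities = 1_100

per_city_year = total_2020 / cities  # bytes per city per year
per_city_day = per_city_year / 365   # bytes per city per day

print(f"Per city, per year: {per_city_year / 10**18:.0f} EB")
print(f"Per city, per day:  {per_city_day / 10**15:.1f} PB")
# -> Per city, per year: 15 EB
# -> Per city, per day:  41.1 PB
```

Even if the real number were a tenth of that estimate, a single city would still be generating petabytes of data per day, which is firmly HPC-scale territory for storage and analysis.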

Federated Data Platforms for Urban Mobility and Safety

Projects throughout the world are revealing the potential of integrating highly capable compute into urban spaces. In one example, the city of Bangkok, Thailand, installed smart cameras at three traffic intersections. Real-time tracking algorithms fed by these cameras and run atop Intel® Core™ processor-based systems optimized traffic signal timing to improve traffic flow. This solution reduced queue length at these intersections by 30.5%, saving more than 50,000 vehicle commuter hours.
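The deployed system’s algorithm isn’t described here, but a minimal sketch conveys the core idea of queue-based signal timing: split a fixed signal cycle among approaches in proportion to the queue each camera observes. Everything below, including the cycle length, minimum green time, and queue figures, is a hypothetical placeholder, not the Bangkok deployment’s actual method.

```python
# Minimal sketch of queue-proportional signal timing (hypothetical;
# not the algorithm used in the Bangkok deployment).

def split_cycle(queues: dict[str, int], cycle: float = 120.0,
                min_green: float = 10.0) -> dict[str, float]:
    """Split one signal cycle among approaches in proportion to the
    queue length each camera reports, guaranteeing a minimum green."""
    flexible = cycle - min_green * len(queues)  # time left to allocate
    total = sum(queues.values())
    if total == 0:                              # no demand: split evenly
        return {a: cycle / len(queues) for a in queues}
    return {a: min_green + flexible * q / total for a, q in queues.items()}

# Example: per-approach queue lengths estimated from camera feeds.
print(split_cycle({"north": 24, "south": 9, "east": 15, "west": 4}))
# -> {'north': 46.9, 'south': 23.8, 'east': 33.1, 'west': 16.2} (approx.)
```

Even a simple proportional rule like this shows why per-intersection compute pays off: the allocation can be recomputed every cycle as queues change, rather than relying on fixed timings.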

However, the value of the data from cameras like these shouldn’t end at the individual intersection. Results from the edge could be used to optimize traffic at the macro level as well. City-level traffic analysis via deep learning algorithms would require a more capable converged AI/HPC system in a data center, but could deliver major benefits for commuter safety and mobility, as well as city planning. The more cameras and sensors involved, the greater the potential, but also the greater the need for HPC resources for analysis.

In another example, the city of Rio de Janeiro, Brazil, deployed 1,800 HD video cameras to help ensure the safety of the estimated 500,000 people visiting for the 2016 Summer Olympic Games. This solution used Intel Atom® and Intel® Core™ processors for analysis at the edge and a higher-performance Intel® Xeon® processor-based system for further analysis in the data center.

This system processed about 1.5 million pieces of video data each day to help staff detect and respond to abandoned objects and prevent unauthorized access to off-limits areas. Sameer noted in our conversation that similar systems deployed today can use energy-efficient, purpose-built AI accelerators such as the Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU) to perform analysis at the edge that, just a few years ago, would have required a traditional data center. Additionally, today’s HPC systems benefit from 2nd Generation Intel® Xeon® Scalable processors with built-in AI acceleration (Intel® Deep Learning Boost). The HPC capabilities of Intel architecture become all the more important as the number of IoT sensors scales up, increasing the volume of data to analyze. Future HPC deployments will enable new opportunities for converged AI/HPC applications to derive maximum value from deployments like Rio de Janeiro’s.
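As a purely illustrative example of the abandoned-object logic such a system needs, the sketch below assumes an upstream detector, whether an edge VPU or a data-center model, already emits (object ID, position, timestamp) observations; every name and threshold is a placeholder, not part of the Rio deployment.

```python
# Illustrative abandoned-object flagging. Assumes an upstream detector
# already supplies (object_id, position, timestamp) observations; the
# detector, thresholds, and units here are hypothetical placeholders.
from dataclasses import dataclass

STATIONARY_RADIUS = 1.5  # metres an object may drift and still be "still"
ALERT_AFTER = 120.0      # seconds stationary before raising an alert

@dataclass
class Track:
    first_seen: float            # time the object settled at its anchor
    anchor: tuple[float, float]  # position where the object settled
    alerted: bool = False

tracks: dict[int, Track] = {}

def observe(obj_id: int, pos: tuple[float, float], t: float) -> bool:
    """Record one detection; return True when an alert should fire."""
    tr = tracks.get(obj_id)
    if tr is None:
        tracks[obj_id] = Track(first_seen=t, anchor=pos)
        return False
    dx, dy = pos[0] - tr.anchor[0], pos[1] - tr.anchor[1]
    if (dx * dx + dy * dy) ** 0.5 > STATIONARY_RADIUS:
        # The object moved: restart its stationary clock at the new spot.
        tracks[obj_id] = Track(first_seen=t, anchor=pos)
        return False
    if not tr.alerted and t - tr.first_seen >= ALERT_AFTER:
        tr.alerted = True        # flag once per stationary episode
        return True
    return False
```

The same structure scales from a single camera on an edge board to thousands of streams in a data center; only the volume of observations, and with it the compute required, changes.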

A Future of Widespread Smart Cities and Transportation

The market and demand for HPC capabilities continue to expand as more organizations adopt converged AI/HPC solutions to turn data into social benefit. With more than 1,100 cities of greater than 500,000 people worldwide in 2018, and many thousands more with populations greater than 100,000, IoT-enabled smart cities could be a big new user group for HPC in the near future.

IoT’s drive for HPC compute doesn’t stop there. Industrial and healthcare IoT systems, to name just two other verticals, will also generate huge amounts of data, and with it, demand for the compute to analyze that data. It is an exciting time for those of us at Intel working to enable a diversifying customer base.

Thanks to Sameer and his team for their contributions to this blog. Please visit Intel’s pages on our HPC and IoT product portfolios for more information on how Intel technologies are enabling innovation in both traditional and unexpected ways.

Trish Damkroger is Vice President and General Manager of the Technical Computing Initiative (TCI) in Intel’s Data Center Group. She leads Intel’s global Technical Computing business and is responsible for developing and executing Intel’s strategy, building customer relationships, and defining a leading product portfolio for Technical Computing workloads, including emerging areas such as high-performance analytics, HPC in the cloud, and artificial intelligence. Trish’s Technical Computing portfolio includes traditional HPC platforms, workstations, and processors, and all aspects of solutions, including industry-leading compute, storage, network, and software products. Ms. Damkroger has held technical and managerial roles for more than 27 years, both in the private sector and within the United States Department of Energy; as Associate Director of Computation at Lawrence Livermore National Laboratory, she led a 1,000-person organization that is one of the world’s leading supercomputing and scientific computing teams. Since 2006, Ms. Damkroger has been a leader of the annual Supercomputing Conference series, the premier international meeting for high performance computing. She was General Chair of the Supercomputing Conference in 2014, has been nominated Vice-Chair for the upcoming Supercomputing Conference in 2018, and has held many other committee positions. Ms. Damkroger holds a master’s degree in electrical engineering from Stanford University. She was named to HPCwire’s People to Watch list in 2014 and again in March 2018.
