Study Finds On-Premises Systems Critical to Enterprise AI


Today Cray released the findings of its industry report on the State of Enterprise AI Adoption 2019, which reveals that performance, data locality and security are the top considerations for choosing an AI infrastructure. The survey assessed whether organizations and individuals are prepared to run AI effectively and to take advantage of the capabilities it offers. Findings show that over 70% of companies already have AI applications in use or under development, and as the use of AI grows, IT professionals must consider what infrastructure is needed to maximize the value, scale and performance of their AI workloads.

The State of Enterprise AI Adoption 2019 survey was constructed to help identify the state of enterprise AI adoption, evaluate the perception and value of AI in the enterprise and determine the infrastructure being used to execute AI workflows. The report findings, based on responses from more than 300 IT professionals across a broad range of industries, show that a significant share of organizations (39%) are using supercomputers, HPC systems or dedicated AI hardware to run their AI workloads. Of the organizations using on-premises systems, over 60% expect to expand their infrastructure to keep pace with increasing storage and performance demands.

“The research highlights that supercomputing plays an important role in enabling mainstream AI adoption in the enterprise,” said Frederick Kohout, senior vice president and CMO at Cray. “IT professionals see the need to expand their use of on-premises infrastructure, like supercomputers and HPC clusters, to meet the expected growth of business-critical AI applications. Cray’s new Shasta supercomputing systems were designed to handle data-intensive workloads, like AI, and can scale seamlessly from a single cabinet to exascale-class systems so every enterprise can start small and grow without interruption or penalty.”

Enterprise AI Production Environments Hybridized with Cloud and On-Premises Infrastructure

Nearly 40% of those surveyed said their organizations currently use on-premises systems to execute AI workloads. This includes infrastructure like supercomputers, high performance computing clusters and dedicated AI nodes. And according to these respondents, the top two considerations for utilizing on-premises systems are performance and data locality/security.

Key findings include:

  • 40% report using on-premises systems
  • 65% of respondents with AI applications in use stated they need to expand their on-premises systems to keep pace with increasing performance requirements
  • 35% of respondents are looking to move some of their cloud workloads to on-premises systems
  • 53% of respondents noted that their organization utilizes public cloud services to run AI workloads

The research indicates that many organizations are operating a hybrid data center and rely on the use of multiple systems to run AI applications. Over 70% of those using on-premises systems have also implemented AI in the public cloud, and over half of all public cloud users also utilize on-premises infrastructure for their AI workloads.

Also of note, nearly 60% of IT professionals identified cost-effectiveness as a top consideration when selecting an infrastructure solution for AI. This was followed by the ability to integrate AI systems into existing architecture (50.8%), ease of use (49.2%) and scaling with increasing uses and demands (39.8%). These priorities help explain why so many organizations utilize both on-premises and cloud infrastructure: by combining the two, enterprises get the ease of use and cost-effectiveness of the cloud along with the performance and scalability of on-premises systems.

“This survey by Cray is consistent with our own AI research studies,” said Addison Snell, CEO of Intersect360 Research. “Training for machine learning requires high performance computing, either on-premise or in the cloud, and there is a significant overlap between AI investment and systems for HPC and hyperscale. One way or another, those interested in AI need to access a high-performance infrastructure.”

Other key research findings include:

  • High levels of educational activity: 72% of respondents participated in one or more activities to educate themselves on AI in the past year. Over 48% attended AI conferences or received training, over 41% participated in vendor webinars, about 40% took self-study courses and over 35% downloaded and read reports on AI.
  • AI as a core business function: Over 34% believe AI is already a “critical to business” capability in their organization, or that it will be at some time during 2019. Another 41% expect AI to become critical to their business within the next three years.
  • Greater operational value placed on AI: Nearly 70% of survey respondents believe AI could improve operational efficiency while more than half of respondents said AI could help improve the customer experience, create competitive advantage and make data more actionable. Additionally, over 44% believe it could help with controlling costs and growing revenue.
  • Shifting perception of AI in the workplace: Nearly half of respondents say AI had a positive effect on their daily working experience in the past year; and more than 65% believe it will have a positive effect in the next year. Only 5.6% of respondents believe they won’t benefit from AI.

Download the report
