HPE Unveils a Set of Artificial Intelligence Platforms and Services

Today Hewlett Packard Enterprise announced new purpose-built platforms and service capabilities to help companies simplify the adoption of Artificial Intelligence, with an initial focus on a key subset of AI known as deep learning.

Inspired by the human brain, deep learning is typically applied to challenging tasks such as image and facial recognition, image classification and voice recognition. To take advantage of deep learning, enterprises need high-performance compute infrastructure to build and train learning models that can manage large volumes of data and recognize patterns in audio, images, video, text and sensor data.
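The "build and train" loop at the heart of deep learning can be illustrated with a minimal sketch. This is purely illustrative (a tiny NumPy network learning the XOR function, not HPE's software or a production framework); real workloads scale this same forward/backward pattern to millions of parameters and massive datasets, which is what drives the infrastructure requirements described above.

```python
import numpy as np

# Illustrative only: a two-layer neural network trained with gradient
# descent to learn XOR. Production deep learning uses the same
# forward-pass / backward-pass structure at vastly larger scale.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: compute the network's predictions
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent parameter updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

preds = (out > 0.5).astype(int).ravel()
print("predictions:", preds)
```

The pattern recognition here is trivial (four data points), but the compute cost of the nested matrix multiplications in the training loop is exactly what grows explosively with model and data size.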

“As deep learning-based AI advances, it will transform science, commerce and the quality of our lives by automating tasks that don’t require the most complex human thinking,” said Steve Conway, senior vice president, Hyperion Research. “HPE’s infrastructure and software solutions are designed for ease-of-use and promise to play an important role in driving AI adoption into enterprises and other organizations in the next few years.”

Many organizations lack several integral requirements to implement deep learning, including expertise and resources; sophisticated and tailored hardware and software infrastructure; and the integration capabilities required to assimilate different pieces of hardware and software to scale AI systems. To help customers overcome these challenges and realize the potential of AI, HPE is announcing the following offerings:

HPE Rapid Software Installation for AI: HPE introduced an integrated hardware and software solution, purpose-built for high performance computing and deep learning applications. Built on the HPE Apollo 6500 system and developed in collaboration with Bright Computing to enable rapid deep learning application development, the solution includes pre-configured deep learning software frameworks, libraries, automated software updates and cluster management optimized for deep learning, and supports NVIDIA Tesla V100 GPUs.

HPE Deep Learning Cookbook: Built by the AI Research team at Hewlett Packard Labs, the deep learning cookbook is a set of tools to guide customers in selecting the best hardware and software environment for different deep learning tasks. These tools help enterprises estimate performance of various hardware platforms, characterize the most popular deep learning frameworks, and select the ideal hardware and software stacks to fit their individual needs. The Deep Learning Cookbook can also be used to validate the performance and tune the configuration of already purchased hardware and software stacks. One use case included in the cookbook is related to the HPE Image Classification Reference Designs. These reference designs provide customers with infrastructure configurations optimized to train image classification models for various use cases such as license plate verification and biological tissue classification. These designs are tested for performance and eliminate any guesswork, helping data scientists and IT to be more cost-effective and efficient.
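The kind of performance characterization the Cookbook is described as automating can be sketched with a simple micro-benchmark. This is an illustrative example, not the Cookbook's actual tooling: it times dense matrix multiplication, the operation that dominates deep learning training, to estimate a platform's achievable throughput; comparing such numbers across candidate systems is one rough input to a hardware/software selection.

```python
import time
import numpy as np

def matmul_gflops(n, repeats=3):
    """Time an n x n matrix multiply and return achieved GFLOP/s.

    Matrix multiplication dominates deep learning training time, so
    its sustained throughput is a crude proxy for how a platform will
    perform on training workloads.
    """
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        a @ b                              # the operation under test
        best = min(best, time.perf_counter() - start)
    flops = 2 * n ** 3                     # n^3 multiply-add pairs
    return flops / best / 1e9

# Sweep problem sizes; in practice you would run this per candidate
# platform and framework configuration and compare the results.
for n in (256, 512, 1024):
    print(f"n={n:5d}: {matmul_gflops(n):8.1f} GFLOP/s")
```

A real characterization effort would benchmark full training steps of actual frameworks rather than a single kernel, which is the guesswork-elimination role the reference designs are meant to play.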

HPE AI Innovation Center: Designed for longer-term research projects, the innovation center will serve as a platform for research collaboration between universities, enterprises on the cutting edge of AI research, and HPE researchers. The centers, located in Houston, Palo Alto, and Grenoble, will give researchers from academia and enterprises access to the infrastructure and tools needed to continue their research initiatives.

Enhanced HPE Centers of Excellence (CoE): Designed to assist IT departments and data scientists who are looking to accelerate their deep learning applications and realize better ROI from their deep learning deployments in the near term, the HPE CoEs offer select customers access to the latest technology and expertise, including the latest NVIDIA GPUs on HPE systems. The CoEs currently span five locations: Houston; Palo Alto; Tokyo; Bangalore, India; and Grenoble, France.

“We live in a world today where we’re generating copious amounts of data, and deep learning can help unleash intelligence from this data,” said Pankaj Goyal, vice president, Artificial Intelligence Business, Hewlett Packard Enterprise. “However, a ‘one size fits all’ solution doesn’t work. Each enterprise has unique needs that require a distinct approach to get started, scale and optimize its infrastructure for deep learning. At HPE, we aim to make AI real for our customers no matter where they are in their journeys with our industry-leading infrastructure portfolio, AI expertise, world-class research and ecosystem of partners.”

In its mission to help make AI real for its customers, HPE offers flexible consumption services for HPE infrastructure, which avoid over-provisioning, increase cost savings and scale up and down as needed to accommodate the needs of deep learning deployments.

“Artificial intelligence has the ability to transform scientific data analysis, making predictions and surprising connections,” said Paul Padley, professor of physics and astronomy, Rice University. “We are at a precipice where the AI revolution can now have a profound impact on reshaping innovation, science, education and society, at large. Access to the HPE AI innovation centers will help us continue to advance our research efforts in our journey to making academic progress by using the tools and solutions available to us through HPE.”

AI is becoming mainstream in the consumer world with applications such as voice interfaces, personal assistants and image tagging. However, the implications of AI go beyond mainstream consumer use cases to fields including genomic sequencing analytics, climate research, medical science, autonomous driving and robotics. These technology advancements and breakthroughs have been – and continue to be – made possible by deep learning.

Learn more about how HPE is bringing deep learning techniques to customers in VP of Artificial Intelligence Pankaj Goyal’s latest blog.
