It’s a different kind of computing world out there. The demand for more compute performance for applications in engineering, risk modeling, and the life sciences is relentless. So, how are you keeping up with modern HPC demands? Meet Apollo – creating next-gen HPC and supercomputing.
This article describes the challenges that users face and the solutions available to make running cloud-based HPC applications a reality. You’ll learn about different cloud computing models, potential economic savings, and factors to consider when comparing an on-site data center with a cloud-based provider.
Cloud computing is changing the way that IT organizations operate and innovate. While the move of enterprise applications to cloud service providers is progressing, technical computing has lagged behind in this transformation. This is changing, as the technology that once sat in a protected data center is now available in the cloud.
For some applications, cloud-based clusters may be limited by communication and/or storage latency and speed. With GPUs, however, these issues are largely absent, because applications running on cloud GPUs perform essentially the same as those on your local cluster, unless the applications span multiple nodes and are sensitive to MPI speeds. For those GPU applications that can work well in the cloud environment, a remote cloud may be an attractive option for both production and feasibility studies.
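The MPI-sensitivity point above can be made concrete with a quick round-trip timing test. A real feasibility study would run an MPI ping-pong benchmark between two nodes to compare a local interconnect against a cloud network; the sketch below is only an illustration of the measurement idea, using plain Python TCP sockets over loopback, and every name in it is illustrative rather than taken from any particular benchmark suite.

```python
import socket
import threading
import time

def pingpong_latency(n_iters=1000, payload=b"x" * 8):
    """Mean round-trip time for small messages over a local TCP
    socket. A stand-in for an MPI ping-pong test; on a cluster you
    would time messages between two nodes instead of over loopback.
    This sketch assumes each recv() returns the whole tiny payload."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))       # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def echo():
        # Server side: bounce every message straight back.
        conn, _ = srv.accept()
        with conn:
            for _ in range(n_iters):
                conn.sendall(conn.recv(len(payload)))

    t = threading.Thread(target=echo)
    t.start()

    cli = socket.create_connection(("127.0.0.1", port))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    start = time.perf_counter()
    for _ in range(n_iters):
        cli.sendall(payload)          # ping
        cli.recv(len(payload))        # pong
    elapsed = time.perf_counter() - start

    cli.close()
    t.join()
    srv.close()
    return elapsed / n_iters          # seconds per round trip

if __name__ == "__main__":
    print(f"mean RTT: {pingpong_latency() * 1e6:.1f} microseconds")
```

Comparing the number this reports on a local cluster fabric with the same measurement between two cloud instances gives a rough sense of whether an MPI-sensitive, multi-node application will suffer in the cloud.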
While there is much discussion, and there are many products on the market, regarding cloud computing and the ability to spin up virtual machines quickly and efficiently, the fact remains that without planning for cloud-based storage, the data will get lost. Simply put, without storage, there is no data.
High Performance Computing (HPC) in the cloud has become a hot topic, with new offerings targeted at this market. The demands of technical computing professionals using the cloud for HPC workloads differ from general enterprise software requirements. Performance is key, which requires a different infrastructure at the cloud provider's premises.
As an open source tool designed to navigate large amounts of data, Hadoop continues to find new uses in HPC. Managing a Hadoop cluster is different from managing an HPC cluster, however. It requires mastering some new concepts, but the hardware is basically the same, and many Hadoop clusters now include GPUs to facilitate deep learning.
Moving from a desktop-oriented computing environment to a cluster-based environment has its challenges for some organizations. However, a number of companies are aware of the benefits and are progressing in the move to a cluster-based technical computing system.
Massive amounts of computing power and data are needed for effective and efficient processing in many areas of the life sciences domain. From drug design to genomic sequencing and risk analysis, many workflows require that the tools and processes be in place so that entire organizations can be more effective.
HPC developers want to write code and create new applications. The advanced nature of HPC often requires that this process be tied to the specific hardware and software environment present on a given HPC resource. Developers want to extract maximum performance from HPC hardware while not getting bogged down in the complexities of software tool chains and dependencies.