Today FlyElephant announced a number of upgrades that allow users to work with private repositories, with improved system security and enhanced task functionality. “FlyElephant is a platform for scientists, providing a computing infrastructure for calculations, helping to find partners for collaboration on projects, and managing all data from one place. FlyElephant automates routine tasks and helps to focus on core research issues.”
Today Bright Computing announced that it has teamed up with the Germany-based ProfitBricks to provide a cutting-edge elastic HPC solution to a Swiss university. “This is a unique example of how Bright Computing can help a company move their HPC requirement to the cloud,” said Lee Carter, VP EMEA at Bright Computing. “Bright enables the university to dynamically expand and contract the infrastructure needed to support their research projects, all at the click of a button. This ensures the university only pays for the computational resources it needs, when it needs them, saving time and expense.”
Today Penguin Computing announced the availability of Cyber Dyne’s KIMEME software on the POD public HPC cloud service. “It’s now possible to submit and manage large DOEs and optimization simulations flawlessly in the cloud,” said Ernesto Mininno, CEO, Cyber Dyne. “These tasks are much easier and faster thanks to the computational power of Penguin Computing’s POD HPC services.”
Today Hewlett Packard Enterprise announced HPE Haven OnDemand, an innovative cloud platform that provides advanced machine learning APIs and services that enable developers, startups and enterprises to build data-rich mobile and enterprise applications. Delivered as a service on Microsoft Azure, HPE Haven OnDemand provides more than 60 APIs and services that deliver deep learning analytics on a wide range of data, including text, audio, image, social, web and video.
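Haven OnDemand exposes its analytics as REST APIs, so a developer typically just constructs a URL for the desired service and sends a request. The sketch below is illustrative only: the base URL, endpoint name (`analyzesentiment`), and parameter names are assumptions about the service's conventions, not confirmed details from this announcement.

```python
# Hypothetical sketch of building a request to a Haven OnDemand-style
# synchronous REST API. Endpoint format and parameter names are assumed
# for illustration; consult the actual API documentation before use.
from urllib.parse import urlencode

BASE = "https://api.havenondemand.com/1/api/sync"  # assumed base URL

def build_request(api_name, api_key, **params):
    """Construct the URL for a synchronous call to the named API."""
    query = urlencode({"apikey": api_key, **params})
    return f"{BASE}/{api_name}/v1?{query}"

# Example: a sentiment-analysis call over a short text snippet.
url = build_request("analyzesentiment", "MY_API_KEY", text="Great product!")
print(url)
```

In a real application the returned URL would be fetched with an HTTP client and the JSON response parsed; the same pattern applies to the other text, audio, and image APIs mentioned above.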
Cloud computing has become a strong alternative to in-house data centers for a large percentage of all enterprise needs. Most enterprises are adopting some form of cloud computing, with some estimates that as many as 90% are putting workloads into a public cloud infrastructure. The whitepaper, Empowering Cloud Utilization with Cloud Bursting, is an excellent summary of the options available to enterprises that are planning to use a public cloud infrastructure.
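The core idea behind cloud bursting is simple: run on in-house capacity by default, and provision public-cloud nodes only when the local queue overflows. A minimal sketch of that threshold decision, with made-up parameter names and capacities chosen purely for illustration, might look like this:

```python
# Illustrative cloud-bursting decision (not any vendor's actual API):
# burst to the cloud only for the overflow the on-premise cluster
# cannot absorb, capped at a configured cloud-node budget.

def nodes_to_burst(queued_jobs, onprem_free_nodes,
                   jobs_per_node=4, max_cloud_nodes=32):
    """Return how many cloud nodes to provision for the current queue."""
    overflow = queued_jobs - onprem_free_nodes * jobs_per_node
    if overflow <= 0:
        return 0  # on-premise capacity suffices; no burst needed
    needed = -(-overflow // jobs_per_node)  # ceiling division
    return min(needed, max_cloud_nodes)

# 100 queued jobs, 10 free local nodes: local absorbs 40 jobs,
# so the remaining 60 need ceil(60/4) = 15 cloud nodes.
print(nodes_to_burst(100, 10))  # -> 15
```

Real bursting systems layer scheduling policy, data staging, and cost controls on top of this kind of trigger, which is the ground the whitepaper covers.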
Today IBM announced that it is opening a new Cloud Data Center in Johannesburg, South Africa. The new cloud center is the result of a close collaboration with Gijima and Vodacom and is designed to support cloud adoption and customer demand across the continent. IBM will provide clients with a complete portfolio of cloud services for running enterprise and as-a-service workloads.
Today Advanced Clustering Technologies announced it has partnered with CD-adapco to offer the company’s industry-leading engineering simulation software solution, STAR-CCM+, to customers using Advanced Clustering’s on demand HPC cluster in the cloud, ACTnowHPC. “We’re pleased to announce that our HPC cloud now makes STAR-CCM+ immediately accessible to engineers who purchase the license from CD-adapco,” said Kyle Sheumaker, President of Advanced Clustering Technologies. “With STAR-CCM+, we’re making it easier than ever for our customers to enhance workflow productivity in order to discover better designs faster.”
“Rescale provides a unified HPC simulation platform for the Enterprise IT environment. Rescale’s platform integrates with existing job schedulers to burst workloads to cloud computing resources. We provide high performance computing options such as InfiniBand-connected and GPU-accelerated nodes that can be provisioned on-demand. We will demo an example workload on such an on-demand cluster. Finally, we will cover the Rescale administration panel for managing your cloud/on-premise connectivity for software licenses and single sign-on authentication.”
“Although commerce and consumers have been computing in the cloud for years, the high-performance computing sector has been more hesitant. But all that may now be changing. The cost of cloud computing for HPC is falling, while new programming models that will allow HPC workloads to run more efficiently in the cloud are becoming available. Public cloud providers are installing hardware configurations that are more suited to HPC, while private clouds are giving users experience of how to run their jobs in a cloud environment.”
Registration opened today for the ISC 2016 conference, which takes place June 19-23 in Frankfurt. This year, the ISC 2016 conference program features an increased focus on Cloud, Machine Learning, and Robotics. In fact, insideHPC has learned that the bulk of topics normally covered at the annual ISC Cloud conference have been absorbed into the ISC High Performance industry track. To learn more, we caught up with Wolfgang Gentzsch, a member of the ISC Steering Committee who has chaired the ISC Cloud event since its inception.