An unconventional use has grown up for Amazon’s AppStream. Originally intended to support streaming games, the technology is now, according to David Pellerin, Business Development Principal for High Performance Cloud Computing at Amazon Web Services, being looked at by some independent software vendors (ISVs) as a way of visualizing the results of scientific computations carried out on the cloud.
His point was echoed by Wim Slagter of Ansys, in one of the seven pointers for best practice in the cloud that he outlined to the ISC Cloud conference in Heidelberg at the end of September. The first rule, he suggested, was ‘don’t move data more than you have to’. The starting files for a computerized engineering simulation may be quite small, he said (in the region of 20MB), but the results can easily exceed 2GB and may take too long to download over some connections. This meant that the results had to be visualized remotely, which led on to the requirement for a full remote desktop and, in turn, a low-latency (100ms or less) network connection.
Both network communications and data storage had to be secure, Slagter continued, which meant that ‘data in motion’ might have to be encrypted. In an echo of Michael Resch’s concerns about legal responsibilities, Slagter remarked that a cloud environment meant at least three actors had both practical and legal responsibility for keeping data private and secure. The cloud provider was responsible for the physical security of the building housing the servers, as well as for the security protocols used; the ISV was responsible for the security of the application being run; and the customer had to have security policies and procedures governing who had access to the portal into the cloud and who, within the customer’s own company, was licensed to use the application software and access the data. At the same time, there had to be effective end-user access for both job and data management.
His point was reiterated later by Felix Wolfheimer of CST. ‘Who is at fault if a set-up fails or leaks data?’ he asked. ‘There are three people involved: the customer; the ISV; and the cloud provider.’ The complexity of the cloud arrangement could mean, he remarked, that no one knows what bugs lie inside the software stacks. If the software is open source, then the bugs will be identified fairly quickly, but the task remains of ensuring that the patches are actually implemented.
Slagter also confronted the challenges that cloud computing presented for software licensing. The traditional model was steady-state usage of the software, he reminded his audience, and was usually met by the outright purchase of the software (often with an annual support and maintenance/upgrade contract). For burst capacity, short-term access to the software could be met by ‘leasing’ rather than purchasing. But one of the advantages of the cloud to end-users was that they could use it to accommodate fluctuating demand for compute resources, and for that a pay-per-use software model might have to be applied. He stressed that end-users also want to be able to ‘re-use’ their on-premises software licenses for the cloud – they want to extend the use of the software, without having to buy a separate license, within either a private or a public cloud.
But the ISC Cloud conference heard that one of the issues holding back further adoption of the cloud was the proliferation of different license options – and some of this diversity was on display during a special discussion session with representatives of some of the smaller ISVs. NICE, perhaps best known for its EnginFrame Grid portal, has been working on distributed computing since 1996, according to the company’s Karsten Gaier. In a manner similar to Ansys, NICE offers licenses under a purchase model and a yearly rental and, to accommodate cloud use, it now offers monthly, daily, and even hourly rental options. Tomi Huttunen, director of R&D for the Finnish acoustic simulation software company Kuava, told the meeting that his company offered a license on the basis of a small monthly fee but also factored in the CPU time used. Kuava offers ‘cloud credits’ based on the size of the job and the length of time that it takes to run. ‘How to build software for hybrid clouds [a mixed use of the in-company datacentre and a public cloud] is a challenge for a small software company,’ he said.
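A usage-metered scheme of the kind Kuava describes – a small monthly fee plus credits tied to job size and CPU time – could be sketched roughly as follows. The credit rates, function names, and formula here are illustrative assumptions, not the company’s actual pricing:

```python
# Sketch of a "cloud credits" billing model: a flat monthly base fee
# plus a per-job charge that scales with CPU time and job size.
# All rates below are invented for illustration only.

MONTHLY_BASE_CREDITS = 10.0    # flat monthly fee (assumed)
CREDITS_PER_CORE_HOUR = 0.5    # CPU-time component (assumed)
CREDITS_PER_GB_MODEL = 0.1     # job-size component (assumed)

def job_credits(cores: int, hours: float, model_size_gb: float) -> float:
    """Credits consumed by a single simulation job."""
    return (cores * hours * CREDITS_PER_CORE_HOUR
            + model_size_gb * CREDITS_PER_GB_MODEL)

def monthly_bill(jobs: list[tuple[int, float, float]]) -> float:
    """Total credits for a month: base fee plus per-job usage."""
    return MONTHLY_BASE_CREDITS + sum(job_credits(*j) for j in jobs)

# Example: a small 16-core job and a larger 64-core job in one month.
bill = monthly_bill([(16, 2.0, 1.5), (64, 8.0, 4.0)])
print(bill)  # → 282.55
```

The point of such a scheme is that a customer with no jobs in a given month pays only the small base fee, while heavy burst usage is charged in proportion to the resources actually consumed.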
According to Felix Wolfheimer, CST’s licensing model is more traditional. It too offers perpetual licenses with an annual fee for support, and it will lease its software annually, over three months, one month, or a week for cloud set-ups. Acceleration options for high-performance computing are licensed separately using tokens. The company is known for its 3D electromagnetic simulation software and counts organisations such as Cern among its customers. It offers discounts to academic users. Although, as he frankly confessed to the meeting, the prospect of making licensing more flexible to take account of cloud usage was generating fear in the sales departments, CST is trying to gain experience with on-demand or pay-per-use arrangements, albeit in a test environment with a select few of its customers. He pointed out that the logistics of administering the contract and sending out the licensing files complicated the process of moving to a finer granularity of licensing for periods as short as a day or two.
The first article in this series can be found here. The next article will look at ease of access and at concerns over security in the cloud.
Sign up for our insideHPC Newsletter.