Over at the UberCloud, Wolfgang Gentzsch writes that, despite the ever-increasing complexity of CAE tools, hardware, and system components, engineers have never been this close to ubiquitous CAE as a common tool for every engineer.
Because VMs failed to gain traction in CAE, the challenges of software distribution, administration, and maintenance kept CAE systems locked up in closets, available to only a select few. In fact, the US Council on Competitiveness estimates that only about 5% of all engineers use high-performance servers for their CAE simulations; the other 95% rely on their workstations. That began to change in 2013, when Docker Linux containers saw the light of day.

The key practical difference between Docker and VMs is that Docker is a Linux-based system that uses a userspace interface to the Linux kernel's containment features. Rather than being a self-contained system in its own right, a Docker container shares the Linux kernel with the host operating system and with the other containers running on the same machine. That makes Docker containers extremely lightweight and, in principle, well suited for CAE.

Still, it took us at UberCloud about a year to develop, on top of micro-service Docker container technology, its macro-service, production-ready counterpart for CAE, and to enhance and test it with a dozen CAE applications and engineering workflows on about a dozen different single- and multi-node cloud resources. These high-performance interactive software containers, whether on premises or on public or private clouds, bring a number of core benefits to otherwise traditional HPC environments, with the goal of making HPC widely available and ubiquitous.
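To make the packaging idea concrete, here is a minimal sketch of what containerizing a CAE solver can look like. The solver name `my-solver`, its paths, and the MPI dependency are hypothetical placeholders, not UberCloud's actual images; the point is that the image carries only the application and its libraries, while the kernel comes from the host.

```dockerfile
# Sketch of a CAE solver image; "my-solver" is a hypothetical binary.
FROM ubuntu:20.04

# Install runtime dependencies the solver might need (MPI is common in CAE).
RUN apt-get update && \
    apt-get install -y --no-install-recommends openmpi-bin && \
    rm -rf /var/lib/apt/lists/*

# Copy the solver binary into the image.
COPY my-solver /usr/local/bin/my-solver

# Engineers mount their case files into /work at run time.
WORKDIR /work
ENTRYPOINT ["my-solver"]
```

A run would then look like `docker run --rm -v "$PWD":/work my-solver-image case.inp`: because the container shares the host kernel rather than booting its own OS, it starts in fractions of a second, which is what makes this approach lightweight enough for interactive engineering workflows.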