Over at Computing Now, Art Sedighi writes that while cloud, grid, and HPC remain distinct approaches, the twist in recent years has been the ability to coordinate and integrate these seemingly disparate environments. To illustrate, he describes a new peer-reviewed paper that introduces the concept of a meta-scheduler that can move workloads across all three environments.
Over the last decades, cooperation among resources belonging to different environments has emerged as one of the most important research topics. This is mainly because of the differing requirements, in terms of job preferences, that resource providers pose as the most efficient way to coordinate large-scale settings such as grids and clouds. However, the paradigms share both the complexity of their architectures (e.g., heterogeneity issues) and the goals each aims to achieve (e.g., flexibility): to efficiently orchestrate resources and user demands in a distributed computing fashion by bridging the gap between local and remote participants. At first glance, this is directly related to the scheduling concept, which is one of the most important issues in designing a cooperative resource management system, especially in large-scale settings. In addition, meta-computing, and hence meta-scheduling, offers additional functionality for interoperable resource management because of its proficiency in handling sudden variations and dynamic situations in user demands. This work presents a review of scheduling in high-performance, grid, and cloud computing infrastructures. We conclude by analysing the most important characteristics of inter-cooperated infrastructures.
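The paper itself does not spell out an implementation here, but the core meta-scheduling idea — deciding, per job, which of the three environments should run it — can be sketched minimally. The routing policy, job attributes, and names below are all illustrative assumptions, not the paper's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores: int              # CPU cores requested
    tightly_coupled: bool   # needs a low-latency interconnect (e.g., MPI)
    deadline_hours: float   # time until results are needed

def meta_schedule(job: Job) -> str:
    """Route a job to one of three environments.

    Illustrative policy (an assumption, not from the paper):
    - tightly coupled parallel jobs -> dedicated HPC cluster
    - loosely coupled work with a slack deadline -> grid (opportunistic)
    - everything else (elastic or bursty demand) -> cloud (on demand)
    """
    if job.tightly_coupled:
        return "hpc"
    if job.deadline_hours >= 24:
        return "grid"
    return "cloud"

if __name__ == "__main__":
    jobs = [
        Job("cfd-simulation", cores=512, tightly_coupled=True, deadline_hours=12),
        Job("param-sweep", cores=64, tightly_coupled=False, deadline_hours=72),
        Job("web-analytics", cores=8, tightly_coupled=False, deadline_hours=2),
    ]
    for j in jobs:
        print(j.name, "->", meta_schedule(j))
```

A real meta-scheduler would of course also track resource availability, cost, and data locality across the backends rather than apply a static rule, but the dispatch-across-environments shape is the same.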