From Computerworld Australia comes news that researchers at the University of Sydney have developed (and, sadly, patented) an algorithm that its inventors say will schedule jobs in a way that reduces energy use.
The Energy Conscious Scheduling algorithm (ECS) has been patented by Young Choon Lee and Albert Zomaya at the university’s Centre for Distributed and High Performance Computing.
…ECS uses a processor’s dynamic voltage scaling (DVS) capability to map computational tasks to minimise completion time and energy use.
“Computations are typically comprised of interdependent tasks, so the need to wait for a parent task to complete can create slack and therefore wastage,” Zomaya said.
“When ECS is employed with the help of DVS capability, mapping decisions between processors, supply voltages, and tasks are streamlined to significantly lower the amount of energy required at any given time.”
The researchers claim that the scheduler can cut energy consumption by up to half (I've seen the range 10-160 percent quoted) without impacting operations. Interesting if true. ECS works as middleware, without modifying the hardware, application, or operating system: it maps both the computational performance and the energy consumption of the system on representative tasks, then uses that map to make job-scheduling decisions. One imagines that building a map dense enough to support good decisions would require a lot of benchmarking, though some of the requisite datapoints could presumably come from operational processing.
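To make the idea concrete, here is a toy sketch in the same spirit. This is emphatically not the patented ECS algorithm, just an illustrative greedy list scheduler: every task, voltage level, dependency, and the weighting factor `alpha` below are made-up assumptions. Each ready task is mapped to the (processor, voltage) pair that minimises a weighted sum of finish time and a simple V²f energy estimate.

```python
from itertools import product

# Per-processor DVS operating points: (supply voltage, relative clock speed).
# These numbers are invented for illustration.
VOLTAGE_LEVELS = [(1.2, 1.0), (1.0, 0.8), (0.8, 0.6)]

def energy(volts, speed, seconds):
    # Simple dynamic-power model: P ~ V^2 * f (arbitrary units).
    return volts ** 2 * speed * seconds

def schedule(tasks, deps, n_procs, alpha=0.5):
    """Greedily map each ready task to the (processor, voltage) pair that
    minimises alpha * finish_time + (1 - alpha) * energy."""
    proc_free = [0.0] * n_procs      # time at which each processor frees up
    finish = {}                      # task -> finish time
    order, done = [], set()
    while len(done) < len(tasks):
        ready = [t for t in tasks if t not in done
                 and all(d in done for d in deps.get(t, []))]
        for t in sorted(ready):
            earliest = max([finish[d] for d in deps.get(t, [])], default=0.0)
            best = None
            for p, (v, s) in product(range(n_procs), VOLTAGE_LEVELS):
                start = max(proc_free[p], earliest)
                dur = tasks[t] / s   # a slower clock stretches the task
                cost = alpha * (start + dur) + (1 - alpha) * energy(v, s, dur)
                if best is None or cost < best[0]:
                    best = (cost, p, v, start, dur)
            _, p, v, start, dur = best
            proc_free[p] = finish[t] = start + dur
            done.add(t)
            order.append((t, p, v))
    return order, max(finish.values())

# Diamond-shaped task graph: b and c depend on a; d depends on both.
tasks = {"a": 2.0, "b": 3.0, "c": 3.0, "d": 1.0}
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
order, makespan = schedule(tasks, deps, n_procs=2)
```

Note how the slack the article mentions shows up here: while `d` waits for both parents to finish, the scheduler is free to pick a lower voltage for whichever parent is not on the critical path.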
The trade press is a little light on details, but the authors published a paper in the proceedings of the 9th IEEE/ACM International Symposium on Cluster Computing and the Grid (2009). From the abstract of that paper:
Jobs on high-performance computing systems are deployed mostly with the sole goal of minimizing completion times. This performance demand has been satisfied without paying much attention to power/energy consumption. Consequently, that has become a major concern in high-performance computing systems. In this paper, we address the problem of scheduling precedence-constrained parallel applications on such systems—specifically with heterogeneous resources—accounting for both application completion time and energy consumption. Our scheduling algorithm adopts dynamic voltage scaling (DVS) to minimize energy consumption. DVS can be used with a number of recent commodity processors that are enabled to operate in different voltage supply levels at the expense of sacrificing clock frequencies. In the context of scheduling, this multiple voltage facility implies that there is a trade-off between the quality of schedules and energy consumption. Our algorithm effectively balances these two performance goals using a novel objective function, which takes into account both goals; this claim is verified by the results obtained from our extensive comparative evaluation study.
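The trade-off the abstract describes can be made concrete with back-of-the-envelope arithmetic. The model below is the standard textbook approximation for CMOS dynamic power, not anything taken from the paper: power scales roughly with V²f, execution time with 1/f, so per-task energy scales roughly with V² — which is why accepting a slower clock can pay off so handsomely.

```python
# Back-of-the-envelope DVS arithmetic (simplified textbook model, not the
# paper's): dynamic power P ~ C * V^2 * f, execution time t ~ work / f,
# so per-task energy E = P * t ~ C * V^2 * work -- quadratic in voltage.

def task_energy(volts, freq, work=1.0, cap=1.0):
    power = cap * volts ** 2 * freq
    time = work / freq
    return power * time, time

full_e, full_t = task_energy(volts=1.2, freq=1.0)
slow_e, slow_t = task_energy(volts=0.8, freq=0.6)

# Dropping from 1.2 V to 0.8 V cuts per-task energy to (0.8/1.2)^2 of the
# original, at the cost of running the task 1/0.6 times longer.
savings = 1 - slow_e / full_e   # fraction of energy saved, roughly 0.56
slowdown = slow_t / full_t      # roughly 1.67x longer
```

Under this model, energy depends only on voltage (the frequency cancels out), which is exactly why a scheduler that can hide the slowdown in existing slack gets the savings nearly for free.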