Jörg Lotze from Xcelerit writes that the dataflow model can be hugely beneficial to high performance computing.
When asked to describe a data-processing algorithm, domain specialists (engineers, researchers, mathematicians) often walk to the whiteboard, draw boxes for the different processing stages, and connect them with arrows. This, in effect, is dataflow, and it shows that this way of thinking is natural in many problem domains. The dataflow programming model, with its 'shared-nothing' semantics and explicitly expressed data dependencies, provides pipeline parallelism (a form of task parallelism) by its very nature: all actors can execute concurrently, each operating on a different section of the data.
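The whiteboard picture of boxes connected by arrows maps directly onto code. Below is a minimal sketch, assuming plain Python threads and queues rather than any particular dataflow framework; the stage functions are illustrative. Each stage owns no shared state and talks to its neighbours only through queues (shared-nothing), so the stages run concurrently on different items of the stream (pipeline parallelism).

```python
import threading
import queue

SENTINEL = object()  # marks the end of the data stream

def stage(fn, inq, outq):
    """Apply fn to each item from inq and pass the result to outq.
    Stages communicate only through queues, never shared state."""
    while True:
        item = inq.get()
        if item is SENTINEL:
            outq.put(SENTINEL)  # forward end-of-stream to the next stage
            return
        outq.put(fn(item))

# Two processing stages wired box-to-arrow-to-box, as on the whiteboard.
q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(lambda x: x * 2, q_in, q_mid)),
    threading.Thread(target=stage, args=(lambda x: x + 1, q_mid, q_out)),
]
for t in threads:
    t.start()

# Feed data through the pipeline; while the second stage processes
# item i, the first stage can already work on item i+1.
for x in range(5):
    q_in.put(x)
q_in.put(SENTINEL)

results = []
while (item := q_out.get()) is not SENTINEL:
    results.append(item)
for t in threads:
    t.join()
print(results)  # [1, 3, 5, 7, 9]
```

Because each arrow is a FIFO queue, item order is preserved end to end, and adding throughput is a matter of drawing more boxes rather than restructuring the program.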