The ISC Cloud’14 conference has issued its Call for Posters. The event takes place Sept. 29-30 in Heidelberg, Germany.
“How does Linux system performance compare to other OSes, particularly the performance-focused Solaris family? What features inspired by them could be added to Linux? Both are bristling with performance features and optimizations, and it’s difficult enough to fully understand the performance of the Linux kernel and its distributions, let alone other kernels and OSes for comparison. Brendan Gregg has unique insight into the performance features and analysis capabilities of both Linux and Solaris-based systems, which he covers in depth in his new book: Systems Performance: Enterprise and the Cloud.”
“At Cray, we are a big user of, and investor in, Lustre. Because Lustre is such a great fit for HPC, we deploy it with almost all of our systems. We even sell and deliver Lustre storage independent of Cray compute systems. But Lustre is not (yet) the perfect solution for distributed and parallel I/O, so Cray invests a lot of time and resources into improving, testing, and honing it. We collaborate with the open-source Lustre community on those enhancements and development. In fact, Cray is a leader in the Lustre community through our involvement in OpenSFS.”
“Confronting power limitations and the high cost of data movement, new supercomputing architectures within the DOE are requiring users to make changes to application codes to achieve high performance. More specifically, users will need to exploit greater on-node parallelism and longer vector units, and restructure code to take advantage of memory locality. In this presentation you will learn about coming architectural trends and what you can do now to start preparing your application.”
Achieving good performance on any system requires balancing many competing factors. More than just minimizing communication (or floating-point operations or memory motion), for high-end systems the goal is to achieve the lowest-cost solution. And while cost is typically considered in terms of time to solution, other metrics, including total energy consumed, are likely to be important in the future. Making effective use of the next generations of extreme-scale systems requires rethinking the algorithms, the programming models, and the development process. This talk will discuss these challenges and argue that performance modeling, combined with a more dynamic and adaptive style of programming, will be necessary for extreme-scale systems.
“Successful computational scientists are experts in a scientific field, such as chemistry, physics, or astrophysics; are knowledgeable about both mathematical representations and algorithmic implementations; and also specialize in developing and optimizing scientific application codes to run on computers both large and small. A truly successful computational science investigation requires the “three A’s”: a compelling Application, the appropriate Algorithm, and the underlying Architecture.”