In this video, officials from Cyfronet describe Prometheus, a new HP Apollo 8000 supercomputer with 1.68 Petaflops of peak performance.
Daniel Gutierrez, Managing Editor of insideBIGDATA, has put together a terrific Guide to Scientific Research. The goal of this paper is to provide a road map for scientific researchers wishing to capitalize on the rapid growth of big data technology for collecting, transforming, analyzing, and visualizing large scientific data sets.
“As the use of coprocessors increases to speed up HPC applications, it is important to understand how much additional power the coprocessors use. With various measurements and benchmarks arising to calculate the power used during the running of compute- and data-intensive applications, measuring the power draw from an Intel Xeon Phi coprocessor is important to understanding the best use of resources.”
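The article itself does not include code, but as a rough illustration of the kind of measurement it describes, here is a minimal sketch that samples a power reading at a fixed interval while a workload runs and integrates the trace into an energy estimate. The workload command, the power-query command, and the sampling interval are all assumptions for illustration; they are not taken from the article or from any specific Intel tool.

```python
import subprocess
import time


def read_power_watts(power_cmd):
    """Run a power-query command and parse a single wattage value from stdout.

    The command is a placeholder: point it at whatever your platform exposes,
    e.g. a vendor management utility or a sysfs counter wrapped in a small
    script that prints a number of watts.
    """
    out = subprocess.run(power_cmd, capture_output=True, text=True, check=True)
    return float(out.stdout.strip())


def measure_energy(workload_cmd, power_cmd, interval_s=0.5):
    """Sample power while the workload runs; return an energy estimate in joules."""
    samples = []  # (timestamp, watts) pairs
    proc = subprocess.Popen(workload_cmd)
    while proc.poll() is None:
        samples.append((time.time(), read_power_watts(power_cmd)))
        time.sleep(interval_s)
    # Trapezoidal integration of the sampled power trace over time.
    return sum(
        0.5 * (p0 + p1) * (t1 - t0)
        for (t0, p0), (t1, p1) in zip(samples, samples[1:])
    )


if __name__ == "__main__":
    # Both commands are hypothetical placeholders for your own benchmark
    # binary and your platform's power-reading tool.
    joules = measure_energy(["./phi_benchmark"], ["./query_phi_power.sh"])
    print(f"Estimated energy consumed: {joules:.1f} J")
```

A longer sampling interval lowers measurement overhead but smooths over short power spikes, so the interval should be chosen with the workload's behavior in mind.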
“The industry needs to accomplish a lot in the coming years to deliver a working, useful exascale machine. PBS Pro is only one piece of the puzzle… but it’s an important piece. Job scheduling and workload management are core capabilities – a ‘must have’ for every HPC system – ensuring HPC goals are met by enforcing site-specific use policies, enabling users to focus on science and engineering rather than IT, and optimizing utilization (of hardware, licenses, and power) to minimize waste.”
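To make the point about site policies and resource requests a bit more concrete, here is a hedged sketch of a batch submission as a user might see it. The queue name, node counts, walltime, and application path are invented placeholders, not anything from the article; the #PBS directives and qsub command shown are standard PBS Pro usage, and the scheduler decides when and where the job actually runs according to site policy.

```python
import subprocess
import textwrap

# Hypothetical job script: queue, resource selection, and solver are
# placeholders; a real site would set limits that match its own policies.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #PBS -N example_run
    #PBS -q workq
    #PBS -l select=4:ncpus=24:mpiprocs=24
    #PBS -l walltime=02:00:00
    cd $PBS_O_WORKDIR
    mpiexec ./solver input.dat
    """)

with open("job.pbs", "w") as f:
    f.write(job_script)

# qsub hands the job to the workload manager, which queues and places it
# based on available hardware, licenses, and the site's scheduling policy.
result = subprocess.run(["qsub", "job.pbs"], capture_output=True, text=True)
print("Submitted job:", result.stdout.strip())
```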
While not every industry is making the move to the Cloud at quite the same rate, the high-performance side of the Financial Services industry seems to be ahead of the curve. This year, the ISC 2015 conference will feature a session on HPC & Cloud Computing in Financial Services. To learn more about the latest trends in this area, we caught up with Prof. Juho Kanniainen from Tampere University of Technology and Tuomas Eerola from Techila Technologies.