In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research discuss the new Fortissimo Foundation from A3Cube, a clustered, pervasive, global direct-remote I/O access system. For more details, check out our A3Cube Slidecast over at insideBIGDATA. After that, they look at PayPal’s use of TI Keystone DSP processors for systems intelligence. By analyzing its chaotic server data in real time, PayPal is producing organized, intelligent results with extreme energy efficiency using HP’s Moonshot servers.
In this slidecast, John Gromala from HP describes the company’s new Apollo series of HPC servers. Tailor-made for the HPC market, the Apollo Series combines a modular design with innovative power distribution and air- and liquid-cooling techniques for extreme performance at rack scale, providing up to four times more performance per square foot than standard rack servers.
Heterogeneous hardware is now present in virtually all clusters, so make sure you can monitor all hardware on all installed clusters in a consistent fashion. With extra work and expertise, some open source tools can be customized for this task, but there are few versatile and robust tools with a single comprehensive GUI or CLI that can consistently manage all popular HPC hardware and software. Whatever the choice, a monitoring solution should not interfere with HPC workloads.
“Jointly defined by a group of major computer hardware and software vendors, the OpenMP API is a portable, scalable model that gives shared memory parallel programmers a simple and flexible interface for developing parallel applications on platforms ranging from embedded systems and accelerator devices to multicore systems and shared memory systems.”
Smaller clusters often overload a single server with multiple services, such as file serving, resource scheduling, and monitoring/management. While this approach may work for systems with fewer than 100 nodes, these services can overload the cluster network or the single server as the cluster grows. This insideHPC Guide shows a plan for scalable HPC cluster growth.