VDI, or Virtual Desktop Infrastructure, helps companies save money, time, and resources. Instead of putting a large, bulky machine on every desk in the office, companies can connect multiple workstations to a single computer using thin clients. Instead of replacing individual desktops every year, companies only need to replace thin clients every five years. And when it comes time to apply updates, IT staff update the one central computer instead of spending time updating every individual workstation.
“Deep neural networks are increasingly important for powering AI-based applications like speech recognition. Baidu’s research shows that adding GPUs to the data center makes deploying big deep neural networks practical at scale. Deep learning based technologies benefit from batching user requests in the data center, which requires a different software architecture than traditional web applications.”
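The batching argument above can be made concrete with a back-of-envelope cost model. This is a minimal sketch with made-up numbers (the function names and cost constants are illustrative assumptions, not Baidu's figures): each model invocation carries a fixed overhead, and batching amortizes that overhead across many user requests.

```python
# Illustrative cost model (hypothetical units) of why data-center
# deep-learning serving batches user requests.

FIXED_COST = 5.0   # assumed fixed overhead per model invocation
PER_ITEM = 1.0     # assumed marginal cost per request within an invocation

def unbatched_cost(n_requests):
    """Each request pays the fixed invocation overhead on its own."""
    return n_requests * (FIXED_COST + PER_ITEM)

def batched_cost(n_requests, batch_size):
    """Requests grouped into batches share the fixed overhead."""
    full, rem = divmod(n_requests, batch_size)
    n_batches = full + (1 if rem else 0)
    return n_batches * FIXED_COST + n_requests * PER_ITEM

print(unbatched_cost(64))    # 384.0
print(batched_cost(64, 32))  # 74.0
```

The gap between the two figures is why a batching server needs a different software architecture than a traditional one-request-at-a-time web application.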
“Our goal is to enable HPC developers to easily port applications across all major CPU and accelerator platforms with uniformly high performance using a common source code base,” said Douglas Miles, director of PGI Compilers & Tools at NVIDIA. “This capability will be particularly important in the race towards exascale computing in which there will be a variety of system architectures requiring a more flexible application programming approach.”
Today the Hartree Centre announced plans for the UK’s first POWER Acceleration and Design Center (PADC), designed to help UK businesses exploit high performance computing on OpenPOWER systems for modeling & simulation and big data analytics.
A successful example of how a well-managed GPU cluster lets scientists focus on obtaining results comes from the Tokyo University of Agriculture and Technology (TUAT). A research group led by Dr. Akinori Yamanaka develops computational models and simulates engineering materials for a variety of applications using HPC. With Bright Cluster Manager, Dr. Yamanaka and his team were able to focus immediately on algorithm development rather than burdening the team with cluster administration issues.
Today Cray announced the achievement of a new performance benchmark for reservoir simulations using Stone Ridge Technology’s ECHELON reservoir simulation software and the Cray CS-Storm cluster supercomputer.
Training the neural networks used in deep learning is an ideal task for GPUs, because GPUs can perform many calculations at once (in parallel), so training takes far less time than it once did. More GPUs mean more computational power: a system with multiple GPUs can process data much faster than a system with only CPUs, or with a single CPU and a single GPU. One Stop Systems’ High Density Compute Accelerator is the densest GPU expansion system to date.
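The "more GPUs means faster training" claim can be sketched with Amdahl's law. The numbers below are assumptions for illustration (a 95% parallelizable training step is hypothetical, not a measured figure for any particular workload): only the parallel portion of the work divides across devices, so scaling is strong but sub-linear.

```python
# Amdahl's-law sketch of multi-GPU training speedup.
# parallel_fraction is assumed; real workloads vary.

def speedup(parallel_fraction, n_devices):
    """Speedup over one device when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_devices)

# Assuming 95% of a training step parallelizes across GPUs:
for n in (1, 2, 4, 8):
    print(n, round(speedup(0.95, n), 2))  # 1.0, 1.9, 3.48, 5.93
```

The diminishing returns at higher device counts are why dense GPU expansion systems pair raw device count with fast interconnects, which shrink the serial (communication) fraction.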
Professor Taisuke Boku from the University of Tsukuba presented this talk at the PBS User Group. “We have been operating HA-PACS, a large-scale GPU cluster with 332 computation nodes equipped with 1,328 GPUs, managed by the PBS Professional scheduler. The users span a wide variety of computational science fields, with resource requests ranging from a single node to full-scale parallel processing. There are also several categories of user groups with paid and free scientific projects. Operating such a large system while maintaining a high utilization rate as well as fairness across these user groups is challenging. We have successfully kept job utilization at 85%–90% under multiple constraints.”
Today IBM announced a new LC series of servers that infuses technologies from members of the OpenPOWER Foundation and joins IBM’s Power Systems portfolio of servers. According to IBM, the new LC systems perform data analytics workloads faster and more cheaply than comparable x86-based servers.
“Although the use of GPUs is now widespread, including GPUs in current HPC clusters presents several drawbacks, mainly related to increased costs. In this talk we present how the use of remote GPU virtualization may overcome these drawbacks while noticeably increasing overall cluster throughput. The talk presents real throughput measurements made using the rCUDA remote GPU virtualization middleware.”