Requirements in HPC Environments

Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, these organizations rely on significant computing power to create innovative products and conduct leading-edge research. No two HPC installations are the same. “For maximum return, budget, software requirements, performance and customization all must be considered before installing and operating a successful environment.”

New NVIDIA Tesla P100 Brings Pascal Architecture to HPC Applications

“Accelerated computing is the only path forward to keep up with researchers’ insatiable demand for HPC and AI supercomputing,” said Ian Buck, vice president of accelerated computing at NVIDIA. “Deploying CPU-only systems to meet this demand would require large numbers of commodity compute nodes, leading to substantially increased costs without proportional performance gains. Dramatically scaling performance with fewer, more powerful Tesla P100-powered nodes puts more dollars into computing instead of vast infrastructure overhead.”

Challenges for Climate and Weather Prediction in the Era of Heterogeneous Architectures

Beth Wingate from the University of Exeter presented this talk at the PASC16 conference in Switzerland. “For weather or climate models to achieve exascale performance on next-generation heterogeneous computer architectures they will be required to exploit on the order of million- or billion-way parallelism. This degree of parallelism far exceeds anything possible in today’s models even though they are highly optimized. In this talk I will discuss the mathematical issue that leads to the limitations in space- and time-parallelism for climate and weather prediction models – oscillatory stiffness in the PDE.”
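As a hedged illustration (this specific notation is not taken from the talk itself), the oscillatory-stiffness problem is often written in the canonical form

$$\frac{\partial u}{\partial t} + \frac{1}{\varepsilon} L u = N(u), \qquad 0 < \varepsilon \ll 1,$$

where $L$ is a skew-Hermitian linear operator whose purely imaginary eigenvalues generate fast oscillations and $N(u)$ collects the slower nonlinear terms. Explicit time-stepping must resolve those fast oscillations, forcing the time step to scale as $\Delta t = O(\varepsilon)$ no matter how many processors are available, which is the barrier to time-parallelism referred to above.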

EXTOLL Network Chip Enables Network-attached Accelerators

Today EXTOLL in Germany released its new TOURMALET high-performance network chip for HPC. “The key demands of HPC are high bandwidth, low latency, and high message rates. The TOURMALET PCI-Express gen3 x16 board shows an MPI latency of 850ns and a message rate of 75M messages per second. The message rate value is CPU-limited, while TOURMALET is designed for well above 100M msg/s.”
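To show how figures like these are typically obtained, below is a minimal sketch of a generic MPI ping-pong micro-benchmark in C. It illustrates the measurement idea only; it is not EXTOLL’s own test harness, and the buffer size and iteration count are arbitrary choices.

```c
/* Minimal MPI ping-pong sketch for estimating small-message latency.
 * Generic illustration only, not the benchmark used for TOURMALET.
 * Run with at least two ranks, e.g.: mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 100000;
    char buf[8] = {0};              /* small 8-byte message */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0) {
        /* one-way latency = half the average round-trip time */
        printf("latency: %.1f ns\n", elapsed / iters / 2.0 * 1e9);
    }
    MPI_Finalize();
    return 0;
}
```

Production message-rate benchmarks normally issue many non-blocking sends per iteration, often across several communicating pairs at once, so that the interconnect rather than a single CPU send loop becomes the limiting factor; that is the CPU limitation the announcement alludes to.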

E4 to Showcase GPU-Accelerated OpenPOWER Servers at ISC 2016

Today Italy’s E4 Computer Engineering announced plans to showcase its new NVIDIA GPU-accelerated OpenPOWER servers at ISC 2016 in Frankfurt. “For this edition of ISC16, we wanted to reinforce the message that E4 is a company that actively engages and pursues new technologies’ paths with the aim to deliver leading-edge solutions for a number of demanding environments,” said Piero Altoè, Marketing and BDM Manager, E4 Computer Engineering. “Our priority is to collaborate with organizations such as OpenPOWER Foundation and true visionaries like NVIDIA in order to obtain powerful, scalable and affordable solutions for a number of complex applications and contribute to the development of technologies that have a huge impact on many aspects of our lives.”

The GPUltima for Graphics-Intensive VDI Environments

For universities and colleges with a traditional infrastructure, adding new programs and applications is a huge endeavor. The IT staff needs to determine whether all of the hardware meets the installation requirements and how to deploy these new programs on different models of desktops and notebooks. With a VDI environment that uses simple boot-up devices connecting to virtual desktops on the school’s server, the IT staff doesn’t have to worry about the age and capability of each individual PC when installing new software.

InsideHPC Guide To Flexible HPC

While all users of HPC technology want the fastest performance available, price and power consumption always seem to come into play, whether in the initial planning or at a later time. Standard performance measures exist that may or may not relate to an end user’s application mix, but it is important to understand the various benchmark results that go into determining the performance of a CPU, a server or an overall cluster.
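As a hedged illustration of one figure that underlies many of these benchmarks, the theoretical peak floating-point rate of a processor can be estimated as

$$\text{peak FLOPS} = \text{cores} \times \text{clock rate} \times \text{FLOPs per cycle per core}.$$

For a hypothetical 16-core CPU running at 2.5 GHz and retiring 16 double-precision FLOPs per core per cycle (for example, two 256-bit fused multiply-add units), the peak would be $16 \times 2.5\times10^{9} \times 16 = 640$ GFLOPS. Measured benchmarks such as LINPACK reach only a fraction of that number, and a site’s actual application mix may correlate with it only loosely, which is why it is important to understand what each benchmark really measures.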

NVIDIA Inception Program Offers Tools for Deep Learning

Today NVIDIA unveiled a comprehensive global program to support the innovation and growth of startups that are driving new breakthroughs in artificial intelligence and data science. “The NVIDIA Inception Program provides unique tools, resources and opportunities to the waves of entrepreneurs starting new companies, so they can develop products and services with a first-mover advantage.”

Bright Cluster Manager Comes to GPUltima from One Stop Systems

Today One Stop Systems announced that its GPUltima product line now employs Bright Computing’s HPC Cluster Manager software. Bright Computing is a provider of comprehensive software solutions for provisioning and managing HPC clusters. “Where conventional computer cluster systems use CPUs as the primary data processor, the GPUltima employs large numbers of GPU cards, providing 10 times the performance by adding thousands more cores,” said Steve Cooper, CEO of One Stop Systems. “The GPUltima is completely ‘application-ready’, configured and tested to the customer’s specifications, so that the customer can begin processing immediately. The unique cluster management and monitoring software and the service and support packages that accompany the GPUltima make this a user-friendly system that allows the customer to begin his work without having to configure the cluster.”

Video: Using GPUs for Electromagnetic Simulations of Human Interface Technology

Chris Mason from Acceleware presented this talk at GTC 2016. “This session will focus on real life examples including an RF powered contact lens, a wireless capsule endoscopy, and a smart watch. The session will also outline the basics of the subgridding algorithm along with the GPU implementation and the development challenges. Performance results will illustrate the significant reduction in computation times when using a localized subgridded mesh running on an NVIDIA Tesla GPU.”
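As a hedged illustration of the kind of computation involved (this is not Acceleware’s code and it omits the subgridding itself), the core of an FDTD electromagnetic solver is a stencil update over the Yee grid, sketched below in C for a 2D Ez field component. A GPU implementation typically assigns one thread per grid cell so that thousands of cells update in parallel.

```c
/* Sketch of one FDTD (Yee-grid) update step for the Ez field component.
 * Illustrative only: uniform grid, no subgridding, no boundary handling.
 * A GPU version would map each (i, j) cell to one thread. */
void update_ez(int nx, int ny, double *ez, const double *hx,
               const double *hy, double c_e, double dx, double dy)
{
    for (int i = 1; i < nx - 1; i++) {
        for (int j = 1; j < ny - 1; j++) {
            int idx = i * ny + j;
            /* discrete curl of H drives the change in Ez */
            double curl_h = (hy[idx] - hy[idx - ny]) / dx
                          - (hx[idx] - hx[idx - 1]) / dy;
            ez[idx] += c_e * curl_h;
        }
    }
}
```

Subgridding locally refines this mesh around small geometric features, such as the antenna of an RF-powered contact lens, which is what introduces the extra algorithmic and GPU implementation challenges the talk describes.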