TYAN Solutions for Flexible HPC

Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, these organizations rely on significant computing power to create innovative products and conduct leading-edge research. No two HPC installations are the same. “For maximum return, budget, software requirements, performance and customization all must be considered before installing and operating a successful environment.”

Supercomputing and the Search for Dark Matter

Over at CSCS, Simone Ulmer writes that particle physicists using the Piz Daint supercomputer have determined what is known as the scalar quark content of the proton. The research will aid efforts to detect and study dark matter.

Nor-Tech HPC Clusters Power Groundbreaking Projects

Today system integrator Nor-Tech disclosed that the company is working closely with some of the world’s top researchers and innovators to develop, build, deploy and support simulation clusters. “This has been an extremely exciting year for us that has allowed us to collaborate on innovations that promise to be groundbreaking and also discoveries that are changing the way we look at the universe,” said Nor-Tech President and CEO David Bollig.

Exxact to Build HPC Solutions Using NVIDIA Tesla P100 GPUs

Today Exxact Corporation announced its planned production of HPC solutions using the NVIDIA Tesla P100 GPU accelerator for PCIe. Exxact will be integrating the Tesla P100 into its Quantum family of servers, which are currently offered with either NVIDIA Tesla M40 or K80 GPUs. The NVIDIA Tesla P100 for PCIe-based servers was introduced at the recent 2016 International Supercomputing Conference and is anticipated to deliver massive leaps in performance and value compared with CPU-based systems. NVIDIA stated the new Tesla P100 will help meet the unprecedented computational demands placed on modern data centers.

SGI to Power 1.9 Petaflop Supercomputer at University of Tokyo

The University of Tokyo has chosen SGI to build a new supercomputer system for advanced data analysis and simulation at its Information Technology Center. The center is one of Japan’s major research and educational institutions for building, applying, and utilizing large computer systems. The new SGI system will begin operation July 1, 2016. “The SGI integrated supercomputer system for data analysis and simulation will support the needs of scientists in new fields such as genome analysis and deep learning in addition to scientists in traditional areas of computational science,” said Professor Hiroshi Nakamura, director of the Information Technology Center at the University of Tokyo. “The new system will further ongoing research and contribute to the development of new academic fields that combine data analysis and computational science.”

Thomas Sterling presents: HPC Achievement and Impact 2016

Thomas Sterling presented this keynote at ISC 2016 in Frankfurt. “Even as the hundred-petaflops era is coming within sight, more dramatic programs to achieve exaflops capacity are now emerging, with the expectation of this two-orders-of-magnitude advance in the early part of the next decade. Yet the challenges of the end of Moore’s Law loom ever greater, threatening to impede further progress. Innovations in semiconductor technologies and processor socket architecture, matched with improvements in application development environments, promise to overcome such barriers. This keynote presentation will deliver a rapid-fire summary of the major accomplishments of the last year that promise a renaissance in supercomputing in the immediate future.”

Video: Analyst Crossfire from ISC 2016

In this lively panel discussion from ISC 2016, moderator Addison Snell asks visionary leaders from the supercomputing community to comment on forward-looking trends that will shape the industry this year and beyond.

Slidecast: Announcing the Nvidia Tesla P100 for PCIe Servers

In this slidecast, Marc Hamilton from Nvidia describes the Nvidia Tesla P100 for PCIe Servers. “The Tesla P100 for PCIe is available in a standard PCIe form factor and is compatible with today’s GPU-accelerated servers. It is optimized to power the most computationally intensive AI and HPC data center applications. A single Tesla P100-powered server delivers higher performance than 50 CPU-only server nodes when running the AMBER molecular dynamics code, and is faster than 32 CPU-only nodes when running the VASP materials science application.”

Supermicro Launches Wide Range of HPC Solutions at ISC 2016

Supermicro is showcasing its High Performance Computing solutions at ISC 2016 this week in Frankfurt, Germany. “With Supermicro HPC solutions, deep learning, engineering, and scientific fields can scale out compute clusters to accelerate their most demanding workloads and achieve the fastest time-to-results with maximum performance per watt, per square foot, and per dollar,” said Charles Liang, President and CEO of Supermicro. “With our latest innovations incorporating Intel Xeon Phi processors in a performance- and density-optimized Twin architecture, 8-socket scalable servers, a 100Gbps OPA switch for high-bandwidth connectivity, and high-performance NVMe for Lustre-based storage, our customers can accelerate their applications and innovations to address the most complex real-world problems.”