
Building the CLIMB Project – World’s Largest Single System for Microbial Bioinformatics


A new private cloud HPC system will soon benefit bioinformatics researchers in their work on bacterial pathogens. The Cloud Infrastructure for Microbial Bioinformatics (CLIMB) project, a collaboration between the University of Birmingham, the University of Warwick, Cardiff University, and Swansea University, will create a free-to-use, world-leading cyberinfrastructure specifically designed for microbial bioinformatics research.

Apply Now for XSEDE’s Jetstream Shared Cloud Research Allocations


XSEDE’s new Jetstream shared cloud resource is coming online early next year, but you can now apply for Jetstream research allocations.

Video: Intel Commitment to Lustre


“For High Performance Computing users who leverage open-source Lustre software, a good file system for big data is now getting even better. Building on its substantial contributions to the Lustre community, Intel is rolling out new features that will make the file system more scalable, easier to use, and more accessible to enterprise customers.”

Executing Multiple Dynamically Parallelized Programs on Dynamically Shared Cloud Processors


“ThroughPuter PaaS is purpose-built for secure, dynamic cloud computing in the parallel processing era. ThroughPuter offers unique, real-time application load- and type-adaptive parallel processing: get the speed-up from parallel execution cost-efficiently, where and when any given program/task benefits most from the parallel processing resources. Addressing the parallel processing challenge takes a full programming-to-execution platform approach.”
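
The excerpt above describes the platform only at a high level, so the snippet below is just a generic sketch of load-adaptive parallel execution in Python; the busy_task workload and adaptive_workers heuristic are invented for illustration and are not part of ThroughPuter's API.

```python
# Illustrative sketch only: NOT the ThroughPuter PaaS API, just a generic
# example of adapting parallel-execution width to the current system load.
import os
from concurrent.futures import ProcessPoolExecutor

def busy_task(n: int) -> int:
    """Stand-in compute kernel (hypothetical workload)."""
    return sum(i * i for i in range(n))

def adaptive_workers() -> int:
    """Pick a worker count based on how many cores are currently idle."""
    cores = os.cpu_count() or 1
    load = os.getloadavg()[0]          # 1-minute load average (POSIX only)
    return max(1, int(cores - load))   # leave headroom for other tenants

if __name__ == "__main__":
    workloads = [200_000] * 16
    workers = adaptive_workers()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(busy_task, workloads))
    print(f"completed {len(results)} tasks with {workers} workers")
```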

SDSC Steps up with Upgraded Cloud and Storage Services

The reliable and scalable architecture of the SDSC Cloud was designed for researchers and departments as a low-cost and efficient alternative to public cloud service providers. Image: Kevin Coakley, SDSC

Today the San Diego Supercomputer Center (SDSC) announced that it has made significant upgrades to its cloud-based storage system to include a new range of computing services designed to support science-based researchers, especially those with large data requirements that preclude commercial cloud use, or who require collaboration with cloud engineers for building cloud-based services.
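
For researchers moving large datasets onto a system like this, the workflow typically amounts to an object-storage upload. The sketch below assumes a generic S3-compatible endpoint; the URL, bucket name, credentials, and file names are placeholders, not actual SDSC Cloud values.

```python
# Hypothetical sketch: uploading a large dataset to an S3-compatible object
# store. Endpoint, credentials, bucket, and file names are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.edu",  # placeholder endpoint
    aws_access_key_id="PROJECT_KEY",
    aws_secret_access_key="PROJECT_SECRET",
)

# Multipart transfers keep large research datasets moving efficiently.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024,
                        multipart_chunksize=64 * 1024 * 1024)

s3.create_bucket(Bucket="genomics-results")
s3.upload_file("run42_alignments.bam", "genomics-results",
               "run42/alignments.bam", Config=config)
print("upload complete")
```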

Developing a Plan for Cloud Based GPU Processing


For some applications, cloud-based clusters may be limited by communication and/or storage latency and speeds. With GPUs, however, these issues are not present: applications running on cloud GPUs perform the same as those on your local cluster, unless the applications span multiple nodes and are sensitive to MPI speeds. For those GPU applications that work well in the cloud environment, a remote cloud may be an attractive option for both production and feasibility studies.
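
One way to gauge that MPI sensitivity before committing to a cloud deployment is a simple ping-pong latency probe between two nodes. The sketch below assumes mpi4py and an MPI launcher are installed on the instances; the buffer size and repetition count are illustrative choices.

```python
# Minimal MPI ping-pong latency check (assumes mpi4py and an MPI launcher;
# run with e.g. `mpiexec -n 2 python pingpong.py`).
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1, dtype="b")   # 1-byte message isolates latency from bandwidth
reps = 1000

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # Half the round-trip time approximates one-way latency.
    print(f"one-way latency ~ {elapsed / (2 * reps) * 1e6:.1f} microseconds")
```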

NVIDIA GRID 2.0 comes to Microsoft Azure


“Our vision is to deliver accelerated graphics and high performance computing to any connected device, regardless of location,” said Jen-Hsun Huang, co-founder and CEO of NVIDIA. “We are excited to collaborate with Microsoft Azure to give engineers, designers, content creators, researchers and other professionals the ability to visualize complex, data-intensive designs accurately from anywhere.”

IBM Storage With OpenStack

IBM Cloud Service

While there is much discussion, and there are many products in the market, regarding cloud computing and the ability to spin up virtual machines quickly and efficiently, the fact remains that without planning for cloud-based storage, the data will get lost. Simply put, without storage, there is no data.
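
As a concrete illustration of that planning step, the sketch below persists VM output to OpenStack Swift object storage using python-swiftclient so the data outlives the instance; the Keystone endpoint, credentials, and container names are placeholders rather than an actual IBM configuration.

```python
# Sketch only: persisting VM output to OpenStack Swift object storage so the
# data survives after the instance is torn down. Endpoint, credentials, and
# container/object names are placeholders, not an actual IBM configuration.
from swiftclient.client import Connection

conn = Connection(
    authurl="https://keystone.example.com:5000/v3",  # placeholder Keystone URL
    user="demo_user",
    key="demo_password",
    auth_version="3",
    os_options={"project_name": "demo_project",
                "user_domain_name": "Default",
                "project_domain_name": "Default"},
)

conn.put_container("simulation-output")
with open("results.csv", "rb") as f:
    conn.put_object("simulation-output", "run-001/results.csv", contents=f)

headers, body = conn.get_object("simulation-output", "run-001/results.csv")
print(f"stored {headers['content-length']} bytes durably in Swift")
```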

Video: Leveraging Containers in Elastic Environments


In this video from the Disruptive Technologies session at the 2015 HPC User Forum, Nick Ihli from Adaptive Computing presents: Leveraging Containers in Elastic Environments.

IBM Cloud Services vs Amazon EC2


High Performance Computing (HPC) in the cloud has become a hot topic, with new offerings targeted at this market. The demands of technical computing professionals who use the cloud for HPC workloads are different from those of general enterprise software. Performance is key, which requires different infrastructure on the cloud provider's premises.
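
As a rough illustration of those infrastructure differences, the sketch below uses boto3 to request EC2 instances inside a cluster placement group, one of the knobs HPC users reach for to keep inter-node latency down; the AMI ID, key pair, and instance type are placeholders, not recommendations for either provider.

```python
# Illustrative sketch of HPC-minded provisioning on EC2 via boto3: a cluster
# placement group keeps nodes on low-latency network paths. The AMI ID,
# key pair, and instance type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement groups pack instances close together for tight MPI coupling.
ec2.create_placement_group(GroupName="hpc-cluster-demo", Strategy="cluster")

response = ec2.run_instances(
    ImageId="ami-00000000",            # placeholder AMI
    InstanceType="c4.8xlarge",         # compute-optimized placeholder type
    KeyName="my-hpc-key",              # placeholder key pair
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster-demo"},
)

for inst in response["Instances"]:
    print(inst["InstanceId"], inst["State"]["Name"])
```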