Microsoft Boosts Azure with GS-Series VMs for Compute-Intensive Workloads

Today Microsoft announced its GS-Series of premium VMs for compute-intensive workloads. “Powered by the Intel Xeon E5 v3 family processors, the GS-series can have up to 64TB of storage, provide 80,000 IOPs (storage I/Os per second) and deliver 2,000 MB/s of storage throughput. The GS-series offers the highest disk throughput, by more than double, of any VM offered by another hyperscale public cloud provider.”
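As a back-of-the-envelope check on those figures, here is a minimal Python sketch of how striping premium disks adds up to VM-level limits. The per-disk numbers (roughly P30-class: 5,000 IOPS and 200 MB/s each) are our assumption for illustration, not part of Microsoft's announcement.

```python
import math

# Assumed per-disk figures (roughly Azure Premium Storage P30-class);
# illustrative only, not taken from Microsoft's announcement.
DISK_IOPS = 5_000
DISK_MBPS = 200

# VM-level figures quoted in the announcement.
VM_IOPS = 80_000
VM_MBPS = 2_000

# Disks needed to satisfy each limit when striped together.
disks_for_iops = math.ceil(VM_IOPS / DISK_IOPS)
disks_for_mbps = math.ceil(VM_MBPS / DISK_MBPS)
stripe_width = max(disks_for_iops, disks_for_mbps)

print(f"Disks to reach {VM_IOPS:,} IOPS: {disks_for_iops}")
print(f"Disks to reach {VM_MBPS:,} MB/s: {disks_for_mbps}")
print(f"Stripe width needed for both: {stripe_width}")
```

Under these assumed per-disk figures, 16 striped disks cover the IOPS target and comfortably exceed the throughput one.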

Dell Opens Line of Business for Hyperscale Datacenters

Today Dell announced a new business unit aligned around hyperscale datacenters. “The Datacenter Scalable Solutions (DSS) group is designed to meet the specific needs of web tech, telecommunications service providers, hosting companies, oil and gas, and research organizations. These businesses often have high-volume technology needs and supply chain requirements in order to deliver business innovation. With a new operating model built on agile, scalable, and repeatable processes, Dell can now uniquely provide this set of customers with the technology they need, purposefully designed to their specifications, and delivered when they want it.”

Bare Metal to Application Ready in Less Than a Day

There is a big push to decrease the complexity of setting up and managing HPC clusters in the data center. This IBM webinar, “Bare Metal to Application Ready in Less Than a Day,” provides excellent tips for preparing for and managing the complexity of an HPC cluster.

Video: HPC Trends in the Trenches from Bio-IT World

In this video, Chris Dagdigian from Bioteam delivers his annual assessment of the best, the worthwhile, and the most overhyped information technologies for life sciences at the 2015 Bio-IT World Conference & Expo in Boston.

Solving Eight Constraints of Today’s Data Center

With the growth of big data, cloud and high performance computing, demands on data centers around the world are expanding every year. Unfortunately, these demands are coming up against significant opposition in the form of operating constraints, capital constraints, and sustainability goals. In this article, we look at eight of these constraints and how direct-to-chip liquid cooling is solving them.
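To make the cooling constraint concrete, the standard Power Usage Effectiveness (PUE) arithmetic shows where direct-to-chip liquid cooling saves energy. A minimal sketch; the PUE values below are assumed, illustrative figures, not numbers from the article.

```python
# Illustrative PUE comparison; the PUE values are assumptions for the
# sake of example, not figures from the article.
IT_LOAD_KW = 1_000  # IT equipment power draw

def facility_power(it_kw: float, pue: float) -> float:
    """Total facility power = IT load * PUE (PUE >= 1.0)."""
    return it_kw * pue

air_cooled = facility_power(IT_LOAD_KW, pue=1.8)     # assumed air-cooled PUE
liquid_cooled = facility_power(IT_LOAD_KW, pue=1.1)  # assumed direct-to-chip PUE

print(f"Air-cooled facility draw:    {air_cooled:,.0f} kW")
print(f"Liquid-cooled facility draw: {liquid_cooled:,.0f} kW")
print(f"Overhead saved:              {air_cooled - liquid_cooled:,.0f} kW")
```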

Clusters Drive Design Simulation

As design challenges become more complex and time to product launch shrinks, it is important to understand how to use a cluster for simulation rather than just a single node. “HPC Clusters Drive Design Optimization” is an excellent introduction to getting the most out of a compute cluster.
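How much a cluster helps depends on how much of the simulation actually parallelizes. A minimal Amdahl's-law sketch (the 95% parallel fraction is an assumed figure, not from the whitepaper):

```python
def amdahl_speedup(parallel_fraction: float, nodes: int) -> float:
    """Ideal speedup on `nodes` nodes when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / nodes)

# Assumption for illustration: 95% of the simulation parallelizes cleanly.
for n in (1, 8, 32, 128):
    print(f"{n:>4} nodes -> {amdahl_speedup(0.95, n):5.2f}x speedup")
```

Even under this generous assumption, the serial 5% caps the speedup near 20x no matter how many nodes are added, which is the kind of scaling question a cluster-sizing exercise has to answer.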

Benefits of RackCDU D2C for High Performance Computing

From bio-engineering and climate studies to big data and high frequency trading, HPC is playing an ever greater role in today’s society. Without the power of HPC, the complex analysis and data-driven decisions made as a result would be impossible. Because these supercomputers and HPC clusters are so powerful, they are expensive to cool, use massive amounts of energy, and can require a great deal of space.

Open Computing Drives Innovation and Efficiency

The Open Compute Project Foundation was created to develop the most efficient server, storage and related designs for the next generation of data centers through an open and collaborative development model. By sharing designs that maximize density, minimize power consumption and deliver expected performance, completely new computing environments can be developed, free from the limitations of legacy thinking.

Going from the Lab to the Data Center

In the late 1980s, genomic sequencing began to shift from wet lab work to a computationally intensive science; by the end of the 1990s this trend was in full swing. The application of computer science and high performance computing (HPC) to these biological problems became the normal mode of operation for many molecular biologists.

Agenda Posted for OpenFabrics Workshop, March 15-18 in Monterey

Katie Antypas, Services Department Head, National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory

The OpenFabrics Alliance has published the agenda for its Developers’ Workshop in Monterey, CA. Beginning with a March 15 keynote by Katie Antypas from NERSC, the agenda centers on three major themes: Applications Performance, Non-Volatile Memory, and Systems-on-a-Chip (SoCs).