Today Microsoft announced its GS-Series of premium VMs for compute-intensive workloads. “Powered by the Intel Xeon E5 v3 family processors, the GS-series can have up to 64TB of storage, provide 80,000 IOPs (storage I/Os per second) and deliver 2,000 MB/s of storage throughput. The GS-series offers the highest disk throughput, by more than double, of any VM offered by another hyperscale public cloud provider.”
Today Dell announced a new business unit aligned around hyperscale datacenters. “The Datacenter Scalable Solutions (DSS) group is designed to meet the specific needs of web tech, telecommunications service providers, hosting companies, oil and gas, and research organizations. These businesses often have high-volume technology needs and supply chain requirements in order to deliver business innovation. With a new operating model built on agile, scalable, and repeatable processes, Dell can now uniquely provide this set of customers with the technology they need, purposefully designed to their specifications, and delivered when they want it.”
With the growth of big data, cloud and high performance computing, demands on data centers around the world are expanding every year. Unfortunately, these demands are coming up against significant opposition in the form of operating constraints, capital constraints, and sustainability goals. In this article, we look at 8 of these constraints and how direct-to-chip liquid cooling is solving them.
As design challenges become more complex and time to product launch shrinks, it is important to understand how to use a cluster for simulation rather than just a single node. “HPC Clusters Drive Design Optimization” is an excellent introduction to getting the most out of a compute cluster.
From bio-engineering and climate studies to big data and high-frequency trading, HPC is playing an ever greater role in today’s society. Without the power of HPC, the complex analysis and the data-driven decisions that result from it would be impossible. Because these supercomputers and HPC clusters are so powerful, they are expensive to cool, use massive amounts of energy, and can require a great deal of space.
The Open Compute Project Foundation was created to design the most efficient server, storage and related designs for the next generation of data centers in an open and collaborative development model. By sharing designs that maximize density, minimize power consumption and deliver expected performance, completely new computing environments can be developed, free from the limitations of legacy thinking.
In the late 1980s, genomic sequencing began to shift from wet lab work to a computationally intensive science; by the end of the 1990s this trend was in full swing. The application of computer science and high performance computing (HPC) to these biological problems became the normal mode of operation for many molecular biologists.