Oregon Bill Would Penalize Data Centers for Failure to Meet Emissions Requirements Starting in 2027

A bill before the Oregon state legislature would penalize data centers for failing to meet emissions standards starting in 2027 and, if approved in Oregon and followed by similar measures in other states, could have significant implications for hyperscalers and HPC organizations with heavy electrical requirements. A story in yesterday’s The Oregonian reported that […]

Why Hardware Acceleration Is The Next Battleground In Processor Design

In this special guest feature, Theodore Omtzigt from Stillwater Supercomputing writes that as workloads specialize due to scale, hardware-accelerated solutions will continue to be cheaper than approaches that use general-purpose components. “If you’re a CIO who manages integrations of third-party hardware and software, be aware of new hardware acceleration technologies that can reduce the cost of service delivery by orders of magnitude.”

Radio Free HPC Looks at New USA Supercomputing Map

In this podcast, the Radio Free HPC team looks at the new interactive USA Supercomputing Map from Hyperion Research. “As part of the discussion, Rich recaps Hyperion’s recent HPC User Forum in Tucson. The event featured an extended session on Quantum Computing with presentations by D-Wave Systems, Google, IBM, Intel, Microsoft, NIST, and Rigetti Computing.”

Radio Free HPC Looks at the Cryptocurrency Crash

In this podcast, the Radio Free HPC team looks at the recent cryptocurrency crash and why prices for these coins are so volatile. After that, we do our Catch of the Week, in which the IBM Cloud is leading the company back to profitability.

Radio Free HPC Looks at High Performance Interconnects

In this podcast, the Radio Free HPC team looks at Dan’s recent talk on High Performance Interconnects. “When it comes to choosing an interconnect for your HPC cluster, what is the best way to go? Is offloading better than onloading? You can find out more by watching Dan’s talk from the HPC Advisory Council Australia conference.”

GPUs Power New AWS P2 Instances for Science & Engineering in the Cloud

Today Amazon Web Services announced the availability of P2 instances, a new GPU instance type for Amazon Elastic Compute Cloud designed for compute-intensive applications that require massively parallel floating-point performance, including artificial intelligence, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and rendering. With up to 16 NVIDIA Tesla K80 GPUs, P2 instances are the most powerful GPU instances available in the cloud.
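For readers who want to try a P2 instance, the request looks like any other EC2 launch with the instance type set to one of the P2 sizes. Below is a minimal sketch of how one might build the parameters for a boto3 `run_instances` call; the AMI ID is a placeholder, and the helper function is hypothetical, not part of any AWS SDK.

```python
def p2_launch_params(ami_id, instance_type="p2.16xlarge", count=1):
    """Build keyword arguments for boto3's ec2.run_instances().

    p2.16xlarge exposes all 16 Tesla K80 GPUs; smaller sizes
    (p2.xlarge, p2.8xlarge) offer 1 and 8 GPUs respectively.
    """
    return {
        "ImageId": ami_id,        # placeholder: use a GPU-enabled AMI
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

# Usage (requires AWS credentials; commented out here):
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(**p2_launch_params("ami-xxxxxxxx"))
```

The helper only assembles the request dictionary, so it can be inspected or unit-tested without touching AWS.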

Amazon EC2 Computing Cloud and High-Performance Computing

2013 has been an exciting year for the field of Statistics and Big Data, with the release of the new R version 3.0.0. We discuss a few topics in this area, providing toy examples and supporting code for configuring and using Amazon’s EC2 Computing Cloud. There are other ways to get the job done, of course. But we found it helpful to build the infrastructure on Amazon from scratch, and hope others might find it useful, too.

Will the Cloud Change Scientific Computing?

“What is important to researchers is ‘time to science,’ not the length of time a job takes to compute. If you can wait in line at a national supercomputing center and it takes five days in the queue for your job to run, and then you get 50,000 cores and your job runs in a few hours, that’s great. But what if you could get those 50,000 cores right now, no waiting, and your job takes longer to run but would still finish before your other job would start on the big iron machine?”