
HPE Buys Determined AI, Startup with an Open Source ML-HPC Platform

Hewlett Packard Enterprise (HPE) today announced that it has acquired Determined AI, a San Francisco-based startup with a software stack designed to train AI models faster using its open source machine learning (ML) platform. HPE will combine Determined AI’s software solution with its AI and high performance computing (HPC) offerings to enable ML engineers to […]

The New and Evolving Role of the Chief Data Officer

In this sponsored post, Ken Grohe, President and Chief Revenue Officer for WekaIO, discusses the newest title in the C-Suite, the CDO: Chief Data Officer. As of 2018, almost 68 percent of Fortune 1000 companies have a CDO, yet according to Gartner, “The Chief Data Officer role is still new, untested, and amorphous.” More organizations will recognize that a savvy CDO delivers a return on investment, and as enterprise data stores grow, so do the returns.

Overcoming the Complexities of New Applications & Technologies in the New Era of HPC

In this contributed article, Bill Wagner, CEO of Bright Computing, discusses how as more organizations take the leap into HPC, Bright Computing aims to be the company that helps solve the challenge of complexity within the industry and replace it with flexibility, ease of use, and accelerated time to value.

The Case for ‘Center Class’ HPC: Think Tank Calls for $10B Fed Funding over Five Years

The Center for Data Innovation (CDI), a non-profit think tank that studies the intersection of data, technology and public policy, has gone to bat for increased federal funding for ‘center class’ and mid-range HPC systems, contending that “a decade of funding cuts at the National Science Foundation (NSF) has left the United States with an […]

Managing Complexity in the New Era of HPC

In this contributed article, Bill Wagner, CEO of Bright Computing, discusses how the HPC industry has entered an era of change in the past few years. New technologies, cloud, edge, and a broadening set of commercial use cases in the areas of data analytics and machine learning have set in motion a tsunami of change for HPC. This is no longer a tool for rocket scientists and the research elite. HPC is quickly becoming a strategic necessity for all industries that want to gain a competitive advantage in their markets, or at least keep pace with their industry peers in order to survive.

The Hyperion-insideHPC Interviews: Suzy Tichenor on the Need for Industrial HPC Users to Get on the GPU Bandwagon

Suzy Tichenor is a long-time champion of helping companies gain access to the country’s most powerful computers. At the Department of Energy’s Oak Ridge National Laboratory – site of Summit, no. 2 in the world, according to the latest Top500 supercomputing ranking – she is director of an industrial partnership program dedicated to that mission. […]

The Hyperion-insideHPC Interviews: Rich Brueckner and Mike Bernhardt Talk Exascale and HPC Marketing: How the HPC Community Tells its Story to the World

After more than three decades in supercomputing as a strategic marketing and communications executive, Mike Bernhardt has seen the HPC community evolve through the many phases of its existence. A “Perennial” (see below) at the annual SC industry conference, Bernhardt remains fascinated by the connection between leading-edge computation and scientific discovery. “In many ways, it’s […]

Full-spectrum HPC Scheduling

Our friends over at Altair explain how a meta-scheduler, or hierarchical scheduler, can be thought of as a private or team-based scheduler that uses shared underlying resources. Difficult workloads and special workloads are good candidates for meta-scheduling. Examples include sets of several hundred thousand short jobs, jobs with complex dependencies, and workflows that are continually introspected for status.

Are You Ready for the Exascale Era? Find Out at SC19

As we head into the biggest supercomputing event of the year, all eyes are on exascale. The frontrunners in the race to exascale, including our friends over at Altair, will convene at SC19 in Denver this November to share updates, address challenges, and help paint the picture of an exascale-fueled future for HPC.

A Liquid-Cooled Petascale Supercomputing Site and a GROMACS Workload Optimization Benchmark

Accelerated computing has come to be viewed as a revolutionary breakthrough technology for AI and HPC workloads, with GPUs paired with CPUs contributing the bulk of that computing power. Our friends over at Quanta Cloud Technology (QCT) provide QuantaGrid D52G-4U servers with eight NVLink GPUs on a liquid cooling platform, successfully adopted by the National Center for High-performance Computing (NCHC) in Taiwan for its Taiwania-II project, which ranked 23rd on the Top500 as of June 2019.