Local or Cloud HPC?

Cloud computing has become another tool for the HPC practitioner. For some organizations, the ability of cloud computing to shift costs from capital to operating expenses is very attractive. Because all cloud solutions require use of the Internet, a basic analysis of data origins and destinations is needed. Here’s an overview of when local or cloud HPC makes the most sense.
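
As a rough illustration of the capital-versus-operating trade-off, the sketch below compares the multi-year cost of owning a cluster with that of renting equivalent capacity in the cloud. All prices, node counts, and utilization figures are hypothetical assumptions chosen only to show the shape of the calculation, not vendor quotes.

```python
# Rough break-even sketch comparing an on-premises HPC purchase (capital
# expense) against on-demand cloud instances (operating expense).
# All numbers below are hypothetical placeholders, not vendor pricing.

CLUSTER_PRICE = 500_000          # one-time hardware purchase (USD, assumed)
ANNUAL_OPS = 75_000              # power, cooling, admin per year (USD, assumed)
CLOUD_RATE = 2.50                # cost per node-hour in the cloud (USD, assumed)
NODES = 100

def local_cost(years):
    """Total cost of owning and operating the cluster for N years."""
    return CLUSTER_PRICE + ANNUAL_OPS * years

def cloud_cost(years, utilization=0.5):
    """Cloud cost for the same node count at a given average utilization."""
    hours = years * 365 * 24 * utilization
    return CLOUD_RATE * NODES * hours

for years in range(1, 6):
    print(f"{years} yr: local ${local_cost(years):,.0f}  "
          f"cloud ${cloud_cost(years):,.0f}")
```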

Faster SAS Analytics Using DDN Storage Solutions

Parallel file systems have become the norm for HPC environments. While typically used in high-end simulations, these parallel file systems can greatly affect the performance, and thus the customer experience, when using analytics from leading organizations such as SAS. This whitepaper is an excellent summary of how parallel file systems can enhance the workflow and insight that SAS Analytics delivers.

Understanding Your HPC Application Needs

Many HPC applications began as single-processor (single-core) programs. If these applications take too long on a single core or need more memory than is available, they need to be modified so they can run on scalable systems. Fortunately, many of the most important (and most used) HPC applications are already available for scalable systems. Some applications do not require large numbers of cores for effective performance, while others are highly scalable. Here is how to better understand your HPC application needs.
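
One quick way to gauge whether an application will benefit from more cores is Amdahl’s law, which bounds the speedup by the fraction of the runtime that can be parallelized. The snippet below is a minimal sketch; the 95% parallel fraction is an assumed example value, not a measurement of any particular code.

```python
# Amdahl's law gives an upper bound on the speedup a single-core
# application can see after being parallelized.  The parallel fraction
# used here is a made-up example value; measure it for your own code.

def amdahl_speedup(parallel_fraction, cores):
    """Maximum speedup when only part of the runtime can be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for cores in (1, 8, 64, 512):
    print(f"{cores:>4} cores -> {amdahl_speedup(0.95, cores):5.1f}x speedup")
```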

Who Is Using HPC (and Why)?

In today’s highly competitive world, High Performance Computing (HPC) is a game changer. Though not as splashy as many other computing trends, the HPC market has shown steady growth and success over the last several decades. Market forecaster IDC expects the overall HPC market to reach $31 billion by 2019, growing at an 8.3% CAGR. The HPC market cuts across many sectors, including academia, government, and industry. Learn which industries are using HPC and why.

The Lustre Parallel File System—A Landscape of Topics and Insight from the Community

Since its beginnings in 1999 as a project at Carnegie Mellon University, Lustre, the high-performance parallel file system, has come a long, long way. Designed from the start with a focus on performance and scalability, it is now part of nearly every High Performance Computing (HPC) cluster on the Top500.org list of the fastest computers in the world, present in 70 percent of the top 100 and nine out of the top ten. That is an achievement for any developer, or in Lustre’s case any community of developers, to be proud of. Learn what the HPC community is saying about Lustre.

Empowering Cloud Utilization with Cloud Bursting

Cloud computing has become a strong alternative to in-house data centers for a large percentage of enterprise needs. Most enterprises are adopting some form of cloud computing, with some estimates suggesting that as many as 90% are putting workloads into a public cloud infrastructure. The whitepaper, Empowering Cloud Utilization with Cloud Bursting, is an excellent summary of the options available to enterprises that are planning to use a public cloud infrastructure.

Seismic Processing Places High Demand on Storage

Oil and gas exploration is always a challenging endeavor, and with today’s large risks and rewards, optimizing the process is of critical importance. A whole range of High Performance Computing (HPC) technologies needs to be employed for fast and accurate decision making. This Intersect360 Research whitepaper, Seismic Processing Places High Demand on Storage, is an excellent summary of the challenges that are being addressed by storage solutions from Seagate.

How HPC is Helping Solve Climate and Weather Forecasting Challenges

Data accumulation is just one of the challenges facing today’s weather and climate researchers and scientists. To understand and predict Earth’s weather and climate, they rely on increasingly complex computer models and simulations based on a constantly growing body of data from around the globe. “It turns out that in today’s HPC technology, the moving of data in and out of the processing units is more demanding in time than the computations performed. To be effective, systems working with weather forecasting and climate modeling require high memory bandwidth and fast interconnect across the system, as well as a robust parallel file system.”
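
A roofline-style back-of-the-envelope estimate makes the point about data movement concrete: when a kernel performs only a few floating-point operations per byte moved, the achievable fraction of peak performance is set by memory bandwidth rather than by the processor. The peak compute and bandwidth figures below are assumed round numbers, not specifications of any particular system.

```python
# Quick check of whether a stencil-style weather kernel is limited by
# compute or by memory bandwidth.  Hardware numbers are assumed round
# figures, not measurements of any particular machine.

PEAK_FLOPS = 3.0e12       # peak double-precision FLOP/s per node (assumed)
MEM_BANDWIDTH = 200e9     # sustained memory bandwidth in bytes/s (assumed)

def attainable_flops(flops_per_byte):
    """Roofline-style bound: min of compute peak and bandwidth * intensity."""
    return min(PEAK_FLOPS, MEM_BANDWIDTH * flops_per_byte)

# A typical low-order stencil performs only a handful of flops per byte moved.
for intensity in (0.1, 0.5, 1.0, 10.0):
    frac = attainable_flops(intensity) / PEAK_FLOPS
    print(f"{intensity:5.1f} flop/byte -> {frac:6.1%} of peak")
```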

NCSA Private Sector Program

The National Center for Supercomputing Applications (NCSA) has a Private Sector Program (PSP) that works with smaller companies to help them adopt HPC technologies, based on the expertise NCSA has acquired over the past quarter century. By working with these organizations, NCSA helps them determine the Return on Investment (ROI) of applying more computing power to real-world problems than is possible on smaller, less capable systems.

Enterprise HPC Storage System

In many HPC environments, the storage system is an afterthought. While the main focus is on the CPUs, the selection and implementation of the storage hardware and software are critical to an efficient and productive overall HPC environment. Without the ability to move data quickly into and out of the compute system, HPC users cannot obtain the performance they expect.
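
A back-of-the-envelope estimate shows why storage bandwidth matters to time-to-solution: the hours a job spends writing checkpoints scale inversely with the sustained write rate of the file system. The data volumes, checkpoint counts, and bandwidths below are illustrative assumptions only.

```python
# Simple estimate of how storage bandwidth affects overall job runtime:
# the time to write a checkpoint is the data volume divided by the
# sustained write bandwidth.  All values here are illustrative assumptions.

CHECKPOINT_TB = 50            # data written per checkpoint, in TB (assumed)
CHECKPOINTS = 20              # checkpoints over the life of the job (assumed)
COMPUTE_HOURS = 24            # pure compute time of the job, in hours (assumed)

def total_hours(write_gb_per_s):
    """Job wall-clock time including checkpoint I/O at a given bandwidth."""
    io_seconds = CHECKPOINTS * CHECKPOINT_TB * 1e12 / (write_gb_per_s * 1e9)
    return COMPUTE_HOURS + io_seconds / 3600

for bw in (5, 50, 500):       # sustained write bandwidth in GB/s
    print(f"{bw:>4} GB/s -> {total_hours(bw):6.1f} h wall clock")
```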