Successful HPC deployment depends on choosing an architecture that addresses both application and institutional needs. Finding a simple path to leading-edge HPC and data analytics is not difficult if you consider the capabilities and limitations of various approaches in terms of performance, scaling, ease of use, and time to solution. Careful analysis and consideration of the following questions will help lead to a successful and cost-effective HPC solution. Here are three questions to ask to ensure HPC success.
Cloud computing has become another tool for the HPC practitioner. For some organizations, the ability of cloud computing to shift costs from capital to operating expenses is very attractive. Because all cloud solutions require use of the Internet, however, a basic analysis of data origins and destinations is needed. Here is an overview of when local or cloud HPC makes the most sense.
Parallel file systems have become the norm in HPC environments. While typically used in high-end simulations, parallel file systems can also greatly affect performance, and thus the customer experience, when running analytics from leading organizations such as SAS. This whitepaper is an excellent summary of how parallel file systems can enhance the workflow and insight that SAS Analytics provides.
Many HPC applications began as single-processor (single-core) programs. If these applications take too long on a single core or need more memory than is available, they must be modified to run on scalable systems. Fortunately, many of the most important (and most used) HPC applications are already available for scalable systems. Some applications do not require large numbers of cores for effective performance, while others are highly scalable. Here is how to better understand your HPC application needs.
In today’s highly competitive world, High Performance Computing (HPC) is a game changer. Though not as splashy as many other computing trends, the HPC market has shown steady growth and success over the last several decades. Market forecaster IDC expects the overall HPC market to hit $31 billion by 2019 while riding an 8.3% CAGR. The HPC market cuts across many sectors, including academia, government, and industry. Learn which industries are using HPC and why.
Since its beginnings in 1999 as a project at Carnegie Mellon University, Lustre, the high-performance parallel file system, has come a long way. Designed from the start with a focus on performance and scalability, it is now part of nearly every High Performance Computing (HPC) cluster on the Top500.org list of the fastest computers in the world, present in 70 percent of the top 100 and nine of the top ten. That is an achievement for any developer, or, in the case of Lustre, community of developers, to be proud of. Learn what the HPC community is saying about Lustre.
Cloud computing has become a strong alternative to in-house data centers for a large percentage of enterprise needs. Most enterprises are adopting some form of cloud computing, with some estimates suggesting that as many as 90% are putting workloads into public cloud infrastructure. The whitepaper, Empowering Cloud Utilization with Cloud Bursting, is an excellent summary of the options available to enterprises planning to use public cloud infrastructure.
Oil and gas exploration is always a challenging endeavor, and with today’s large risks and rewards, optimizing the process is of critical importance. A whole range of High Performance Computing (HPC) technologies must be employed for fast and accurate decision making. This Intersect360 Research whitepaper, Seismic Processing Places High Demand on Storage, is an excellent summary of the challenges being addressed by storage solutions from Seagate.
Data accumulation is just one of the challenges facing today’s weather and climate researchers and scientists. To understand and predict Earth’s weather and climate, they rely on increasingly complex computer models and simulations based on a constantly growing body of data from around the globe. “It turns out that in today’s HPC technology, the moving of data in and out of the processing units is more demanding in time than the computations performed. To be effective, systems working with weather forecasting and climate modeling require high memory bandwidth and fast interconnect across the system, as well as a robust parallel file system.”
The National Center for Supercomputing Applications (NCSA) has a Private Sector Program (PSP) that works with smaller companies to help them adopt HPC technologies, drawing on expertise acquired over the past quarter century. By working with these organizations, NCSA can help them determine the return on investment (ROI) of using more computing power to solve real-world problems than is possible on smaller, less capable systems.