Terra: High-Performance Computing

High-performance computing applications, such as auto-tuners and domain-specific languages, rely on generative programming techniques to achieve high performance and portability. However, these systems are often implemented in multiple disparate languages and perform code generation in a separate process from program execution, making certain optimizations difficult to engineer. We leverage a popular scripting language, Lua, to stage the execution of a novel low-level language, Terra. Users can implement optimizations in the high-level language, and use built-in constructs to generate and execute high-performance code. To simplify meta-programming, Lua and Terra share the same lexical environment, but, to ensure performance, Terra code can execute independently of Lua’s runtime. We evaluate our design by reimplementing existing multi-language systems entirely in Terra. Our Terra-based auto-tuner for BLAS routines performs within 20% of ATLAS, and our DSL for stencil computations runs 2.3x faster than hand-written C.
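
To make the staging idea concrete, the sketch below shows the general generative-programming pattern in Python: a high-level driver emits a low-level C kernel specialized by an unroll factor, compiles it at runtime, and calls it in the same process. This is only an illustration under stated assumptions (a C compiler reachable as "cc", a hypothetical saxpy kernel, and an illustrative unroll parameter); it is not Terra's Lua-based API, which goes further by sharing a lexical environment between the staging and the staged code.

# Illustrative sketch, not Terra's API: a high-level driver generates,
# compiles, and runs a specialized low-level kernel at runtime.
# Assumes a C compiler is available as "cc" on PATH.
import ctypes
import os
import subprocess
import tempfile

def generate_saxpy(unroll):
    """Emit C source for y += a*x, specialized by a chosen unroll factor."""
    updates = "\n".join(f"        y[i + {k}] += a * x[i + {k}];" for k in range(unroll))
    return f"""
void saxpy(int n, float a, float *x, float *y) {{
    int i;
    for (i = 0; i + {unroll} <= n; i += {unroll}) {{
{updates}
    }}
    for (; i < n; i++) y[i] += a * x[i];
}}
"""

def compile_and_load(src):
    """Compile the generated C into a shared library and bind it via ctypes."""
    workdir = tempfile.mkdtemp()
    c_path = os.path.join(workdir, "kernel.c")
    so_path = os.path.join(workdir, "kernel.so")
    with open(c_path, "w") as f:
        f.write(src)
    subprocess.check_call(["cc", "-O2", "-shared", "-fPIC", c_path, "-o", so_path])
    lib = ctypes.CDLL(so_path)
    lib.saxpy.argtypes = [ctypes.c_int, ctypes.c_float,
                          ctypes.POINTER(ctypes.c_float),
                          ctypes.POINTER(ctypes.c_float)]
    return lib.saxpy

if __name__ == "__main__":
    saxpy = compile_and_load(generate_saxpy(unroll=4))
    n = 10
    x = (ctypes.c_float * n)(*range(n))      # x = 0, 1, ..., 9
    y = (ctypes.c_float * n)(*([1.0] * n))   # y = 1, 1, ..., 1
    saxpy(n, 2.0, x, y)
    print(list(y))                           # expect y[i] == 1 + 2*i

An auto-tuner built in this style would generate several kernel variants (different unroll factors, blockings, vector widths), time them, and keep the fastest; the abstract's point is that Terra lets this whole loop live in one language and one process rather than being split across a scripting front end and an offline code generator.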

Architectural Properties for HPC

High-performance computer systems can be regarded as the most powerful and flexible research instruments available today. They are employed to model phenomena in fields as diverse as climatology, quantum chemistry, computational medicine, high-energy physics, and many other areas.

High Performance Computing from Fujitsu

The development of high performance computing (HPC) over the decades can be summed up quite simply: without Fujitsu research, the supercomputer would have evolved quite differently.

Component Architecture for Scientific HPC

The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components.
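
As a sketch of the plug-and-play idea described above: components declare the typed interfaces ("ports") they provide or use, and a framework wires them together so that neither side names the other's implementation. The Python below only illustrates that pattern; the names (SolverPort, Framework, provide_port, connect) are hypothetical stand-ins, not the CCA's actual language-independent interfaces.

from abc import ABC, abstractmethod

class SolverPort(ABC):
    """A typed interface ("port") through which components interact."""
    @abstractmethod
    def solve(self, rhs):
        ...

class JacobiSolver(SolverPort):
    """A component that provides SolverPort; its internals stay hidden from users."""
    def solve(self, rhs):
        return [x / 2.0 for x in rhs]   # stand-in for a real numerical kernel

class Simulation:
    """A component that uses a SolverPort; it never names a concrete solver class."""
    def __init__(self):
        self.solver = None              # supplied by the framework at wiring time

    def run(self):
        return self.solver.solve([2.0, 4.0, 6.0])

class Framework:
    """Connects provided ports to the components that use them ("plug and play")."""
    def __init__(self):
        self._ports = {}

    def provide_port(self, name, port):
        self._ports[name] = port

    def connect(self, component, attr, name):
        setattr(component, attr, self._ports[name])

fw = Framework()
fw.provide_port("solver", JacobiSolver())   # any other SolverPort provider works too
sim = Simulation()
fw.connect(sim, "solver", "solver")
print(sim.run())                            # [1.0, 2.0, 3.0]

Because Simulation depends only on the SolverPort interface, any independently developed provider of that port can be plugged in without changing the simulation code, which is the collaboration model the CCA abstract describes.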

Security at Big Data and HPC Scale

This paper is the first to explore a recent breakthrough: the High Performance Computing (HPC) industry’s first Intelligence Community Directive (ICD) 503 (DCID 6/3 PL4) certified-compliant and secure scale-out parallel file system solution, the Seagate ClusterStor™ Secure Data Appliance. It is designed to address government and business enterprise needs for collaborative and secure information sharing within a Multi-Level Security (MLS) framework at Big Data and HPC scale.

Square Kilometer Array in HPC

Next-generation radio telescopes will require tremendous amounts of compute power. With the current state of the art, the Square Kilometer Array (SKA), currently entering its pre-construction phase, will require in excess of one ExaFlop/s in order to process and reduce the massive amount of data generated by the sensors. The nature of the processing involved means that conventional high performance computing (HPC) platforms are not ideally suited. Consequently, the Square Kilometer Array project requires active and intensive involvement from both the high performance computing research community and industry in order to make sure a suitable system is available when the telescope is built. In this paper, we present a first analysis of the processing required, and a tool that will facilitate future analysis and external involvement.

Algorithm Analysis for High Performance Computing

The Tarari® High-Performance Computing Processor accelerates the execution of complex algorithms used in high-performance computing (HPC) applications. The Processor allows high-performance computing users in industry, government and education to accelerate complex and compute-intensive applications.

HPC Gateway FUJITSU Software HPC Cluster Suite

Engineers, analysts and researchers using HPC systems face their own challenges in optimizing designs, interpreting data and making discoveries. Dealing with the everyday complications of working with HPC systems, infrastructure and tools should not be one of them. Enabling HPC Simplicity means capturing knowledge and expertise within a solution that broadens HPC access and eases work, even for experienced users. This white paper describes just some of the ways the HPC Gateway offers this simplicity.

Design Optimization for HPC Clusters

Advanced simulation software can dramatically shorten the design phase by allowing engineers to virtually optimize and validate new ideas earlier in the process, minimizing the expense of building physical prototypes and streamlining real-world testing.

Private Sector Performance and Insight

Through its Private Sector Program (PSP), NCSA has provided supercomputing, consulting, research, prototyping and development, and production services to more than one-third of the Fortune 50, in manufacturing, oil and gas, finance, retail/wholesale, bio/medical, life sciences, technology and other sectors. “We’re not the typical university supercomputer center, and PSP isn’t a typical group,” Giles says. “Our focus is on helping companies leverage high-performance computing in ways that make them more competitive.”