FPGA Myths

As it becomes clear that data center sprawl is expensive and may not deliver performance increases for every type of application, new technologies are coming to the rescue. A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence “field-programmable”. While the use of GPUs and HPC accelerators is generally well understood today, a number of misconceptions about FPGAs still need to be addressed.

Can FPGAs Help You?

FPGAs will become increasingly important for organizations that have a wide range of applications that can benefit from performance increases. Rather than increasing data center performance by brute force, purchasing and maintaining racks of additional hardware with all of the associated costs, FPGAs may be able to equal or exceed the performance of additional servers while reducing costs as well.

Exascale: A race to the future of HPC

As exponential data growth reshapes industry, engineering, and scientific discovery, success has come to depend on the ability to analyze and extract insight from incredibly large data sets. Exascale computing will allow us to process data, run systems, and solve problems at a totally new scale, and this will become vitally important as problems grow ever larger and ever more difficult. Our unmatched ability to bring new technology to the mainstream will provide systems that are markedly more affordable, usable, and efficient at handling growing workloads. To learn more, download this white paper.

The insideHPC Guide to Flexible HPC

Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, using significant computing technologies is critical to creating innovative products and leading-edge research. No two HPC installations are the same. For maximum return, budget, software requirements, performance, and customization all must be considered before installing and operating a successful environment. To learn more, download this white paper.

Many-Task Computing for Grids

Many-task computing aims to bridge the gap between two computing paradigms, high throughput computing and high performance computing.

Amazon EC2 Computing Cloud and High-Performance Computing

2013 has been an exciting year for the field of Statistics and Big Data, with the release of the new R version 3.0.0. We discuss a few topics in this area, providing toy examples and supporting code for configuring and using Amazon’s EC2 Computing Cloud. There are other ways to get the job done, of course. But we found it helpful to build the infrastructure on Amazon from scratch, and hope others might find it useful, too.
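As a rough illustration of what "configuring and using" EC2 can look like programmatically, and not the paper's actual supporting code, the sketch below launches a single compute instance with boto3. The AMI ID, key pair name, region, and instance type are placeholder assumptions that would need to be replaced with real values, and AWS credentials must already be configured locally.

# Minimal sketch (assumed values): launch one EC2 instance to host an R workload.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")  # region is an assumption
instances = ec2.create_instances(
    ImageId="ami-00000000",   # hypothetical AMI with R preinstalled
    InstanceType="c5.xlarge", # placeholder instance type
    KeyName="my-keypair",     # placeholder SSH key pair name
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", instances[0].id)

From there, the instance can be reached over SSH to install packages and run R scripts, and torn down with instances[0].terminate() once the job completes.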

CRAY-2 Computer System

The CRAY-2 Computer System sets the standard for the next generation of supercomputers. It is characterized by a large Common Memory (256 million 64-bit words), four Background Processors, a clock cycle of 4.1 nanoseconds (4.1 billionths of a second), and liquid immersion cooling. It offers effective throughput six to twelve times that of the CRAY-1 and runs an operating system based on the increasingly popular UNIX™ operating system.

Supercomputers for All

Supercomputers may date back to the 1960s, but it is only recently that their vast processing power has begun to be harnessed by industry and commerce, to design safer cars, build quieter aeroplanes, speed up drug discovery, and subdue the volatility of the financial markets. The need for powerful computers is growing, says Catherine Rivière […]

Research for New Technology Using Supercomputers

This paper presents our approach to research and development for four applications in which simulation on super-large-scale computing systems is expected to prove useful.

Science and Industry using Supercomputers

This paper is intended for people interested in High Performance Computing (HPC) in general, in the performance development of HPC systems since their beginnings in the 1970s, and, above all, in HPC applications past, present, and future. Readers do not need to be supercomputer experts.