This sponsored post looks at the insideHPC Research Report on FPGAs, which found promising results from our readers who were testing FPGAs or already had them in production.
The amount of data produced every hour, every day, and every year is growing tremendously. A serious question for many organizations is how to process all of that data in order to make intelligent business decisions. The Internet of Things (IoT) and Big Data have become technologies that CIOs are struggling to understand, and in many cases struggling to match with the proper and most efficient technologies to implement.
Traditional computing platforms, while much more powerful and power efficient than in the past, have their limitations. Their limited number of processing elements and their fixed structure, while necessary for any computing solution, may not be optimal for this new world of massive data production. While flexible in terms of running an operating system and applications as diverse as web servers, accounting software, and high performance computing, they still offer only a limited number of general-purpose cores to use.
In order to add more computing power, additional servers could be added to a data center. However, that results in more systems management, more power, more cooling, more networking, and additional square footage to house the new systems. Newer technologies, typically referred to as accelerators, can speed up certain applications by one or two orders of magnitude. While the newest accelerators can also run single-threaded applications as well as the operating system, the tasks they can perform are limited by the design of the hardware itself. Cores remain cores and memory remains memory. The inherent structure created by the hardware designers remains and cannot be changed. Applications can be rewritten to take advantage of many cores, but the traditional computing model remains.
An ideal scenario would be to design a CPU around the application that needs to be run, with circuits tailored to that specific application. Although embedded processors may satisfy this need for a single application, this approach is not feasible for the wide range of applications that an organization may need to run for its business.
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence "field-programmable". This technology can be an answer to increasing the performance of a wide range of applications, especially as an application may evolve over time. By giving the developer the ability to reconfigure the FPGA, a variety of applications can be accelerated, not just those that fit neatly into the fixed structure of a GPU or other existing accelerator.
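To make "configured after manufacturing" concrete, the sketch below models the basic building block of an FPGA, a lookup table (LUT), in Python. This is purely a conceptual illustration: a real FPGA configures millions of LUTs plus routing through a vendor toolchain, not through software like this. The function names here are invented for the example.

```python
# Conceptual model of an FPGA's basic building block: a 2-input
# lookup table (LUT). "Configuring" the FPGA amounts to loading
# truth-table bits into such LUTs, so the same silicon can
# implement different logic after manufacturing.

def make_lut(truth_table):
    """Return a 2-input logic function defined by a 4-entry truth table.

    truth_table[i] is the output bit for inputs (a, b), where i = a*2 + b.
    """
    def lut(a, b):
        return truth_table[a * 2 + b]
    return lut

# "Program" the same resource as two different circuits.
and_gate = make_lut([0, 0, 0, 1])  # truth table of AND
xor_gate = make_lut([0, 1, 1, 0])  # truth table of XOR

print(and_gate(1, 1))  # -> 1
print(xor_gate(1, 1))  # -> 0
```

The same "hardware" (the LUT) computes AND or XOR depending only on the bits loaded into it, which is the essence of what makes the device reprogrammable in the field.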
While programming an FPGA may take more time than simply recompiling for an existing accelerator, many application developers can benefit from the range of reference designs supplied by FPGA vendors. While a reference design may not exactly match the application, it should be close enough that an experienced developer can adapt it to their needs.
FPGAs will become increasingly important for organizations that have a wide range of applications that can benefit from performance increases. Rather than the brute-force approach of purchasing and maintaining racks of hardware, with all the associated costs, FPGAs may be able to equal or exceed the performance of additional servers while reducing costs as well.