FPGA Myths


Earlier this year, insideHPC commissioned Gabriel Consulting Group to conduct the “HPC and Large Enterprise Purchasing Sentiment” survey, in which we asked 175 readers to tell us how they’re using technology, what new technologies they’re evaluating, and how they expect to spend their budget dollars over the next 18 months. This sponsored post looks at FPGA myths and facts. You can download the complete insideHPC Research Report on FPGAs from our white paper library.

As data center sprawl is now understood to be expensive, and may not deliver performance increases for all types of applications, new technologies are coming to the rescue. A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence “field-programmable”. While the use of GPUs and HPC accelerators is generally understood today, a number of misconceptions about FPGAs persist.

The first is that FPGAs are only good for embedded devices. This is not the case: FPGAs can be used to sift through the massive amounts of data generated by Internet of Things (IoT) environments as well as a wide range of Big Data applications. Because FPGAs can be programmed to do many different tasks, they are becoming more mainstream.

Another myth is that these massive amounts of data can be handled simply by adding more traditional servers. While adding more general-purpose compute servers can increase application throughput (handling more data), certain applications will benefit more from a local accelerator (one connected directly to a server) than from adding to data center sprawl.

The third myth is that FPGAs are difficult to program. In the past, programming an FPGA required intimate knowledge of the hardware and a skill set different from that of typical application developers. Today’s application development environments, however, are closely related to C/C++ and OpenCL. This opens the door to many more developers who will be able to take advantage of FPGAs in a time-efficient way.
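As a rough illustration of that C-like style, here is a minimal OpenCL C kernel sketch (the kernel name and the vector-add task are hypothetical examples, not taken from the report). An FPGA-capable OpenCL toolchain compiles this kind of code into a hardware pipeline, with no hardware description language required:

    // Minimal OpenCL C kernel sketch (hypothetical example).
    // An FPGA-capable OpenCL compiler turns this C-like code into
    // a hardware pipeline; no Verilog/VHDL knowledge is required.
    __kernel void vector_add(__global const float *a,
                             __global const float *b,
                             __global float *result,
                             const int n)
    {
        int i = get_global_id(0);   // index of this work-item
        if (i < n)
            result[i] = a[i] + b[i];
    }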

The fourth myth is that FPGAs are too expensive to use. While the base cost of an FPGA product may be higher than that of a traditional compute core delivering equivalent performance, an FPGA solution is competitive once all of the other factors are considered. FPGAs are very energy efficient and can deliver more performance per watt than the alternatives. It is important to look at all of the costs when calculating a performance/price metric for comparison.
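To make that concrete, the sketch below folds energy cost into a simple performance-per-dollar comparison. Every number in it is an illustrative placeholder, not a measured or vendor figure:

    /* Hypothetical performance-per-total-dollar comparison in C.
       All inputs are illustrative placeholders, not measured data. */
    #include <stdio.h>

    int main(void)
    {
        /* placeholder inputs: relative throughput, purchase price, power draw */
        double perf_cpu  = 1.0, price_cpu  = 2000.0, watts_cpu  = 150.0;
        double perf_fpga = 4.0, price_fpga = 5000.0, watts_fpga = 75.0;

        double years = 3.0, usd_per_kwh = 0.10;
        double hours = years * 365.0 * 24.0;

        /* total cost of ownership = purchase price + energy over service life */
        double tco_cpu  = price_cpu  + watts_cpu  / 1000.0 * hours * usd_per_kwh;
        double tco_fpga = price_fpga + watts_fpga / 1000.0 * hours * usd_per_kwh;

        printf("CPU : %.4f units of performance per $1000 of total cost\n",
               1000.0 * perf_cpu / tco_cpu);
        printf("FPGA: %.4f units of performance per $1000 of total cost\n",
               1000.0 * perf_fpga / tco_fpga);
        return 0;
    }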

In the past, FPGAs were considered to be very energy hungry. In fact, an FPGA’s power draw varies with the work being done at a given time: almost nothing while idle, and typically under 100 watts while performing tasks.

An FPGA can be reconfigured for the task at hand. While accelerators may have hundreds of cores, an FPGA can be configured to have many more, and its building blocks can be arranged to match exactly what the application needs to do. This can result in tremendous performance increases for a range of tasks that GPUs cannot match.
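As a rough sketch of what configuring those building blocks can look like from software, many FPGA-oriented OpenCL toolchains honor an unroll directive that replicates a loop body in hardware. The kernel below and its unroll factor are hypothetical:

    // Hypothetical sketch: requesting hardware parallelism from OpenCL C.
    // On FPGA toolchains that support it, #pragma unroll replicates the
    // loop body in the fabric, instantiating that many parallel adders.
    __kernel void sum_blocks(__global const float *in,
                             __global float *out)
    {
        int block = get_global_id(0);
        float acc = 0.0f;
        #pragma unroll 16       /* ask for 16 adders in hardware */
        for (int i = 0; i < 16; i++)
            acc += in[block * 16 + i];
        out[block] = acc;
    }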

Organizations should consider using FPGAs for specific tasks where existing CPUs and GPUs are not delivering the expected performance. Additional programming will be required, but the benefits will outweigh the costs.

Download this Special Report on FPGAs, contributed by Intel.

Comments

  1. Michael Wolfe says

    FPGAs have been used with great success in some important applications, but there is one currently insurmountable problem for using them in most general applications. When you write a program for an FPGA, you are designing a circuit. Loading that circuit onto the FPGA is a time-consuming process. If your application has one key kernel where all the time is spent, and you can implement that kernel on an FPGA, it might be well worth the effort to explore FPGAs. Most applications have several or many important kernel operations. In that case, your options are to (a) select the one most key kernel to implement on an FPGA (a variant of Amdahl’s Law applies here), (b) divide the FPGA into two or more subcomponents, one per kernel (though this means the whole FPGA is never used), (c) devise a super circuit that uses some extra control logic to implement two or more kernels (a challenge), or (d) reprogram the FPGA during the application. The last option might work if the application goes through long phases with one kernel per phase. Just as it’s important to understand the benefits of FPGAs and promote them where they are appropriate, it’s important to understand the challenges and not oversell them.
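    To put the Amdahl’s Law point in option (a) into numbers, here is a small worked sketch; the runtime fraction and kernel speedup are made-up illustrative values:

        /* Amdahl's Law sketch: accelerating only one kernel caps the
           overall speedup. All inputs are illustrative placeholders. */
        #include <stdio.h>

        int main(void)
        {
            double f = 0.6;  /* fraction of runtime in the accelerated kernel */
            double s = 20.0; /* FPGA speedup on that kernel */

            /* overall speedup = 1 / ((1 - f) + f / s) */
            double overall = 1.0 / ((1.0 - f) + f / s);
            printf("overall: %.2fx, even though the kernel alone runs %.0fx faster\n",
                   overall, s);
            return 0;
        }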