In this research report from insideHPC, we explore the world of accelerators, primarily FPGAs, to see if they’re the right answer to fill the ‘compute gap’.
One of the biggest challenges in IT today is getting enough compute resources to handle current processing tasks while keeping up with the deluge of data from big data and IoT. A compute gap is developing between what is available today and what is needed to stay competitive. This problem impacts both scientific computing (High Performance Computing, or HPC) and commercial enterprise computing.
On the HPC side, there is always a need to model more complex interactions and to develop models with more variables and higher accuracy. Even though HPC computing capability has increased considerably, there is still significant unmet demand for more compute power. Enterprises face much the same challenge. The advent of big data gives them the ability to slice and dice customer purchasing patterns in a myriad of ways, but extracting actionable insights takes far more compute horsepower, and a shortage of processing capability becomes the bottleneck.
Simply adding more of the same servers to the existing data center is one solution to the compute gap problem. However, our research finds that many data centers are running out of power or space. Some readers are finding that replacing traditional systems in the installed base with energy-efficient servers powered by Intel® Xeon® processors paired with FPGA accelerators can improve organizational agility while reducing total cost of ownership (TCO). Adding accelerators and inline processing to recently deployed (or new) servers is a good way to gain compute capacity without contributing to server sprawl or a heavier electrical load.
Download this Special Research Report on FPGAs to learn the results from our readers who are testing FPGAs.