A new paper from ORNL’s Sparsh Mittal surveys techniques for approximate computing. Recently published in ACM Computing Surveys (2016), “A Survey of Techniques for Approximate Computing” reviews nearly 85 papers on this increasingly hot topic.
Approximate computing is a promising approach to energy-efficient design of digital systems. It relies on the ability of many systems and applications to tolerate some loss of quality or optimality in the computed result, trading accuracy for gains in performance and energy.
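To make the idea concrete, here is a minimal sketch of one classic approximate computing technique, loop perforation: instead of processing every input element, the loop skips a fixed fraction of them, doing proportionally less work for a small, often acceptable error. The function names and the `skip` parameter are illustrative, not from the survey.

```python
def exact_mean(xs):
    # Baseline: average every element (full work, exact result).
    return sum(xs) / len(xs)

def perforated_mean(xs, skip=4):
    # Loop perforation: visit only every `skip`-th element,
    # doing ~1/skip of the work for an approximate result.
    sample = xs[::skip]
    return sum(sample) / len(sample)

data = list(range(1_000_000))
print(exact_mean(data))       # 499999.5
print(perforated_mean(data))  # 499998.0 -- ~4x less work, ~0.0003% error
```

For error-tolerant kernels such as the data-analytics and multimedia workloads the survey discusses, a small deviation like this is often imperceptible in the final output.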
“As rising performance demands confront with plateauing resource budgets, approximate computing (AC) has become, not merely attractive, but even imperative. AC is based on the intuitive observation that while performing exact computation or maintaining peak-level service demand require high amount of resources, allowing selective approximation or occasional violation of the specification can provide disproportionate gains in efficiency. AC leverages the presence of error-tolerant code regions and perceptual limitations of users to trade-off implementation, storage and result accuracy for performance and energy gains. Thus, AC has the potential to benefit a wide range of applications/frameworks e.g. data analytics, scientific computing, multimedia and signal processing, machine learning and MapReduce, etc. This survey paper reviews techniques for approximate computing in CPU, GPU and FPGA and various processor components (e.g. cache, main memory), along with approximate storage in SRAM, DRAM/eDRAM, non-volatile memories, e.g. Flash, STT-RAM, etc.”
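The approximate storage the abstract mentions can be illustrated with another standard technique, precision scaling: keeping only the high-order mantissa bits of a stored value, as an approximate SRAM/DRAM or NVM cell array might. The sketch below emulates this in software for a 32-bit float; the function name and `keep_bits` parameter are illustrative assumptions, not an interface from the paper.

```python
import struct

def truncate_mantissa(x, keep_bits=8):
    # Precision scaling: zero the low-order bits of a 32-bit
    # float's 23-bit mantissa, emulating approximate storage
    # that retains only `keep_bits` mantissa bits.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    mask = ~((1 << (23 - keep_bits)) - 1) & 0xFFFFFFFF
    return struct.unpack('>f', struct.pack('>I', bits & mask))[0]

pi = 3.14159265
approx = truncate_mantissa(pi, keep_bits=8)
print(pi, approx)  # relative error bounded by about 2**-9, i.e. under 0.2%
```

Dropping mantissa bits shrinks the storage and energy cost per value while bounding the relative error, which is the accuracy-for-efficiency trade-off the survey examines across SRAM, DRAM/eDRAM, and non-volatile memories.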