MBP on T0: Mixing Floating- and Fixed-Point Formats in BP Learning

Title

MBP on T0: Mixing Floating- and Fixed-Point Formats in BP Learning

Publication Type

Technical Report

Year of Publication

1994

Authors

Anguita, D., & Gomes, B.

Other Numbers

908
Abstract

We examine the efficient implementation of backprop-type algorithms on T0 [4], a vector processor with a fixed-point engine designed for neural network simulation. A matrix formulation of backprop, Matrix Back Prop [1], has been shown to be very efficient on some RISCs [2]. Using Matrix Back Prop, we achieve asymptotically optimal performance on T0 (about 0.8 GOPS) for both the forward and backward phases, which is not possible with the standard on-line method. Since high efficiency is futile if convergence is poor (due to the use of fixed-point arithmetic), we use a mixture of fixed- and floating-point operations. The key observation is that the precision of fixed point is sufficient for good convergence if the range is chosen appropriately. Although the most expensive computations are implemented in fixed point, we achieve a rate of convergence comparable to that of the floating-point version. The time taken for conversion between fixed and floating point is also shown to be reasonable.
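The mixed-format idea described in the abstract can be illustrated with a small sketch. The Python/NumPy code below is not the report's T0 implementation; it only simulates the general scheme under assumed parameters (a 16-bit fixed-point format, FRAC_BITS = 12, and the helper names to_fixed / to_float are all illustrative choices): the expensive matrix product of the forward phase is carried out on fixed-point operands whose range is fixed by the chosen scaling, while the nonlinearity and the conversions are done in floating point.

import numpy as np

# Assumed fixed-point format for illustration only: 16-bit signed values
# with 12 fractional bits; the report's exact format is not reproduced here.
FRAC_BITS = 12
SCALE = 1 << FRAC_BITS
QMIN, QMAX = -(1 << 15), (1 << 15) - 1

def to_fixed(x):
    """Convert a float array to 16-bit fixed point (round and saturate)."""
    q = np.rint(x * SCALE)
    return np.clip(q, QMIN, QMAX).astype(np.int32)

def to_float(q):
    """Convert fixed-point values back to floating point."""
    return q.astype(np.float64) / SCALE

# Matrix formulation of the forward phase: one matrix product per layer.
# The multiply-accumulate is done on the fixed-point representations (the
# "expensive" part); the sigmoid is evaluated in floating point.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(8, 4))   # batch of input patterns (one per row)
W = rng.uniform(-1, 1, size=(4, 3))   # weight matrix of one layer

Xq, Wq = to_fixed(X), to_fixed(W)
acc = Xq @ Wq                          # integer matrix multiply-accumulate
net = to_float(acc) / SCALE            # product of two scaled operands carries SCALE^2
out = 1.0 / (1.0 + np.exp(-net))       # nonlinearity in floating point

print("max abs error vs. pure floating point:",
      np.max(np.abs(out - 1.0 / (1.0 + np.exp(-(X @ W))))))

With inputs and weights confined to a known range, the quantization error of the fixed-point product stays small relative to the activations, which is the effect the report exploits; choosing the scaling (range) badly would instead saturate or underflow the accumulator.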

URL

http://www.icsi.berkeley.edu/ftp/global/pub/techreports/1994/tr-94-038.pdf

Bibliographic Notes

ICSI Technical Report TR-94-038

Abbreviated Authors

D. Anguita and B. Gomes

ICSI Publication Type

Technical Report