
Logarithmic Multiplier in Hardware Implementation of Neural Networks

Uroš Lotrič and Patricio Bulić

Faculty of Computer and Information Science, University of Ljubljana, Slovenia
{uros.lotric,patricio.bulic}@fri.uni-lj.si

Abstract. Neural networks on chip have found some niche areas of application, ranging from mass-produced consumer products requiring low cost to real-time systems requiring real-time response. Regarding the latter, iterative logarithmic multipliers show great potential for increasing the performance of hardware neural networks. By reducing the size of the multiplication circuit, the concurrency and consequently the speed of the model can be greatly improved. The proposed hardware implementation of the multilayer perceptron with on-chip learning ability confirms the potential of the concept. The experiments performed on the Proben1 benchmark dataset show that the adaptive nature of the proposed neural network model enables it to compensate for the errors caused by inexact calculations while simultaneously increasing performance and reducing power consumption.

Keywords: Neural network, Iterative logarithmic multiplier, FPGA.

1 Introduction
Artificial neural networks are commonly implemented as software models running on general-purpose processors. Although widely used, such systems are based on the von Neumann architecture, which is sequential in nature and as such cannot exploit the inherent concurrency present in artificial neural networks. On the other hand, hardware solutions, specially tailored to the architecture of neural network models, can better exploit this massive parallelism, thus achieving much higher performance and lower power consumption than ordinary systems of comparable size and cost. Therefore, hardware implementations of artificial neural network models have found their place in niche applications like image processing, pattern recognition, speech synthesis and analysis, adaptive sensors with teach-in ability, and so on.
Neural chips are available in analogue and digital hardware designs [1,2]. Analogue designs can take advantage of many interesting analogue electronic elements which directly implement the neural networks' functionality, resulting in very compact solutions. Unfortunately, these solutions are susceptible to noise, which limits their precision, and offer very limited support for on-chip learning. On the other hand, digital solutions are noise tolerant and pose no technological obstacles to on-chip learning, but result in larger circuits.
Since the design of application-specific integrated circuits (ASICs) is time-consuming and requires a lot of resources, many hardware implementations make use of programmable integrated circuit technologies, such as field-programmable gate array (FPGA) technology.
The implementation of neural network models in integrated circuits remains a challenging task due to the complex algorithms involving a large number of multiplications. Multiplication is a resource-, power- and time-consuming arithmetic operation. In artificial neural network designs, where many concurrent multiplications are desired, the multiplication circuits should be as small as possible. Due to the complexity of circuits needed for floating-point operations, such designs are usually constrained to fixed-point implementations, which can make use of integer adders and multipliers.
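
As a purely illustrative sketch (the paper does not prescribe a particular number format), a hypothetical Q16.16 fixed-point representation turns every multiplication into an integer multiplication followed by a shift that realigns the binary point:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical Q16.16 signed fixed-point format: 16 integer bits and
   16 fractional bits; chosen here only for illustration. */
typedef int32_t q16_16;
#define Q_FRAC_BITS 16

static q16_16 q_from_double(double x) { return (q16_16)(x * (1 << Q_FRAC_BITS)); }
static double q_to_double(q16_16 x)   { return (double)x / (1 << Q_FRAC_BITS); }

/* A fixed-point product is an ordinary integer product followed by a shift
   that realigns the binary point (arithmetic shift assumed for negative
   values), so only integer adders and multipliers are needed. */
static q16_16 q_mul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> Q_FRAC_BITS);
}

int main(void) {
    q16_16 w = q_from_double(0.75), x = q_from_double(-1.5);
    printf("%f\n", q_to_double(q_mul(w, x)));   /* prints -1.125000 */
    return 0;
}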
The integer multiplier circuits can be optimized further. Many practical solutions, like truncated and logarithmic multipliers [3,4,5], consume less space and power and are faster than ordinary multipliers, at the price of introducing small errors into the calculations. These errors can cause serious problems in neural network performance if learning is not performed on-chip. However, if the neural network learning is performed on-chip, the inexact calculations can be compensated for during the learning phase and should not seriously degrade the network's performance.
All approximate multipliers discard some of the less significant partial products and introduce some form of compensation circuit to reduce the error. The main idea of logarithmic multipliers is to approximate the operands with their logarithms, thus replacing the multiplication with an addition. Errors introduced by the approximation are usually compensated with lookup tables, interpolation, or corrections based on Mitchell's algorithm [3]. The one-stage iterative logarithmic multiplier [5] follows Mitchell's idea but uses a different error-correction circuit. The final hardware implementation involves only one adder and a few shifters, resulting in reduced usage of logic resources and lower power consumption.
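
To make the approximation concrete, Mitchell's idea can be restated as follows (a summary in our own notation, not a formula quoted verbatim from [3]): every operand is written as $N_i = 2^{k_i}(1 + x_i)$ with $0 \le x_i < 1$, so that

\[
\log_2 (N_1 N_2) = k_1 + k_2 + \log_2(1 + x_1) + \log_2(1 + x_2) \approx k_1 + k_2 + x_1 + x_2 ,
\]

and taking the antilogarithm of the approximated sum gives

\[
N_1 N_2 \approx
\begin{cases}
2^{k_1 + k_2} (1 + x_1 + x_2), & x_1 + x_2 < 1, \\
2^{k_1 + k_2 + 1} (x_1 + x_2), & x_1 + x_2 \ge 1 .
\end{cases}
\]

Since $\log_2(1+x) \ge x$ on $[0,1)$, the approximation never overestimates the exact product; the resulting error is what the compensation circuits mentioned above try to remove.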
In this paper, the behaviour of a hardware implementation of a neural network using iterative logarithmic multipliers is considered. In the next section the iterative logarithmic multiplier is introduced and its advantages and weaknesses are outlined. Furthermore, a highly parallel processing unit specially suited for feed-forward neural networks is proposed. Its design allows it to be used in the forward pass as well as in the backward pass during the learning phase. In section four the performance of the proposed solution is tested on several benchmark problems. The results are compared with a hardware implementation using exact matrix multipliers as well as with a floating-point implementation. The main findings are summarized at the end.

2 Iterative Logarithmic Multiplier

The iterative logarithmic multiplier (ILM) was proposed by Babić et al. in [5]. It simplifies the logarithm approximation introduced in [3] and adds an iterative error-correction scheme that can reduce the error to an arbitrarily small value and, if carried far enough, produce an exact result.
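
The following is only a behavioural software sketch of this idea, not the circuit from [5]: the basic approximation keeps the leading one of each operand, and each correction stage applies the same approximation to the two residuals, so that the dropped error term $(N_1 - 2^{k_1})(N_2 - 2^{k_2})$ is progressively recovered.

#include <stdint.h>
#include <stdio.h>

/* Position of the leading one bit, i.e. floor(log2(n)) for n > 0. */
static unsigned lead_one(uint32_t n) {
    unsigned k = 0;
    while (n >>= 1)
        k++;
    return k;
}

/* One approximation stage: keep the leading one of each operand and
   return the residuals so further correction stages can reuse them. */
static uint64_t approx_stage(uint32_t a, uint32_t b, uint32_t *ra, uint32_t *rb) {
    unsigned ka = lead_one(a), kb = lead_one(b);
    *ra = a - (1u << ka);                 /* a with its leading one removed */
    *rb = b - (1u << kb);
    return ((uint64_t)1 << (ka + kb))
         + ((uint64_t)*ra << kb)
         + ((uint64_t)*rb << ka);
}

/* Iterative logarithmic multiplication with a given number of
   error-correction stages; stops early once the result is exact. */
static uint64_t ilm_mul(uint32_t n1, uint32_t n2, int stages) {
    if (n1 == 0 || n2 == 0)
        return 0;
    uint32_t ra, rb;
    uint64_t p = approx_stage(n1, n2, &ra, &rb);
    for (int i = 0; i < stages && ra != 0 && rb != 0; i++)
        p += approx_stage(ra, rb, &ra, &rb);
    return p;
}

int main(void) {
    /* With one correction stage, 200 * 97 = 19400 is approximated as 19392. */
    printf("%llu\n", (unsigned long long)ilm_mul(200, 97, 1));
    return 0;
}

Each additional stage adds roughly one more adder and a pair of leading-one detectors in hardware, which is the cost-accuracy trade-off discussed in the remainder of this section.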
