4.7.8 IEEE 754 Format
The IEEE 754 format of floating-point representation, based on the scientific notation scheme, is adopted by most computers worldwide. This scheme offers a 32-bit format for single precision numbers [Figure 4.27(a)] and a 64-bit format for double precision numbers [Figure 4.27(b)]. The base is taken as 2 and the most significant bit is reserved as the sign bit.
Figure 4.27 IEEE 754 format (a) Single precision and (b) Double precision
For single precision numbers, 8 bits are allotted for storing the exponent in biased (excess-127) form. For double precision representation, three more bits are allotted for it, making the width of the biased exponent 11 bits (excess-1023). The mantissa part, in both single and double precision, consists of the binary representation of the normalized fractional part, omitting the leading 1. Thus, the binary point is assumed to be at the beginning of the mantissa.
One important feature of the IEEE 754 format is the representation of zero. We have already observed that the scientific notation scheme used by us is not able to cover a narrow range on either side of 0 (Figure 4.25). In the IEEE 754 format, all zeros in the exponent as well as the mantissa field (for both single and double precision schemes) would be taken as either +0 or −0, depending upon the value of the sign bit.
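To make the field layout of Figure 4.27(a) concrete, the following Python sketch unpacks a single precision value into its sign bit, biased exponent and mantissa fields; the helper name decompose_single is only illustrative and not part of any standard library.

```python
# A minimal sketch (illustrative names) of splitting a single precision
# value into the three IEEE 754 fields discussed above.
import struct

def decompose_single(x):
    bits = struct.unpack('>I', struct.pack('>f', x))[0]   # raw 32-bit pattern
    sign = bits >> 31                       # 1-bit sign
    biased_exponent = (bits >> 23) & 0xFF   # 8-bit biased (excess-127) exponent
    mantissa = bits & 0x7FFFFF              # 23-bit mantissa, leading 1 omitted
    return sign, biased_exponent, mantissa

print(decompose_single(12.896))   # an ordinary non-zero value
print(decompose_single(0.0))      # (0, 0, 0): all zeros represent +0
print(decompose_single(-0.0))     # (1, 0, 0): the sign bit alone distinguishes -0
```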
4.8 FLOATING-POINT ARITHMETIC AND UNIT OPERATIONS
In Section 4.2 through Section 4.6, we have discussed how to perform the basic arithmetic operations (addition, subtraction, multiplication and division) with signed integers expressed in the two's complement scheme. These are fundamental arithmetic operations and in computer arithmetic they are known as unit operations. However, when these unit operations have to be performed on floating-point numbers represented through scientific notation, some amount of pre-processing and post-processing becomes necessary.
This need arises from the fact that at the input and output stages, all real numbers would be using the standard decimal format. Just think of any computer program and the data set used there. Definitely, during the input stage, we do not convert the numbers to their scientific notation using the IEEE 754 format with biased exponent and mantissa. We simply express a number in the standard decimal form of abcd.pqrs or at most a.bcd × 10^n. Moreover, it would be difficult for us to interpret the results if they came out in the form of 01111010010110101001000010000100, a 32-bit representation. [The reader might have to spend 10 minutes or more to convert it to our familiar decimal system.]
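The post-processing direction mentioned above, converting a raw 32-bit pattern back to a readable decimal value, can be sketched in a few lines; the function name single_to_decimal is an assumption made here for illustration.

```python
# A small sketch of decoding a 32-bit IEEE 754 single precision pattern
# back to decimal, the kind of conversion done at the output stage.
import struct

def single_to_decimal(bit_string):
    value = int(bit_string, 2)                          # the 32-bit pattern as an integer
    return struct.unpack('>f', value.to_bytes(4, 'big'))[0]

# Sign 0, biased exponent 01111111 (127), mantissa all zeros: the value 1.0
print(single_to_decimal('00111111100000000000000000000000'))
```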
Therefore, pre- and post-processing of all numerical values is essential to maintain the harmony between the number representation familiar to the human world and the binary format suitable for the machine (processor) universe. Apart from that, the techniques described so far for arithmetic operations are perfectly applicable by the processor. For example, for a multiplication operation, when both multiplier and multiplicand are expressed in IEEE 754 format, the result may be obtained by algebraically adding the exponents of both and multiplying the mantissa parts. The final sign of the product would depend on the initial sign bits of the two parameters. In this section, we discuss some special features of arithmetic operations related to floating-point representation.
4.8.1 Floating-point Unit Operations
Believe it or not, it is true that floating-point addition and subtraction are more complex than floating-point multiplication and division. Let us explain this with a simple example. Let us assume that the two operands are 12.896 and 9.3741. Note that we first have to convert these two numbers into their scientific notation, giving 0.12896 × 10^2 and 0.93741 × 10^1, respectively. If we want to multiply these two operands, the operation would be

(0.12896 × 0.93741) × 10^(2+1) = 0.1208883936 × 10^3 = 120.8883936
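The multiplication above can be checked numerically; the short Python sketch below uses the decimal module only to keep the printed result free of binary rounding noise.

```python
# Verifying the worked multiplication: exponents are added and the
# fractional mantissas are multiplied.
from decimal import Decimal

a_mant, a_exp = Decimal('0.12896'), 2
b_mant, b_exp = Decimal('0.93741'), 1

product_mant = a_mant * b_mant              # 0.1208883936
product_exp = a_exp + b_exp                 # 3
print(product_mant.scaleb(product_exp))     # 120.8883936
```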
Now, let us add these two operands. We may express this addition as

0.12896 × 10^2 + 0.93741 × 10^1

Now, observe that as the exponents are of different values, we have to transform them to equal values. We may either transform both of them to 10^2 or both of them to 10^1. However, in either case the remaining decimal part has to be adjusted accordingly. Let us assume that we have decided to make both exponents 10^2. Therefore, our addition operation may now be rewritten as

0.12896 × 10^2 + 0.093741 × 10^2

The reader may note that the second operand is now expressed with six places after the decimal point. We may now perform the addition operation as follows

(0.12896 + 0.093741) × 10^2 = 0.222701 × 10^2 = 22.2701
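The same addition can be reproduced step by step, aligning the exponents before adding the mantissas; the variable names below are illustrative.

```python
# Verifying the worked addition: the smaller exponent is incremented and
# its mantissa shifted right until the exponents match, then the
# mantissas are added.
from decimal import Decimal

a_mant, a_exp = Decimal('0.12896'), 2
b_mant, b_exp = Decimal('0.93741'), 1

while b_exp < a_exp:            # align exponents
    b_mant /= 10                # shift mantissa right: 0.93741 -> 0.093741
    b_exp += 1

sum_mant = a_mant + b_mant      # 0.222701
print(sum_mant.scaleb(a_exp))   # 22.2701
```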
4.8.2 Exponent Overflow or Underflow
In the case of floating-point multiplications or divisions, there are chances that the exponent part may face an overflow or underflow and thus go out of range. For example, if we are multiplying two numbers represented by A × 10^X and B × 10^Y, then if the values of X and Y are very large it would cause an overflow to calculate X + Y. Similarly, if these two numbers are (A × 10^X) and (B × 10^−Y) and we are multiplying both, then (X − Y) may generate an underflow (i.e., a very small number). The same situation may arise in the case of division also.
FOOD FOR THOUGHT
In later chapters the reader would find that most processors are equipped with two ALUs, one for integer operations and another for floating-point operations.
We have already mentioned (Figure 4.27) that the IEEE 754 format allows 8 bits for single precision and 11 bits for double precision exponent value storage. If the final value of the exponent cannot be accommodated within this field, then an overflow or underflow error has to be reported. In certain cases overflow is reported as +∞ (plus infinity) or −∞ (minus infinity) and underflow is reported as zero.
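Python's built-in floats follow IEEE 754 double precision, so this reporting convention can be observed directly: an exponent that no longer fits the field yields infinity, while one that is too small collapses to zero.

```python
# Exponent overflow and underflow as reported by IEEE 754 arithmetic.
import math

overflow = 1e200 * 1e200         # product exponent exceeds the double precision range
print(overflow, math.isinf(overflow))   # inf True

underflow = 1e-200 * 1e-200      # product exponent falls below the representable range
print(underflow)                 # 0.0
```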
4.8.3 Mantissa Overflow or Underflow
During the addition of two floating-point numbers of the same sign, the resultant mantissa part may be large enough to generate a carry. This has to be adjusted by realignment. Similarly, during subtraction or division, or during realignment at the pre-processing stage, a right shift may generate an underflow, which has to be rectified by properly rounding off the concerned value.
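A minimal sketch of the realignment just described, using 24-bit integer significands (an assumption made only to keep the example small): when the sum carries out of the mantissa field, it is shifted right one place and the exponent is incremented.

```python
# Handling mantissa overflow by realignment: shift the sum right one
# place and compensate by incrementing the exponent.
def add_significands(m_a, m_b, exponent, width=24):
    total = m_a + m_b
    if total >= 1 << width:      # carry out of the mantissa field
        total >>= 1              # realign the mantissa (the bit shifted out feeds rounding)
        exponent += 1            # compensate with the exponent
    return total, exponent

# Two maximal 24-bit significands: the sum overflows and is realigned.
print(add_significands((1 << 24) - 1, (1 << 24) - 1, exponent=10))
```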
4.8.4 Floating-point Addition and Subtraction
Addition and subtraction operations using floating-point numbers are performed by the same method, taking the sign bit into consideration. Pre-processing is necessary before the operation, and if any one of the two operands is found to be zero, the other operand is directly taken as the final result, including its sign bit. For example,

7.65 + 0 = 7.65
7.65 − 0 = 7.65
0 + 7.65 = 7.65
0 − 7.65 = −7.65

The flowchart to perform floating-point addition or subtraction is presented in Figure 4.28.
Figure 4.28 Flowchart for floating-point addition or subtraction (Y = A ± B)
Figure 4.29 Flowchart for floating-point multiplication (Y = A × B)
At first glance, the flowchart may appear to be a complex one. However, the steps are easy to follow and have already been explained in the beginning of this section. In this flowchart, we have assumed that we are about to calculate the value of Y, where Y = A ± B. To start with, it has to be checked whether it is an addition operation or a subtraction. If it is a subtraction, then the sign bit of B has to be reversed or complemented. After that, both operands A and B have to be confirmed to be non-zero. If any one of these two is found to be zero, then the other operand is taken as the result.

The next step is to check the equality of the exponent values of A and B. If they are not equal, then the smaller exponent is to be incremented by one and its mantissa is to be adjusted by shifting towards the right. If in this shifting process the mantissa becomes zero, then the other operand is reported as the result.

When the exponents of both operands become equal, a signed addition of the mantissas of both operands is performed. If the result of the addition is zero, then that is reported as the final result. Otherwise, a check for any mantissa overflow is performed. In case of a mantissa overflow, the mantissa is shifted right one place and the exponent is incremented by one. If there is any exponent overflow in this process, then the overflow is reported.

The final step is to normalize the result, for which the mantissa has to be shifted left and the exponent has to be decremented by one. Any exponent underflow occurring at this stage has to be reported as underflow; otherwise the result Y is reported.
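The flowchart steps can be condensed into a short sketch. Decimal scientific notation is used here (as in the worked example) rather than binary IEEE 754 fields, and the representation of operands as (mantissa, exponent) pairs with the sign carried in the mantissa is an assumption of this sketch.

```python
# A compact sketch following the order of Figure 4.28: reverse the sign
# of B for subtraction, handle zero operands, align exponents, add the
# mantissas, then realign and normalize the result.
from decimal import Decimal

def fp_add_sub(a, b, subtract=False):
    (ma, ea), (mb, eb) = a, b
    if subtract:
        mb = -mb                             # reverse the sign of B
    if ma == 0:
        return mb, eb                        # one operand zero: the other is the result
    if mb == 0:
        return ma, ea
    while ea < eb:                           # increment the smaller exponent and
        ma, ea = ma / 10, ea + 1             # shift its mantissa right
    while eb < ea:
        mb, eb = mb / 10, eb + 1
    m = ma + mb                              # signed addition of mantissas
    if m == 0:
        return Decimal(0), 0
    while abs(m) >= 1:                       # mantissa overflow: shift right, bump exponent
        m, ea = m / 10, ea + 1
    while abs(m) < Decimal('0.1'):           # normalize: shift left, drop exponent
        m, ea = m * 10, ea - 1
    return m, ea

# The worked example: 0.12896 x 10^2 + 0.93741 x 10^1 = 0.222701 x 10^2
print(fp_add_sub((Decimal('0.12896'), 2), (Decimal('0.93741'), 1)))
```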
4.8.5 Floating-point Multiplication
The flowchart for floating-point multiplication is presented in Figure 4.29, which may be compared with the flowchart of addition and subtraction (Figure 4.28) to visualize the relative simplicity of the multiplication operation in floating-point. In this case also, initial checks are performed to ensure non-zero values for the multiplier and multiplicand. If any one is found to be zero, the result becomes zero and the operation ends there.
The next step is to add the exponents and check for any eventual overflow. The bias is then subtracted, again checking that there is no underflow. Finally, the mantissas are to be multiplied, normalized and rounded off in the same manner as carried out for the addition/subtraction operation.
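A sketch of this exponent handling on biased fields follows, using the single precision bias of 127 and (sign, biased exponent, mantissa) triples; the function name and the simple exception-based overflow reporting are assumptions of this sketch.

```python
# Multiplication on biased exponents: add the two biased exponents,
# subtract the bias once, multiply the mantissas, then normalize.
BIAS = 127

def fp_multiply(sign_a, exp_a, mant_a, sign_b, exp_b, mant_b):
    if mant_a == 0 or mant_b == 0:
        return 0, 0, 0.0                     # a zero operand makes the product zero
    sign = sign_a ^ sign_b                   # product sign from the two sign bits
    exp = exp_a + exp_b - BIAS               # re-biased exponent of the product
    mant = mant_a * mant_b                   # mantissas assumed in [1, 2)
    if mant >= 2:                            # normalize: at most one right shift
        mant, exp = mant / 2, exp + 1
    if exp > 254:
        raise OverflowError("exponent overflow")
    if exp < 1:
        raise ArithmeticError("exponent underflow")
    return sign, exp, mant

# 1.5 x 2^1 (3.0) times 1.25 x 2^2 (5.0) gives 1.875 x 2^3 (15.0)
print(fp_multiply(0, 1 + BIAS, 1.5, 0, 2 + BIAS, 1.25))
```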
4.8.6 Floating-point Division
The method of floating-point division is similar to multiplication, with a few differences. In this case, if the divisor is zero, then the result becomes indeterminate and, generally, a division-by-zero error is reported. For example, in the case of the 8086 and similar processors, it becomes a case of the INT 0 (divide error) interrupt.
After checking the divisor and dividend for non-zero values, the exponents are subtracted and any eventual underflow is checked. Thereafter, the bias is added and any case of overflow is checked. Finally, the division is performed with the two mantissas and the result is normalized and rounded off. A complete flowchart of floating-point division is presented in Figure 4.30.
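The division path mirrors the multiplication sketch above, with the exponent arithmetic reversed: the biased exponents are subtracted and the bias is added back. As before, the names and the simple error handling are illustrative only.

```python
# Division on biased exponents: reject a zero divisor, subtract the
# exponents, add the bias back, divide the mantissas, then normalize.
BIAS = 127

def fp_divide(sign_a, exp_a, mant_a, sign_b, exp_b, mant_b):
    if mant_b == 0:
        raise ZeroDivisionError("division by zero")   # indeterminate result
    if mant_a == 0:
        return 0, 0, 0.0                               # zero dividend gives a zero quotient
    sign = sign_a ^ sign_b
    exp = exp_a - exp_b + BIAS                         # subtract exponents, add the bias
    mant = mant_a / mant_b                             # mantissas assumed in [1, 2)
    if mant < 1:                                       # normalize: at most one left shift
        mant, exp = mant * 2, exp - 1
    return sign, exp, mant

# 1.875 x 2^3 (15.0) divided by 1.25 x 2^2 (5.0) gives 1.5 x 2^1 (3.0)
print(fp_divide(0, 3 + BIAS, 1.875, 0, 2 + BIAS, 1.25))
```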
FOOD FOR THOUGHT
Due to the adoption of scientific notation, approximation would always be there in representing the decimal number system in computers. However, till a better technique evolves, this approximation would remain a part of computer arithmetic.
Figure 4.30 Flowchart for floating-point division (Y = A ÷ B)