
There are three basic logic gates, each of which performs a basic logic function; they are called NOT, AND and OR. All other logic functions can ultimately be derived from combinations of these three. For each of the three basic logic gates a summary is given, including the logic symbol, the corresponding truth table and the Boolean expression.

The NOT gate: The NOT gate is unique in that it only has one input. The input to the NOT gate is inverted, i.e. a binary input of 0 gives an output of 1 and a binary input of 1 gives an output of 0. The output is known as "NOT A" or, alternatively, as the complement of A, written A'. The truth table for the NOT gate appears below.

A  A'
0  1
1  0

The AND gate has two or more inputs. The output from the AND gate is 1 if and only if all of the inputs are 1; otherwise the output from the gate is 0. The AND gate is drawn as follows

The output from the AND gate is written as A·B (the dot can be written half way up the line, as here, or on the line; note that some textbooks omit the dot completely). The truth table for a two-input AND gate looks like:

A  B  A·B
0  0   0
0  1   0
1  0   0
1  1   1

It is also possible to represent an AND gate with a simple analogue circuit.

The OR gate

The OR gate has two or more inputs. The output from the OR gate is 1 if any of the inputs is 1. The gate output is 0 if and only if all inputs are 0. The OR gate is drawn as follows

The output from the OR gate is written as A+B. The truth table for a two-input OR gate looks like:

A  B  A+B
0  0   0
0  1   1
1  0   1
1  1   1
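The three basic gates can be modelled as tiny Python functions and their truth tables regenerated as a sanity check (a minimal sketch; the function names are illustrative):

```python
# The three basic logic gates modelled on 0/1 values.
def NOT(a):
    return 1 - a

def AND(*inputs):           # two or more inputs: 1 only if all inputs are 1
    return int(all(inputs))

def OR(*inputs):            # two or more inputs: 0 only if all inputs are 0
    return int(any(inputs))

# Regenerate the two-input truth tables given above.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
```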

The three basic logic gates can be combined to provide more complex logical functions. Four important logical functions are described here, namely NAND, NOR, XOR and XNOR. In each case a summary is given including the logic symbol for that function, the corresponding truth table and the Boolean expression.

The NAND gate has two or more inputs. The output from the NAND gate is 0 if and only if all of the inputs are 1; otherwise the output is 1. Therefore the output from the NAND gate is the NOT of A AND B (also known as the complement or inversion of A·B). The NAND gate is drawn as an AND gate

where the small circle immediately to the right of the gate on the output line is known as an invert bubble. The output from the NAND gate is written as (A·B)' (the same rules apply regarding the placement and appearance of the dot as for the AND gate - see the section on basic logic gates). The Boolean expression reads as "A NAND B".

A  B  (A·B)'
0  0    1
0  1    1
1  0    1
1  1    0

The NOR gate has two or more inputs. The output from the NOR gate is 1 if and only if all of the inputs are 0, otherwise the output is 0. This output behaviour is the NOT of A OR B. The NOR gate is drawn as follows

The output from the NOR gate is written as (A+B)'. The truth table for a two-input NOR gate looks like:

A  B  (A+B)'
0  0    1
0  1    0
1  0    0
1  1    0

The exclusive-OR or XOR gate has two or more inputs. For a two-input XOR the output is similar to that from the OR gate except it is 0 when both inputs are 1. This cannot be extended to XOR gates comprising 3 or more inputs however. In general, an XOR gate gives an output value of 1 when there are an odd number of 1's on the inputs to the gate. The truth table for a 3-input XOR gate below illustrates this point. The XOR gate is drawn as

The output from the XOR gate is written as A⊕B. The truth table for a two-input XOR gate looks like:

A  B  A⊕B
0  0   0
0  1   1
1  0   1
1  1   0

The truth table for a three-input XOR gate looks like:

A  B  C  A⊕B⊕C
0  0  0    0
0  0  1    1
0  1  0    1
0  1  1    0
1  0  0    1
1  0  1    0
1  1  0    0
1  1  1    1

The exclusive-NOR or XNOR gate has two or more inputs. The output is equivalent to inverting the output from the exclusive-OR gate described above. Therefore an equivalent circuit would comprise an XOR gate, the output of which feeds into the input of a NOT gate. In general, an XNOR gate gives an output value of 1 when there are an even number of 1's on the inputs to the gate. The truth table for a 3-input XNOR gate below illustrates this point. The XNOR gate is drawn using the same symbol as the XOR gate with an invert bubble on the output line as is illustrated below

The truth table for a three-input XNOR gate looks like:

A  B  C  (A⊕B⊕C)'
0  0  0     1
0  0  1     0
0  1  0     0
0  1  1     1
1  0  0     0
1  0  1     1
1  1  0     1
1  1  1     0
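The odd-number-of-1s rule for XOR (and the even-number rule for XNOR) can be expressed compactly; a minimal Python sketch with illustrative function names:

```python
# XOR of n inputs is 1 when an odd number of inputs are 1;
# XNOR is its complement (1 for an even number of 1s).
def XOR(*inputs):
    return sum(inputs) % 2

def XNOR(*inputs):
    return 1 - XOR(*inputs)

# Regenerate the output column of the 3-input XOR truth table above.
print([XOR(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)])
# [0, 1, 1, 0, 1, 0, 0, 1]
```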

Consider the circuit below:

The circuit has two outputs labelled SUM and CARRY. A truth table for the circuit looks like:

A  B  SUM  CARRY
0  0   0     0
0  1   1     0
1  0   1     0
1  1   0     1

Clearly this circuit is performing binary addition of B to A (recalling that in binary 1+1 = 0 carry 1). Such a circuit is called a half-adder; the reason for this is that it enables a carry out of the current arithmetic operation but no carry in from a previous arithmetic operation. A full adder is made by combining two half-adders and an additional OR gate. A full adder has the carry-in capability (denoted as CIN in the diagram below) and so allows cascading, which makes multi-bit addition possible. The circuit diagram for a full adder is given below; note that the two separate half-adders are each enclosed in a box to help in understanding the circuit.

The final truth table for a full adder looks like the following:

A  B  CIN  S  COUT
0  0   0   0    0
0  0   1   1    0
0  1   0   1    0
0  1   1   0    1
1  0   0   1    0
1  0   1   0    1
1  1   0   0    1
1  1   1   1    1
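The two-half-adders-plus-OR construction described above can be sketched directly in Python, using ^ for XOR, & for AND and | for OR (function names are illustrative):

```python
def half_adder(a, b):
    """One-bit half-adder: returns (SUM, CARRY)."""
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Full adder built from two half-adders and an OR gate."""
    s1, c1 = half_adder(a, b)      # first half-adder: A + B
    s, c2 = half_adder(s1, cin)    # second half-adder: partial sum + carry in
    return s, c1 | c2              # the OR gate combines the two carries

# Regenerate the full-adder truth table above.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, *full_adder(a, b, cin))
```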

Addition is a very fast operation and multi-number addition can be performed simply by successively adding each new number to the running summation.

Consider the circuit below:

The circuit has two outputs labelled DIFF and BORROW. A truth table for the circuit looks like:

A  B  DIFF  BORROW
0  0   0      0
0  1   1      1
1  0   1      0
1  1   0      0

Clearly this circuit is performing binary subtraction of B from A (A-B, recalling that in binary 0-1 = 1 borrow 1). Such a circuit is called a half-subtractor; the reason for this is that it enables a borrow out of the current arithmetic operation but no borrow in from a previous arithmetic operation. As in the case of addition using logic gates, a full subtractor is made by combining two half-subtractors and an additional OR gate. A full subtractor has the borrow-in capability (denoted as BORIN in the diagram below) and so allows cascading, which makes multi-bit subtraction possible. The circuit diagram for a full subtractor is given below.

The final truth table for a full subtractor looks like:

A  B  BORIN  D  BOROUT
0  0    0    0     0
0  0    1    1     1
0  1    0    1     1
0  1    1    0     1
1  0    0    1     0
1  0    1    0     0
1  1    0    0     0
1  1    1    1     1
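Mirroring the adder sketch, the two-half-subtractors-plus-OR construction can be written as (Python, function names illustrative):

```python
def half_subtractor(a, b):
    """One-bit half-subtractor for a - b: returns (DIFF, BORROW)."""
    return a ^ b, (1 - a) & b          # borrow is needed when a < b

def full_subtractor(a, b, borin):
    """Full subtractor built from two half-subtractors and an OR gate."""
    d1, b1 = half_subtractor(a, b)     # first half-subtractor: A - B
    d, b2 = half_subtractor(d1, borin) # second: partial difference - borrow in
    return d, b1 | b2                  # the OR gate combines the two borrows

for a in (0, 1):
    for b in (0, 1):
        for borin in (0, 1):
            print(a, b, borin, *full_subtractor(a, b, borin))
```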

For a wide range of operations many circuit elements would be required. A neater solution is to use subtraction via addition using complementing, as discussed in the Binary Arithmetic topic. In this case only adders are needed.

Question

An analogue to digital converter (ADC) measures voltages in the range 0 to 25 V and has 12-bit accuracy. What is the smallest voltage step that the ADC can resolve?

Answer

12 bits gives 2^12 = 4096, therefore the ADC can measure 4096 different values of voltage (from 0 to 4095 inclusive); the number of voltage steps is thus 4095 (one fewer than the number of different values available). Assuming that we set digital 0 to be equivalent to 0 V and digital 4095 to be equivalent to 25 V, each voltage step is simply given by: 25 V / 4095 = 0.006105 V = 6.105 mV
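The same calculation can be checked in a few lines (Python, assuming as in the answer that the 0 V and 25 V endpoints map to codes 0 and 4095):

```python
levels = 2 ** 12                  # 4096 distinct output codes for a 12-bit ADC
steps = levels - 1                # 4095 steps between the two endpoint codes
step_v = 25 / steps               # volts per step over the 0-25 V range
print(f"{step_v * 1000:.3f} mV")  # 6.105 mV
```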

Question

Give a truth table for a 4-input NOR gate using the "don't care" condition, where appropriate, to simplify the truth table.

Answer

A  B  C  D  Y = (A+B+C+D)'
1  x  x  x  0
x  1  x  x  0
x  x  1  x  0
x  x  x  1  0
0  0  0  0  1

(x = don't care)

_______

Question

What would the output pulse train at the output, Y, look like if the input at C is always 1?

Answer

The gate in the circuit is a NAND gate and so gives 0 at the output when all the inputs are 1 and gives 1 at the output otherwise. Therefore the output will be: a=0;b=1;c=1;d=0;e=1;f=0;g=1

Sequential logic circuits differ from combinatorial logic circuits in two main respects.

The output of the system depends not only on the present external input(s) but also on previous inputs.
The same external input(s) can give a different output response.

An important feature of sequential logical circuits, not present in combinatorial logic circuits, is the presence of feedback where the output from one or more logic gates is fed back into the input(s) of logic gates further back in the circuit.

The SR Flip-Flop

Consider a circuit comprising two NOR gates as illustrated below

here R and S are known as the external inputs, Q is known as the output or external output, and Q' is known as an internal input. Q' is called the state of the system or state variable and is related to Q, R and S via Q' = (R + (S + Q)')', i.e. the output of the second NOR gate, which is fed by R and by the output of the first NOR gate.

To investigate the behaviour of the circuit we develop a truth table assuming that the feedback loop is open circuit (i.e. Q' is an external input). The corresponding truth table is then:

S  R  Q  Q'
0  0  0  0
0  0  1  1
0  1  0  0
0  1  1  0
1  0  0  1
1  0  1  1
1  1  0  0
1  1  1  0

When the feedback loop is closed this forces Q=Q'. For those instances where Q=Q' in the truth table above then nothing changes when feedback is applied and so the circuit is said to be stable. In those cases where Q and Q' are different then the application of feedback causes the inputs to change (even though R and S have remained the same) and so the circuit is said to be unstable and a new output is generated.

The circuit stability is indicated in the truth table below, where S=Stable, U=Unstable and the number corresponds to a stable state number. For example S4 means stable state 4, and U3 means an unstable state which, upon the application of feedback, will become stable state 3.

S  R  Q  Q'  Stability State
0  0  0  0   S1
0  0  1  1   S2
0  1  0  0   S3
0  1  1  0   U3
1  0  0  1   U4
1  0  1  1   S4
1  1  0  0   S5
1  1  1  0   U5
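The settling behaviour when the feedback loop is closed can be simulated by iterating the cross-coupled NOR pair until the output stops changing (a minimal Python sketch; the helper name srff is illustrative):

```python
def NOR(a, b):
    return 1 - (a | b)

def srff(s, r, q):
    """Iterate the cross-coupled NOR pair until the state stops changing."""
    for _ in range(4):                 # a few iterations are enough to settle
        q_new = NOR(r, NOR(s, q))      # closed feedback loop: Q' drives Q
        if q_new == q:
            return q                   # stable state reached
        q = q_new
    return q

# SET, HOLD, RESET sequence as in the flow-table walk-through.
q = srff(0, 0, 0)   # stable state, Q=0
q = srff(1, 0, q)   # SET: Q -> 1
q = srff(0, 0, q)   # HOLD: Q stays 1
q = srff(0, 1, q)   # RESET: Q -> 0
print(q)            # 0
```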

The stability conditions are summarised in a flow table where each circled number represents a stable condition.

In general for flow tables columns are labelled with external inputs and rows are labelled with internal states.

Consider the following sequence of inputs applied to the circuit above:

SR=00: stable state 1 [Q=0]
SR=10: switch to unstable state 4, then to stable state 4 [Q=1]
SR=00: switch to stable state 2 [Q=1]
SR=01: switch to unstable state 3, then to stable state 3 [Q=0]

Therefore, if S is taken to 1 (SET condition) then the output Q is set to 1. Q is subsequently held at 1 regardless of what happens to S (HOLD condition) until the input R is taken to 1 (RESET condition). When R=1 the output Q is cleared back to 0. The condition SR=11 is prohibited for the reasons discussed below. This is the action of a SET-RESET FLIP-FLOP (SRFF) or one-bit memory element.

A problem arises with the SRFF in going from SR=11 to SR=00. Ideally the circuit would switch from stable state 5 to stable state 1. In practice, however, S and R will not switch at the same instant. If S switches to 0 before R then the circuit goes first to stable state 3 and then to stable state 1, where Q=0. If R switches to 0 before S, however, then the circuit first goes to unstable state 4, hence to stable state 4, and then on to stable state 2, where Q=1. The behaviour is therefore uncertain and so the input state SR=11 must be disallowed; it is called the prohibited state.

The SRFF is asynchronous and operates uniquely on the input data R and S. No control data are required. The logic symbol for the SRFF looks like

In practice flip-flops usually provide two outputs, i.e. Q, the standard output discussed above, and its complement Q', as illustrated above. The latter is not to be confused with the internal state variable. The SRFF can therefore be constructed either from two NOR gates plus feedback or from two NAND gates plus feedback, as shown in the circuit diagrams below. In the case of the NAND version it should be noted that the flip-flop is driven by the complements of S and R and so is driven by 0s rather than 1s.

Switch Debouncing

In the circuit below, when the switch is in position (a) the (a) contact is at +5 V and the (b) contact is at 0 V; similarly, when the switch is in position (b) the (a) contact is at 0 V and the (b) contact is at +5 V.

In reality, when the switch moves from (a) to (b) (or vice versa) two things happen:

for a brief moment both contacts are at 0; and the switch may "bounce", that is, make and break the connection a number of times before settling correctly on the contact.

The solution is to connect one contact to the S input and the other to the R input of an SRFF. In this case, once S has made the transition from 0 to 1 this is equivalent to the Set feature of the SRFF. As has been seen, once this has happened, no matter how many times S toggles between 0 and 1 the output remains at 1 until R is set to 1. Hence once the first contact is made the output remains stable despite the subsequent bounces of the switch, and so the circuit behaves as intended. This is illustrated in the timing diagram below.

The clocked RS flip-flop is like an SR flip-flop but with an extra, third input: a standard clock pulse CLK. The logic symbol for this flip-flop is shown below

Bearing in mind that the NAND implementation of an SRFF is driven by 0s then it can be seen that the extra two NAND gates in front of the standard SRFF circuitry mean that the circuit will function as a usual SRFF when S or R are 1 and the clock pulse is also 1 ("high"). Therefore this flip-flop is synchronous. Specifically, a 0 to 1 transition on either of the inputs S or R will only be seen at the output if the clock is 1. An example timing diagram is given below.

The delay flip-flop (DFF) is unique in that it only has one external input along with a clock input. The logic symbol for this flip-flop is given below

where the two asynchronous inputs, PRESET and CLEAR, enable the flip-flop to be set to a predetermined state, independent of the CLOCK. Note the invert bubble on these lines, which indicates that they are normally held at 1 and that the function (CLEAR or PRESET) is performed by taking the line to 0. The delay flip-flop transfers whatever is at the external input D to the output Q. This does not happen immediately, however, and only happens on a rising clock edge (i.e. as CLK goes from 0 to 1). The input is thus delayed by up to a clock pulse before appearing at the output. This is illustrated in the timing diagram below. The DFF is an edge-triggered device, which means that the change of state occurs on a clock transition (in this case the rising edge as CLK goes from 0 to 1).

here the function of the asynchronous inputs can clearly be seen: taking PRESET momentarily to 0 sets Q=1 and taking CLEAR momentarily to 0 sets Q=0. The delay flip-flop can also be configured from a JK flip-flop, where the input is connected to J and the complement of the input is connected to K.

If the DFF is configured such as is indicated below then there is no external input (D has become an internal input) and only the clock pulse (CLK) is operated on. Note that for clarity the asynchronous inputs PRESET and CLEAR have been omitted from this logic diagram.

assuming the initial state of CLK=0 and Q=0, and noting that D is connected to the complement output Q', it follows that D=1. As seen above, whatever is at D is transferred to Q at the next rising clock edge so, as CLK goes from 0 to 1, Q becomes 1 and so D becomes 0. At the next rising CLK edge the input at D (which is 0) is transferred to Q, so Q becomes 0 and hence D=1, and so on. This cycle is illustrated in the timing diagram below.

It can be seen that for every two clock pulses in there is only one clock pulse out; the circuit is therefore performing division by 2. It should be noted that this behaviour only takes place when the clock pulses are reasonably short (but at least long enough for the output to change state). If the clock pulse is long then oscillation may occur.
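The divide-by-two action can be mimicked by updating Q once per rising clock edge with D tied to the complement of Q (an idealised Python sketch that ignores propagation delay and the long-clock-pulse oscillation problem):

```python
# D flip-flop with D fed from the complement of Q: toggles on every rising edge.
q = 0
outputs = []
for edge in range(8):        # eight rising clock edges
    d = 1 - q                # D is tied to the complement of Q
    q = d                    # D is transferred to Q on the rising edge
    outputs.append(q)
print(outputs)               # [1, 0, 1, 0, 1, 0, 1, 0]: one output cycle per two clock cycles
```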

The JK Flip-Flop

The JK flip-flop is an SRFF with some additional gating logic on the inputs which serve to overcome the SR=11 prohibited state in the SRFF. A simple JKFF is illustrated below

SR=11 is not possible in this configuration because both Q and its complement are fed back, one into each of the AND gates. Since each AND gate requires all of its inputs to be 1 to give an output of 1, and Q and its complement cannot both be 1 at the same time, S and R cannot both be 1. It should be noted that the circuit above is just one implementation of a JKFF. Another can be formed using the NAND gate version of the SRFF, as illustrated in the lower circuit in the section Design of the SR flip-flop. In this case, since the SR inputs are complemented, i.e. driven by 0 instead of 1, the input gating logic would require NAND gates in place of the AND gates in the circuit above. The full circuit, including the two asynchronous inputs for PRESET and CLEAR, then appears as below

A truth table can be developed for the output Q at time t (before a clock pulse) and at time t+1 (after a clock pulse); this is given below (clearly, the complement output is just the inverse of Q).

J  K  Qt  Qt+1
0  0  0   0
0  0  1   1
0  1  0   0
0  1  1   0
1  0  0   1
1  0  1   1
1  1  0   1
1  1  1   0

The final two lines in the truth table represent oscillation between the two states on each rising CLK pulse. As was the case with the delay flip-flop, this results in division by two of the incoming CLK pulse as long as the clock pulse is short; otherwise oscillation may occur. This behaviour can be summarised as follows:

J not equal to K: Q takes the value of J on the CLK pulse
J=K=0: all transitions inhibited ("no change")
J=K=1: binary divider (Q toggles on each CLK pulse)
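The hold/set/reset/toggle summary above can be sketched as a next-state function (a minimal Python sketch; the function name jkff is illustrative):

```python
def jkff(j, k, q):
    """Next state of a JK flip-flop output Q after one clock pulse."""
    if j == k:
        return q if j == 0 else 1 - q   # J=K=0: hold; J=K=1: toggle
    return j                            # J != K: Q takes the value of J

# Regenerate the Qt+1 column of the truth table above.
print([jkff(j, k, q) for j in (0, 1) for k in (0, 1) for q in (0, 1)])
# [0, 1, 0, 0, 1, 1, 1, 0]
```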

One way of overcoming the oscillation that occurs with a JK flip-flop when J=K=1 is to use a so-called master-slave flip-flop, which is illustrated below.

The master-slave flip-flop is essentially two back-to-back JKFFs; note, however, that the feedback from this device is fed back both to the master FF and the slave FF. Any input to the master-slave flip-flop at J and K is first seen by the master FF part of the circuit while CLK is high (=1). This behaviour effectively "locks" the input into the master FF. An important feature here is that the complement of the CLK pulse is fed to the slave FF. Therefore the outputs from the master FF are only "seen" by the slave FF when CLK is low (=0), so on the high-to-low CLK transition the outputs of the master are fed through the slave FF. This means that at most one change of state can occur when J=K=1, and so oscillation between the states Q=0 and Q=1 on successive CLK pulses does not occur.

Ripple Counters

Both the delay flip-flop and the JK flip-flop enable a pulse train at the input to be divided by two. If these flip-flops are cascaded together it follows that division by 4, 8, 16, etc. can take place. In general, for n cascaded flip-flops division by 2^n is possible. The following circuit comprises 4 JKFFs cascaded such that the Q output from each flip-flop forms the clock input to the following flip-flop.

Note here that the invert bubbles on the clock inputs mean that the flip-flops trigger on the falling edge of each clock pulse. For all of the 4 JKFFs the J and K inputs are count enabled, i.e. held at 1. Assuming an initial state where all outputs are 0, it is possible to develop a truth table for the four outputs Qa, Qb, Qc and Qd on successive clock pulses. This is given below.

Qd  Qc  Qb  Qa
0   0   0   0
0   0   0   1
0   0   1   0
0   0   1   1
0   1   0   0
0   1   0   1
0   1   1   0
0   1   1   1
1   0   0   0
1   0   0   1
1   0   1   0
1   0   1   1
1   1   0   0
1   1   0   1
1   1   1   0
1   1   1   1

So, on successive clock pulses the output from the four JKFFs is exactly the same as the pure binary coded representation of the decimal numbers 0 to 15. Here, Qa is weighted by 1, Qb is weighted by 2 and so on. Such a device is known as a ripple counter or a modulo-16 (mod-16) counter.
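The counting behaviour can be mimicked in software by toggling each stage on the falling edge of the previous stage's output (a Python sketch of the falling-edge cascade, not a gate-level model):

```python
# Four cascaded toggle stages: each stage toggles when the previous stage's
# output falls from 1 to 0 (Qa = LSB, weighted 1; Qd = MSB, weighted 8).
qa = qb = qc = qd = 0
counts = []
for pulse in range(16):
    qa = 1 - qa                   # FF1 toggles on every clock pulse
    if qa == 0:                   # falling edge of Qa clocks FF2
        qb = 1 - qb
        if qb == 0:               # falling edge of Qb clocks FF3
            qc = 1 - qc
            if qc == 0:           # falling edge of Qc clocks FF4
                qd = 1 - qd
    counts.append(8 * qd + 4 * qc + 2 * qb + qa)
print(counts)                     # 1, 2, 3, ..., 15, 0 - a modulo-16 count
```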

The modulo-16 ripple counter can be modified with additional logic gates to provide a base-10 decade counter for use in standard decimal counting and arithmetic. A decade counter requires resetting to 0 when the count reaches decimal 10. In the case of the ripple counter this corresponds to triggering a CLEAR signal to all 4 flip-flops when the state 1010 (binary) is reached. It is therefore necessary to take action when all of the following are true

Qa = 0 Qb = 1 Qc = 0 Qd = 1

In principle this would require the logical AND of the outputs Qb and Qd and the complements of Qa and Qc. In practice, however, it can be seen from the mod-16 counter truth table that decimal 10 corresponds to the first time that Qb and Qd are both 1, and so the logic is simplified. The resulting circuit for a decade counter is illustrated below, where some of the labelling has been omitted for clarity. The additional logic that has been added compared to the modulo-16 ripple counter is indicated in red.

When the count reaches 1010 (binary) then Qb = Qd = 1 and so the output from the NAND gate changes from 1 to 0. CLEAR therefore goes from 1 to 0 and causes all of the Q outputs to be reset to 0. At the same time the NOT gate provides a binary 1 to indicate the CARRY condition out of the counter. Once Qb = Qd = 0 the output of the NAND gate returns to 1 and the count can restart.

In considering the sequential logic discussed in the course it has always been assumed that the flip-flops react instantaneously to the inputs applied to them. This is an idealised situation; in reality there is an inherent propagation delay (tpd), which corresponds to the time taken for the output to respond to the inputs applied to it. This propagation delay can be as high as 65 ns for certain commercial flip-flops. When these flip-flops are cascaded together to produce, e.g., ripple counters, these delays accumulate. This has two effects on the ripple counter circuit.

In the case where tpd is much less than the clock period there is a short glitch between the different flip-flops in the ripple counter. When the flip-flop outputs are subsequently decoded the effect is to introduce momentary out-of-sequence glitches in the counting sequence. These problems are illustrated in the timing diagram below, which shows the situation for a 4-stage ripple counter being fed with a clock frequency of 770 kHz and a tpd of 65 ns. The orange stripes correspond to where glitches occur; the numbers in the boxes at the bottom of the diagram indicate the decoded values for those glitches (note, D=MSB, A=LSB).

As the clock pulse frequency increases the timing propagation delay becomes a significant fraction of the clock period in which case further problems arise. In this case whole counts are either lost or occur out of sequence. This is illustrated in the timing diagram below for a 4 stage ripple counter being fed with a clock frequency of 7.7 MHz and a tpd of 65ns.

In this case, clearly the ripple counter is being driven too fast (specifically, the CLK pulse is too fast for the counter). One way of avoiding this is to always ensure that the clock period (TCLK) is longer than the total propagation time of the counter, this can be expressed as TCLK>N*tpd where N is the number of stages in the ripple counter.

Another method to tackle the problems outlined above is to ensure that all flip-flops are fed with the CLK pulse synchronously. This principle is illustrated in the circuit below where all CLK inputs are wired in parallel.

The operation of this synchronous counter is as follows: all CLK inputs are wired in parallel; J and K of FF1 are tied to 1; as previously, flip-flops triggering on the falling edge are assumed; J and K of FF2 are tied to QA, therefore the state of QA (FF1) determines whether or not FF2 changes state (toggles); if QA=0 before CLK then QB (FF2) does not toggle, and if QA=1 before CLK then QB (FF2) toggles; the FF3 inputs are fed from QA and QB via an AND gate, so QC (FF3) toggles only when QA=QB=1 before CLK; similarly, QD (FF4) is arranged such that it toggles only when QA=QB=QC=1. This arrangement leads to all output states toggling together in a synchronous manner.

Numeric Representation

Number Systems

The 0s and 1s present in the logic circuits discussed in the course can be used to represent real data inside logic circuitry in, for example, microprocessors. To do this a binary format has to be adopted. The usual practice is to use so-called pure binary coding, whereby each binary digit (either 0 or 1) carries a certain weight according to its position in the binary number. So, for example:

110100 = 1x2^5 + 1x2^4 + 0x2^3 + 1x2^2 + 0x2^1 + 0x2^0 = 32 + 16 + 0 + 4 + 0 + 0 = 52

The same approach applies to non-integral numbers, so, for example:

110.101 = 1x2^2 + 1x2^1 + 0x2^0 + 1x2^-1 + 0x2^-2 + 1x2^-3 = 4 + 2 + 0 + 0.5 + 0 + 0.125 = 6.625

These examples illustrate binary to decimal conversion. To convert a fractional decimal number to binary the procedure to follow is:

first divide the number at the decimal point and treat the two parts separately. For the integer part, repeatedly divide it by 2 and store the remainder until nothing is left; the remainders, when reverse-ordered, give the first part of the binary number. The reverse ordering comes about since the first division by 2 gives the least significant bit (lsb) and the last division gives the most significant bit (msb). For the fractional part, repeatedly multiply by 2 and record the carries, i.e. whenever the resulting number reaches or exceeds 1. Repeat this process until the desired precision is achieved.

A full example of this technique is given in the Solved Problems. A useful way of expressing long pure binary coded numbers is by the use of hexadecimal numbers, i.e. base 16. This is because each group of four bits (called a nibble, since 2 nibbles make a byte!) can be converted into one hexadecimal digit. The mapping between binary, decimal and hexadecimal (hex.) numbers is shown below.

Decimal  Binary  Hex     Decimal  Binary  Hex
0        0000    0       8        1000    8
1        0001    1       9        1001    9
2        0010    2       10       1010    A
3        0011    3       11       1011    B
4        0100    4       12       1100    C
5        0101    5       13       1101    D
6        0110    6       14       1110    E
7        0111    7       15       1111    F

To convert a binary number into its hexadecimal equivalent, first ensure that the binary number has a number of digits that is a multiple of 4; if not, add zeros to the left hand side of the number until it does. Then split the number into nibbles and convert each nibble into its hexadecimal counterpart.

Binary Arithmetic

Here the rules for standard arithmetic on binary numbers are discussed.

Binary Addition

Binary addition is completely straightforward and is done in the same way as standard decimal addition, remembering that, in binary, "one plus one equals zero carry one". This is also true for fractional binary numbers, as illustrated below.

  Binary   Decimal
     101        5
   + 110      + 6
   _____      ___
    1011       11

   Binary    Decimal
   1001.1        9.5
 + 1100.1     + 12.5
 ________     ______
  10110.0       22.0

(as a further fractional example, 110.1101 binary = 6.8125 decimal)

Binary Subtraction

Binary subtraction usually takes place by complementing, i.e. subtraction is via the addition of negative numbers. This technique requires the use of the so-called ones (1's) complement and twos (2's) complement of a binary number. The 1's complement of a binary number is formed simply by complementing each digit in turn. The 2's complement of a binary number is formed by adding 1 to the least significant bit of the 1's complement (note that in the case of fractional binary numbers this is not the same as adding 1 to the 1's complement number - see below).

Decimal  Binary     1's Complement  2's Complement
5        00000101   11111010        11111011
27       00011011   11100100        11100101
76       01001100   10110011        10110100
4.625    0100.1010  1011.0101       1011.0110

Note that in order to correctly express the 1's complement and 2's complement binary numbers a fixed-length format must be chosen (8-bit in the case above) and leading zeroes must be included when writing the original pure binary format number. Finally, in order to represent a negative binary number the MSB becomes a sign bit, i.e. if the MSB=1 then the number is negative and if the MSB=0 then the number is positive, so, e.g., 00010011 = +19 and 10010011 = -19. This is called true magnitude format. In order to perform binary subtraction the rules are as follows:

When the sum to be performed is A-B, the number to be subtracted (B) is converted to its 2's complement form and then added to A using standard binary addition. If, after the addition, the sign bit = 1 then two further steps must be performed:
o first take the 2's complement of the result;
o then make the sign bit of the new number equal to 1;
and finally interpret the result in true magnitude format. For sums of the form -A-B, take the 2's complement of A, add it to the 2's complement of B and then proceed as above. Sums of the form -A-(-B) can be converted to B-A before proceeding as above.

Examples of binary subtraction using this method can be found in the Solved Problems.
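As a cross-check of these rules, here is a Python sketch of 8-bit subtraction by 2's complement addition (function names illustrative; the result is returned as a signed integer rather than in true magnitude format):

```python
BITS = 8

def ones_complement(n):
    """Flip every bit of an 8-bit number."""
    return n ^ (2**BITS - 1)

def twos_complement(n):
    """Add 1 to the LSB of the 1's complement, modulo 2^8."""
    return (ones_complement(n) + 1) % 2**BITS

def subtract(a, b):
    """A - B via addition of the 2's complement of B."""
    result = (a + twos_complement(b)) % 2**BITS
    if result >= 2**(BITS - 1):          # sign bit set: result is negative
        return -twos_complement(result)  # re-complement to read the magnitude
    return result

print(subtract(27, 5))   # 22
print(subtract(5, 27))   # -22
```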

Binary multiplication and binary division are both most easily done by long multiplication and division methods as often taught for standard decimal numbers. In both cases all numbers must be in true magnitude format but with sign bits removed. For multiplication each partial product is calculated and then all partial products are summed using standard binary addition. For division it proceeds like decimal division. Finally the sign of the product or quotient is determined by summing all sign bits and retaining the LSB only of the resultant sum.

Parity

In any electronic system involving the transfer of data (in the form of binary digits), data transmission errors are possible. The method of parity is widely used as a method of error detection. An extra bit, known as the parity bit, is attached as the least-significant bit to the binary data word (or code group) to be transferred. The new data word to be transmitted (known as the total group) is thus the original code group with the parity bit appended. Two systems are used, namely:

even parity - the value of the parity bit is set such that the total number of 1s in the total group is even, e.g. 11001 has an odd number of 1s, so the new total group is 110011; 11110 has an even number of 1s, so the new total group is 111100.

odd parity - the value of the parity bit is set such that the total number of 1s in the total group is odd, e.g. 11001 has an odd number of 1s, so the new total group is 110010; 11110 has an even number of 1s, so the new total group is 111101.

At the receiving end, a check is made on the parity of the whole code to detect an error before stripping the parity off to recover the original data word. Examples of circuits for transmitting (coding) and receiving (decoding) parity-encoded data are given below.

This is a coding circuit for a 4-bit data word ABCD; it will create the correct parity bit for transmission with the code group.

This is the decoding circuit for a code group comprising a 4-bit data word ABCD along with a parity bit. The Check Bit will equal 0 if even parity has been used and will equal 1 if odd parity has been used. The method of parity does not pinpoint the error; rather it acts as an error flag, i.e. it indicates that an error has occurred somewhere. Often the code group has to be re-transmitted when a parity error is detected. Moreover, this method only safeguards against single errors and not multiple errors, which could conspire to leave the parity check valid. In order to detect complex errors and to pinpoint where the errors have occurred, more sophisticated and complex error-checking algorithms have to be employed - which subsequently requires more data bits to be transmitted for each code group.
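The coding and checking described above can be sketched in Python (function names illustrative; the parity bit is appended as the LSB, as in the text):

```python
def add_parity(code_group, even=True):
    """Append a parity bit to a code-group string of 0s and 1s."""
    ones = code_group.count("1")
    bit = (ones % 2) if even else (1 - ones % 2)   # make the total even or odd
    return code_group + str(bit)

def parity_ok(total_group, even=True):
    """Receiver-side check on the whole total group."""
    ones = total_group.count("1")
    return (ones % 2 == 0) if even else (ones % 2 == 1)

print(add_parity("11001"))               # 110011 (even parity)
print(add_parity("11110", even=False))   # 111101 (odd parity)
```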

Binary Codes

The usual way of expressing a decimal number in terms of a binary number is known as pure binary coding and is discussed in the Number Systems section. A number of other techniques can be used to represent a decimal number. These are summarised below.

In the 8421 Binary Coded Decimal (BCD) representation each decimal digit is converted to its 4-bit pure binary equivalent. For example: 57dec = 0101 0111bcd. Addition is analogous to decimal addition, with normal binary addition taking place from right to left. For example,

  6     0110   BCD for 6
+ 3   + 0011   BCD for 3
___   ______
  9     1001   BCD for 9

 42     0100 0010   BCD for 42
+27   + 0010 0111   BCD for 27
___   ___________
 69     0110 1001   BCD for 69

Where the result of any addition exceeds 9 (1001), six (0110) must be added to the sum to account for the six invalid BCD codes that are available with a 4-bit number. This is illustrated in the example below

  8     1000        BCD for 8
+ 7   + 0111        BCD for 7
___   ______
        1111        invalid BCD code, so add 6
      + 0110
   _________
 15    0001 0101    BCD for 15

Note that in the last example the 1 that carried forward from the first group of 4 bits has made a new 4-bit number and so represents the "1" in "15". In the examples above the BCD numbers are split at every 4-bit boundary to make reading them easier. This is not necessary when writing a BCD number down. This coding is an example of a binary coded (each decimal digit maps to four bits) weighted (each bit represents a number, 1, 2, 4, etc.) code.
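The add-6 correction can be captured in a short sketch. Numbers are passed as lists of decimal digits, most significant digit first; the function name and representation are my own choices for illustration.

```python
def bcd_add(a, b):
    """Add two BCD numbers, each given as a list of decimal digits (msd first)."""
    a, b = a[::-1], b[::-1]          # work from the least significant digit
    if len(a) < len(b):
        a, b = b, a
    b = b + [0] * (len(a) - len(b))
    result, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry            # 4-bit binary addition of one digit pair
        if s > 9:
            s += 6                   # skip the six invalid 4-bit codes
        carry, digit = s >> 4, s & 0xF
        result.append(digit)
    if carry:
        result.append(carry)         # a final carry makes a new leading digit
    return result[::-1]
```

This reproduces the worked examples: 6 + 3 = 9, 42 + 27 = 69, and 8 + 7 = 15 (where the correction fires).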

The 4221 BCD code is another binary coded decimal code, where the bits are weighted by 4, 2, 2 and 1 respectively. Unlike 8421 BCD coding there are no invalid representations. The decimal numbers 0 to 9 have the following 4221 equivalents

Decimal   4221   1's complement
0         0000   1111
1         0001   1110
2         0010   1101
3         0011   1100
4         1000   0111
5         0111   1000
6         1100   0011
7         1101   0010
8         1110   0001
9         1111   0000

The 1's complement of a 4221 representation is important in decimal arithmetic: as the table shows, complementing the code for a digit d gives a valid code for 9 - d. In forming the code remember the following rules

- Below decimal 5, use the right-most bit representing 2 first
- Above decimal 5, use the left-most bit representing 2 first
- Decimal 5 = 2+2+1 and not 4+1
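Both the weighting and the self-complementing property of the table above can be confirmed in code. The dictionary and helper names below are illustrative, not part of the lecture material.

```python
WEIGHTS = (4, 2, 2, 1)
CODE_4221 = {0: "0000", 1: "0001", 2: "0010", 3: "0011", 4: "1000",
             5: "0111", 6: "1100", 7: "1101", 8: "1110", 9: "1111"}

def decode_4221(code):
    """Sum the weights of the 1 bits in a 4221 code string."""
    return sum(w for w, bit in zip(WEIGHTS, code) if bit == "1")

def ones_complement(code):
    """Invert every bit of a code string."""
    return "".join("1" if bit == "0" else "0" for bit in code)
```

Checking `ones_complement(CODE_4221[d]) == CODE_4221[9 - d]` for every digit d demonstrates why the 1's complement column matters for decimal arithmetic.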

Gray Code

Gray coding is an important code, used for its speed and its relative freedom from errors. In pure binary coding or 8421 BCD, counting from 7 (0111) to 8 (1000) requires 4 bits to be changed simultaneously. If the bits do not all change at exactly the same instant then various spurious numbers could be momentarily generated during the transition and wrongly read. Gray coding avoids this since only one bit changes between successive numbers. To construct the code there are two simple rules: first start with all 0s, and then proceed by changing the least significant bit (lsb) that brings about a new state. The first 16 Gray coded numbers are indicated below.

Decimal   Gray Code
0         0000
1         0001
2         0011
3         0010
4         0110
5         0111
6         0101
7         0100
8         1100
9         1101
10        1111
11        1110
12        1010
13        1011
14        1001
15        1000

To convert a Gray-coded number to binary, follow this method:
1. The binary number and the Gray-coded number will have the same number of bits.
2. The binary MSB (left-hand bit) and the Gray code MSB will always be the same.
3. To get the binary next-to-MSB (i.e. the next digit to the right), add the binary MSB and the Gray code next-to-MSB. Record the sum, ignoring any carry.
4. Continue in this manner right through to the end.
An example of converting a Gray code number to its pure binary equivalent is available in the Solved Problems. Gray coding is a non-BCD, non-weighted reflected binary code.

Question

Perform the following sums using binary numbers:
1. 75 - 42
2. -75 - 42
3. -75 - (-42)

Answer

First we choose to use 8-bit true magnitude format. In this format: +75 = 01001011 ; +42 = 00101010

1. 75 - 42 = 75 + (-42), and so

   01001011   +75 in true magnitude format
 + 11010110   2's complement of +42
 __________
   00100001   this is +33 in true magnitude format (the carry out of the msb is discarded)

2. -75 - 42 = -(75 + 42), and so

   10110101   2's complement of +75
 + 11010110   2's complement of +42
 __________
   10001011   (the carry out of the msb is discarded)

The sign bit is 1, so perform the 2's complement of the last line, giving 01110101 (= 117); replacing the sign bit gives 11110101, i.e. -117 in true magnitude format.

3. -75 - (-42) = -75 + 42 = 42 - 75, and so

   00101010   +42 in true magnitude format
 + 10110101   2's complement of +75
 __________
   11011111

The sign bit is 1, so perform the 2's complement of the last line, giving 00100001 (= 33); replacing the sign bit gives 10100001, i.e. -33 in true magnitude format.
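The three sums can be checked mechanically with 8-bit two's-complement arithmetic. The helper names below are illustrative.

```python
MASK = 0xFF  # work in 8-bit words

def twos_complement(n):
    """8-bit two's complement of a magnitude."""
    return (-n) & MASK

def signed_value(word):
    """Interpret an 8-bit word as a signed two's-complement number."""
    return word - 256 if word & 0x80 else word
```

Adding the two's complement of the subtrahend and masking to 8 bits reproduces each worked result: +33, -117 and -33.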

Question

Convert 57.4801dec to its binary equivalent to 14-bit accuracy.

Answer

As the decimal number has both an integer and a fractional part, the problem has to be done in two steps. First take the integer part, i.e. 57, and repeatedly divide by 2, noting the remainder of each division.

57/2 = 28 remainder 1   lsb
28/2 = 14 remainder 0
14/2 =  7 remainder 0
 7/2 =  3 remainder 1
 3/2 =  1 remainder 1
 1/2 =  0 remainder 1   msb

The binary equivalent of 57dec is therefore given by the remainders ordered from most significant bit (msb) to least significant bit (lsb) and is hence 111001bin. The fractional part is given by repeatedly multiplying by 2 and storing the carries (a carry of 1 whenever the result of the multiplication reaches or exceeds 1) until the required bit accuracy is reached.

.4801 x 2 = .9602 + carry 0   msb
.9602 x 2 = .9204 + carry 1
.9204 x 2 = .8408 + carry 1
.8408 x 2 = .6816 + carry 1
.6816 x 2 = .3632 + carry 1
.3632 x 2 = .7264 + carry 0
.7264 x 2 = .4528 + carry 1   lsb

and so 57.4801dec = 111001.0111101bin
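The divide-and-multiply procedure can be written out directly. This is a sketch (the function name is my own); note that binary floating point makes very long fractional expansions unreliable, but it is fine at the 7-bit accuracy used here.

```python
def dec_to_bin(value, frac_bits):
    """Convert a non-negative decimal to a binary string with frac_bits fraction bits."""
    n = int(value)
    fraction = value - n
    int_bits = ""
    while n:                      # repeated division by 2, remainders lsb first
        n, r = divmod(n, 2)
        int_bits = str(r) + int_bits
    out = (int_bits or "0") + "."
    for _ in range(frac_bits):    # repeated multiplication by 2, carries msb first
        fraction *= 2
        carry = int(fraction)
        fraction -= carry
        out += str(carry)
    return out
```

Calling `dec_to_bin(57.4801, 7)` reproduces the worked result 111001.0111101.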

Question

Convert the binary number 1101101101101 to its hexadecimal equivalent.

Answer

The binary number 1101101101101 has 13 digits. First extend this to a multiple of 4 (i.e. 16 digits) by adding three leading 0s, so 1101101101101 becomes 0001101101101101. Next break the binary number up into nibbles (4-bit groups) and convert each nibble to its hexadecimal equivalent.

Binary        0001  1011  0110  1101
Hexadecimal      1     B     6     D

hence 1101101101101bin = 1B6Dhex
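The pad-and-group method translates to a few lines of code (function name illustrative):

```python
def bin_to_hex(bits):
    """Pad with leading zeros to a multiple of 4, then convert nibble by nibble."""
    width = (len(bits) + 3) // 4 * 4
    bits = bits.zfill(width)
    nibbles = (bits[i:i + 4] for i in range(0, width, 4))
    return "".join(format(int(nib, 2), "X") for nib in nibbles)
```

`bin_to_hex("1101101101101")` gives "1B6D", as in the worked answer.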

Question

Convert the Gray coded number 10011011 to its binary equivalent.

Answer

The rules of conversion from Gray code to binary are given in the Gray code summary

- The Gray code number has 8 bits, so the binary equivalent will also have 8 bits.
- The most significant bit (left-most bit) is the same in both cases.
- The next-to-most significant bit in the binary number comes from adding the most significant bit of the binary number to the next-to-most significant bit of the Gray coded number; the sum is noted and any carry ignored.
- Generally, working from left to right, i.e. from most significant bit to least significant bit, the nth bit (counting from right to left) in the binary number is formed by summing the (n+1)th bit of the binary number with the nth bit of the Gray coded number. The sum is noted and any carry ignored.

In the case of this example, with Gray code 10011011:

Binary digit 1 (msb) = 1 (same as the Gray code msb)
Binary digit 2 = 1 + 0 = 1
Binary digit 3 = 1 + 0 = 1
Binary digit 4 = 1 + 1 = 0 (carry 1, ignored)
Binary digit 5 = 0 + 1 = 1
Binary digit 6 = 1 + 0 = 1
Binary digit 7 = 1 + 1 = 0 (carry 1, ignored)
Binary digit 8 = 1 + 0 = 1

where each sum is the previous binary digit plus the current Gray code digit, and so 10011011gray = 11101101bin

Truth Tables

Often the logical functionality of a gate or a series of gates is illustrated by a truth table. With a truth table all possible combinations of input states are considered and the output value for each of these input states is listed in a table. Examples of truth tables can be found in the descriptions of basic logic gates and other logic gates.

It can easily be seen that for a logic gate with n inputs the corresponding truth table requires 2^n rows or entries. For devices with 4 or more inputs, representing each input state in this way can be a lengthy procedure. To simplify this representation of a logic function the concept of the "don't care" condition is introduced. Consider, for example, the truth table for a three-input OR gate; it looks like

A B C   A+B+C
0 0 0     0
0 0 1     1
0 1 0     1
0 1 1     1
1 0 0     1
1 0 1     1
1 1 0     1
1 1 1     1

This table can be simplified by writing

A B C   A+B+C
0 0 0     0
1 x x     1
x 1 x     1
x x 1     1

where x can equal 0 or 1.
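The 2^n rows of any truth table can be generated rather than written out by hand; `itertools.product` counts through every input combination. The function names here are illustrative.

```python
from itertools import product

def truth_table(fn, n):
    """List (inputs, output) pairs for every combination of n binary inputs."""
    return [(bits, fn(*bits)) for bits in product((0, 1), repeat=n)]

def or3(a, b, c):
    """A three-input OR gate as a truth function."""
    return a | b | c
```

`truth_table(or3, 3)` yields the 8-row table above: output 0 for (0, 0, 0) and 1 everywhere else.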

Venn Diagrams

Venn diagrams are a useful technique to demonstrate equivalence relationships in Boolean expressions. In a Venn diagram, the binary variables of a function are represented as overlapping areas in a Universe. Complementing or the NOT function is represented as the remainder of the Universe outside a given area. The Venn diagrammatic representations of , , and are illustrated below

Universe

In a Venn diagram the OR function is taken as the union of areas, while the AND function is the intersection, or common part, of two or more overlapping areas. Two functions are said to be equivalent if they define identical areas on a Venn diagram. For an example of the use of Venn diagrams see the proof of de Morgan's theorems by Venn diagrams in the Solved Problems at the end of this topic.

de Morgan's Theorems

de Morgan's theorems state that

NOT A . NOT B = NOT (A + B)

and

NOT A + NOT B = NOT (A . B)

The first equation reads "NOT A AND NOT B EQUALS A NOR B" and the second reads "NOT A OR NOT B EQUALS A NAND B". Proofs of these equations by two methods, i.e. by truth tables and by Venn diagrams, are available in the Solved Problems at the end of this topic. The general rules for converting a Boolean expression comprised entirely of ORs (ANDs) to one consisting entirely of ANDs (ORs) are available in the Lecture 3 summary on Boolean Algebra.
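Both theorems can also be verified exhaustively in code, the programmatic analogue of the truth-table proof in the Solved Problems:

```python
from itertools import product

def NOT(x):
    """Complement of a 0/1 value."""
    return 1 - x

def check_de_morgan():
    for a, b in product((0, 1), repeat=2):
        assert NOT(a) & NOT(b) == NOT(a | b)   # NOT A AND NOT B == A NOR B
        assert NOT(a) | NOT(b) == NOT(a & b)   # NOT A OR NOT B == A NAND B
    return True
```

Running `check_de_morgan()` exercises all four input rows of the two-variable truth table.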

In order to design a logic circuit there are generally four basic steps involved, these are
1. Choose the representation of the binary data states 0 (False) and 1 (True) for the electronic circuit.
2. Define the truth table for the circuit.
3. Express the truth table in one of two alternative canonical forms (see Canonical Forms).
4. Construct the circuit according to this Boolean expression.

Note that combinatorial logic circuits have outputs that depend only on the present input values, not on any past input history, i.e. they have no memory. The representation of the binary data in the electronic circuit is usually pre-defined according to some convention. Most modern-day circuits employ so-called positive coding, whereby the binary state 1 (True) is represented by +5 Volts and the binary state 0 is represented by 0 Volts. The choice of coding is important however. Consider for example a circuit which behaves as the following truth table under positive coding

A    B    Y
0V   0V   0V
0V   5V   0V
5V   0V   0V
5V   5V   5V

By changing to negative coding (+5V = 0 (False), 0V = 1 (True)) the truth table becomes

A  B  Y
1  1  1
1  0  1
0  1  1
0  0  0

which is equivalent to the OR function. It is important to recognise here that the circuit has not been changed, only the choice of the representation of the binary numbers. It is also important to note that in switching from positive coding to negative coding the circuit functionality has gone from AND to OR and not, as might be expected, from AND to NAND.
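The AND-to-OR swap can be demonstrated by re-coding the same fixed voltage table; only the interpretation changes, never the circuit. The names below are illustrative.

```python
# The fixed circuit: output voltage for each pair of input voltages.
CIRCUIT = {(0, 0): 0, (0, 5): 0, (5, 0): 0, (5, 5): 5}

def positive(v):
    """Positive coding: +5 V -> 1, 0 V -> 0."""
    return 1 if v == 5 else 0

def negative(v):
    """Negative coding: +5 V -> 0, 0 V -> 1."""
    return 0 if v == 5 else 1

def behaves_as(coding, gate):
    """True if the circuit implements `gate` under the given coding."""
    return all(coding(CIRCUIT[va, vb]) == gate(coding(va), coding(vb))
               for va, vb in CIRCUIT)
```

Under `positive` the table is an AND gate; under `negative` the very same table is an OR gate, and notably not a NAND gate.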

Canonical Forms

The functionality of any logic circuit can be expressed in one of two alternative and equivalent canonical forms. These canonical forms consist of a Boolean algebraic expression. They are generally developed from a truth table. They are

The minterm form
Known as the first canonical form, this is a pure OR combination of minterms, where a minterm is an AND function that includes each variable once, in its normal or complemented form. The first canonical form is also known as the sum of products.

The maxterm form
Known as the second canonical form, this is a pure AND combination of maxterms, where a maxterm is an OR function that includes each variable once, in its normal or complemented form. The second canonical form is also known as the product of sums.

Consider the truth table below for the output Y from a combinatorial logic circuit comprising three inputs, A, B and C

A B C  Y
1 1 1  1
1 1 0  0
1 0 1  0
1 0 0  1
0 1 1  0
0 1 0  0
0 0 1  0
0 0 0  0

The first canonical form is developed from the output 1's in the truth table. As can be seen, Y is only 1 for the 1st and 4th rows of the truth table. Therefore the minterm (AND function) expressions for these two rows are formed and OR-ed together to give the minterm form for the circuit as

Y = A.B.C + A.(NOT B).(NOT C)

The corresponding circuit would be implemented with AND-OR logic i.e. with the outputs from one or more AND gates being OR-ed together to give the final output.

Consider the truth table below for the output Y from a combinatorial logic circuit comprising three inputs, A, B and C

A B C  Y
1 1 1  0
1 1 0  1
1 0 1  1
1 0 0  0
0 1 1  1
0 1 0  1
0 0 1  1
0 0 0  1

The second canonical form is developed from the output 0's in the truth table. As can be seen, Y is only 0 for the 1st and 4th rows of the truth table. Developing the maxterm expression here is slightly more complicated and there are two approaches. In the first approach we first develop the minterm expression for the output 0's (not 1's) in the truth table. For the truth table above this will be given by

NOT Y = A.B.C + A.(NOT B).(NOT C)

Then it is necessary to apply the rules of Boolean algebra for converting minterm expressions to maxterm expressions, as described in the Boolean Algebra summary. This leads to the final maxterm form for this truth table of

Y = (NOT A + NOT B + NOT C).(NOT A + B + C)

The second approach allows the maxterm form to be derived directly from the output 0's in the truth table using the following rules.

Take each line in the truth table where the output is 0 and
- invert the variables (e.g. if A is 1 then write NOT A, etc.)
- OR these variables together to form the maxterm
- build the second canonical form from the AND of these maxterms

In the case of the truth table above it is possible to go directly to the final maxterm form using this approach. The corresponding circuit would be implemented with OR-AND logic i.e. with the outputs from one or more OR gates being AND-ed together to give the final output.
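Both canonical forms can be read off a truth function mechanically. This is a sketch (the function name and the ~ notation for a complemented variable are my own): a 1-row contributes a minterm, a 0-row contributes a maxterm with every variable inverted.

```python
from itertools import product

def canonical_forms(fn, names):
    """Return (sum-of-products, product-of-sums) strings for a truth function."""
    minterms, maxterms = [], []
    for bits in product((1, 0), repeat=len(names)):
        if fn(*bits):   # output 1: AND the literals as they stand
            minterms.append(".".join(n if b else "~" + n
                                     for n, b in zip(names, bits)))
        else:           # output 0: invert each variable and OR them
            maxterms.append("(" + "+".join("~" + n if b else n
                                           for n, b in zip(names, bits)) + ")")
    return " + ".join(minterms), ".".join(maxterms)
```

For the first truth table above, where Y = 1 only for ABC = 111 and 100, this gives the minterm form A.B.C + A.~B.~C, matching the expression derived by hand.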

BOOLEAN ALGEBRA

de Morgan's theorems:

NOT A . NOT B = NOT (A + B)    and    NOT A + NOT B = NOT (A . B)

In general this may be expressed as:
- AND is exchanged for OR (and vice versa)
- each variable is complemented
- the whole expression is complemented

Order of precedence:
- expressions in brackets first
- AND before OR

Commutative and associative laws apply.

Distributive law: A.(B + C) = (A.B) + (A.C), and its dual A + (B.C) = (A + B).(A + C)

Most Boolean algebra relations fall into pairs, each being the dual of the other: the dual is obtained by exchanging AND with OR and 0 with 1.

Karnaugh Maps

Karnaugh maps or K-maps are a useful graphic technique for performing the minimization of a canonical form. They utilise Boolean theorems in a mapping procedure which results in a simplified Boolean expression being developed. There are five basic steps in the minimization procedure, these are
1. Develop the first canonical expression (minterm form) from the associated truth table. This has already been described in the section on canonical forms.
2. Plot 1s in the Karnaugh map for each minterm in the expression. Each AND-ed set of variables in the minterm expression is placed in the corresponding cell on the K-map. See below for more information on labelling the K-map.
3. Loop adjacent groups of 2, 4 or 8 1s together. A more detailed summary of looping rules can be found below.
4. Write one minterm per loop, eliminating variables where possible. When a variable and its complement are contained inside a loop then that variable can be eliminated (for that loop only); keep the variables that are left.
5. Logically OR the remaining minterms together to give the simplified minterm expression.

A detailed example of using Karnaugh maps for circuit simplification is available in the Solved Problems. In the case of simplifying a maxterm expression the steps are very similar, with only slight differences due to the OR-AND nature of maxterm expressions. The steps involved are
1. Develop the second canonical expression (maxterm form) from the associated truth table. This has already been described in the section on canonical forms.
2. Plot 1s in the Karnaugh map for each maxterm in the expression. Each OR-ed set of variables in the maxterm expression is placed in the corresponding cell on the K-map. See below for more information on labelling the K-map. Note that here you are propagating 0s in the truth table through as 1s in the K-map.
3. Loop adjacent groups of 2, 4 or 8 1s together. A more detailed summary of looping rules can be found below.
4. Write one maxterm per loop, eliminating variables where possible. When a variable and its complement are contained inside a loop then that variable can be eliminated (for that loop only); keep the variables that are left.
5. Logically AND the remaining maxterms together to give the simplified maxterm expression.

Special care must be taken when labelling Karnaugh maps. An incorrectly-labelled K-map will not allow the correct minimization to occur. The sequence used along each axis is binary 00,01,11,10. This is an example of Gray coding where only one bit is changed at a time. Gray coding will be discussed in a later lecture. For minterm expressions, the correct labelling for a Karnaugh map corresponding to a 4-input circuit (inputs A, B, C and D) is

whilst the correct labelling for a maxterm K-map for the same circuit is

the labelling for 2- and 3-input maps follows logically from this.

Specific rules apply to looping on a Karnaugh Map, they are summarised here

- Loops must contain 2^n cells set to 1. A single cell (loop of 2^0) cannot be simplified.
- A loop of 2 (2^1) is independent of 1 variable; a loop of 4 (2^2) is independent of 2 variables. In general a loop of 2^n cells is independent of n variables.
- Using the largest loops possible will give the simplest functions.
- All cells in the K-map set to 1 must be included in at least one loop when developing the minterm or maxterm form.
- Loops may overlap, provided each loop contains at least one cell not included in any other loop. Any loop that has all of its cells included in other loops is redundant.
- Loops must be square or rectangular. Diagonal or L-shaped loops are invalid.
- The edges of a K-map are considered to be adjacent. Therefore a loop can leave at the top of a K-map and re-enter at the bottom, and similarly for the two sides.
- There may be different ways of looping a K-map, since for any given circuit there may not be a unique minimal form.

Question

Prove de Morgan's theorems

NOT A . NOT B = NOT (A + B)

and

NOT A + NOT B = NOT (A . B)

Answer

For the first expression the relevant truth table is given below; the equivalence between the entries in the NOT A . NOT B column and the NOT (A + B) column proves the first theorem.

A  B  NOT A  NOT B  NOT A . NOT B  A + B  NOT (A + B)
0  0    1      1          1          0         1
0  1    1      0          0          1         0
1  0    0      1          0          1         0
1  1    0      0          0          1         0

The truth table for the second expression is given below; again the equivalence of the NOT A + NOT B column and the NOT (A . B) column proves the theorem.

A  B  NOT A  NOT B  NOT A + NOT B  A . B  NOT (A . B)
0  0    1      1          1          0         1
0  1    1      0          1          0         1
1  0    0      1          1          0         1
1  1    0      0          0          1         0

Question

Prove de Morgan's theorems

NOT A . NOT B = NOT (A + B)

and

NOT A + NOT B = NOT (A . B)

Answer

Considering the first expression above, the Venn diagrams for NOT A and NOT B each shade the part of the Universe lying outside the corresponding area. The Boolean expression NOT A . NOT B requires the logical AND of these two diagrams which, in terms of a Venn diagram, is given by the part common to both, i.e. the region of the Universe outside both the A and B areas.

For the right hand side of the expression, first the logical OR of A and B is required. In a Venn diagram representation a logical OR is performed by taking any shaded part of either diagram, giving the union of the A and B areas. The complement NOT (A + B) is then represented by all those parts of the Universe not populated by this union, which is again exactly the region outside both A and B. The two sides of the expression define identical areas on the diagram, and so NOT A . NOT B = NOT (A + B).

In the case of the second expression the approach is identical. NOT A + NOT B is the union of the area outside A with the area outside B, which covers the whole Universe except the overlap of A and B. Meanwhile A . B is that overlap (the intersection of A and B), and so NOT (A . B) is everything outside it. Again the two sides define identical areas, and so NOT A + NOT B = NOT (A . B).

Question

A circuit with inputs A, B, C and D is to be designed such that its output Y is given according to the following truth table.

A B C D  Y      A B C D  Y
0 0 0 0  0      1 0 0 0  0
0 0 0 1  1      1 0 0 1  1
0 0 1 0  1      1 0 1 0  1
0 0 1 1  1      1 0 1 1  1
0 1 0 0  1      1 1 0 0  0
0 1 0 1  1      1 1 0 1  1
0 1 1 0  0      1 1 1 0  0
0 1 1 1  0      1 1 1 1  0

Answer

The first step is to develop the first canonical expression (minterm form) for the truth table, one minterm per row in which Y = 1 (rows 0001, 0010, 0011, 0100, 0101, 1001, 1010, 1011 and 1101), OR-ed together.

Next draw a correctly-labelled 4-input Karnaugh map and populate it with 1s corresponding to the individual minterms in the above expression. This gives

At this stage it is necessary to loop all of the 1s in the K-map. Remember that in order to simplify the Boolean expression as much as possible it is necessary to draw the largest loops possible. It is also necessary to include each 1 in the K-map in at least one loop. A summary of Karnaugh map looping rules is available in the lecture summary. The looped Karnaugh map looks like

The red loop is a loop of 4, i.e. 2^2, and therefore allows two variables to be eliminated. The loop contains both A and B and their complements, and so the resulting minterm is NOT C AND D. The green loop is a loop of 2, and so one variable can be eliminated, leaving 3 variables in the minterm. The K-map shows that this loop is independent of D, and so the minterm is given by NOT A AND B AND NOT C. The purple loop is again a loop of 4. It contains A, NOT A, D and NOT D, and so these two variables can be eliminated. The corresponding minterm is NOT B AND C. The blue loop is totally redundant, as all of the 1s in the blue loop are also contained in another loop; therefore there is no minterm from the blue loop in the simplified minterm form. Finally the simplified minterm expression is derived by writing one term per loop and OR-ing the final terms together to give

Y = (NOT C).D + (NOT A).B.(NOT C) + (NOT B).C
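The result of the looping can be checked against every row of the original truth table. This is a sketch using 0/1 arithmetic; the function names are my own.

```python
from itertools import product

def y_truth(a, b, c, d):
    """Output column of the question's truth table, indexed by (A, B, C, D)."""
    table = {(0,0,0,0): 0, (0,0,0,1): 1, (0,0,1,0): 1, (0,0,1,1): 1,
             (0,1,0,0): 1, (0,1,0,1): 1, (0,1,1,0): 0, (0,1,1,1): 0,
             (1,0,0,0): 0, (1,0,0,1): 1, (1,0,1,0): 1, (1,0,1,1): 1,
             (1,1,0,0): 0, (1,1,0,1): 1, (1,1,1,0): 0, (1,1,1,1): 0}
    return table[a, b, c, d]

def y_simplified(a, b, c, d):
    """(NOT C AND D) OR (NOT A AND B AND NOT C) OR (NOT B AND C)."""
    return ((1 - c) & d) | ((1 - a) & b & (1 - c)) | ((1 - b) & c)
```

Comparing the two functions over all 16 input combinations confirms that the K-map simplification is exact.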

Question

The executive board of a small company consists of

In order for a decision to be made, A requires the support of one other board member; similarly B requires the support of two other board members. Construct a truth table for the decision-making strategies and indicate clearly where voting results in the decision going
i. against A
ii. against B

Answer

In the truth table below, A = 1 means that A voted in favour of a decision, etc.

Y = 0 means that the decision is not carried
Y = 1 means that the decision is carried
Y = A means that the decision goes against A's wishes
Y = B means that the decision goes against B's wishes

A B C D  Y      A B C D  Y
0 0 0 0  0      1 0 0 0  0A
0 0 0 1  0      1 0 0 1  1B
0 0 1 0  0      1 0 1 0  1B
0 0 1 1  0      1 0 1 1  1B
0 1 0 0  0B     1 1 0 0  1
0 1 0 1  0B     1 1 0 1  1
0 1 1 0  0B     1 1 1 0  1
0 1 1 1  1A     1 1 1 1  1
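The voting rules can be encoded directly and used to regenerate the table. This is a sketch: `decision` returns 1 when the motion is carried, and the two comprehensions pick out the rows marked A and B above.

```python
from itertools import product

def decision(a, b, c, d):
    """Carried if A votes yes with >= 1 supporter, or B votes yes with >= 2."""
    return int((a and b + c + d >= 1) or (b and a + c + d >= 2))

# Rows where the decision is carried against A's or B's wishes.
carried_against_a = [bits for bits in product((0, 1), repeat=4)
                     if decision(*bits) and bits[0] == 0]
carried_against_b = [bits for bits in product((0, 1), repeat=4)
                     if decision(*bits) and bits[1] == 0]
```

The single 1A row (0111) and the three 1B rows (1001, 1010, 1011) of the table fall out of these comprehensions.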
