Q: What do big-endian and little-endian mean, and why must the byte order be specified for Unicode/UTF-16 encoding?
Ans: Big-endian and little-endian are terms that describe the order in which a
sequence of bytes is stored in computer memory.
1. Big-endian is an order in which the "big end" (most significant value in
the sequence) is stored first (at the lowest storage address).
2. Little-endian is an order in which the "little end" (least significant value in
the sequence) is stored first.
For example, in a big-endian computer, the two bytes required for the
hexadecimal number 4F52 would be stored as 4F52 in storage (if 4F is
stored at storage address 1000, 52 will be at address 1001). In a little-endian
system, it would be stored as 524F (52 at address 1000, 4F at 1001).
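The 4F52 example can be reproduced in a few lines of Python; the `struct` module's `>` and `<` prefixes select big- and little-endian packing respectively:

```python
import struct

value = 0x4F52

big = struct.pack('>H', value)     # most significant byte first: 4F then 52
little = struct.pack('<H', value)  # least significant byte first: 52 then 4F

print(big.hex(), little.hex())     # 4f52 524f
```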
Big or little endian needs to be specified for Unicode/UTF-16 encoding
because, for character codes that use more than a single byte, there is a
choice of whether to read/write the most significant byte first or last.
UTF-16 requires this to be specified because its code units are two bytes
wide. (Note, however, that UTF-8 code units are always 8 bits/one byte in
length, even though a character can occupy several of them, therefore there
is no problem with endianness.) If the encoder of a stream of bytes
representing Unicode text and the decoder do not agree on which convention
is being used, the wrong character codes will be interpreted. For this reason,
either the convention of endianness is known beforehand or, more
commonly, a byte order mark is placed at the beginning of the Unicode text
file/stream to indicate whether big- or little-endian order is being used.
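The byte order mark mechanism can be demonstrated directly; the text `'Hi'` here is just an arbitrary example string:

```python
import codecs

text = 'Hi'
# the same text serialized under each convention, each preceded by its BOM
be_stream = codecs.BOM_UTF16_BE + text.encode('utf-16-be')  # fe ff 00 48 00 69
le_stream = codecs.BOM_UTF16_LE + text.encode('utf-16-le')  # ff fe 48 00 69 00

# a BOM-aware decoder recovers the original text from either stream
print(be_stream.decode('utf-16'), le_stream.decode('utf-16'))
```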
Q: Why are transfer of control instructions needed?
Ans: The reasons behind transfer of control instructions are:
i) The instructions implementing an application often have to be executed
more than once.
ii) If the application is small, writing the same instructions again and again
may be tolerable,
iii) but in the case of a complex application it is not possible. Hence, loops
are used to execute a set of instructions repeatedly to process the data.
Q: A digital computer has a common bus system for 16 registers of 32 bits each.
The bus is constructed with multiplexers.
(i) How many selection inputs are there in each multiplexer?
(ii) What sizes of multiplexers are needed?
(iii) How many multiplexers are there in the bus?
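The question is left unanswered above; a sketch of the standard reasoning, assuming one multiplexer per bus line:

```python
import math

registers = 16   # number of registers attached to the bus
word_bits = 32   # width of each register

# (i) selecting 1 of 16 sources needs log2(16) selection inputs
select_inputs = math.ceil(math.log2(registers))

# (ii) each multiplexer picks one bit from each of the 16 registers: 16-to-1
mux_size = registers

# (iii) one multiplexer per bit of the 32-bit bus
mux_count = word_bits

print(select_inputs, mux_size, mux_count)  # 4 16 32
```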

Q: How is the syndrome for the Hamming code interpreted? How is the
Hamming code superior in terms of error correction?
Ans: When data are to be written into memory, a calculation (a function f) is
performed on the data to produce a code, and both the code and the data are
stored. Thus, if an M-bit word of data is to be stored and the code is K bits
long, the actual size of the stored word is M + K bits.
When the previously stored word is read out, the code is used to detect and
correct errors. A new set of K code bits is generated from the M data bits
and compared with the fetched code bits. The comparison gives one of the
following three results:
i) No errors are detected. The fetched data bits are sent out.
ii) An error is detected, and it is possible to correct the error. The data bits
and error-correction bits are fed into a corrector, which produces a corrected
set of M bits to be sent out.
iii) An error is detected, but it is not possible to correct it. This condition is
reported.
Codes that operate in the above way are called error-correcting codes. A
code is characterized by the number of bit errors in a word that it can correct
and detect. The simplest of the error-correcting codes is the Hamming code,
developed by Richard Hamming at Bell Laboratories. A Venn diagram in
William Stallings' book shows the use of this code on 4-bit words (M = 4).
With three intersecting circles, there are seven compartments. We assign the
4 data bits to the inner compartments. The remaining compartments are
filled with parity bits. Each parity bit is chosen so that the total number of
1s in its circle is even. For instance, if circle A includes three data 1s, the
parity bit in that circle is set to 1. Now, if an error changes one of the data
bits, it is easily found. Checking the parity bits may reveal errors in circles
A and C but not in circle B; only one of the compartments is in A and C but
not in B, so the error can be corrected by changing that bit. The Hamming
code can be used to detect and correct single-bit errors in 8-bit words. A
bit-by-bit comparison of the stored and recomputed check bits is done by
taking the XOR of the two inputs. The result is called the syndrome word.
Each bit of the syndrome is 0 or 1 according to whether there is a match in
that position between the two inputs. The syndrome word is K bits wide and
has a range of 0 to 2^K − 1. The value 0 indicates that no error was
detected, leaving 2^K − 1 values to indicate, if there is an error, which bit
was in error.
Q: How to generate a 4-bit syndrome for an 8-bit data word?
Ans: i) If the syndrome contains all 0s, no error has been detected.
ii) If the syndrome contains one and only one bit set to 1, then an error has
occurred in one of the four check bits. No correction is needed.
iii) If the syndrome contains more than one bit set to 1, then the numerical
value of the syndrome indicates the position of the data bit in error. This data
bit is inverted for correction.
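The scheme above can be sketched for 8 data bits and 4 check bits (a (12,8) Hamming code with check bits at positions 1, 2, 4 and 8); the function names are illustrative:

```python
def hamming_encode(data):
    """Spread an 8-bit value over positions 1..12, then fill in even-parity
    check bits at the power-of-two positions."""
    data_positions = [p for p in range(1, 13) if p & (p - 1)]  # 3,5,6,7,9,10,11,12
    bits = {p: (data >> i) & 1 for i, p in enumerate(data_positions)}
    for c in (1, 2, 4, 8):
        # check bit c covers every position whose binary index has bit c set
        bits[c] = sum(bits[p] for p in data_positions if p & c) % 2
    return bits

def syndrome(bits):
    """Recompute each parity circle; a failing circle sets its bit of the
    syndrome. The syndrome equals the position of a single-bit error
    (0 means no error)."""
    s = 0
    for c in (1, 2, 4, 8):
        if sum(bits[p] for p in range(1, 13) if p & c) % 2:
            s |= c
    return s

word = hamming_encode(0b10110011)
print(syndrome(word))   # 0: stored and recomputed check bits match
word[10] ^= 1           # flip one bit "in transit"
print(syndrome(word))   # 10: the syndrome names the erroneous position
```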
Q: Discuss difference between dynamic and static RAM in terms of
characteristics such as speed, size and cost.
Ans: RAM, or random access memory, is a kind of computer memory in which
any byte of memory can be accessed without needing to access the previous
bytes as well. RAM is a volatile medium for storing digital data, meaning
the device needs to be powered on for the RAM to work. DRAM, or
Dynamic RAM, is the most widely used RAM that consumers deal with.
DDR3 is an example of DRAM. SRAM, or static RAM, offers better
performance than DRAM because DRAM needs to be refreshed periodically
when in use, while SRAM does not. However, SRAM is more expensive and
less dense than DRAM, so SRAM sizes are orders of magnitude lower than
DRAM. Dynamic RAM size is 1GB to 2GB in smartphones and tablets
while 4GB to 16GB in laptops. Static RAM is 1MB to 16MB in size.

Q: What is the basic advantage of using interrupt initiated data transfer over
transfer under program control without an interrupt?
Ans: In interrupt-initiated data transfer, the processor verifies the request,
transfers control to an ISR to perform the task, and then resumes its useful
work; without an interrupt, the processor has to waste its time performing
the whole task itself. For example, when a print command is given under
interrupt-initiated transfer, the processor hands control to the ISR and then
resumes its other work, whereas without interrupts the processor has to wait
until the whole document has been transferred to the printer.

Q: How is the memory organized?


Ans: a) Memory consists of many millions of storage cells, each of which can
store one bit of information, having the value 0 or 1. A single bit represents
a very small amount of information.
b) For this reason the memory is organized so that a group of n bits can be
stored or retrieved in a single basic operation. Each group of n bits is
referred to as a word of information, and n is called the word length.
c) A memory-word diagram is expected here; refer to page 34 of the
Computer Organization book by Zaky.
d) Memory organisation
i) The memory unit is an essential component in digital computers, since it
is needed for storing programs and data. Two or three levels of memory,
such as main memory, secondary memory and cache memory, are provided
in a digital computer. The main memory is a fast memory.
ii) Main memory stores the programs to be executed along with their data.
It also stores necessary programs of system software. The cache memory is
placed between the CPU and the main memory. Secondary memory is
permanent storage used to store programs and data that are used
infrequently.
iii) To identify the behaviour of the various memories, certain characteristics
are considered. On the basis of their location inside the computer, memory
can be placed in four groups:
1) CPU registers: these high-speed registers in the CPU work as memory
for temporary storage of instructions and data. The data can be read from or
written into a register within a single clock cycle.
2) Main memory or primary memory: main memory is a large, fast-access
external memory that stores programs and data. This memory is slower than
CPU registers because of its large storage capacity, typically between 1 and
2^10 megabytes.
Q: Give the difference between sequential, random and direct access.
Q: What is the difference between isolated I/O and memory mapped I/O?
Why does DMA have priority over the CPU when both request a memory
transfer?
Q: Explain the difference between hardwired control and microprogrammed
control. How do we relate instructions and micro-operations?
What is the overall function of a processor’s control unit?
Q: Many pipelined processors use four to six stages. Others divide instruction
execution into smaller steps and use more pipeline stages and a faster clock.
Considering the above scenario, for fast operations what would you suggest
in terms of pipeline stages? Also discuss the need for instruction pipelining.
Q: What are the different stages of a pipe? Why is an assembly line in a
manufacturing plant referred to as pipelining?
Q: A processor has 16 registers, an ALU with 16 logic and 16 arithmetic
functions and a shifter with 8 operations, all connected by an internal
processor bus. Design a microinstruction format to specify the various
micro-operations for the processor.
Q: How are the three techniques for performing input/output defined and
differentiated?
Ans: The three techniques for performing I/O are given below:
i) Programmed I/O: The processor issues an I/O command, on behalf of a
process, to an I/O module; that process then busy-waits for the operation to
be completed before proceeding.
ii) Interrupt-driven I/O: The processor issues an I/O command on behalf of a
process, continues to execute subsequent instructions, and is interrupted by
the I/O module when the latter has completed its work. The subsequent
instructions may be in the same process, if it is not necessary for that process
to wait for the completion of the I/O. Otherwise, the process is suspended
pending the interrupt and other work is performed.
iii) Direct memory access (DMA): A DMA module controls the exchange of
data between main memory and an I/O module. The processor sends a
request for the transfer of a block of data to the DMA module and is
interrupted only after the entire block has been transferred.
Memory Mapped I/O and Isolated I/O are two methods of performing input-
output operations between CPU and installed peripherals in the system.
Memory mapped I/O uses the same address bus to connect both primary
memory and memory of hardware devices. Thus the instruction to address a
section or portion or segment of RAM can also be used to address a memory
location of a hardware device. Isolated I/O uses separate instruction classes
to access primary memory and device memory. In this case, I/O devices
have separate address space either by separate I/O pin on CPU or by entire
separate bus. As it separates general memory addresses from I/O device
addresses, it is called isolated I/O.
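The two schemes can be caricatured in a few lines of Python; the address ranges, class names and method names here are invented purely for illustration:

```python
class MemoryMappedBus:
    """One address space; addresses at or above IO_BASE reach the device."""
    IO_BASE = 0xF000

    def __init__(self):
        self.ram, self.device = {}, {}

    def write(self, addr, value):
        # the same 'instruction' serves both RAM and device registers
        space = self.device if addr >= self.IO_BASE else self.ram
        space[addr] = value

    def read(self, addr):
        space = self.device if addr >= self.IO_BASE else self.ram
        return space.get(addr, 0)


class IsolatedBus:
    """Two address spaces; I/O needs its own instruction class (cf. x86 IN/OUT)."""

    def __init__(self):
        self.ram, self.ports = {}, {}

    def write_mem(self, addr, value):
        self.ram[addr] = value

    def out(self, port, value):
        # port 0x10 is distinct from memory address 0x10
        self.ports[port] = value

    def read_mem(self, addr):
        return self.ram.get(addr, 0)

    def inp(self, port):
        return self.ports.get(port, 0)
```

With the isolated bus, port 0x10 and memory address 0x10 coexist independently; with the memory-mapped bus, a single load/store instruction class reaches both RAM and device registers.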
We can broadly classify external devices into three categories:
i) Human readable: Suitable for communicating with the computer user.
ii) Machine readable: Suitable for communicating with equipment.
iii) Communication: Suitable for communicating with remote devices.
Q: When a device interrupt occurs, how does the processor determine which
device issued the interrupt?
Ans: There is a special dedicated chip for processing input-device interrupts,
called the PIC (Programmable Interrupt Controller).
When you hit a key, the interrupt is first sent to the PIC; the PIC then sends
a signal on the INT pin of the processor (making the processor aware that
some device has raised an interrupt). When the processor is free, it sends a
READY signal to the PIC; the PIC then sends information identifying the
interrupting device and the interrupt type to the processor via the data bus,
and finally the interrupt is serviced.
There are various types of interrupts with different priorities; for example, a
keyboard interrupt has higher priority than the autorun interrupt of a pen
drive. Hence, for controlling interrupts according to their priorities, there
are a total of 8 levels of priority in the PIC.
Q: When a DMA module takes control of a bus and while it retains control of
the bus, what does the processor do?
Ans: Ideally, a DMA controller is a peripheral that does not interact with or
interfere with any other part of the microprocessor or microcontroller,
including the main bus; the ARM processors follow this concept.
Some low-power or low-cost microprocessors and microcontrollers have a
feature called DMA that does occupy the bus. These have been implemented
by various different methods, and even though they prevent any other core
activity while they hold the bus, such DMAs are still useful. Some small,
low-power microcontrollers work this way.
Q: What is parity bit? How does SDRAM differ from ordinary DRAM?
Ans: i) A parity bit is an extra bit, with value 1 or 0, that the transmitter adds
to a binary word that is to be stored and retrieved or transmitted and
received, to provide a small amount of error detection. The parity-check
method can detect one error and is used in both the asynchronous
transmission method and the character-oriented synchronous transmission
method.
ii) SDRAM is dynamic RAM that is synchronized to the system clock
(ordinary DRAM is asynchronous), as used on DIMMs; SRAM, by contrast,
is the static RAM used in processor memory caches. Rather than processing
data once for each beat of the system clock, as regular SDRAM does, DDR
SDRAM processes data when the beat rises and again when it falls, doubling
the data rate of memory. The generations are not compatible, and the
different notch positions keep someone from installing a DDR2 or DDR3
DIMM in the wrong memory slot.
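The parity scheme described in part (i) can be sketched in a few lines; this is a minimal illustration of even parity, not tied to any particular transmission standard:

```python
def add_even_parity(bits):
    """Append a bit so the total number of 1s in the word is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """A received word passes the check if its 1s count is still even."""
    return sum(word) % 2 == 0

word = add_even_parity([1, 0, 1, 1])   # three 1s -> parity bit is 1
print(parity_ok(word))                 # True
word[2] ^= 1                           # a single-bit error in transmission
print(parity_ok(word))                 # False: detected, but not locatable
```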
Q: Many pipelined processors use four to six stages. Others divide instruction
execution into smaller steps and use more pipeline stages and a faster clock.
Considering the above scenario, for fast operations what would you suggest
in terms of pipeline stages?
Ans: Many pipelined processors use four to six stages. Others divide instruction
execution into smaller steps and use more pipeline stages and a faster
clock. Considering the above scenario, for fast operations, there are two
pipeline stages in one clock cycle.
The processing of an instruction need not be divided into only two steps. To
gain further speed-up, the pipeline must have more stages.
Let us consider the following decomposition of the instruction execution:
 Fetch instruction (FI): Read the next expected instruction into a
buffer.
 Decode instruction (DI): Determine the opcode and the operand
specifiers.
 Calculate operand (CO): Calculate the effective address of each
source operand.
 Fetch operands (FO): Fetch each operand from memory.
 Execute instruction (EI): Perform the indicated operation.
 Write operand (WO): Store the result in memory.
There will be six different stages for these six subtasks. For the sake of
simplicity, let us assume an equal duration for all the subtasks. If the six
stages are not of equal duration, there will be some waiting involved at
various pipeline stages. Instruction pipelining is similar to the use of an
assembly line in a manufacturing plant. An assembly line takes advantage of
the fact that a product goes through various stages of production.
By laying the production process out in an assembly line, products at various
stages can be worked on simultaneously. This process is referred to as
pipelining because, as in a pipeline, new inputs are accepted at one end
before previously accepted inputs appear as outputs at the other end.
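The assembly-line intuition can be quantified. With k equal-duration stages, the first instruction takes k cycles to fill the pipe and each subsequent one completes once per cycle; this simplified model ignores hazards and stalls:

```python
def pipelined_cycles(n, k):
    """Cycles to finish n instructions on a k-stage pipeline."""
    return k + (n - 1)

def speedup(n, k):
    """Ratio versus unpipelined execution, which takes n * k cycles."""
    return n * k / pipelined_cycles(n, k)

print(pipelined_cycles(100, 6))   # 105 cycles instead of 600
print(round(speedup(100, 6), 2))  # speedup approaches k as n grows
```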
Q: Consider a statement of the form IF A>B THEN action 1 ELSE action 2.
Write a sequence of assembly language instructions, first using branch
instructions only, then using the conditional instructions available on the
ARM processor.
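A possible answer sketch is given below. The register assignments are assumed (R0 holds A, R1 holds B) and the two "actions" are stand-in MOVs; CMP with signed A and B makes GT/LE the right condition codes:

```armasm
        ; (a) using branch instructions only
        CMP     R0, R1          ; compare A with B
        BLE     else_part       ; skip action 1 unless A > B
        MOV     R2, #1          ; action 1
        B       done
else_part:
        MOV     R2, #2          ; action 2
done:

        ; (b) using ARM conditional execution, no branches
        CMP     R0, R1
        MOVGT   R2, #1          ; action 1, executed only if A > B
        MOVLE   R2, #2          ; action 2, executed only if A <= B
```

The conditional-execution version avoids branch-penalty stalls for short actions, which is exactly why ARM provides predicated instructions.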
Q: Use Booth's algorithm to multiply 25 (multiplicand) by 13 (multiplier),
where each number is represented using 6 bits.
Ans: Booth's multiplication algorithm multiplies two signed binary numbers in
two's complement notation.
Booth's algorithm can be implemented by repeatedly adding (with ordinary
unsigned binary addition) one of two predetermined values A and S to a
product P, then performing a rightward arithmetic shift on P. Let m and r be
the multiplicand and multiplier, respectively; and let x and y represent the
number of bits in m and r.
1. Determine the values of A and S, and the initial value of P. All of
these numbers should have a length equal to (x + y + 1).
1. A: Fill the most significant (leftmost) x bits with the value of
m. Fill the remaining (y + 1) bits with zeros.
2. S: Fill the most significant x bits with the value of (−m) in
two's complement notation. Fill the remaining (y + 1) bits
with zeros.
3. P: Fill the most significant x bits with zeros. To the right of
this, append the value of r. Fill the least significant (rightmost) bit with a
zero.
2. Determine the two least significant (rightmost) bits of P.
1. If they are 01, find the value of P + A. Ignore any overflow.
2. If they are 10, find the value of P + S. Ignore any overflow.
3. If they are 00, do nothing. Use P directly in the next step.
4. If they are 11, do nothing. Use P directly in the next step.
3. Arithmetically shift the value obtained in step 2 a single place to the
right. Let P now equal this new value.
4. Repeat steps 2 and 3 until they have been done y times.
5. Drop the least significant (rightmost) bit from P. This is the product
of m and r.
Example:
Find 3 × (−4), with m = 3 and r = −4, and x = 4 and y = 4:
m = 0011, -m = 1101, r = 1100.
A = 0011 0000 0
S = 1101 0000 0
P = 0000 1100 0
Perform the loop four times:
1. P = 0000 1100 0. The last two bits are 00.
• P = 0000 0110 0. Arithmetic right shift.
2. P = 0000 0110 0. The last two bits are 00.
• P = 0000 0011 0. Arithmetic right shift.
3. P = 0000 0011 0. The last two bits are 10.
• P = 1101 0011 0. P = P + S.
• P = 1110 1001 1. Arithmetic right shift.
4. P = 1110 1001 1. The last two bits are 11.
• P = 1111 0100 1. Arithmetic right shift.
• The product is 1111 0100, which is −12.
The above-mentioned technique is inadequate when the multiplicand is the
most negative number that can be represented (e.g. if the multiplicand has 4
bits then this value is −8). One possible correction to this problem is to add
one more bit to the left of A, S and P. This then follows the implementation
described above, with modifications in determining the bits of A and S; e.g.,
the value of m, originally assigned to the first x bits of A, will be assigned to
the first x+1 bits of A. Below, the improved technique is demonstrated by
multiplying −8 by 2 using 4 bits for the multiplicand and the multiplier:
A = 1 1000 0000 0
S = 0 1000 0000 0
P = 0 0000 0010 0
Perform the loop four times:
1. P = 0 0000 0010 0. The last two bits are 00.
• P = 0 0000 0001 0. Right shift.
2. P = 0 0000 0001 0. The last two bits are 10.
• P = 0 1000 0001 0. P = P + S.
• P = 0 0100 0000 1. Right shift.
3. P = 0 0100 0000 1. The last two bits are 01.
• P = 1 1100 0000 1. P = P + A.
• P = 1 1110 0000 0. Right shift.
4. P = 1 1110 0000 0. The last two bits are 00.
• P = 1 1111 0000 0. Right shift.
• The product is 11110000 (after discarding the first and the last bit)
which is −16.
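The procedure above, including the extra guard bit, can be sketched in Python. It follows the steps literally rather than being an optimized multiplier, and it also answers the original question of 25 × 13 with 6-bit operands:

```python
def booth_multiply(m, r, x, y):
    """Multiply m (x-bit) by r (y-bit), both two's complement, via Booth's
    algorithm with the extra guard bit described above."""
    xb = x + 1                        # one extra bit guards the most negative m
    total = xb + y + 1
    mask = (1 << total) - 1
    A = (m & ((1 << xb) - 1)) << (y + 1)
    S = ((-m) & ((1 << xb) - 1)) << (y + 1)
    P = (r & ((1 << y) - 1)) << 1     # r in the middle, one zero bit appended
    for _ in range(y):
        last_two = P & 0b11
        if last_two == 0b01:
            P = (P + A) & mask        # ignore any overflow
        elif last_two == 0b10:
            P = (P + S) & mask        # ignore any overflow
        sign = P >> (total - 1)       # arithmetic right shift by one place
        P = ((P >> 1) | (sign << (total - 1))) & mask
    P >>= 1                           # drop the appended low bit
    P &= (1 << (x + y)) - 1           # drop the guard bit
    if P >> (x + y - 1):              # reinterpret as two's complement
        P -= 1 << (x + y)
    return P

print(booth_multiply(3, -4, 4, 4))   # -12, matching the first worked example
print(booth_multiply(-8, 2, 4, 4))   # -16, the guard-bit example
print(booth_multiply(25, 13, 6, 6))  # 325, the question at the top
```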
Q: Need of variable-length instruction format.
Ans: One of the advantages that x86 has over most RISC chips is instruction
density. x86 instructions are variable-length, which means that common
instructions typically have a shorter encoding and so take up less space in
instruction cache. Therefore, x86 chips need smaller instruction caches for
the same performance. An instruction cache miss can cause the processor to
stall for 150 or so cycles—if that happens often, your processor throughput
drops dramatically. The cost of supporting multiple instruction sets is
increased complexity of the instruction decoder. The ARM instruction
decoder takes a 32-bit word and just needs to test a few bits to know where
to dispatch the instruction. The x86 decoder needs to read the bits in
sequence to find the breaks between instructions. On a chip like the Atom,
the decoder can account for around 20% of the total power consumption.
Q: How is redundancy achieved in a RAID system?
Ans: The basic idea behind RAID is to combine multiple small, inexpensive disk
drives into an array to accomplish performance or redundancy goals not
attainable with one large and expensive drive. This array of drives appears to
the computer as a single logical storage unit or drive. There is no
redundancy in RAID 0. For RAID 1, redundancy is achieved by having two
identical copies of all data. For higher levels, redundancy is achieved by the
use of error-correcting codes.
Level 4 -- Level 4 uses parity concentrated on a single disk drive to
protect data. It is better suited to transaction I/O than to large file
transfers. Because the dedicated parity disk represents an inherent
bottleneck, level 4 is seldom used without accompanying technologies such
as write-back caching. Although RAID level 4 is an option in some RAID
partitioning schemes, it is not an option allowed in Red Hat Linux RAID
installations. Array capacity is equal to the capacity of the member disks,
minus the capacity of one member disk.
Level 5 -- The most common type of RAID. By distributing parity across
some or all of an array's member disk drives, RAID level 5 eliminates the
write bottleneck inherent in level 4. The only bottleneck is the parity
calculation process; with modern CPUs and software RAID, that is not a
very big bottleneck. As with level 4, the result is asymmetrical performance,
with reads substantially outperforming writes. Level 5 is often used with
write-back caching to reduce the asymmetry. Array capacity is equal to the
capacity of the member disks, minus the capacity of one member disk.
RAID stands for Redundant Array of Independent Disks. In RAID, with the
use of multiple disks, there is a wide variety of ways in which the data can
be organized and in which redundancy is added to improve reliability.
In RAID, techniques such as striping, mirroring and striping with parity
are used to achieve redundancy.
i) In RAID level 0, there is no redundancy, because it does not contain any
duplicates or parity bits.
ii) In RAID level 1, redundancy is achieved by keeping two identical
duplicate copies of the data.
iii) In RAID level 2, redundancy is achieved by data striping with error
correction.
iv) In RAID level 3, redundancy is achieved by parity-check
information.
v) In level 4, block-level striping with parity is used to achieve redundancy.
vi) In level 5, striping with distributed parity is used to achieve redundancy.
vii) In RAID level 6, striping with double parity is used to achieve
redundancy.
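The parity redundancy of levels 4 and 5 is just bytewise XOR across the blocks of a stripe; a minimal sketch:

```python
from functools import reduce

def xor_parity(blocks):
    """RAID 4/5 parity: the bytewise XOR of all data blocks in a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

def rebuild_lost_block(surviving, parity):
    """XOR of the parity with every surviving block recovers the lost one."""
    return xor_parity(surviving + [parity])

stripe = [b'\x01\x02', b'\x0f\xf0', b'\xaa\x55']   # three data blocks
p = xor_parity(stripe)
# lose the middle block, then rebuild it from the rest plus parity
print(rebuild_lost_block([stripe[0], stripe[2]], p) == stripe[1])  # True
```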
Q: How do we relate instructions and micro-operations?
Ans: The main task performed by the CPU is the execution of instructions. The
main question is: how are these instructions executed by the CPU? The
above question can be broken down into two simpler questions. These are:
What are the steps required for the execution of an instruction? How are
these steps performed by the CPU? The answer to the first question lies in
the fact that each instruction execution consists of several steps. Together
they constitute an instruction cycle. A micro-operation is the smallest
operation performed by the CPU. These operations put together execute an
instruction. For answering the second question, we must have an
understanding of the basic structure of a computer. The CPU consists of an
Arithmetic Logic Unit, the control unit and operational registers.

Q: What is the overall function of a processor's control unit?
Ans: The control unit (CU) is a component of a computer's central processing
unit (CPU) that directs the operation of the processor. It tells the computer's
memory, arithmetic/logic unit and input and output devices how to respond
to a program's instructions.
Q: A particular system is controlled by an operator through commands entered
from a keyboard. The average number of commands entered in an 8-hour
interval is 60.
a) Suppose the processor scans the keyboard every 100 ms. How many times
will the keyboard be checked in an 8-hour period?
b) By what fraction would the number of processor visits to the keyboard be
reduced if interrupt driven input/output were used?
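The question is left unanswered above; the arithmetic, under the stated assumptions (one scan every 100 ms over the full 8 hours, one interrupt per command):

```python
scans_per_second = 1000 // 100          # one keyboard scan every 100 ms
polls = 8 * 3600 * scans_per_second     # (a) checks in an 8-hour period
commands = 60                           # with interrupts, one visit per command
reduction = 1 - commands / polls        # (b) fraction of visits eliminated

print(polls)                 # 288000
print(round(reduction, 5))   # 0.99979
```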
Q: Why is RISC architecture better suited for pipeline processing than CISC?
Ans: RISC uses simple instructions which can be executed within one clock
cycle, whereas the primary goal of CISC is to complete a task in as few
lines as possible. For example, consider the task of multiplying two
numbers. A CISC
architecture would come prepared with a specific instruction for doing this
task. Whereas in RISC you would need to perform three or four instructions
for completing the same task.
RISC architectures lend themselves to pipelining more readily than CISC
architectures for many reasons. Because CISC architectures have larger,
more complex instruction sets than RISC architectures, the time required to
fetch and decode a CISC instruction in a pipeline is unpredictable.
The difference in instruction lengths with CISC will hinder the fetch and
decode sections of a pipeline: a single-byte instruction following an 8-byte
instruction must be handled so as not to slow down the whole pipeline. In
RISC architectures the fetch and decode cycle is more predictable, and most
instructions have similar length.
Q: Which architecture is more common in mobile phones, RISC or CISC?
Ans: Most mobile phones (like Apple's iPhone) use the ARM family of
processors. These rely on a RISC load/store architecture.