
NAME : Abdul Gani

ROLL NO : 201DDE1184
COURSE : MCA
YEAR/SEM : 2nd / 3rd
PAPER CODE : MCA 301
PAPER NAME : (COMPUTER ORGANIZATION)
Q-1. What is Addressing Modes?

Addressing Modes– The term addressing mode refers to the way in which the operand of
an instruction is specified. The addressing mode specifies a rule for interpreting or
modifying the address field of the instruction before the operand is actually referenced.

Addressing modes are an aspect of the instruction set architecture in most central
processing unit (CPU) designs. The various addressing modes that are defined in a given
instruction set architecture define how the machine language instructions in that
architecture identify the operand(s) of each instruction. An addressing mode specifies how
to calculate the effective memory address of an operand by using information held in
registers and/or constants contained within a machine instruction or elsewhere.

Number of addressing modes

Different computer architectures vary greatly in the number of addressing modes they
provide in hardware. There are some benefits to eliminating complex addressing modes and
using only one or a few simpler addressing modes, even though this requires a few extra
instructions, and perhaps an extra register. It has proven much easier to design pipelined
CPUs if the only addressing modes available are simple ones.

Most RISC architectures have only about five simple addressing modes, while CISC
architectures such as the DEC VAX have over a dozen addressing modes, some of which are
quite complicated. The IBM System/360 architecture had only three addressing modes; a
few more have been added for the System/390.

When there are only a few addressing modes, the particular addressing mode required is
usually encoded within the instruction code (e.g. IBM System/360 and successors, most
RISC). But when there are lots of addressing modes, a specific field is often set aside in the
instruction to specify the addressing mode. The DEC VAX allowed multiple memory
operands for almost all instructions, and so reserved the first few bits of each operand
specifier to indicate the addressing mode for that particular operand. Keeping the
addressing mode specifier bits separate from the opcode operation bits produces an
orthogonal instruction set.
Even on a computer with many addressing modes, measurements of actual
programs indicate that the simple addressing modes listed below account for some 90% or
more of all addressing modes used. Since most such measurements are based on code
generated from high-level languages by compilers, this reflects to some extent the
limitations of the compilers being used.

Types of Addressing Modes-

In computer architecture, there are the following types of addressing modes (a sketch of how a few of them locate an operand follows this list)-

1. Implied / Implicit Addressing Mode
2. Stack Addressing Mode
3. Immediate Addressing Mode
4. Direct Addressing Mode
5. Indirect Addressing Mode
6. Register Direct Addressing Mode
7. Register Indirect Addressing Mode
8. Relative Addressing Mode
9. Indexed Addressing Mode
10. Base Register Addressing Mode
11. Auto-Increment Addressing Mode
12. Auto-Decrement Addressing Mode
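
As a rough illustration of how several of these modes determine where the operand comes from, the following Python sketch models a toy machine with a small memory and register file. The memory contents, register names and the operand() helper are all invented for this example and do not correspond to any real instruction set.

# A minimal sketch (hypothetical machine) of how several addressing modes
# locate an operand. All names and values here are illustrative only.

memory = {100: 25, 200: 100, 300: 7}    # address -> contents
registers = {"R1": 200, "R2": 3}         # register file
pc = 50                                  # program counter

def operand(mode, field):
    """Return the operand of an instruction whose address field is `field`."""
    if mode == "immediate":          # operand is the field itself
        return field
    if mode == "direct":             # field is the operand's address
        return memory[field]
    if mode == "indirect":           # field holds the address of the address
        return memory[memory[field]]
    if mode == "register":           # field names a register holding the operand
        return registers[field]
    if mode == "register_indirect":  # register holds the operand's address
        return memory[registers[field]]
    if mode == "relative":           # effective address = PC + field
        return memory[pc + field]
    if mode == "indexed":            # effective address = field + index register
        return memory[field + registers["R2"]]
    raise ValueError("unknown mode")

print(operand("immediate", 100))           # 100
print(operand("direct", 100))              # 25
print(operand("indirect", 200))            # memory[100] -> 25
print(operand("register_indirect", "R1"))  # memory[200] -> 100
print(operand("relative", 50))             # memory[50 + 50] -> 25
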

Q-2. What is Machine Language?

Machine code is a computer program written in machine language instructions that can be
executed directly by a computer's central processing unit (CPU). Each instruction causes the
CPU to perform a very specific task, such as a load, a store, a jump, or an arithmetic logic
unit (ALU) operation on one or more units of data in the CPU's registers or memory.

Machine code is a strictly numerical language which is intended to run as fast as possible,
and may be regarded as the lowest-level representation of a compiled or assembled
computer program or as a primitive and hardware-dependent programming language.
While it is possible to write programs directly in machine code, managing individual bits and
calculating numerical addresses and constants manually is tedious and error-prone. For this
reason, programs are very rarely written directly in machine code in modern contexts, but
may be done for low level debugging, program patching (especially when assembler source
is not available) and assembly language disassembly.

Machine code is by definition the lowest level of programming detail visible to the
programmer, but internally many processors use microcode or optimise and transform
machine code instructions into sequences of micro-operations. These internal sequences are
not generally considered to be machine code.
Sometimes referred to as machine code or object code, machine language is a collection of
binary digits or bits that the computer reads and interprets. Machine language is the only
language a computer is capable of understanding.

The exact machine language for a program or action can differ by processor architecture and
by the operating system on the computer. The specific operating system also dictates the
conventions a compiler follows when it translates a program or action into machine language.

Computer programs are written in one or more programming languages, like C++, Java, or
Visual Basic. A computer cannot directly understand the programming languages used to
create computer programs, so the program code must be compiled. Once a program's code
is compiled, the computer can understand it because the program's code is turned into
machine language.

Machine language example

Below is an example of machine-level binary for the text "Hello World"; each 8-bit group is the ASCII code of one character.

01001000 01100101 01101100 01101100 01101111 00100000 01010111 01101111 01110010 01101100
01100100
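
As a quick check of that bit pattern, the short Python sketch below reproduces the same 8-bit groups from the text; it only illustrates the character encoding, not executable machine instructions.

# Convert each character of "Hello World" to its 8-bit ASCII/binary form.
text = "Hello World"
groups = [format(ord(ch), "08b") for ch in text]
print(" ".join(groups))
# 01001000 01100101 01101100 01101100 01101111 00100000 ...
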

Q-3. Define Radix Conversion Algorithm?

Radix Conversion

Radix conversions are less important than other algorithms; a program dominated by
conversions should probably use a different data representation.

In a positional numeral system, the radix or base is the number of unique digits, including
the digit zero, used to represent numbers. For example, for the decimal/denary system (the
most common system in use today) the radix (base number) is ten, because it uses the ten
digits from 0 through 9.

In any standard positional numeral system, a number is conventionally written as (x)y, with x
as the string of digits and y as its base, although for base ten the subscript is usually
assumed (and omitted, together with the pair of parentheses), as it is the most common
way to express value. For example, (100)10 is equivalent to 100 (the decimal system is
implied in the latter) and represents the number one hundred, while (100)2 (in the binary
system with base 2) represents the number four.

A recursive formula for number conversion from one radix representation to another is
presented. This formula differs from existing ones in two major aspects. First, it utilizes a
digit-shift technique which provides faster accumulation of the more significant digits in the
final result. Second, it is suitable for parallel computation, so that the time needed for
number conversion can be shortened; the longer the digit string, the greater the saving in
conversion time.
Applications of the recursive formula are studied in multiplication and division for negative-
radix numbers as well as for positive-radix numbers. The multiplication and division
presented here are especially useful for computations of n-word precision, since carry
propagation all the way to the most significant digit hardly ever occurs.
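
The parallel recursive formula itself is not reproduced here; as a baseline, the following Python sketch shows the ordinary sequential radix-conversion algorithm (repeated division for integers) against which such formulas are usually compared. The helper names from_digits and to_digits are invented for this example.

# Convert a non-negative integer between radices using repeated division.
# to_digits() and from_digits() are illustrative helper names only.

def from_digits(digits, base):
    """Interpret a list of digits (most significant first) in the given base."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

def to_digits(value, base):
    """Produce the digit list (most significant first) of value in the given base."""
    if value == 0:
        return [0]
    digits = []
    while value > 0:
        value, remainder = divmod(value, base)
        digits.append(remainder)
    return digits[::-1]

# (100)2 -> 4, and 4 -> (100)2, matching the example in the text above.
print(from_digits([1, 0, 0], 2))   # 4
print(to_digits(4, 2))             # [1, 0, 0]
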

Q-4. Explain Instruction Fetching Registers?

The instruction cycle (also known as the fetch–decode–execute cycle, or simply the fetch-
execute cycle) is the cycle that the central processing unit (CPU) follows from boot-up until
the computer has shut down in order to process instructions. It is composed of three main
stages: the fetch stage, the decode stage, and the execute stage.

In simpler CPUs, the instruction cycle is executed sequentially, each instruction being
processed before the next one is started. In most modern CPUs, the instruction cycles are
instead executed concurrently, and often in parallel, through an instruction pipeline: the
next instruction starts being processed before the previous instruction has finished, which is
possible because the cycle is broken up into separate steps.

In computing, the instruction register (IR) or current instruction register (CIR) is the part of
a CPU's control unit that holds the instruction currently being executed or decoded. In
simple processors, each instruction to be executed is loaded into the instruction register,
which holds it while it is decoded, prepared and ultimately executed, which can take several
steps.

Some of the complicated processors use a pipeline of instruction registers where each stage
of the pipeline does part of the decoding, preparation or execution and then passes it to the
next stage for its step. Modern processors can even do some of the steps out of order as
decoding on several instructions is done in parallel.

Decoding the opcode in the instruction register includes determining the instruction,
determining where its operands are in memory, retrieving the operands from memory,
allocating processor resources to execute the command (in superscalar processors), etc.

The output of the IR is available to control circuits, which generate the timing signals that
control the various processing elements involved in executing the instruction.

In the instruction cycle, the instruction is loaded into the instruction register after the
processor fetches it from the memory location pointed to by the program counter.

Fetching and decoding an instruction


To start processing, the CPU needs to fetch the first instruction in the program from main
memory. The Program Counter (PC) is the key register here. The PC always holds the address
of the next program instruction in main memory; it is said to point to the next instruction
(any memory address points to the memory contents at that address). But remember that the
memory address register acts as a gatekeeper to the memory, so the first thing to happen is
that the program counter gets copied into the memory address register. The register transfer is

MAR ← PC

Because it is the MAR that is clocked, this leaves the PC unaltered. Now the memory is read
into the MBR:

MBR ← ⟨MAR⟩

The next step is to copy the instruction from the MBR to the instruction register:

IR ← MBR

In our standard architecture the IR is split into two parts, IR(opcode) and IR(address). As far
as the instruction fetch is concerned, it is IR(opcode) that is important. The opcode is
decoded by the control unit, as described later. Last comes a touch of housekeeping: usually
the next instruction in the program is located in the next memory location, so the program
counter is incremented.

PC ← PC + 1

THE CPU, INSTRUCTION FETCH & EXECUTE

So to summarize, the instruction fetch requires the following in RTL, where you should note
that the program counter can be incremented at the same clock tick as loading the
instruction register.

Instruction fetch:
1. MAR ← PC
2. MBR ← ⟨MAR⟩
3. IR ← MBR; PC ← PC + 1
(Then decode the opcode.)

NB: these line numbers will soon turn into RTL Control Steps!

[Figure: block diagram of the CPU connected to memory, showing the PC, MAR, MBR, AC, SP, IR(opcode), IR(address), status flags, ALU and control unit inside the CPU, linked to memory by the address bus, data bus, clock and memory control lines.]

A few instructions

Our CPU uses 8-bit opcodes, so it could distinguish 256 different instructions. For the purpose
of explanation we give just nine from our instruction set. Column 1 contains the assembler-
language mnemonic, which is shorthand for several lines of RTL. Column 2 gives an overall
"RTL-like" description. Column 3 is the binary opcode.

Inst     Overall RTL               Opcode      Meaning
HALT     -                         00000000    Stop the clock
LDA x    AC ← ⟨x⟩                  00000001    Load AC with contents of memory address x
STA x    ⟨x⟩ ← AC                  00000010    Store AC in memory at address x
ADD x    AC ← AC + ⟨x⟩             00000011    Add memory contents at x to AC
AND x    AC ← AC ∧ ⟨x⟩             00000100    Logical AND of memory contents at x with AC
JMP x    PC ← x                    00000101    Jump to instruction at address x
BZ x     if Z = 1 then PC ← x      00000110    If the Z flag is set then jump
NOT      AC ← ¬AC                  00000111    Two's complement the AC
SHR      AC ← Right Shift(AC)      00001000    Shift the AC 1 bit to the right

An assembler language is designed around a particular CPU, and there is no standard set of
mnemonics.
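
To make the fetch–decode–execute sequence concrete, here is a small Python sketch of such a loop for a toy machine modelled loosely on the nine instructions above. Mnemonics are used directly in place of the binary opcodes for readability, and the program, memory layout and variable names are invented for illustration.

# Toy fetch-decode-execute loop for the 9-instruction example machine.
# Memory cells hold either (mnemonic, address) tuples or plain data values.

memory = {
    0: ("LDA", 10),    # AC <- <10>
    1: ("ADD", 11),    # AC <- AC + <11>
    2: ("STA", 12),    # <12> <- AC
    3: ("HALT", None),
    10: 5,             # data
    11: 7,             # data
    12: 0,             # result goes here
}

pc, ac, z_flag = 0, 0, 0

while True:
    # Fetch: MAR <- PC, MBR <- <MAR>, IR <- MBR, PC <- PC + 1
    ir_opcode, ir_address = memory[pc]
    pc += 1

    # Decode and execute
    if ir_opcode == "HALT":
        break
    elif ir_opcode == "LDA":
        ac = memory[ir_address]
    elif ir_opcode == "STA":
        memory[ir_address] = ac
    elif ir_opcode == "ADD":
        ac = (ac + memory[ir_address]) & 0xFF    # 8-bit accumulator
    elif ir_opcode == "AND":
        ac &= memory[ir_address]
    elif ir_opcode == "JMP":
        pc = ir_address
    elif ir_opcode == "BZ":
        if z_flag == 1:
            pc = ir_address
    elif ir_opcode == "NOT":
        ac = (~ac) & 0xFF
    elif ir_opcode == "SHR":
        ac >>= 1
    z_flag = 1 if ac == 0 else 0

print(memory[12])   # 12, i.e. 5 + 7
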

Q-5. What do you mean by Replacement Techniques?

Virtual memory is the lifeline of modern computer operating systems (OS). An OS makes use
of a process called paging for virtual memory management (VMM). Paging is a process of
reading data from, and writing data to, the secondary storage. Whenever a process refers to
a page that is not present in memory, a page fault occurs. Subsequently, the OS replaces
one of the existing pages with the referred page. Page replacement algorithms are an
important part of virtual memory management and it helps the OS to decide which memory
page can be moved out, making space for the currently needed page.

However, the ultimate objective of all page replacement algorithms is to reduce the number
of page faults. A page fault is a computer hardware raised interrupt or an exception when a
running program accesses a memory page that is not currently mapped into the virtual
address space of the program.

Page Replacement Algorithms

In this section, we shall discuss some of the important page replacement algorithms that
can be used by an OS:

Not recently used

The not recently used (NRU) page replacement algorithm is an algorithm that favours
keeping pages in memory that have been recently used. This algorithm works on the
following principle: when a page is referenced, a referenced bit is set for that page, marking
it as referenced. Similarly, when a page is modified (written to), a modified bit is set. The
setting of the bits is usually done by the hardware, although it is possible to do so on the
software level as well.

At a certain fixed time interval, a timer interrupt triggers and clears the referenced bit of all
the pages, so only pages referenced within the current timer interval are marked with a
referenced bit. When a page needs to be replaced, the operating system divides the pages
into four classes:

3. referenced, modified

2. referenced, not modified

1. not referenced, modified

0. not referenced, not modified

The NRU algorithm then removes a page at random from the lowest-numbered non-empty
class; in other words, a not-referenced, not-modified page is replaced in preference to one
that has recently been used (a small sketch of this classification follows).
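
As a rough sketch of that classification (the pages and their bits below are invented purely for illustration), the class number falls out directly from the two bits; a real implementation would pick a victim at random from the lowest non-empty class rather than deterministically.

# NRU: classify pages by (referenced, modified) bits and evict from the
# lowest non-empty class. Page data below is illustrative only.

pages = {
    "A": {"referenced": 1, "modified": 1},   # class 3
    "B": {"referenced": 0, "modified": 1},   # class 1
    "C": {"referenced": 0, "modified": 0},   # class 0
}

def nru_class(page):
    """Class number = 2 * referenced bit + modified bit."""
    return 2 * page["referenced"] + page["modified"]

victim = min(pages, key=lambda name: nru_class(pages[name]))
print(victim)   # "C": not referenced, not modified
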

First-In-First-Out (FIFO)

With the FIFO algorithm, the OS maintains a queue to keep track of all the pages in memory,
with the most recent arrival at the back (tail of the queue), and the oldest arrival in front
(head of the queue). When the system needs space, a page will be replaced. With FIFO, the
page at the front of the queue (the oldest page) is selected for replacement. However, FIFO
is known to suffer from a problem known as Belady's anomaly, which occurs when
increasing the number of page frames results in an increase in the number of page faults for
a given memory access pattern.
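
A minimal FIFO simulation (the reference string and frame counts below are the textbook example, chosen here only for illustration) can be sketched as follows; counting faults this way is also how Belady's anomaly is usually demonstrated.

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults for FIFO replacement over a reference string."""
    frames = deque()          # head = oldest page, tail = most recent arrival
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()          # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))   # 9 faults
print(fifo_page_faults(refs, 4))   # 10 faults: Belady's anomaly
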

Least Recently Used (LRU)

The LRU page replacement algorithm keeps track of page usage over a defined window of
time. When it is time to replace a page, it replaces the page that has been least recently used.

Not frequently used (NFU)

The not frequently used (NFU) page replacement algorithm requires a counter, and every page has
one counter of its own which is initially set to 0. At each clock interval, all pages that have been
referenced within that interval will have their counter incremented by 1. In effect, the counters keep
track of how frequently a page has been used. Thus, the page with the lowest counter can be
swapped out when necessary.
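
A hedged sketch of the NFU idea follows (the reference string and frame count are invented for illustration, and for simplicity the counter is bumped on every reference rather than once per clock interval, as a real implementation would do).

def nfu_page_faults(reference_string, num_frames):
    """Count page faults for NFU replacement; counters approximate use frequency."""
    counters = {}                      # page -> use counter
    resident = set()                   # pages currently in frames
    faults = 0
    for page in reference_string:
        if page not in resident:
            faults += 1
            if len(resident) == num_frames:
                # Evict the resident page with the lowest counter.
                victim = min(resident, key=lambda p: counters.get(p, 0))
                resident.remove(victim)
            resident.add(page)
        counters[page] = counters.get(page, 0) + 1   # simplified reference count
    return faults

print(nfu_page_faults([1, 2, 3, 1, 1, 4, 5], 3))   # 5 faults
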
