Multicore Architecture and Programming

ASSIGNMENT - 1
Name : SEETHAI SELVI.M
Register Number : 310615104090
Dept : CSE Year : IV Sec : B

Q1) Cache Mapping - Direct Mapped, 2-way Set Associative, Fully Associative.

1) Direct Mapped : A cache block can go in exactly one spot in the cache. This makes a cache block very
easy to find, but it is inflexible about where blocks are placed. In a direct-mapped cache, the lower-order
line address bits are used to access the directory. Since multiple line addresses map into the same location
in the cache directory, the upper line address bits (the tag bits) must be compared with the directory entry
to confirm a hit. If the comparison fails, the result is a cache miss, or simply a miss. The address the
processor presents to the cache is actually subdivided into several pieces, each of which plays a different
role in accessing data.
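The address split described above can be sketched as follows; the cache geometry (16 lines, 16-byte blocks) and the miss-handling policy are illustrative choices, not taken from any particular machine:

```python
# Sketch of a direct-mapped cache: the address is split into tag,
# index, and offset, and the index selects exactly one directory entry.
NUM_LINES = 16      # cache lines  -> 4 index bits
BLOCK_SIZE = 16     # bytes/block  -> 4 offset bits
OFFSET_BITS = BLOCK_SIZE.bit_length() - 1
INDEX_BITS = NUM_LINES.bit_length() - 1

def split_address(addr):
    """Return (tag, index, offset) for a direct-mapped cache."""
    offset = addr & (BLOCK_SIZE - 1)
    index = (addr >> OFFSET_BITS) & (NUM_LINES - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

directory = [None] * NUM_LINES   # one stored tag per cache line

def access(addr):
    """Return True on a hit; on a miss, install the new tag."""
    tag, index, _ = split_address(addr)
    if directory[index] == tag:  # tag comparison confirms the hit
        return True
    directory[index] = tag       # mismatch: miss, line is replaced
    return False
```

Note that two addresses with the same index bits but different tags evict each other even when the rest of the cache is empty, which is the inflexibility mentioned above.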

2) Two-way Set Associative : This cache is made up of sets that can each hold two blocks. The index is
used to find the set, and the tag identifies the block within the set.
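A minimal sketch of a 2-way set lookup follows; the set count, block size, and the LRU choice within a set are illustrative assumptions:

```python
# Sketch of a 2-way set-associative cache: the index selects a set of
# two blocks, and the tag is compared against both entries in the set.
NUM_SETS = 4
BLOCK_SIZE = 16

# each set holds up to two tags; the most recently used tag is kept last
sets = [[] for _ in range(NUM_SETS)]

def access(addr):
    """Return True on a hit; on a miss, install the tag (LRU eviction)."""
    block = addr // BLOCK_SIZE
    index = block % NUM_SETS       # index bits pick the set
    tag = block // NUM_SETS        # tag identifies the block in the set
    s = sets[index]
    if tag in s:
        s.remove(tag)
        s.append(tag)              # refresh as most recently used
        return True
    if len(s) == 2:                # set full: evict least recently used
        s.pop(0)
    s.append(tag)
    return False
```

Two blocks that would conflict in a direct-mapped cache can now reside in the same set simultaneously.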

3) Fully Associative : No index is needed, since a cache block can go anywhere in the cache. Every tag
must be compared when looking up a block, but block placement is completely flexible. In fully
associative mapping, when a request is made to the cache, the requested address is compared in a
directory against all entries in the directory. If the requested address is found (a directory hit), the
corresponding location in the cache is fetched and returned to the processor; otherwise, a miss occurs.
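The compare-against-all-entries lookup can be sketched as below; the capacity of four blocks and the FIFO eviction policy are illustrative simplifications:

```python
# Sketch of a fully associative cache: the whole block address acts as
# the tag and is compared against every directory entry on each access.
from collections import deque

CAPACITY = 4
BLOCK_SIZE = 16
directory = deque()   # tags of all resident blocks, in arrival order

def access(addr):
    """Return True on a directory hit; on a miss, bring the block in."""
    tag = addr // BLOCK_SIZE      # no index bits: tag is the block address
    if tag in directory:          # compared against ALL entries
        return True
    if len(directory) == CAPACITY:
        directory.popleft()       # cache full: evict a block (FIFO here)
    directory.append(tag)
    return False
```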

Q2) Virtual Memory


In computing, virtual memory is a memory management technique that provides an "idealized
abstraction of the storage resources that are actually available on a given machine" which "creates the
illusion to users of a very large memory."

Benefits of having Virtual Memory


● Large programs can be written, as the virtual address space available is huge compared to physical memory.
● Less I/O is required, which leads to faster and easier swapping of processes.
● More physical memory is available, since programs are stored in virtual memory and so occupy very
little space in actual physical memory.

What is Demand Paging?


The basic idea behind demand paging is that when a process is swapped in, its pages are not
swapped in all at once. Rather, they are swapped in only when the process needs them (on demand). This is
termed a lazy swapper, although pager is a more accurate term.

Initially, only those pages are loaded which the process will require immediately.

The pages that are not moved into memory are marked as invalid in the page table. For an invalid entry,
the rest of the table entry is empty. Pages that are loaded into memory are marked as valid, along
with the information about where to find the swapped-out page.

Page Replacement
As studied in Demand Paging, only certain pages of a process are loaded into memory initially.
This allows us to fit more processes into memory at the same time. But what happens when a
process requests more pages and no free memory is available to bring them in? The following steps can be
taken to deal with this problem:
● Put the process in the wait queue until another process finishes its execution, thereby freeing
frames.
● Or, remove some other process completely from memory to free frames.
● Or, find some pages that are not being used right now and move them to disk to get free frames. This
technique is called page replacement and is the most commonly used; there are good algorithms for
carrying out page replacement efficiently.
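One of the classic replacement algorithms, FIFO, can be sketched as below; the frame count of three is an illustrative choice:

```python
# Sketch of FIFO page replacement: when no frame is free, the page
# that has been resident the longest is moved out to make room.
from collections import deque

NUM_FRAMES = 3

def fifo_faults(reference_string):
    """Count page faults for a reference string under FIFO replacement."""
    resident = deque()               # resident pages, oldest first
    faults = 0
    for page in reference_string:
        if page in resident:
            continue                 # hit: page already in a frame
        faults += 1
        if len(resident) == NUM_FRAMES:
            resident.popleft()       # evict the oldest resident page
        resident.append(page)
    return faults
```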

Thrashing
A process that is spending more time paging than executing is said to be thrashing. In other words,
the process does not have enough frames to hold all the pages it needs for execution, so it swaps
pages in and out very frequently just to keep executing. Sometimes even pages that will be required in the
near future have to be swapped out.
Initially, when CPU utilization is low, the process scheduler, in order to increase the level of
multiprogramming, loads multiple processes into memory at the same time, allocating a limited number of
frames to each process. As memory fills up, each process starts to spend a lot of time waiting for its
required pages to be swapped in, again leading to low CPU utilization because most of the processes are
waiting for pages. The scheduler therefore loads still more processes to raise CPU utilization; as this
continues, at some point the complete system comes to a stop.

Types of virtual memory


A computer's memory management unit (MMU) handles memory operations, including managing
virtual memory. In most computers, the MMU hardware is integrated into the CPU. There are two ways in
which virtual memory is handled: paged and segmented.

Paging divides memory into sections or paging files, usually approximately 4 KB in size. When a
computer uses up its RAM, pages not in use are transferred to the section of the hard drive designated for
virtual memory using a swap file. A swap file is space set aside on the hard drive as a virtual memory
extension of the computer's RAM. When a swapped-out page is needed, it is sent back to RAM using a
process called page swapping. This system ensures that the computer's OS and applications don't run out
of real memory.
The paging process includes the use of page tables, which translate the virtual addresses that the OS
and applications use into the physical addresses that the MMU uses. Entries in the page table indicate
whether or not the page is in real memory. If the OS or a program doesn't find what it needs in RAM,
the MMU responds to the missing memory reference with a page fault exception, prompting the OS to
move the page back into memory when it's needed. Once the page is in RAM, its virtual address appears
in the page table.
Segmentation is also used to manage virtual memory. This approach divides virtual memory into
segments of different lengths. Segments not in use in memory can be moved to virtual memory space on
the hard drive. Segmented information or processes are tracked in a segment table, which shows whether
a segment is present in memory, whether it has been modified, and what its physical address is. Some
virtual memory systems combine segmentation and paging. In this case, memory is divided into frames or
pages, the segments take up multiple pages, and the virtual address includes both the segment number and
the page number.
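The page-table translation described above can be sketched as follows, using the 4 KB page size mentioned earlier; the table contents are illustrative:

```python
# Sketch of page-table address translation: the virtual address is
# split into a page number and an offset, and the page table maps the
# page number to a physical frame.
PAGE_SIZE = 4096   # 4 KB pages, as described above

# page number -> frame number; pages absent here are not in real memory
page_table = {0: 5, 1: 2, 3: 7}

def translate(vaddr):
    """Return the physical address, or raise on a page fault."""
    page = vaddr // PAGE_SIZE
    offset = vaddr % PAGE_SIZE
    if page not in page_table:
        # missing entry: this is where the MMU raises a page fault
        raise LookupError(f"page fault on page {page}")
    return page_table[page] * PAGE_SIZE + offset
```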

Q3) Difference between Semaphore and Monitor

BASIS FOR COMPARISON | SEMAPHORE | MONITOR
Basic | A semaphore is an integer variable S. | A monitor is an abstract data type.
Action | The value of semaphore S indicates the number of shared resources available in the system. | The monitor type contains shared variables and the set of procedures that operate on the shared variables.
Access | When any process accesses the shared resources, it performs a wait() operation on S, and when it releases them, it performs a signal() operation on S. | When any process wants to access the shared variables in the monitor, it must access them through the monitor's procedures.
Condition variable | Semaphores do not have condition variables. | Monitors have condition variables.

Key Differences Between Semaphore and Monitor

The basic difference between a semaphore and a monitor is that a semaphore is an integer variable S
which indicates the number of resources available in the system, whereas a monitor is an abstract data type
which allows only one process to execute in the critical section at a time. The value of a semaphore can be
modified only by the wait() and signal() operations. On the other hand, a monitor has shared variables and
the procedures through which alone those shared variables can be accessed by processes. With a
semaphore, when a process wants to access shared resources it performs a wait() operation, and when it
releases the resources it performs a signal() operation. With a monitor, when a process needs to access
shared resources, it has to access them through the monitor's procedures. The monitor type has condition
variables, which a semaphore does not have.
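Both constructs can be sketched with Python's threading module; the resource-pool size, the BoundedCounter class, and its limit are illustrative choices, not a standard interface:

```python
# Sketch of a counting semaphore and a monitor-style class in Python.
import threading

# Semaphore: S counts available resources.
# acquire() plays the role of wait(S); release() plays the role of signal(S).
pool = threading.Semaphore(2)

def use_resource():
    pool.acquire()        # wait(S): blocks if no resource is free
    try:
        pass              # ... use the shared resource ...
    finally:
        pool.release()    # signal(S): the resource is free again

class BoundedCounter:
    """Monitor-style: shared state reachable only through procedures,
    with a condition variable for waiting inside the monitor."""
    def __init__(self, limit):
        self._lock = threading.Lock()
        self._cond = threading.Condition(self._lock)
        self._count = 0               # the shared variable
        self._limit = limit

    def increment(self):
        with self._cond:                      # one process inside at a time
            while self._count >= self._limit:
                self._cond.wait()             # wait on the condition variable
            self._count += 1
            self._cond.notify_all()

    def decrement(self):
        with self._cond:
            while self._count <= 0:
                self._cond.wait()
            self._count -= 1
            self._cond.notify_all()

    def value(self):
        with self._lock:
            return self._count
```

Note how the counter is never touched except through the class's methods, which is exactly the access discipline the monitor enforces, while the semaphore exposes only the wait/signal pair.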

Conclusion:
Monitors are easier to implement than semaphores, and there is less chance of a mistake with a monitor
than with semaphores.
