OSD Page 1
When a device needs some attention, the CPU stops whatever it is doing (i.e. it pauses the current program), provides the service required by the device, and then returns to the normal program.
What is the difference between System Call and
Interrupt?
A system call is a call to a subroutine built into the system, while an interrupt is an event that causes the processor to temporarily suspend the current execution. One major difference is that system calls are synchronous, whereas interrupts are not. That means system calls occur at a fixed point in time (usually determined by the programmer), but interrupts can occur at any time due to an unexpected event, such as a key press on the keyboard by the user. Therefore, whenever a system call occurs, the processor only has to remember where to return to, but in the event of an interrupt, the processor has to remember both the place to return to and the state of the system. Unlike a system call, an interrupt usually does not have anything to do with the current program.
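The synchronous/asynchronous contrast can be sketched in Python: a library call such as os.getpid() (which wraps a system call) runs at a point the programmer chose, while a signal handler, used here as a rough stand-in for an interrupt handler, runs whenever the signal arrives. SIGUSR1 is chosen arbitrarily for the demonstration:

```python
import os
import signal

events = []

def on_signal(signum, frame):
    # Runs asynchronously, like an interrupt handler: the main flow
    # below never calls this function directly.
    events.append("interrupt-style handler ran")

signal.signal(signal.SIGUSR1, on_signal)

# A system call happens synchronously, at a point the programmer chose.
pid = os.getpid()
events.append(f"system call returned pid {pid}")

# Deliver a signal to ourselves; the handler interrupts the normal flow.
signal.raise_signal(signal.SIGUSR1)

print(events)
```

The handler's append happens only because the signal arrived, not because the program scheduled it, which is the essence of the interrupt side of the comparison.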
What is Operating System?
An operating system is software that manages a computer. It is a collection of data and programs that manages the system's (hardware) resources. Furthermore, it accommodates the
execution of application software (such as word processors etc.)
by acting as an interface layer between the hardware and the
applications (for functions such as input/output and memory
related operations). It is the main system software running on a
computer. Because users are unable to run any other system or
application software without a properly running operating
system, an operating system can be considered the most
important system software for a computer.
Operating systems are present in all types of machines (not just
computers) that have processors such as mobile phones,
console based gaming systems, super computers and servers.
The most popular operating systems are Microsoft Windows, Mac OS X, UNIX, Linux and BSD. Microsoft operating systems are mostly used within commercial enterprises, while UNIX-based operating systems are more popular with academics and professionals, because they are free and open source (unlike Windows, which is costly).
What is Kernel?
The kernel is the main part of a computer operating system. It is the actual bridge between the hardware and the application software. The kernel is usually responsible for the management of system resources, including the communication between hardware and software. It provides a very low-level abstraction layer between processors and input/output devices. Inter-process communication and system calls are the main mechanisms by which these low-level facilities are offered to other applications (by the kernel). Kernels are divided into different types based on their design/implementation and how each operating system task is performed. Monolithic kernels execute all system code in the same address space (for performance reasons), whereas microkernels run most services in user space (an approach that increases maintainability and modularity). There are many other approaches between these two extremes.
What is the difference between Kernel and Operating
System?
The kernel is the core (or the lowest level) of the operating system. All other parts that make up the operating system (graphical user interface, file management, shell, etc.) rely on the kernel. The kernel is responsible for communication with the hardware; it is the part of the operating system that talks directly to the hardware. The kernel provides numerous callable routines, used by other software, for accessing files, displaying graphics, and reading keyboard/mouse input.
What is a deadlock?
A deadlock is a situation in which two processes, each holding a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release the lock unless one of the processes is terminated. SQL Server detects deadlocks and terminates one user's process.
A deadlock occurs when two or more tasks permanently block
each other by each task having a lock on a resource which the
other tasks are trying to lock. For example:
Transaction A acquires a share lock on row 1.
Transaction B acquires a share lock on row 2.
Transaction A now requests an exclusive lock on row 2, and is
blocked until transaction B finishes and releases the share lock
it has on row 2.
Transaction B now requests an exclusive lock on row 1, and is
blocked until transaction A finishes and releases the share lock
it has on row 1.
Transaction A cannot complete until transaction B completes,
but transaction B is blocked by transaction A. This condition is
also called a cyclic dependency: Transaction A has a
dependency on transaction B, and transaction B closes the
circle by having a dependency on transaction A.
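The cyclic dependency described above is exactly what a deadlock detector looks for: a cycle in a "wait-for" graph. A minimal sketch follows; the dictionary encoding of the graph is illustrative, not SQL Server's actual internal representation:

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {task: [tasks it waits on]}."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / in progress / done
    color = {t: WHITE for t in wait_for}

    def visit(t):
        color[t] = GRAY
        for u in wait_for.get(t, []):
            if color.get(u, WHITE) == GRAY:
                return True            # back edge: cycle (deadlock) found
            if color.get(u, WHITE) == WHITE and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in list(wait_for))

# Transaction A waits for B, and B waits for A: a deadlock.
print(has_cycle({"A": ["B"], "B": ["A"]}))   # True
print(has_cycle({"A": ["B"], "B": []}))      # False
```

A real deadlock monitor does essentially this periodically, then picks one task on the cycle as the victim and rolls its transaction back.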
Both transactions in a deadlock will wait forever unless the
deadlock is broken by an external process. The Microsoft SQL
Server Database Engine deadlock monitor periodically checks
for tasks that are in a deadlock. If the monitor detects a cyclic
dependency, it chooses one of the tasks as a victim and
terminates its transaction with an error. This allows the other
task to complete its transaction. The application with the
transaction that terminated with an error can retry the
transaction, which usually completes after the other
deadlocked transaction has finished.
What is a LiveLock?
A livelock is a situation in which a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely. This is different from a deadlock, in which both processes wait on each other.
A human example of livelock is two people who meet face-to-face in a corridor: each moves aside to let the other pass, but they end up moving from side to side without making any progress, because they always move the same way at the same time and never cross each other.
NO LOCK
If you add WITH (NOLOCK) after a table name in the FROM clause, no locks will be held on that table for the duration of the query. However, you may also get dirty reads, as updates and deletes from other processes are ignored. Example:

SELECT COUNT(UserID)
FROM Users WITH (NOLOCK)
WHERE Username LIKE 'foobar'
In a livelock, the states of the processes involved constantly change with regard to one another, none progressing. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing.
The difference is as follows:

Binary semaphores are binary; they can have only two values: one indicating that a process/thread is in the critical section (the code that accesses the shared resource) and others should wait, the other indicating that the critical section is free.

Counting semaphores, on the other hand, can take more than two values; they can have any value you want. A maximum value of X allows X processes/threads to access the shared resource simultaneously.
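Both kinds can be sketched with Python's threading.Semaphore, where a binary semaphore is simply a counting semaphore initialized to 1:

```python
import threading

# Binary semaphore: at most one holder, guarding a critical section.
binary = threading.Semaphore(1)

# Counting semaphore: up to 3 threads may hold it at once.
counting = threading.Semaphore(3)

assert binary.acquire(blocking=False) is True    # enter the critical section
assert binary.acquire(blocking=False) is False   # a second entry is refused
binary.release()                                 # leave the critical section

# Three acquisitions succeed; the fourth is refused.
held = [counting.acquire(blocking=False) for _ in range(4)]
print(held)   # [True, True, True, False]
```

Using blocking=False makes the refusal visible immediately; in real code the fourth caller would normally block until a holder releases.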
Context Switch
Steps
The state of the process includes all the registers that the
process may be using, especially the program counter, plus any
other operating system specific data that may be necessary.
This data is usually stored in a data structure called a process
control block (PCB), or switchframe.
In order to switch processes, the PCB for the first process must
be created and saved. The PCBs are sometimes stored upon a
per-process stack in kernel memory (as opposed to the user-
mode call stack), or there may be some specific operating
system defined data structure for this information.
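A PCB can be modeled as a plain record of saved processor state; the field names below are illustrative, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(current, saved_pc, saved_regs, next_pcb):
    # Save the outgoing process's state into its PCB...
    current.program_counter = saved_pc
    current.registers = dict(saved_regs)
    # ...and restore the incoming process's state from its PCB.
    return next_pcb.program_counter, dict(next_pcb.registers)

a = PCB(pid=1)
b = PCB(pid=2, program_counter=0x400, registers={"r0": 7})

# Switch away from process 1 (currently at 0x1000) and into process 2.
pc, regs = context_switch(a, 0x1000, {"r0": 42}, b)
print(hex(pc), regs)  # 0x400 {'r0': 7}
```

The essential point the sketch captures is that a switch is a save into one PCB followed by a restore from another.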
Software vs hardware context switching
"...provides an interface to be used by the higher-level machine software."

A little later, practically the same is said for device drivers:

"Device drivers implement a similar interface, so a higher-level machine need not be especially concerned with the details of the particular device. The goal is to simplify software interfaces to devices as much as possible..."

Does that mean that only device drivers, and not device controllers, provide the interface that lets an application programmer see all devices in the same way? I'm confused because in a lot of places what is said about device controllers is later said about device drivers. Can I say:

Device drivers translate requests from an application into device-controller-dependent instructions, which the controller then translates into device instructions at the physical level?
(In Linux, a thread weighs as little as 4 KiB; in Erlang/BEAM, even just 400 bytes. In .NET, it's 1 MiB!)

I think that what you are talking about when you say Task is a System.Threading.Task. If that's the case, then you can think about it this way:
What is the difference between multiprocessing,
multiprogramming, multitasking and multithreading?

Multitasking:
The ability to execute more than one task at the same time, with the tasks sharing a common resource (like one CPU). More than one task/program/job/process can reside in the same CPU at one point in time. This ability of the OS is called multitasking.

Multiprogramming:
The allocation of a computer system and its resources to more than one concurrent application, job or user; a computer running more than one program at a time (like running Excel and Firefox simultaneously). More than one task/program/job/process can reside in main memory at one point in time. This ability of the OS is called multiprogramming.

Multithreading:
Executing more than one thread concurrently on a single processor.

Multiprocessing:
Simultaneous execution of instructions by multiple processors within a single computer; a computer using more than one CPU at a time.
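Multithreading within a single process can be sketched with Python's standard threading module (in CPython the GIL means CPU-bound threads interleave rather than run truly in parallel, which mirrors the single-processor interleaving described here):

```python
import threading

results = []
lock = threading.Lock()

def task(name, n):
    total = sum(range(n))       # some work for this thread
    with lock:                  # the results list is a shared resource
        results.append((name, total))

# Three threads of the same process, sharing one address space.
threads = [threading.Thread(target=task, args=(f"t{i}", 100)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [('t0', 4950), ('t1', 4950), ('t2', 4950)]
```

For true multiprocessing (multiple CPUs executing simultaneously), the analogous sketch would use the multiprocessing module, where each worker is a separate process with its own memory.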
Multiprogramming is a rudimentary form of parallel processing
in which several programs are run at the same time on a
uniprocessor. Since there is only one processor, there can be
no true simultaneous execution of different programs. Instead,
the operating system executes part of one program, then part
of another, and so on. To the user it appears that all programs
are executing at the same time.
Multiprocessing is sometimes confused with multiprogramming, or the interleaved execution of two or more programs by a processor. Today, that term is rarely used, since all but the most specialized computer operating systems support multiprogramming. Multiprocessing can also be confused with multitasking, the management of programs and the system services they request as tasks that can be interleaved, and with multithreading, the management of multiple execution paths through the computer or of multiple users sharing the same copy of a program.
There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple chips in one package, multiple packages in one system unit, etc.).
Interrupt handling
The kernel services the interrupts in the context of the
interrupted process even though it may not have caused the
interrupt. The interrupted process may have been executing in
user mode or in kernel mode. The kernel saves enough
information so that it can later resume execution of the
interrupted process and services the interrupt in kernel mode.
The kernel does not spawn or schedule a special process to
handle interrupts.
Portability of Linux:
The code for switch_to() and switch_mm() is uniquely
implemented by each architecture that Linux supports. When
Linux is ported to a new architecture, the new architecture
must provide an implementation for these functions.
Shared copy-on-write pages among executables. This
means that multiple processes can use the same memory to
run in. When one tries to write to that memory, that page
(4KB piece of memory) is copied somewhere else. Copy-on-
write has two benefits: increasing speed and decreasing
memory use.
Virtual memory using paging (not swapping whole
processes) to disk: to a separate partition or a file in the file
system, or both, with the possibility of adding more
swapping areas during runtime (yes, they're still called
swapping areas). A total of 16 of these 128 MB (2GB in
recent kernels) swapping areas can be used at the same
time, for a theoretical total of 2 GB of usable swap space.
It is simple to increase this if necessary, by changing a few
lines of source code.
A unified memory pool for user programs and disk cache,
so that all free memory can be used for caching, and the
cache can be reduced when running large programs.
All source code is available, including the whole kernel and
all drivers, the development tools and all user programs;
also, all of it is freely distributable. Plenty of commercial
programs are being provided for Linux without source, but
everything that has been free, including the entire base
operating system, is still free.
Multiple virtual consoles: several independent login
sessions through the console, you switch by pressing a hot-
key combination (not dependent on video hardware). These
are dynamically allocated; you can use up to 64.
Supports several common file systems, including minix,
Xenix, and all the common system V file systems, and has
an advanced file system of its own, which offers file
systems of up to 4 TB, and names up to 255 characters
long.
Many networking protocols: the base protocols available in
the latest development kernels include TCP, IPv4, IPv6,
AX.25, X.25, IPX, DDP (AppleTalk), Netrom, and others.
Stable network protocols included in the stable kernels
currently include TCP, IPv4, IPX, DDP, and AX.25.
Advantages of Linux:

1. Free - All code can be seen, edited, and recompiled, which allows the community at large to help the main developers fix bugs, etc. You can also download 99% of Linux distros completely free of charge.
Linux has remained largely virus-free all these years. Sure, the argument that the Linux desktop is not as widely used is a factor in why there are no viruses. My rebuttal is that the Linux operating system is open source, and if there were a widespread Linux virus released today, there would be hundreds of patches released tomorrow, either by ordinary people that use the operating system or by the distribution maintainers. We wouldn't need to wait for a patch from a single company like we do with Windows.
Choice (Freedom) - The power of choice is a great Linux advantage. With Linux, you have the power to control just about every aspect of the operating system. Two major features you have control of are your desktop's look and feel (by way of numerous window managers) and the kernel. In Windows, you're either stuck using the boring default desktop theme, or risking corruption or failure by installing a third-party shell.
Software - There are so many software choices when it comes to doing any specific task. You could search for a text editor on Freshmeat and get hundreds, if not thousands, of results. My article on 5 Linux text editors you should know about explains how there are so many options just for editing text on the command line, due to the open-source nature of Linux. Regular users and programmers contribute applications all the time. Sometimes it's a simple modification or feature enhancement of an already existing piece of software; sometimes it's a brand new application. In addition, software on Linux tends to be packed with more features and greater usability than software on Windows. Best of all, the vast majority of Linux software is free and open source. Not only are you getting the software for no charge, but you have the option to modify the source code and add more features if you understand the programming language. What more could you ask for?
Hardware - Linux is perfect for those old computers with
barely any processing power or memory you have sitting in
your garage or basement collecting dust. Install Linux and
use it as a firewall, a file server, or a backup server. There
are endless possibilities. Old 386 or 486 computers with
barely any RAM run Linux without any issue. Good luck
running Windows on these machines and actually finding a
use for them.
Disadvantages of Linux:
There are fewer peripheral hardware drivers (for printers, scanners, and other devices) in Linux as compared to Windows, though many new Linux hardware drivers are constantly being added. Closely related to this issue is the fact that not all Linux distros work with all sets of computer hardware, so a person may need to try more than one distro to find one which works well with his/her computer. When it comes to printers, some manufacturers offer better Linux support than others; for example, HP offers excellent printer support for Linux. [15]
There is a learning curve for people who are new to Linux. Despite this, most Linux distros, especially the major ones, are very intuitive and user-friendly. Also, the desktop environments in Linux are in many ways similar to Windows in their appearance. One thing which should be emphasized is that there is also a learning curve for Windows XP users who switch to Windows 7 or Windows 8. [16]
Advantages of Linux:
Freedom! Most Linux distros are free: users do not need to pay for a copy, but this is only one aspect of the freedom enjoyed by Linux users! In addition, Linux distros can be freely downloaded and legally installed on as many computers as you want, and freely (and legally) given to other people. Because most distros are open source, you have access to the source code and can customize Linux to be whatever you want it to be; you can even create your own distro if you like!
Malware that plagues Windows users is not a worry for Linux users.
Windows hard drives periodically need defragmenting because of the way files are stored in NTFS. On the other hand, because Linux is normally formatted in a different way, using ext4 among others, there is no need to defragment a Linux hard drive.
Software in the official repositories has been tested and packaged for Ubuntu, so a user can be confident that the software will be compatible with Ubuntu and will not include malware.
Both a cache and a buffer are types of temporary storage used in computer science, but they differ in how and why they are used. A cache transparently stores data so that future requests for that data can be served faster. A buffer, on the other hand, temporarily stores data while the data is in the process of moving from one place to another, e.g. from an input device to an output device.
There are two main types of caches, memory caching and disk
caching. Memory caching is when the cache is part of the main
memory, whereas disk caching is when the cache is part of
some other separate storage area, such as a hard disk. Caching
is the process of storing data in a cache so that the data can be
accessed faster in the future. The data that is stored within a
cache might be values that have been computed earlier or
duplicates of original values that are stored elsewhere. When
some data is requested, the cache is first checked to see
whether it contains that data. The data can be retrieved more quickly from the cache than from its original source.
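The check-the-cache-first logic just described can be sketched as a small memoizing wrapper in Python; the wrapper and its hit/miss counters are illustrative, not any particular library's API:

```python
def make_cached(f):
    cache = {}
    stats = {"hits": 0, "misses": 0}

    def wrapper(x):
        if x in cache:               # the cache is checked first
            stats["hits"] += 1
            return cache[x]
        stats["misses"] += 1         # fall back to the slower source...
        cache[x] = f(x)              # ...and store the result for next time
        return cache[x]

    return wrapper, stats

square, stats = make_cached(lambda x: x * x)
print(square(4), square(4), square(5))  # 16 16 25
print(stats)                            # {'hits': 1, 'misses': 2}
```

The second call to square(4) is served from the cache, which is exactly the "duplicates of original values stored elsewhere" case in the text.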
Web browsers and web proxy servers cache previously retrieved web pages and images. This is mainly done to reduce bandwidth usage, server load, and perceived lag. When a web page is loaded, the data on the page is cached; hence the next time the page is loaded it is quicker, as the data is already present and only the changes made to the page need to be loaded, which are in turn cached for next time. Google's cache link in its search results provides a way of retrieving information from websites that have recently gone down, and a way of retrieving data more quickly than by clicking the direct link.
The buffer, on the other hand, is found mainly in the RAM and
acts as an area where the CPU can store data temporarily. This
area is used mainly when the computer and the other devices
have different processing speeds. Typically, the data is stored in
a buffer as it is retrieved from an input device (such as a
mouse) or just before it is sent to an output device (such as
speakers). However, the buffer may also be used when moving
data between processes within a computer.
So, the computer writes the data into a buffer, from where the device can access the data at its own speed. This allows the computer to focus on other matters after it writes the data into the buffer, as opposed to constantly attending to the data until the device is done.
A buffer often adjusts timing by implementing a queue or FIFO algorithm in memory, writing data into the queue at one rate and reading it out at another rate.
Buffers are also often used with I/O to hardware, such as disk
drives, sending or receiving data to or from a network, or
playing sound on a speaker. Buffers are used for many
purposes, such as interconnecting two digital circuits operating
at different rates, holding data for use at a later time, allowing
timing corrections to be made on a data stream, collecting
binary data bits into groups that can then be operated on as a
unit, and delaying the transit time of a signal in order to allow
other operations to occur.
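A buffer that decouples a producer and a consumer running at different rates can be sketched with Python's standard queue.Queue (a bounded FIFO held in memory); the item values and sentinel convention are only illustrative:

```python
import queue
import threading

buf = queue.Queue(maxsize=8)   # bounded FIFO buffer in RAM
received = []

def consumer():
    while True:
        item = buf.get()       # blocks until data is available
        if item is None:       # sentinel: producer is done
            break
        received.append(item)  # consume at the device's own pace

t = threading.Thread(target=consumer)
t.start()

for i in range(5):             # the "fast" side writes into the buffer
    buf.put(i)
buf.put(None)                  # tell the consumer to stop
t.join()

print(received)  # [0, 1, 2, 3, 4]
```

Because the queue blocks when full or empty, neither side has to busy-wait on the other, which is the timing correction the text describes.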
Paging and Demand Paging:
Demand paging
Basic concept
Demand paging follows the rule that pages should be brought into memory only if the executing process demands them. This is often referred to as lazy evaluation, as only those pages demanded by the process are swapped from secondary storage to main memory. Contrast this with pure swapping, where all memory for a process is swapped from secondary storage to main memory during process startup.
Commonly, a page table implementation is used to achieve this. The page table maps logical memory to physical memory, using a bit to mark whether each page is valid or invalid. A valid page is one that currently resides in main memory; an invalid page is one that currently resides in secondary memory. When a process tries to access a page, the following steps are generally followed: the process attempts to access the page; if the page is valid (in memory), the access proceeds normally; if the page is invalid, a page-fault trap is raised, the operating system loads the page from secondary storage into a free frame, updates the page table to mark the page valid, and restarts the instruction that faulted.
Disadvantages of demand paging include:

Possible security risks, including vulnerability to timing attacks; see Percival 2005, Cache Missing for Fun and Profit (specifically the virtual memory attack in section 2).

Thrashing, which may occur due to repeated page faults.
When pure demand paging is used, page loading only occurs at the time of the data request,
and not before. In particular, when demand paging is used, a program usually begins
execution with none of its pages pre-loaded in RAM. Pages are copied from the executable
file into RAM the first time the executing code references them, usually in response to page
faults. As a consequence, pages of the executable file containing code not executed during a
particular run will never be loaded into memory.
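A toy model of pure demand paging, assuming a simple valid/invalid bit per page: a page is loaded (faulting) only on its first reference, never before:

```python
def run_demand_paging(references, num_pages):
    valid = [False] * num_pages   # page-table valid/invalid bits
    faults = 0
    for page in references:
        if not valid[page]:       # invalid: page not in main memory
            faults += 1           # page fault: "load" from secondary storage
            valid[page] = True
        # valid: the access proceeds directly
    return faults

# The first touch of each page faults; repeated touches do not.
refs = [0, 1, 0, 2, 1, 3, 0]
print(run_demand_paging(refs, num_pages=4))  # 4
```

Note the model assumes unlimited frames, so it only shows the "load on first reference" behavior; eviction is where page replacement algorithms come in.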
Page faults
The main functions of paging are performed when a program tries to access pages that are not
currently mapped to physical memory (RAM). This situation is known as a page fault. The
operating system must then take control and handle the page fault, in a manner invisible to
the program. Therefore, the operating system must:
If there is not enough available RAM when obtaining an empty page frame, a page
replacement algorithm is used to choose an existing page frame for eviction. If the evicted
page frame has been dynamically allocated during execution of a program, or if it is part of a
program's data segment and has been modified since it was read into RAM (in other words, if
it has become "dirty"), it must be written out to a location in secondary storage before being
freed. Otherwise, the contents of the page's frame in RAM are the same as the contents of the
page in its secondary storage, so it does not need to be written out to secondary storage. If, at
a later stage, a reference is made to that memory page, another page fault will occur and
another empty page frame must be obtained so that the contents of the page in secondary
storage can be again read into RAM.
Efficient paging systems must determine the page frame to empty by choosing one that is
least likely to be needed within a short time. There are various page replacement
algorithms that try to do this. Most operating systems use some approximation of the least recently used (LRU) page replacement algorithm (exact LRU cannot be implemented efficiently on current hardware) or a working-set-based algorithm.
To further increase responsiveness, paging systems may employ various strategies to predict
which pages will be needed soon. Such systems will attempt to load pages into main memory
preemptively, before a program references them.
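As a sketch, LRU page replacement can be simulated with an ordered dictionary; the reference string and frame count below are arbitrary:

```python
from collections import OrderedDict

def lru_faults(references, frames):
    """Count page faults under LRU replacement with a fixed number of frames."""
    memory = OrderedDict()        # keys = resident pages, in LRU order
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # page fault
            if len(memory) >= frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[page] = True
    return faults

refs = [1, 2, 3, 1, 4, 5]
print(lru_faults(refs, frames=3))  # 5
```

In the trace above, the hit on page 1 makes page 2 the least recently used, so 2 (not 1) is evicted when page 4 arrives; that recency tracking is what real systems can only approximate in hardware.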
Page Faults :
A page fault (sometimes #pf or pf) is a trap to the software raised by the hardware when a
program accesses a page that is mapped in the virtual address space, but not loaded in
physical memory. In the typical case the operating system tries to handle the page fault by
making the required page accessible at a location in physical memory or terminates the
program in the case of an illegal access. The hardware that detects a page fault is the memory
management unit in a processor. The exception handling software that handles the page fault
is generally part of the operating system.
Contrary to what the name "page fault" might suggest, page faults are not always errors and
are common and necessary to increase the amount of memory available to programs in any
operating system that utilizes virtual memory, including OpenVMS, Microsoft
Windows, Unix-like systems (including Mac OS X, Linux, *BSD, Solaris, AIX, and HP-
UX), and z/OS. Microsoft uses the term hard fault in more recent versions of the Resource
Monitor (e.g., Windows Vista) to mean "page fault".[1]
Types
Minor
If the page is loaded in memory at the time the fault is generated, but is not marked in
the memory management unit as being loaded in memory, then it is called a minor or soft
page fault. The page fault handler in the operating system merely needs to make the entry for
that page in the memory management unit point to the page in memory and indicate that the
page is loaded in memory; it does not need to read the page into memory. This could happen
if the memory is shared by different programs and the page is already brought into memory
for other programs.
The page could also have been removed from a process's Working Set, but not yet written to
disk or erased, such as in operating systems that use Secondary Page Caching. For example,
HP OpenVMS may remove a page that does not need to be written to disk (if it has remained
unchanged since it was last read from disk, for example) and place it on a Free Page List if
the working set is deemed too large. However, the page contents are not overwritten until the
page is assigned elsewhere, meaning it is still available if it is referenced by the original
process before being allocated. Since these faults do not involve disk latency, they are faster
and less expensive than major page faults.
Major
This is the mechanism used by an operating system to increase the amount of program
memory available on demand. The operating system delays loading parts of the program from
disk until the program attempts to use it and the page fault is generated. If the page is not
loaded in memory at the time of the fault, then it is called a major or hard page fault. The
page fault handler in the OS needs to find a free location: either a free page in memory, or a non-free page in memory. The latter might be used by another process, in which case the OS
needs to write out the data in that page (if it has not been written out since it was last
modified) and mark that page as not being loaded in memory in its process page table. Once
the space has been made available, the OS can read the data for the new page into memory,
add an entry to its location in the memory management unit, and indicate that the page is
loaded. Thus major faults are more expensive than minor faults and add disk latency to the
interrupted program's execution.
Invalid
If a page fault occurs for a reference to an address that is not part of the virtual address space,
meaning there cannot be a page in memory corresponding to it, then it is called an invalid
page fault. The page fault handler in the operating system will then generally pass
a segmentation fault to the offending process, indicating that the access was invalid; this
usually results in abnormal termination of the code that made the invalid reference. A null
pointer is usually represented as a pointer to address 0 in the address space; many operating
systems set up the memory management unit to indicate that the page that contains that
address is not in memory, and do not include that page in the virtual address space, so that
attempts to read or write the memory referenced by a null pointer get an invalid page fault.
Illegal accesses and invalid page faults can result in a segmentation fault or bus error,
resulting in program termination (a crash) or a core dump, depending on the operating
system environment. Often these problems are caused by software bugs, but hardware
memory errors, such as those caused by overclocking, may corrupt pointers and make correct
software fail.
Operating systems such as Windows and UNIX (and other UNIX-like systems) provide
differing mechanisms for reporting errors caused by page faults. Windows uses structured
exception handling to report page fault-based invalid accesses as access violation exceptions,
and UNIX (and UNIX-like) systems typically use signals, such as SIGSEGV, to report these
error conditions to programs.
If the program receiving the error does not handle it, the operating system performs a default
action, typically involving the termination of the running process that caused the error
condition, and notifying the user that the program has malfunctioned. Recent versions of
Windows often report such problems by simply stating something like "this program must
close" (an experienced user or programmer with access to a debugger can still retrieve
detailed information). Recent Windows versions also write a minidump (similar in principle
to a core dump) describing the state of the crashed process. UNIX and UNIX-like operating
systems report these conditions to the user with error messages such as "segmentation
violation", or "bus error", and may also produce a core dump.
Page faults, by their very nature, degrade the performance of a program or operating system
and in the degenerate case can cause thrashing. Optimization of programs and the operating
system that reduce the number of page faults improve the performance of the program or
even the entire system. The two primary focuses of the optimization effort are reducing
overall memory usage and improving memory locality. To reduce the page faults in the
system, programmers must make use of an appropriate page replacement algorithm that suits
the current requirements and maximizes the page hits. Many have been proposed, such as
implementing heuristic algorithms to reduce the incidence of page faults. Generally, making
more physical memory available also reduces page faults.
Major page faults on conventional (hard disk) computers can have a significant impact on
performance. An average hard disk has an average rotational latency of 3 ms, a seek time of 5 ms, and a transfer time of 0.05 ms/page. So the total time for paging is near 8 ms (8,000 μs). If the memory access time is 0.2 μs, then the page fault would make the operation about 40,000 times slower.
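The arithmetic can be checked directly; all times below are in microseconds, taken from the figures above:

```python
# Hard-disk timings from the text, converted to microseconds.
rotational_latency_us = 3_000   # 3 ms
seek_time_us = 5_000            # 5 ms
transfer_time_us = 50           # 0.05 ms per page
memory_access_us = 0.2          # 0.2 us RAM access

paging_time_us = rotational_latency_us + seek_time_us + transfer_time_us
slowdown = paging_time_us / memory_access_us
print(paging_time_us, round(slowdown))  # 8050 40250
```

The exact figure is 8,050 μs and a factor of about 40,250, which the text rounds to "near 8 ms" and "about 40,000 times slower".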
Fragmentation:
There are three different but related forms of fragmentation: external fragmentation, internal
fragmentation, and data fragmentation, which can be present in isolation or conjunction.
Fragmentation is often accepted in return for improvements in speed or simplicity.
Basic principle
When a computer program requests blocks of memory from the computer system, the blocks
are allocated in chunks. When the computer program is finished with a chunk, it can free the
chunk back to the system, making it available to later be allocated again to another or the
same program. The size and the amount of time a chunk is held by a program varies. During
its lifespan, a computer program can request and free many chunks of memory.
When a program is started, the free memory areas are long and contiguous. Over time and
with use, the long contiguous regions become fragmented into smaller and smaller
contiguous areas. Eventually, it may become impossible for the program to obtain large
contiguous chunks of memory.
Types of fragmentation
Internal fragmentation
Internal fragmentation is wasted space inside an allocated region: when memory is handed out in fixed-size blocks, an allocation smaller than a block leaves unused space inside it. Unlike other types of fragmentation, internal fragmentation is difficult to reclaim; usually the best way to remove it is with a design change. For example, in dynamic memory allocation, memory pools drastically cut internal fragmentation by spreading the space overhead over a larger number of objects.
External fragmentation
External fragmentation arises when free memory is separated into small blocks and is
interspersed by allocated memory. It is a weakness of certain storage allocation algorithms,
when they fail to order memory used by programs efficiently. The result is that, although free
storage is available, it is effectively unusable because it is divided into pieces that are too
small individually to satisfy the demands of the application. The term "external" refers to the
fact that the unusable storage is outside the allocated regions.
External fragmentation also occurs in file systems as many files of different sizes are created,
change size, and are deleted. The effect is even worse if a file which is divided into many small pieces is deleted, because this leaves similarly small regions of free space.
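External fragmentation can be illustrated numerically: the total free memory may exceed a request even though no single free hole can satisfy it. The block sizes below are made up for the example:

```python
def largest_hole(free_blocks):
    """Size of the biggest single free region."""
    return max(free_blocks) if free_blocks else 0

# Free memory is scattered into small holes between allocated regions.
free_blocks = [16, 32, 8, 24]     # sizes of free regions, in KB

request = 64
total_free = sum(free_blocks)
print(total_free)                  # 80: plenty of free memory in total...
print(largest_hole(free_blocks))   # 32: ...but the biggest hole is too small
print(largest_hole(free_blocks) >= request)  # False: the 64 KB request fails
```

This is the "storage is available but effectively unusable" situation the text describes; compaction or smarter allocation ordering are the usual remedies.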