Thrashing is a degenerate case that occurs when there is insufficient memory at one level in
the memory hierarchy to properly contain the working set required by the upper levels of
the memory hierarchy. This can result in the overall performance of the system dropping to
the speed of a lower level in the memory hierarchy. Therefore, thrashing can quickly reduce
the performance of the system to the speed of main memory or, worse yet, the speed of the
disk drive.

There are two primary causes of thrashing: (1) insufficient memory at a given level in the
memory hierarchy, and (2) the program does not exhibit locality of reference. If there is
insufficient memory to hold a working set of pages or cache lines, then the memory system
is constantly replacing one block (cache line or page) with another. As a result, the system
winds up operating at the speed of the slower memory in the hierarchy. A common example
occurs with virtual memory. A user may have several applications running at the same
time, and the sum of these programs' working sets is greater than the physical memory
available to them. As a result, each time the operating system switches between
applications it must copy each application's data, and possibly its code, between disk and
memory. A context switch between programs is normally far faster than retrieving data
from disk, so thrashing slows every switch down to the speed of swapping the applications
to and from disk, a tremendous slowdown.
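
To get a feel for the magnitude of that slowdown, it helps to work the arithmetic of effective access time. The sketch below is purely illustrative; the 60 ns RAM and 8 ms disk figures are assumptions, not measurements from any particular system:

#include <stdio.h>

/* Effective access time (EAT) with paging:
 *   EAT = (1 - p) * t_mem + p * t_disk
 * where p is the fraction of accesses that fault to disk. */
int main(void) {
    double t_mem  = 60e-9;  /* assumed RAM access time: 60 ns  */
    double t_disk = 8e-3;   /* assumed disk service time: 8 ms */
    double rates[] = { 0.0, 1e-6, 1e-3 };

    for (int i = 0; i < 3; i++) {
        double p   = rates[i];
        double eat = (1.0 - p) * t_mem + p * t_disk;
        printf("fault rate %g -> %11.0f ns per access (%.0fx slowdown)\n",
               p, eat * 1e9, eat / t_mem);
    }
    return 0;
}

With these assumed numbers, even a one-in-a-thousand fault rate makes memory behave over a hundred times slower, which is why thrashing is so catastrophic.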

If the program does not exhibit locality of reference and the lower memory subsystems are
not fully associative, then thrashing can occur even if there is free memory at the current
level in the memory hierarchy. For example, suppose an eight-kilobyte L1 cache uses a
direct-mapped organization with 16-byte cache lines (i.e., 512 cache lines). If a program
references data objects 8K apart on each access, then the system will have to replace the
same line in the cache over and over again with each access. This occurs even though the
other 511 cache lines are currently unused.
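
The arithmetic behind this conflict is easy to check. The sketch below computes the direct-mapped line index for two addresses exactly 8K apart, using the 8 KB/16-byte-line geometry from the example above (the base address itself is an arbitrary choice):

#include <stdio.h>
#include <stdint.h>

#define CACHE_SIZE 8192                      /* 8 KB direct-mapped cache */
#define LINE_SIZE  16                        /* 16-byte lines            */
#define NUM_LINES  (CACHE_SIZE / LINE_SIZE)  /* = 512 lines              */

/* In a direct-mapped cache the line index is simply
 * (address / line size) mod (number of lines). */
static unsigned line_index(uintptr_t addr) {
    return (unsigned)((addr / LINE_SIZE) % NUM_LINES);
}

int main(void) {
    uintptr_t a = 0x10000;          /* arbitrary base address            */
    uintptr_t b = a + CACHE_SIZE;   /* a second object exactly 8K higher */
    printf("a -> line %u, b -> line %u\n", line_index(a), line_index(b));
    return 0;
}

Both addresses land on the same line, so alternating accesses evict each other's data while the remaining 511 lines stay idle.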

If insufficient memory is the cause of thrashing, an easy solution is to add more memory, if
that is possible (it is rather hard to add more L1 cache when the cache is on the same chip
as the processor). Another alternative is to run fewer processes concurrently or to modify the
program so that it references less memory over a given time period. If lack of locality of
reference is causing the problem, then you should restructure your program and its data
structures to make references local to one another.
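
A classic instance of such restructuring is loop interchange. C stores two-dimensional arrays in row-major order, so the column-first loop below strides through memory and touches a different cache line (and, for large enough arrays, a different page) on nearly every access, while the row-first version walks consecutive bytes. This is a generic illustration, not code from the text above:

#include <stdio.h>
#include <stddef.h>

#define ROWS 1024
#define COLS 1024
static double m[ROWS][COLS];  /* 8 MB: larger than a typical L1/L2 cache */

/* Poor locality: the inner loop strides COLS * sizeof(double) bytes
 * per access, touching a different cache line almost every time. */
static double sum_by_columns(void) {
    double s = 0.0;
    for (size_t j = 0; j < COLS; j++)
        for (size_t i = 0; i < ROWS; i++)
            s += m[i][j];
    return s;
}

/* Good locality: the inner loop walks consecutive bytes, so each
 * cache line (and page) is used fully before it is evicted. */
static double sum_by_rows(void) {
    double s = 0.0;
    for (size_t i = 0; i < ROWS; i++)
        for (size_t j = 0; j < COLS; j++)
            s += m[i][j];
    return s;
}

int main(void) {
    /* Same result, very different memory behavior. */
    printf("%f %f\n", sum_by_columns(), sum_by_rows());
    return 0;
}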

1
Actually, virtual memory is really only supported by the 80386 and later processors. We'll
ignore this issue here since most people have an 80386 or later processor.

2
Strictly speaking, you actually get a 36-bit address space on the Pentium Pro and later
processors, but Windows and Linux limit you to 32 bits, so we'll use that limitation here.

Thrashing occurs when a processor spends so much of its time transferring pages between virtual memory (swap file on
disk) and physical memory (RAM) that it has little time to do anything else. It is often a sign that a user is trying to do more
things at the same time than the physical memory of his/her computer can realistically sustain.

mohiqbal.staff.gunadarma.ac.id/Downloads/.../08.Memori+· .ppt
 
 
Swap file
A swap file (or page file) is a file created by the operating system on the hard disk as a temporary
holding area for information from RAM. A swap file is a great help on systems that have very little RAM.


 
Paging
From Wikipedia, the free encyclopedia

This article is about computer virtual memory. For the wireless communication devices, see "Pager". Bank
switching is also called paging. Page flipping is also called paging.

In computer operating systems, paging is one of the memory-management schemes by which a computer can
store and retrieve data from secondary storage for use in main memory. In the paging memory-management
scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. The
main advantage of paging is that it allows the physical address space of a process to be noncontiguous. Before
the time paging was used, systems had to fit whole programs into storage contiguously, which caused
various storage and fragmentation problems.[1]

Paging is an important part of virtual memory implementation in most contemporary general-purpose operating
systems, allowing them to use disk storage for data that does not fit into physical random-access
memory (RAM).


Overview

The main functions of paging are performed when a program tries to access pages that are not currently
mapped to physical memory (RAM). This situation is known as a page fault. The operating system must then
take control and handle the page fault, in a manner invisible to the program. Therefore, the operating system
must:

1. Determine the location of the data in auxiliary storage.


2. Obtain an empty page frame in RAM to use as a container for the data.

3. Load the requested data into the available page frame.


4. Update the page table to show the new data.
5. Return control to the program, transparently retrying the instruction that caused the page fault.
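
The five steps above can be mirrored in a toy simulation. The sketch below is purely illustrative: the sizes, names, and naive round-robin frame choice are invented for the example and correspond to no real kernel's API:

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  4
#define NUM_PAGES  8   /* virtual pages, backed by the toy "disk" */
#define NUM_FRAMES 4   /* physical frames in the toy "RAM"        */

static char disk[NUM_PAGES][PAGE_SIZE];  /* auxiliary storage */
static char ram[NUM_FRAMES][PAGE_SIZE];  /* physical memory   */
static int  page_table[NUM_PAGES];       /* page -> frame, -1 = not present */
static int  frame_owner[NUM_FRAMES];     /* frame -> page, -1 = free        */
static int  hand = 0;                    /* naive round-robin frame choice  */

static void handle_page_fault(int page) {
    int frame = hand++ % NUM_FRAMES;           /* 2. obtain a page frame      */
    if (frame_owner[frame] >= 0) {             /*    evict the previous owner */
        memcpy(disk[frame_owner[frame]], ram[frame], PAGE_SIZE);
        page_table[frame_owner[frame]] = -1;
    }
    memcpy(ram[frame], disk[page], PAGE_SIZE); /* 1.+3. locate and load data  */
    page_table[page] = frame;                  /* 4. update the page table    */
    frame_owner[frame] = page;
    /* 5. the caller simply retries the access */
}

static char read_byte(int page, int offset) {
    if (page_table[page] < 0)   /* page not mapped: page fault */
        handle_page_fault(page);
    return ram[page_table[page]][offset];
}

int main(void) {
    for (int p = 0; p < NUM_PAGES; p++) {
        memset(disk[p], 'a' + p, PAGE_SIZE);
        page_table[p] = -1;
    }
    for (int f = 0; f < NUM_FRAMES; f++)
        frame_owner[f] = -1;
    printf("%c %c %c\n", read_byte(0, 1), read_byte(5, 2), read_byte(0, 0));
    return 0;
}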

Because RAM is faster than auxiliary storage, paging is avoided until there is not enough RAM to store all the
data needed. When this occurs, a page in RAM is moved to auxiliary storage, freeing up space in RAM for use.
Thereafter, whenever the page in secondary storage is needed, a page in RAM is saved to auxiliary storage so
that the requested page can then be loaded into the space left behind by the old page. Efficient paging systems
must determine the page to swap by choosing one that is least likely to be needed within a short time. There
are various page replacement algorithms that try to do this.

Most operating systems use some approximation of the least recently used (LRU) page replacement algorithm
(exact LRU is too expensive to implement on current hardware) or a working-set-based algorithm.
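
One widely taught LRU approximation is the clock ("second chance") algorithm: each frame gets a reference bit that is set on every access, and on a fault a sweeping hand clears set bits, evicting the first frame whose bit is already clear. A self-contained sketch, illustrative rather than any OS's actual code:

#include <stdio.h>

#define NUM_FRAMES 4

static int page_in_frame[NUM_FRAMES] = { -1, -1, -1, -1 };
static int ref_bit[NUM_FRAMES];
static int hand = 0;

static int choose_victim(void) {
    for (;;) {
        if (!ref_bit[hand])
            return hand;                  /* not recently used: evict */
        ref_bit[hand] = 0;                /* give it a second chance  */
        hand = (hand + 1) % NUM_FRAMES;
    }
}

static void access_page(int page) {
    for (int f = 0; f < NUM_FRAMES; f++)
        if (page_in_frame[f] == page) {   /* hit: just mark the frame */
            ref_bit[f] = 1;
            return;
        }
    int victim = choose_victim();         /* miss: pick a frame       */
    printf("fault on page %d: reusing frame %d (held page %d)\n",
           page, victim, page_in_frame[victim]);
    page_in_frame[victim] = page;
    ref_bit[victim] = 1;
    hand = (victim + 1) % NUM_FRAMES;
}

int main(void) {
    int trace[] = { 1, 2, 3, 4, 1, 5, 2, 6 };
    for (int i = 0; i < 8; i++)
        access_page(trace[i]);
    return 0;
}

In this trace, page 1's reference bit saves it from eviction when page 5 faults in, which is exactly the "recently used pages survive" behavior the approximation is after.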

If the page chosen for replacement has been modified since it was loaded (i.e., if it has become dirty), it must
be written back to auxiliary storage; an unmodified page can simply be discarded.
To further increase responsiveness, paging systems may employ various strategies to predict which pages will
be needed soon, so that they can be loaded preemptively.

Demand paging


Main article: Demand paging

When demand paging is used, no preemptive loading takes place. Paging only occurs at the time of the data
request, and not before. In particular, when a demand pager is used, a program usually begins execution with
none of its pages pre-loaded in RAM. Pages are copied from the executable file into RAM the first time the
executing code references them, usually in response to page faults. As such, much of the executable file might
never be loaded into memory if pages of the program are never executed during that run.
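
On Linux you can watch demand paging happen directly: map a file with mmap() and compare page residency, as reported by mincore(), before and after the first reference. The file path below is an arbitrary choice (any small readable file works), and the "before" value may already be 1 if the file happens to be in the page cache:

/* Linux-specific sketch: watch a demand-paged mapping become resident. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) return 1;
    long pagesz = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, (size_t)pagesz, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) return 1;

    unsigned char vec;
    mincore(p, (size_t)pagesz, &vec);
    printf("before first reference: resident = %d\n", vec & 1);

    volatile char c = p[0];   /* first reference: triggers the page fault */
    (void)c;

    mincore(p, (size_t)pagesz, &vec);
    printf("after first reference:  resident = %d\n", vec & 1);

    munmap(p, (size_t)pagesz);
    close(fd);
    return 0;
}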

Anticipatory paging
This technique preloads a process's non-resident pages that are likely to be referenced in the near future
(taking advantage of locality of reference). Such strategies attempt to reduce the number of page faults a
process experiences.

Free page queue
The free page queue is a list of page frames that are available for assignment after a page fault. Some
operating systems[NB 1] support page reclamation; if a page fault occurs for a page that had been stolen and the
page frame was never reassigned, then the operating system avoids the necessity of reading the page back in
by assigning the unmodified page frame.

Page stealing
Some operating systems periodically look for pages that have not been recently referenced and add them to
the free page queue, after paging them out if they have been modified.

Swap prefetch

A few operating systems use anticipatory paging, also called swap prefetch. These operating systems
periodically attempt to guess which pages will soon be needed, and start loading them into RAM. There are
various heuristics in use, such as "if a program references one virtual address which causes a page fault,
perhaps the next few pages' worth of virtual address space will soon be used" and "if one big program just
finished execution, leaving lots of free RAM, perhaps the user will return to using some of the programs that
were recently paged out".

Pre-cleaning
Unix operating systems periodically use sync to pre-clean all dirty pages, that is, to save all modified pages to
hard disk. Windows operating systems do the same thing via "modified page writer" threads.
Pre-cleaning makes starting a new program or opening a new data file much faster. The hard drive can
immediately seek to that file and consecutively read the whole file into pre-cleaned page frames. Without pre-
cleaning, the hard drive is forced to seek back and forth between writing a dirty page frame to disk, and then
reading the next page of the file into that frame.

Thrashing

Main article: Thrashing (computer science)

Most programs reach a steady state in their demand for memory locality both in terms of instructions fetched
and data being accessed. This steady state is usually much less than the total memory required by the
program. This steady state is sometimes referred to as the working set: the set of memory pages that are most
frequently accessed.

Virtual memory systems work most efficiently when the ratio of the working set to the total number of pages
that can be stored in RAM is low enough that the time spent resolving page faults is not a dominant factor in
the workload's performance. A program that works with huge data structures will sometimes require a working
set that is too large to be efficiently managed by the paging system, resulting in constant page faults that
drastically slow down the system. This condition is referred to as thrashing: pages are swapped out and then
accessed again, causing frequent faults.

An interesting characteristic of thrashing is that, as the working set grows, the number of faults increases very
little until a critical point, at which faults go up dramatically and the majority of the system's processing
power is spent handling them.
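
One way to observe this behavior from user space on a POSIX system is to count the process's page faults with getrusage() while touching a large buffer. The 256 MB size below is an arbitrary illustration; on a machine with ample free RAM you will see only minor faults, and major faults appear once the system is under real memory pressure:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
    size_t n = 256UL * 1024 * 1024;   /* 256 MB, an arbitrary size */
    char *buf = malloc(n);
    if (!buf) return 1;

    struct rusage before, after;
    getrusage(RUSAGE_SELF, &before);
    for (size_t i = 0; i < n; i += 4096)   /* touch one byte per page */
        buf[i] = 1;
    getrusage(RUSAGE_SELF, &after);

    printf("minor faults: %ld\n", after.ru_minflt - before.ru_minflt);
    printf("major faults: %ld\n", after.ru_majflt - before.ru_majflt);
    free(buf);
    return 0;
}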

An extreme example of this sort of situation occurred on the IBM System/360 Model 67 and IBM
System/370 series mainframe computers, on which a particular instruction could consist of an execute
instruction that crosses a page boundary, pointing to a move instruction that itself also crosses a page
boundary, moving data from a source that crosses a page boundary to a target that also crosses a page
boundary. The total number of pages used by this one instruction is thus eight, and all eight pages must be
present in memory at the same time. If the operating system allocates fewer than eight pages of actual
memory in this example, then when it attempts to swap out some part of the instruction or data to bring in
the remainder, the instruction will page fault again, and it will thrash on every attempt to restart the failing
instruction.

To decrease excessive paging, and thus possibly resolve the thrashing problem, a user can do any of the
following:

• Increase the amount of RAM in the computer (generally the best long-term solution).
• Decrease the number of programs being concurrently run on the computer.
The term thrashing is also used in contexts other than virtual memory systems, for example to
describe cache issues in computing or silly window syndrome in networking.

Terminology

Historically, paging sometimes referred to a memory allocation scheme that used fixed-length pages as
opposed to variable-length segments, without any implicit suggestion that virtual memory techniques were
employed or that those pages were transferred to disk.[2][3] Such usage is rare today.

Some modern systems use the term swapping along with paging. Historically, swapping referred to moving
a whole program at a time to or from secondary storage, in a scheme known as roll-in/roll-out.[4][5] In the
1960s, after the concept of virtual memory was introduced in two variants, using either segments or pages,
the term swapping was applied to moving, respectively, either segments or pages between disk and memory.
Today, with virtual memory mostly based on pages rather than segments, swapping has become a fairly close
synonym of paging, although with one difference.

In many popular systems, there is a concept known as the page cache: the same single mechanism is used
for both virtual memory and disk caching. A page may then be transferred to or from any ordinary disk file, not
necessarily a dedicated space. Page in means transferring a page from disk to RAM; page out means
transferring a page from RAM to disk. Swap in and swap out refer only to transferring pages between RAM
and dedicated swap space or a swap file, not any other place on disk.

On Windows NT based systems, dedicated swap space is known as a page file and paging/swapping are often
used interchangeably.

 

 
Virtual memory
From Wikipedia, the free encyclopedia

This article is about the computational technique. For the TBN game show, see Virtual Memory (game show).


The program thinks it has a large range of contiguous addresses, but in reality the parts it is currently using are scattered
around RAM, and the inactive parts are saved in a disk file.

In computing, virtual memory is a memory management technique developed for multitasking kernels. This
technique virtualizes a computer architecture's various hardware memory devices (such as RAM modules
and disk storage drives), allowing a program to be designed as though:

• there is only one hardware memory device and this "virtual" device acts like a RAM module.
• the program has, by default, sole access to this virtual RAM module as the basis for a contiguous working
memory (an address space).

Systems that employ virtual memory:

• use hardware memory more efficiently than systems without virtual memory.
• make the programming of applications easier by:
  • hiding fragmentation.
  • delegating to the kernel the burden of managing the memory hierarchy; there is no need for the
    program to handle overlays explicitly.
  • obviating the need to relocate program code or to access memory with relative addressing.

Memory virtualization is a generalization of the concept of virtual memory.

Virtual memory is an integral part of a computer architecture; all implementations (excluding emulators and
virtual machines) require hardware support, typically in the form of a memory management unit built into
the CPU. Consequently, older operating systems (such as DOS[1] of the 1980s or those for the mainframes of
the 1960s) generally have no virtual memory functionality, though notable exceptions include the Atlas,
B5000, IBM System/360 Model 67, and IBM System/370 mainframe systems of the early 1970s, and the
Apple Lisa project circa 1980.

Embedded systems and other special-purpose computer systems that require very fast and/or very consistent
response times may opt not to use virtual memory due to decreased determinism; virtual memory systems
trigger unpredictable interrupts that may produce unwanted "jitter" during I/O operations. This is because
embedded hardware costs are often kept low by implementing all such operations with software (a technique
called bit-banging) rather than with dedicated hardware. In any case, embedded systems usually have little use
for complicated memory hierarchies.
en.wikipedia.org/wiki/Virtual_memory
Swap file
A swap file allows an operating system to use hard disk space to simulate extra
memory. When the system runs low on memory, it swaps a section of RAM that an idle
program is using onto the hard disk to free up memory for other programs. Then when
you go back to the swapped out program, it changes places with another program in
RAM. This causes a large amount of hard disk reading and writing that slows down your
computer considerably.

This combination of RAM and swap files is known as virtual memory. The use of virtual
memory allows your computer to run more programs than it could run in RAM alone.

The way swap files are implemented depends on the operating system. Some operating
systems, like Windows, can be configured to use temporary swap files that they create
when necessary. The disk space is then released when it is no longer needed. Other
operating systems, like Linux and Unix, set aside a permanent swap space that
reserves a certain portion of your hard disk.

Permanent swap files take a contiguous section of your hard disk, while some temporary
swap files can use fragmented hard disk space. This means that using a permanent
swap file will usually be faster than using a temporary one. Temporary swap files are
more useful if you are low on disk space because they don't permanently reserve part of
your hard disk.
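
On Linux, current swap usage can be read from /proc/meminfo; the SwapTotal and SwapFree fields used below are documented in proc(5). A small sketch:

#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) return 1;

    char line[256];
    long total = -1, freekb = -1;
    while (fgets(line, sizeof line, f)) {
        /* sscanf leaves total/freekb untouched on non-matching lines */
        sscanf(line, "SwapTotal: %ld kB", &total);
        sscanf(line, "SwapFree: %ld kB", &freekb);
    }
    fclose(f);

    if (total >= 0 && freekb >= 0)
        printf("swap used: %ld kB of %ld kB\n", total - freekb, total);
    return 0;
}

A reading that grows steadily while the disk is busy is one practical sign of the paging pressure and thrashing described above.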
