
Chapter 1

 Basic Elements:
o Processor
o I/O Modules
o System Bus
o Main Memory
 Computer Registers:
o built into processor; fastest memory locations
o User Visible:
 Register that can be referenced through machine language in the user mode of
execution
o Control and Status:
 Program counter: contains address of next instruction
 Instruction Register: instruction being executed
 Accumulator (AC): temporary storage
 Condition Code: result of the most recent arithmetic (e.g. zero, carry,
overflow)
 Status Information: includes interrupt flags, execution mode
 Memory Address Register: holds the address being read from or written to
 Memory Buffer Register: holds the data being transferred to or from memory

 Instruction Cycle:
o Fetch: get next instruction
o Execute
o Halt (only on an unrecoverable error or an explicit halt instruction)
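The fetch-execute cycle above can be sketched with a toy one-address machine (the opcodes, encoding, and program here are hypothetical, not any real ISA):

```python
# Toy fetch-execute loop: PC, IR, and AC as in the register list above.
# Instruction encoding (assumed): opcode * 100 + memory address.
LOAD, ADD, STORE, HALT = 1, 2, 3, 0   # hypothetical opcodes

def run(mem):
    pc, ac = 0, 0                     # program counter, accumulator
    while True:
        ir = mem[pc]                  # fetch into the instruction register
        pc += 1                       # PC now points at the next instruction
        op, addr = divmod(ir, 100)    # decode
        if op == LOAD:                # execute
            ac = mem[addr]
        elif op == ADD:
            ac += mem[addr]
        elif op == STORE:
            mem[addr] = ac
        elif op == HALT:
            return ac                 # explicit halt instruction

# Program: LOAD 4; ADD 5; STORE 6; HALT   with mem[4]=7, mem[5]=8
mem = [104, 205, 306, 0, 7, 8, 0]
print(run(mem))  # 15
```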
 4 Categories of Instructions:
o Processor – Memory: data transferred between memory and processor
o Processor -- I/O: data transferred between I/O device and processor
o Data Processing: arithmetic or logical operations on data
o Control: instruction that alters the sequence of execution
 I/O Programs:
o Very slow to load
 Interrupts:
o I/O devices let OS know when events occur
o Then OS preempts any running process (switches context away) to handle event
o Hardware and the OS interrupt handler suspend the running program's execution when an
interrupt occurs and resume it when handling is done

 Multiple Interrupts:
o An interrupt occurs while one is being processed
o Two Approaches:
 Disable interrupts while one is being processed
 Use a priority scheme
 Locality of Reference:
o Memory references by processor tend to cluster for both instructions and data
o Temporal Locality:
 Locality in Time
 Keeps most recently accessed data closer to processor
o Spatial Locality:
 Locality in space
 Moves nearby blocks of data to upper levels
 Three Possible Techniques for I/O Operation:
o Programmed I/O
 Program must repeatedly poll the status of the I/O module to check when the operation is done
o Interrupt-Driven I/O
 More efficient than programmed I/O
 But transfer rate is limited by speed of processor
 Processor is tied up managing each I/O transfer and running I/O instructions
o Direct Memory Access (DMA)
 Most efficient; data does not go through the processor for transfer, the DMA module uses the
system bus directly
 Processor only involved at beginning and end of transfer
 Symmetric Multiprocessors:
o Multiple processors that each contain their own cache, control unit, logic unit, and
registers
o Processors share main memory and I/O devices through a shared bus
o Communicate with each other through memory
o Advantages:
 Performance: work done in parallel is more powerful
 Availability: the failure of a single processor does not halt machine
 Incremental Growth: additional processor can be added to increase
performance
 Scaling
 Cache Coherence:
o Multiple copies of data can exist in different caches at same time
o Inconsistent view of memory can result
o Write Policies:
 Write Back: only cache is changed at first and main memory is updated later
 Write Through: both cache and main memory updated with every write
operation
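The two write policies can be sketched as follows (a minimal model, assuming a dict-backed main memory and fully associative cache with no eviction):

```python
# Write-through updates cache and main memory on every write;
# write-back marks the line dirty and defers the memory update.
class Cache:
    def __init__(self, memory, write_through):
        self.memory = memory            # backing main memory: addr -> value
        self.lines = {}                 # cached copies: addr -> value
        self.dirty = set()              # addrs modified but not written back
        self.write_through = write_through

    def write(self, addr, value):
        self.lines[addr] = value
        if self.write_through:
            self.memory[addr] = value   # memory updated with every write
        else:
            self.dirty.add(addr)        # memory updated later

    def flush(self):                    # write-back: propagate dirty lines
        for addr in self.dirty:
            self.memory[addr] = self.lines[addr]
        self.dirty.clear()

mem = {0: 0}
wb = Cache(mem, write_through=False)
wb.write(0, 42)
print(mem[0])   # 0 -- memory is stale until flush, the coherence hazard above
wb.flush()
print(mem[0])   # 42
```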

Chapter 2
 OS
o Program that controls the execution of application programs
o Acts as the interface between hardware and applications
 Main Objectives of OS:
o Convenience
o Efficiency:
 Manage hardware resources (memory, processor) efficiently
o Ability to Evolve:
 To permit the effective development, testing, and introduction of new system
functions without interfering with service
 Due to new hardware or services
 Architecture Interfaces:
o ISA – Instruction Set Architecture; boundary between hardware and software
o ABI – Application Binary Interface
o API – Application Programming Interface
 Evolution of OS:
o Serial Processing
 No Operating System: programmers interacted directly with hardware
 Users sign up for time slots of exclusive use of computer
 Load program into memory space, overwriting old content
 Inconvenient for user and wasteful of computer time
o Simple Batch Systems
 User no longer has direct access to processor
 Job is submitted to monitor
 Overhead due to monitor:
 Main memory
 Processor time
 Processor is often idle due to I/O processing
o Multiprogrammed Batch Systems
 Same issues and benefits as simple batch
 But more efficient by maximizing processor use
 Still uses JCL
o Time Sharing Systems
 Multiuser OS sharing processor time
 Minimize response time
 Commands entered through terminal
 Monitor:
o Controls sequence of events
o Resident Monitor software always in memory
o Monitor reads job and gives control
o Job returns control to monitor
o Job Control Language (JCL): special programming language used to provide
instructions to monitor
 Monitor Needs Hardware:
o Memory protection: user program does not alter memory area with monitor
o Timer: to prevent a single job from monopolizing system
o Privileged instruction: certain machine level instructions that can only be executed
by monitor
o Interrupts: gives OS more flexibility in relinquishing control and regaining it
 Modes of Operation:
o User Mode: application program switched back after interrupt is handled
o Kernel Mode: hardware access; initiated by system call
 Measure Efficiency:
o Turnaround time = actual time to complete job
o Throughput = average number of jobs completed per time period T
o Processor utilization = percentage of time that processor is active (not idle)
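A tiny worked example of these metrics (the workload figures are made up for illustration):

```python
# Hypothetical batch window: 4 jobs complete in T = 20 s,
# during which the processor is busy for 15 s.
T = 20.0
jobs_completed = 4
busy_time = 15.0

throughput = jobs_completed / T       # jobs completed per unit time
utilization = busy_time / T * 100     # percent of time processor is not idle
print(throughput, utilization)        # 0.2 75.0
```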
 Multiprogramming:
o Multitasking with multiple jobs at once
 Causes of Errors:
o Improper Synchronization: waiting for data to come in buffer
o Failed Mutual Exclusion: more than one user/program attempts to use shared
resource
o Nondeterminate program operation: program execution interleaved by processor
when memory shared
o Deadlocks: two or more programs hung up waiting for each other
 Processes:
o Hold all info to run a program
o Program in execution
o Structure:

 Address Protection with Base and Limit Registers

 Keys for Scheduling and Resource Management


o Efficiency
o Fairness
o Differential Responsiveness
 Monolithic System
o Block of code (layered kernel) that executes in a single address space as a single
process
o Ex: Linux
 Microkernel Architecture
o Assigns only essentials to kernel
 Address Spaces -> Simple Kernel
 Interprocess Communication (IPC) -> Flexibility
 Basic Scheduling -> Good for distributed OS
 Has overhead with IPC and context switch from kernel to user space
 Ex: Windows 7, 8, Mac OS X
 Multithreading
o Process is divided into threads that can run concurrently
o Threads are uninterruptible
 Symmetric Multiprocessing (SMP)
o Refers to hardware architecture but also OS that exploits it
o Schedules processes or threads across all processors
o OS provides tools and functions to exploit parallelism
 OS Design:
o Distributed Operating System
o Object-Oriented Design
 Fault Categories:
o Permanent:
 A fault that after it occurs, it is always present
 Persists until faulty component is repaired/replaced
o Temporary
 Not present all the time, depends on operating conditions
 Subcategories:
 Transient: fault only occurs once
 Intermittent: fault occurs at multiple, unpredictable times
 Availability:
o Fraction of time system is available to service user requests
o Mean time to failure (MTTF):
 Uptime
o Mean time to repair (MTTR):
 Downtime
 average time it takes to repair or replace a faulty element
o Availability = MTTF / (MTTF + MTTR)
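Plugging assumed figures into the formula above (990 hours mean uptime, 10 hours mean repair time):

```python
# Availability = MTTF / (MTTF + MTTR), with hypothetical values in hours.
mttf, mttr = 990.0, 10.0
availability = mttf / (mttf + mttr)
print(availability)  # 0.99 -- the system is up 99% of the time
```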
 Fault Tolerance:
o Spatial (physical) Redundancy
 Involves the use of multiple components that perform the same function
simultaneously so there is one available as backup
o Temporal Redundancy
 Involves repeating function or operation when error is detected
 Effective with temporary faults
o Information Redundancy
 Provides fault tolerance by replicating or coding data so bit errors can be
detected and corrected
 Ex: RAID Disks
 Linux:
o Unix variant for the IBM PC
o Created by Linus Torvalds
o Fully featured UNIX system that runs on several platforms
o Open-sourced code
o Highly modular and easily configured
 Modular Monolithic
o All of OS functionality in one large block of code that runs as a single process within
a single address space
o All functional components of kernel have access to all internal data structures and
routines
o Loadable Modules
 Relatively independent blocks
 A module is executed in kernel mode on behalf of the current process
 Dynamic Linking: module can be linked or unlinked to the kernel while the
kernel is already in memory and executing
 Stackable Modules:
 modules are arranged in a hierarchy
 Modules serve as libraries when they are referenced by modules higher
up in the hierarchy
 Act as clients when they reference modules further down

Chapter 7
 Terms:
o Frame: fixed length block of main memory
o Page: Fixed-length block of data in secondary memory (e.g. on disk)
o Segment: Variable-length block of data in secondary memory
 Memory Management:
o Relocation: processes may be swapped in and out of main memory, so a process must be able
to relocate to a different area of memory to maximize processor utilization
o Protection: processes should not be able to reference memory locations in a process
for reading or writing purposes without permission; hardware
o Sharing: allow processes controlled access to shared areas of main memory
 Base and Limit Registers
o Define logical address
o If (base + limit) > ADDRESS >= base
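The hardware check above can be sketched as a one-line predicate (addresses and register values here are arbitrary examples):

```python
# An address is valid iff  base <= address < base + limit.
def valid(addr, base, limit):
    return base <= addr < base + limit

# Hypothetical partition at 0x2000 of length 0x0400:
assert valid(0x2100, base=0x2000, limit=0x0400)      # inside the partition
assert not valid(0x2400, base=0x2000, limit=0x0400)  # one byte past the end
assert not valid(0x1FFF, base=0x2000, limit=0x0400)  # below the base
```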

 Measuring Inefficiency
o Internal fragmentation: wasted space due to block of data loaded being smaller than
partition
o External fragmentation: memory that is external to all partitions becomes
increasingly fragmented and utilization declines
 Fixed Partitioning:
o Main memory is divided into static partitions at system generation time
o Internal Fragmentation
 Overlaying:
o Programmer organizes program and data in such a way that various modules assigned
to same region of memory, with a main program responsible for switching the
modules in and out as needed
 Equal Size Fixed Partition:
o A program may be too big for a partition, so overlays are used
o A program may be too small for its partition, causing internal fragmentation
 Unequal Size Fixed Partitions:
o One queue per partition with empty queue problem
o Single queue with large internal fragmentation problem
o Disadvantages:
 The number of partitions specified at system generation limits the number of active processes
 Small jobs are not handled efficiently
 Dynamic Partitioning
o Partitions are created dynamically
o Used by IBM’s OS/MVT
o Variable size and number of partitions
o No internal fragmentation
o External fragmentation
 Compaction:
o Technique for overcoming external fragmentation
o OS shifts processes so they are contiguous
o Time-consuming and wastes CPU time
 Placement Algorithms
o Best fit: chooses the block that is closest in size to the request; worst performer, and
compaction is needed often because it leaves many small fragments
o First fit: scans memory from beginning and chooses first available block that is large
enough; best and fastest
o Next fit: scans memory from the location of last placement (cursor) and chooses the
next available block that is large enough
 Buddy System
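A minimal sketch of buddy allocation (free-list layout and addresses are assumptions for illustration): block sizes are powers of two, and a request is satisfied by repeatedly halving the smallest free block that fits, with the split-off halves (buddies) kept free.

```python
# free_lists maps block size -> list of free block start addresses.
def buddy_alloc(free_lists, request):
    size = 1
    while size < request:                 # round request up to a power of two
        size *= 2
    s = size
    while s not in free_lists or not free_lists[s]:
        s *= 2                            # find smallest free block >= size
        if s > max(free_lists):
            return None                   # no block large enough
    start = free_lists[s].pop()
    while s > size:                       # split, freeing the upper buddy
        s //= 2
        free_lists.setdefault(s, []).append(start + s)
    return start, size

free = {1024: [0]}                        # one free 1 KiB block at address 0
print(buddy_alloc(free, 100))             # (0, 128): 1024 split into 512+256+128+128
```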

 Addresses:
o Logical: reference to memory location independent of current assignment of data to
memory
o Relative: Address is expressed as a location relative to some known point
o Physical/Absolute: actual location in main memory
 Simple Paging
o Frames: available chunks of memory
o Pages: chunks of a process
o Main memory is divided into equal-size frames
o Process divided into equal size pages with the same length as frames
o Process loaded by pages being loaded into available frames;
o DOES NOT HAVE TO BE CONTIGUOUS due to page table
o No external fragmentation
o Small amount of internal fragmentation
 Page Table
o Maintained by OS for each process
o Contains frame location for each page in process
o Used by processor to produce a physical address
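The translation the processor performs can be sketched like this (the 1 KiB page size and table contents are assumptions):

```python
# Paged address translation: split the logical address into page number
# and offset, look up the frame, and recombine.
PAGE_SIZE = 1024   # assumed page size

def translate(logical_addr, page_table):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]            # frame location for this page
    return frame * PAGE_SIZE + offset   # physical address

page_table = {0: 5, 1: 2}               # page 0 -> frame 5, page 1 -> frame 2
print(translate(1030, page_table))      # page 1, offset 6 -> 2*1024 + 6 = 2054
```

Note the frames need not be contiguous or in order, which is exactly why paging avoids external fragmentation.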

 Simple Segmentation
o Each process divided into segments
o Process loaded by loading all segments into dynamic partitions
o Process can occupy more than one partition
o Partitions do not have to be contiguous
o Segments
 Can vary in length
 Has a max length
o Logical address:
 Segment Number
 Offset
o No internal fragmentation
o External fragmentation
 Segmentation vs Paging
o Segmentation visible to programmer
o Paging is invisible
o Principle of inconvenience: programmer must know max segment size limit

Chapter 8
 Real Memory
o Main memory, actual RAM
 Virtual Memory
o Main memory combined with swap space on disk
o Allows for higher degree of multiprogramming
o Degree of Multiprogramming: number of processes running in memory
o Eliminates requirement to have all pages of a process in memory during execution
 Not necessary to load whole process if:
o Process can be broken into many pieces that are not contiguous in main memory
o Memory references are logical addresses translated to physical at run time
 Thrashing:
o State in which the system spends most of its time swapping process pieces rather than
executing instructions
 Memory Management Unit (MMU)
o Translates logical address to physical address and sends them to memory
 Page Table Issues:
o Mapping from virtual address to physical must be fast
o If virtual address space large then page table will be large
 Most VM schemes store the page table in virtual memory rather than real memory
 Problems VM Page Tables:
o Page fault fetching the page table entry
o Fault for fetching data
 Inverted Page Table:
o Indexes page table entries by frame number rather than by virtual page number
o Page number is mapped to a hash value using a hash function
o Fixed proportion of real memory is used
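A sketch of the hashed lookup (table size, probing scheme, and entry layout are simplifying assumptions):

```python
# Inverted page table: one entry per frame, found by hashing
# (process id, page number); collisions resolved by linear probing here.
NUM_FRAMES = 8   # assumed: fixed proportion of real memory

def ipt_insert(table, pid, page, frame):
    i = hash((pid, page)) % NUM_FRAMES
    while table[i] is not None:
        i = (i + 1) % NUM_FRAMES          # probe on collision
    table[i] = (pid, page, frame)

def ipt_lookup(table, pid, page):
    i = hash((pid, page)) % NUM_FRAMES
    while table[i] is not None:
        entry_pid, entry_page, frame = table[i]
        if (entry_pid, entry_page) == (pid, page):
            return frame
        i = (i + 1) % NUM_FRAMES
    return None                           # not resident: page fault

table = [None] * NUM_FRAMES
ipt_insert(table, pid=1, page=40, frame=3)
print(ipt_lookup(table, 1, 40))   # 3
print(ipt_lookup(table, 1, 41))   # None
```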
 Translation Lookaside Buffer (TLB)
o High-speed cache for page table entries
o Each TLB entry must include page number with page table entry
o Searches for matching page number in TLB; associative mapping
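The lookup order can be sketched as follows (the associative search is simplified to a dict, and the page table contents are assumed):

```python
# TLB lookup: check the TLB first; on a miss, walk the page table
# and cache the entry for next time.
def lookup(page, tlb, page_table, stats):
    if page in tlb:
        stats["hits"] += 1
        return tlb[page]                # fast path
    stats["misses"] += 1
    frame = page_table[page]            # slower page-table walk
    tlb[page] = frame                   # cache the entry in the TLB
    return frame

tlb, stats = {}, {"hits": 0, "misses": 0}
page_table = {0: 3, 1: 7}
lookup(0, tlb, page_table, stats)       # miss: fills the TLB
lookup(0, tlb, page_table, stats)       # hit
print(stats)  # {'hits': 1, 'misses': 1}
```

By the principle of locality, repeated references to the same pages make the hit path the common case.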

 Page Size:
o Smaller page size, less internal frag
o But more pages per process = large page tables
 Segmentation: Address Translation:

 Combined Paging and Segmentation:


o User address space broken up into segments
o Each segment is broken up into fixed-size pages equal in length to a frame
 Design of OS Memory Management depends on:
o Whether or not to use VM
o Use paging or segmentation / both
o Algorithms employed for memory management
 Page fault:
o Occurs when a program accesses a page that is mapped in virtual address space but
not loaded to physical memory
o Minor fault: page loaded in memory at the time of the fault
o Major fault: page not loaded in memory at time fault generated
 Fetch Policy:
o Determines when a page should be brought into memory
o Demand paging
 Only brings pages into main memory when reference made to location on
page
 Lot of page faults at first
 As time goes on, the principle of locality suggests most references will hit pages already brought in
o Prepaging
 Pages other than the one demanded by a page fault are brought in
 If pages for a process stored contiguously, more efficient to bring in several
pages at one time
 Ineffective if extra pages are not referenced
 Placement Policy:
o Determines where in real memory a process piece will reside
o Important for segmentation
o NUMA (nonuniform memory access) time for accessing a particular physical location
varies with distance between processor and memory module
 Frame Locking:
o When locked, the page stored in frame cannot be replaced
o Lock bit associated with each frame
 Replacement Policy:
o Deals with selection of page in main memory to be replaced when new page needed
o Optimal Policy:
 Selects page for which the time to next reference is longest
 Impossible to implement
 Serves as a standard against which to judge real-world algorithms
o Least Recently Used:
 Selects page that has not been referenced for the longest time
 Principle of locality suggests that page is least likely to be referenced in the
future
 Difficult to implement
o First-in-First-out:
 Treats page frames like a circular buffer
 Simple to implement
 Page in memory longest is replaced
o Clock Policy:
 Requires an additional bit: use bit
 When page is referenced or loaded use bit set to 1
 Frames are circular buffer
 If use bit is 1, the frame is passed over and its use bit is changed to 0
 If use bit is 0, that page is chosen for replacement; the new page is inserted and the
pointer advances one position
 If the requested page is already in the buffer, the pointer stays in place and the page's
use bit is set to 1
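The victim-selection step above can be sketched directly (frame contents and hand position are example values):

```python
# Clock policy: frames form a circular buffer; the hand passes over
# use-bit-1 frames (clearing them) and replaces the first use-bit-0 frame.
def clock_replace(frames, use_bits, hand, new_page):
    while True:
        if use_bits[hand] == 0:
            frames[hand] = new_page          # victim found: replace it
            use_bits[hand] = 1               # newly loaded page gets use bit 1
            return (hand + 1) % len(frames)  # hand advances past it
        use_bits[hand] = 0                   # second chance: clear and move on
        hand = (hand + 1) % len(frames)

frames = ["A", "B", "C"]
use_bits = [1, 0, 1]
hand = clock_replace(frames, use_bits, 0, "D")
print(frames, hand)  # ['A', 'D', 'C'] 2 -- A got a second chance, B was evicted
```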
o Modified Clock:
 Modify bit (m) and Use bit (u)
 When page has been modified, cannot be replaced without updating secondary
memory
 First, look for u=0 and m=0 and DON’T change use bit
 Second, look for u=0 and m=1; change use bit when passed
 If both fail, repeat step 1 and step 2
o Two Handed Clock
 Reference (use) bit set to 1 when page referenced for read or write
 Fronthand sets ref bit to 0
 Backhand sweeps a little time after fronthand and if bit still 0, it is set to be
paged out
 Scanrate: rate which 2 hands scan through page list; pages per second
 Handspread: gap between fronthand and backhand
 Page Buffering:
o Replaced page is assigned to a list:
 Free Page List: list of page frames available for reading in pages (when use bit
changed to 0, added to tail of list)
 Modified Page List: pages are written out in clusters
 VMSTAT
 No definitive best policy for minimizing page faults
 Resident Set Management:
o OS must decide how many pages to bring into main memory
o Size:
 Fixed Allocation:
 Gives process a fixed number of frames in MM
 Only replaces from what frames are allocated to process
 Global replacement not possible
 Variable Allocation:
 Allows page frames allocated to vary over lifetime of process
 Global replacement causes the size of processes to vary over time
o Replacement Scope
 Local
 Chooses only among resident pages of process that generated the page
fault
 Global
 Considers all unlocked pages in main memory
 Cleaning Policy:
o When a modified page should be written out to secondary memory
o Demand Cleaning
 Page written out to secondary memory only when it is selected for
replacement
 Minimize page writes; but may be slow in the transfer process
o Precleaning
 Writes modified pages before their frames are even needed (but not removed
from main memory/ buffering)
 Writes out pages to secondary memory in batches
 Con: pages may be modified again before they are finally replaced, wasting the earlier write
o Page Buffering:
 Replaced page is assigned to a list:
 Free Page List: list of page frames available for reading in pages
(when use bit changed to 0, added to tail of list)
 Modified Page List: pages are written out in clusters
 Better Cleaning Policy
 Load Control
o Determines number of processes that will be resident in main memory
 Process Suspension
o If the degree of multiprogramming is to be reduced, one or more processes must be swapped out
o Six Possibilities:
 Lowest-priority process
 Faulting process
 Last process activated
 Process with smallest resident set
 Largest process
 Process with the largest remaining execution window
 Unix
o Early Unix had no virtual memory
o Two Memory Management Schemes:
 Paging system:
 Allocates page frames in main memory to processes
 Virtual memory for user processes and disk I/O
 Kernel memory allocator:
 Allocates memory for kernel
 Dynamic memory allocation
 Linux
o Similar to Unix, with a three-level page table structure
 Page directory: entry points to page middle directory; each active process has
one
 Page middle directory: can span multiple pages; entry points to page table
 Page table: can span multiple pages; entry refers to a virtual page of process
o Page Replacement
 Prior to 2.6.28: based on clock algorithm / form of LRU policy
 Use bit is replaced with an 8-bit age variable incremented with each
access
 Periodically sweeps through global page pool and decrements age bits
 Sweeping uses lots of processor time
 After 2.6.28: Split LRU Algorithm
 Makes use of two flags added to each PTE: active and referenced
 Entire physical memory is divided into different zones based on
address
o Kernel Memory Allocation:
 Buddy algorithm used so memory for kernel can be allocated/deallocated in
units of one or more pages
