Basic Elements:
o Processor
o I/O Modules
o System Bus
o Main Memory
Computer Registers:
o built into processor; fastest memory locations
o User Visible:
Register that can be referenced through machine language in the user mode of
execution
o Control and Status:
Program counter: contains address of next instruction
Instruction Register: instruction being executed
Accumulator (AC): temporary storage
Condition Code: result of the most recent arithmetic operation (e.g. zero, carry,
overflow)
Status Information: includes interrupt flags, execution mode
Memory Address Register: holds the address being read from or written to
Memory Buffer Register: holds the data being transferred to or from memory
Instruction Cycle:
o Fetch: get next instruction
o Execute
o Halt (only due to an unrecoverable error or an explicit halt instruction)
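The cycle above can be sketched as a loop on a hypothetical accumulator machine (the opcodes, memory layout, and sample program below are invented for illustration, not any real ISA):

```python
# Minimal sketch of the fetch-execute cycle on a toy accumulator machine.
def run(memory):
    pc, ac = 0, 0            # program counter, accumulator
    while True:
        ir = memory[pc]      # fetch: instruction register <- memory[PC]
        pc += 1              # PC now points at the next instruction
        op, arg = ir         # decode
        if op == "LOAD":     # execute
            ac = memory[arg]
        elif op == "ADD":
            ac += memory[arg]
        elif op == "STORE":
            memory[arg] = ac
        elif op == "HALT":   # explicit halt instruction ends the cycle
            return memory

# Instructions at addresses 0..3, data at 4..6 (all invented values).
mem = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", 0),
       4: 2, 5: 3, 6: 0}
result = run(mem)
print(result[6])  # 5
```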
4 Categories of Instructions:
o Processor – Memory: data transferred between memory and processor
o Processor -- I/O: data transferred between I/O device and processor
o Data Processing: arithmetic or logical operations on data
o Control: instruction that alters the sequence of execution
I/O Programs:
o Very slow compared to the processor, which sits idle while waiting
Interrupts:
o I/O devices let OS know when events occur
o Then OS preempts any running process (switches context away) to handle event
o Hardware suspends the running program when an interrupt occurs; the OS
interrupt handler runs, and normal execution resumes when it is done
Multiple Interrupts:
o An interrupt occurs while one is being processed
o Two Approaches:
Disable interrupts while one is being processed
Use a priority scheme
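The priority approach can be sketched with a min-heap of pending interrupts (device names and priority numbers are invented; lower number means higher priority). With the disable-interrupts approach the order would instead be strict arrival order:

```python
import heapq

# Pending interrupts queued by priority (invented devices/priorities).
pending = []  # min-heap of (priority, name); lower number = higher priority
for prio, name in [(3, "printer"), (1, "disk"), (2, "network")]:
    heapq.heappush(pending, (prio, name))

order = []
while pending:                       # always service highest priority first
    prio, name = heapq.heappop(pending)
    order.append(name)
print(order)  # ['disk', 'network', 'printer']
```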
Locality of Reference:
o Memory references by processor tend to cluster for both instructions and data
o Temporal Locality:
Locality in Time
Keeps most recently accessed data closer to processor
o Spatial Locality:
Locality in space
Moves nearby blocks of data to upper levels
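Both localities can be demonstrated with a tiny block-fetching LRU cache (block size, capacity, and the access pattern below are invented): fetching a whole block on a miss exploits spatial locality; LRU eviction keeps recently used blocks close (temporal locality).

```python
from collections import OrderedDict

BLOCK = 4                      # words per block (invented)
CAPACITY = 2                   # blocks the cache can hold (invented)
cache = OrderedDict()          # block number -> words, kept in LRU order
hits = misses = 0

def read(addr, memory):
    global hits, misses
    block = addr // BLOCK
    if block in cache:
        hits += 1
        cache.move_to_end(block)                # temporal: refresh recency
    else:
        misses += 1
        cache[block] = memory[block*BLOCK:(block+1)*BLOCK]  # spatial: fetch whole block
        if len(cache) > CAPACITY:
            cache.popitem(last=False)           # evict least recently used
    return cache[block][addr % BLOCK]

memory = list(range(32))
for addr in [0, 1, 2, 3, 0, 4, 5, 1]:  # sequential plus repeated accesses
    read(addr, memory)
print(hits, misses)  # 6 2
```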
Three Possible Techniques for I/O Operation:
o Programmed I/O
Program must repeatedly check (poll) the status of the I/O module; no
interrupts are used
o Interrupt-Driven I/O
More efficient than programmed I/O
But transfer rate is limited by speed of processor
Processor is tied up managing each I/O transfer and running I/O instructions
o Direct Memory Access (DMA)
Most efficient: data transfers bypass the processor and use the system
bus directly
Processor only involved at beginning and end of transfer
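A toy model of why programmed I/O wastes processor time (device timing invented): the processor does nothing but poll a status flag until the transfer completes, which is exactly what interrupt-driven I/O and DMA avoid.

```python
# Toy programmed-I/O model: the CPU busy-polls a device status flag.
class Device:
    def __init__(self, busy_cycles):
        self.remaining = busy_cycles     # cycles until data ready (invented)
    def status(self):                    # polled by the program
        self.remaining -= 1
        return self.remaining <= 0       # True once the transfer is done

dev = Device(busy_cycles=500)
polls = 0
while not dev.status():                  # processor tied up in this loop
    polls += 1
print(polls)  # 499 wasted status checks before the data was ready
```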
Symmetric Multiprocessors:
o Multiple processors that each contain their own cache, control unit, logic unit, and
registers
o All processors share main memory and I/O devices through a shared bus
o Communicate with each other through memory
o Advantages:
Performance: work can be done in parallel, yielding higher performance
Availability: the failure of a single processor does not halt machine
Incremental Growth: additional processor can be added to increase
performance
Scaling: vendors can offer a range of products based on the number of
processors
Cache Coherence:
o Multiple copies of data can exist in different caches at same time
o Inconsistent view of memory can result
o Write Policies:
Write Back: only cache is changed at first and main memory is updated later
Write Through: both cache and main memory updated with every write
operation
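The two write policies can be contrasted on a toy one-line cache (addresses and values invented): with write-back, main memory is stale until the dirty line is flushed; with write-through it is never stale.

```python
# Toy cache illustrating the two write policies.
class Cache:
    def __init__(self, memory, write_through):
        self.memory = memory
        self.write_through = write_through
        self.line = {}                     # addr -> (value, dirty bit)

    def write(self, addr, value):
        if self.write_through:
            self.line[addr] = (value, False)
            self.memory[addr] = value      # memory updated on every write
        else:                              # write back
            self.line[addr] = (value, True)  # only the cache changes for now

    def flush(self):                       # write-back: memory updated later
        for addr, (value, dirty) in self.line.items():
            if dirty:
                self.memory[addr] = value

mem_wt, mem_wb = {0: 9}, {0: 9}            # invented starting contents
wt, wb = Cache(mem_wt, True), Cache(mem_wb, False)
wt.write(0, 42); wb.write(0, 42)
print(mem_wt[0], mem_wb[0])  # 42 9  (write-back memory is stale...)
wb.flush()
print(mem_wb[0])             # 42    (...until the dirty line is flushed)
```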
Chapter 2
OS
o Program that controls the execution of application programs
o Acts as the interface between hardware and applications
Main Objectives of OS:
o Convenience
o Efficiency:
Manage hardware resources (memory, processor) efficiently
o Ability to Evolve:
To permit the effective development, testing, and introduction of new system
functions without interfering with service
Due to new hardware or services
Architecture Interfaces:
o ISA – Instruction Set Architecture; boundary between hardware and software
o ABI – Application Binary Interface
o API – Application Programming Interface
Evolution of OS:
o Serial Processing
No Operating System: programmers interacted directly with hardware
Users sign up for time slots of exclusive use of computer
Load program into memory space, overwriting old content
Inconvenient for user and wasteful of computer time
o Simple Batch Systems
User no longer has direct access to processor
Job is submitted to monitor
Overhead due to monitor:
Main memory
Processor time
Processor is often idle due to I/O processing
o Multiprogrammed Batch Systems
Same issue and positives as simple batch
But more efficient by maximizing processor use
Still uses JCL
o Time Sharing Systems
Multiuser OS sharing processor time
Minimize response time
Commands entered through terminal
Monitor:
o Controls sequence of events
o Resident Monitor software always in memory
o Monitor reads job and gives control
o Job returns control to monitor
o Job Control Language (JCL): special programming language used to provide
instructions to monitor
Monitor Needs Hardware:
o Memory protection: user program does not alter memory area with monitor
o Timer: to prevent a single job from monopolizing system
o Privileged instruction: certain machine level instructions that can only be executed
by monitor
o Interrupts: gives OS more flexibility in relinquishing control and regaining it
Modes of Operation:
o User Mode: mode in which application programs run; the system switches back
to it after an interrupt is handled
o Kernel Mode: full access to hardware; entered via a system call or interrupt
Measure Efficiency:
o Turnaround time = actual time to complete job
o Throughput = average number of jobs completed per time period T
o Processor utilization = percentage of time that processor is active (not idle)
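A worked example of the three metrics, using invented numbers (4 jobs over a T = 120 s observation period, processor idle for 30 s of it):

```python
T = 120.0                   # observation period in seconds (invented)
idle = 30.0                 # time the processor was idle (invented)
completions = [(0, 45), (10, 70), (20, 95), (30, 120)]  # (submit, finish)

turnarounds = [finish - submit for submit, finish in completions]
throughput = len(completions) / T        # jobs completed per second
utilization = (T - idle) / T * 100       # percent of time processor busy

print(turnarounds)           # [45, 60, 75, 90]
print(round(throughput, 3))  # 0.033
print(utilization)           # 75.0
```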
Multiprogramming:
o Multitasking: the processor works on multiple jobs at once, switching
between them (e.g. when one must wait for I/O)
Causes of Errors:
o Improper Synchronization: e.g. a program resumes before the data it was
waiting for has arrived in the buffer
o Failed Mutual Exclusion: more than one user/program attempts to use shared
resource
o Nondeterminate program operation: results depend on the order in which
program execution is interleaved by the processor when memory is shared
o Deadlocks: two or more programs hung up waiting for each other
Processes:
o Holds all info needed to run a program
o Program in execution
o Structure:
Chapter 7
Terms:
o Frame: fixed length block of main memory
o Page: Fixed-length block of data in secondary memory (e.g. on disk)
o Segment: Variable-length block of data in secondary memory
Memory Management:
o Relocation: processes may be swapped in and out of main memory, so a process
must be able to be relocated to a different area of memory to maximize
processor utilization
o Protection: a process should not be able to read or write memory locations
belonging to another process without permission; enforced by hardware
o Sharing: allow processes controlled access to shared areas of main memory
Base and Limit Registers
o Define logical address
o If (base + limit) > ADDRESS >= base
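The base/limit check can be sketched as follows (the register values are invented for illustration):

```python
# Sketch of base/limit relocation and protection.
BASE, LIMIT = 0x4000, 0x1000   # process loaded at 0x4000, 4 KiB long (invented)

def translate(logical):
    physical = BASE + logical  # relocate: logical address is relative to base
    if not (BASE <= physical < BASE + LIMIT):
        raise MemoryError("protection fault: address outside process bounds")
    return physical

print(hex(translate(0x0FFF)))  # 0x4fff, the last valid byte
```

Any logical address of LIMIT or more (e.g. `translate(0x1000)`) raises the protection fault instead of touching another process's memory.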
Measuring Inefficiency
o Internal fragmentation: wasted space due to block of data loaded being smaller than
partition
o External fragmentation: memory that is external to all partitions becomes
increasingly fragmented and utilization declines
Fixed Partitioning:
o Main memory is divided into static partitions at system generation time
o Internal Fragmentation
Overlaying:
o Programmer organizes program and data in such a way that various modules assigned
to same region of memory, with a main program responsible for switching the
modules in and out as needed
Equal Size Fixed Partition:
o Program may be too big for the partition; overlays must be used
o Program may be much smaller than the partition, causing internal
fragmentation
Unequal Size Fixed Partitions:
o One queue per partition with empty queue problem
o Single queue with large internal fragmentation problem
o Disadvantage:
partitions specified at system generation limits active processes
does not efficiently handle small jobs
Dynamic Partitioning
o Partitions are created dynamically
o Used by IBM’s OS/MVT
o Variable size and number of partitions
o No internal fragmentation
o External fragmentation
Compaction:
o Technique for overcoming external fragmentation
o OS shifts processes so they are contiguous
o Time-consuming and wastes CPU time
Placement Algorithms
o Best fit: chooses the block that is closest in size to the request; worst
performer overall, since it leaves many tiny fragments and forces frequent
compaction
o First fit: scans memory from the beginning and chooses the first available
block that is large enough; simplest, and usually the best and fastest
o Next fit: scans memory from the location of last placement (cursor) and chooses the
next available block that is large enough
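The three algorithms can be sketched over one shared free list (block addresses, sizes, and the cursor position are invented); each returns the start address of the chosen block, or None if nothing fits:

```python
# Free blocks of main memory as (start address, size) pairs (invented).
free = [(0, 8), (20, 12), (40, 22), (70, 18)]

def first_fit(blocks, need):
    for start, size in blocks:         # scan from the beginning
        if size >= need:
            return start

def best_fit(blocks, need):
    fits = [(size, start) for start, size in blocks if size >= need]
    return min(fits)[1] if fits else None   # closest in size to the request

def next_fit(blocks, need, cursor):
    n = len(blocks)
    for i in range(n):                 # scan from the last placement, wrapping
        start, size = blocks[(cursor + i) % n]
        if size >= need:
            return start

print(first_fit(free, 10))    # 20 -- first block big enough
print(best_fit(free, 10))     # 20 -- tightest fit (size 12)
print(next_fit(free, 10, 2))  # 40 -- scanning resumes at the cursor
```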
Buddy System
o Memory is allocated in blocks whose sizes are powers of 2; a block is split
into two equal "buddies" to satisfy a smaller request, and freed buddies are
coalesced back into larger blocks
Addresses:
o Logical: reference to memory location independent of current assignment of data to
memory
o Relative: Address is expressed as a location relative to some known point
o Physical/Absolute: actual location in main memory
Simple Paging
o Frames: available chunks of memory
o Pages: chunks of a process
o Main memory is divided into equal-size frames
o Process divided into equal size pages with the same length as frames
o Process is loaded by loading its pages into available frames
o Frames DO NOT HAVE TO BE CONTIGUOUS, because the page table records each
page's frame
o No external fragmentation
o Small amount of internal fragmentation
Page Table
o Maintained by the OS for each process
o Contains frame location for each page in process
o Used by processor to produce a physical address
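Address translation with a page table can be sketched as follows (page size and table contents are invented; note the frames are not contiguous):

```python
# Sketch of paged logical-to-physical address translation.
PAGE_SIZE = 1024                  # bytes per page/frame (invented)
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (invented)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)  # split the logical address
    frame = page_table[page]                   # page-table lookup
    return frame * PAGE_SIZE + offset          # physical address

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2054
```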
Simple Segmentation
o Each process divided into segments
o Process loaded by loading all segments into dynamic partitions
o Process can occupy more than one partition
o Partitions do not have to be contiguous
o Segments
Can vary in length
Has a max length
o Logical address:
Segment Number
Offset
o No internal fragmentation
o External fragmentation
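Segmented translation can be sketched the same way (segment table contents invented); because each segment has its own length, the offset is checked against that length:

```python
# Sketch of segmented address translation.
segment_table = {0: (1000, 400), 1: (5000, 1200)}  # seg -> (base, length)

def translate(segment, offset):
    base, length = segment_table[segment]
    if offset >= length:                  # offset checked against max length
        raise MemoryError("segment overflow")
    return base + offset

print(translate(1, 300))  # base 5000 + offset 300 -> 5300
```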
Segmentation vs Paging
o Segmentation visible to programmer
o Paging is invisible
o Principal inconvenience: the programmer must know the maximum segment size
limit
Chapter 8
Real Memory
o Main memory, actual RAM
Virtual Memory
o Main memory combined with swap space on disk
o Allows for higher degree of multiprogramming
o Degree of Multiprogramming: number of processes resident in memory
o Eliminates requirement to have all pages of a process in memory during execution
Not necessary to load whole process if:
o Process can be broken into many pieces that are not contiguous in main memory
o Memory references are logical addresses translated to physical at run time
Thrashing:
o State in which the system spends most of its time swapping process pieces
rather than executing instructions
Memory Management Unit (MMU)
o Translates logical address to physical address and sends them to memory
Page Table Issues:
o Mapping from virtual address to physical must be fast
o If virtual address space large then page table will be large
Most VM schemes store the page table in virtual memory rather than real
memory
Problems VM Page Tables:
o Page fault fetching the page table entry
o Fault for fetching data
Inverted Page Table:
o Indexes page table entries by frame number rather than by virtual page number
o Page number is mapped to a hash value using function
o Fixed proportion of real memory is used
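A sketch of the inverted structure (the hash function, frame count, and entries are invented): one slot per physical frame, with chaining on hash collisions.

```python
# Sketch of an inverted page table with hash chaining.
NFRAMES = 8                                  # one slot per real frame (invented)
buckets = [[] for _ in range(NFRAMES)]       # collision chains

def slot(pid, vpage):
    return (pid * 31 + vpage) % NFRAMES      # toy hash function (invented)

def insert(pid, vpage, frame):
    buckets[slot(pid, vpage)].append((pid, vpage, frame))

def lookup(pid, vpage):
    for p, v, frame in buckets[slot(pid, vpage)]:  # follow the chain
        if (p, v) == (pid, vpage):
            return frame

insert(1, 0, 3); insert(1, 8, 6)  # both hash to slot 7, so they chain
print(lookup(1, 8))               # 6
```

Table size stays proportional to real memory (NFRAMES) no matter how large the virtual address space is.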
Translation Lookaside Buffer (TLB)
o High-speed cache for page table entries
o Each TLB entry must include page number with page table entry
o Searches for matching page number in TLB; associative mapping
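The TLB's role can be sketched as a small cache in front of the page table (contents and access pattern invented): hits skip the page-table walk; misses walk the table and cache the entry.

```python
# Sketch of a TLB in front of the page table.
tlb = {}                          # page number -> frame, searched first
page_table = {0: 5, 1: 2, 2: 7}   # full mapping (invented)
stats = {"hit": 0, "miss": 0}

def frame_for(page):
    if page in tlb:               # associative search by page number
        stats["hit"] += 1
        return tlb[page]
    stats["miss"] += 1            # TLB miss: fall back to the page table
    tlb[page] = page_table[page]  # cache the entry for next time
    return tlb[page]

for page in [1, 1, 2, 1]:
    frame_for(page)
print(stats)  # {'hit': 2, 'miss': 2}
```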
Page Size:
o Smaller page size means less internal fragmentation
o But more pages per process means larger page tables
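The trade-off can be made concrete for one process (the process size and page sizes below are invented): only the last page is partially filled, so smaller pages waste less space but need many more page-table entries.

```python
import math

process_bytes = 70_000                       # process size (invented)
rows = []
for page_size in (512, 4096, 65536):
    pages = math.ceil(process_bytes / page_size)        # page-table entries
    internal_frag = pages * page_size - process_bytes   # waste in last page
    rows.append((page_size, pages, internal_frag))
    print(page_size, pages, internal_frag)
# 512 137 144
# 4096 18 3728
# 65536 2 61072
```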
Segmentation: Address Translation: