What is an RTOS?
An RTOS (Real-Time Operating System) is an operating system with the features necessary to support a real-time system.

What is a Real-Time System?
A system where correctness depends not only on the logical result of the computation, but also on the time at which the result is delivered. A system that responds in a timely, predictable way to unpredictable external stimuli.
GPOS (General-Purpose Operating System) vs RTOS

Time: GPOS - no time-bound processing; a task has no time limit within which it must finish its work. RTOS - time-bound processing; a task has to complete its work within a given time frame; deterministic behavior; flat memory architecture.

Footprint and latency: GPOS - non-scalable, with a larger footprint and higher context-switch latency. RTOS - scalable, with a smaller footprint and low context-switch and interrupt latency.

Scheduling: GPOS - round-robin scheduling. RTOS - priority preemptive scheduling; preemption is a must-have feature.

Memory: GPOS - tight memory constraints are not considered; it is not lightweight. RTOS - should not take up too much memory, since embedded systems come with tight memory constraints.

Priority: GPOS - priority consideration for each task is not as strict as in an RTOS. RTOS - each task must have a priority.
RTS
Tasks or processes attempt to control or react to events that take place in the outside world. These events occur in real time, and the processes must be able to keep up with them.
Examples of RTS
ATM machines updating a database, control of laboratory experiments, process control plants, robotics, air traffic control, telecommunications, and military command and control systems.
Why an RTOS?
Characteristics of RTOS
Deterministic
Operations are performed at fixed, predetermined times or within predetermined time intervals. Determinism is concerned with how long the operating system delays before acknowledging an interrupt.
Responsiveness
How long, after acknowledgment, it takes the operating system to service the interrupt. This includes the amount of time to begin execution of the interrupt handler and the amount of time to service the interrupt.
User control
The user specifies: task priorities; paging; which processes must always reside in main memory; which disk algorithms to use; the rights of processes.
Reliability
Degradation of performance may have catastrophic consequences. The system attempts either to correct the problem or to minimize its effects while continuing to run. The most critical, highest-priority tasks still execute.
Features of RTOS
Fast context switch. Small size (the kernel should fit within ROM). Ability to respond to external interrupts quickly. Multitasking with interprocess communication tools such as semaphores, signals, and events. Use of special sequential files that can accumulate data at a fast rate. Preemptive scheduling based on priority. The ability to delay tasks for a fixed amount of time. Special alarms and timeouts. Resource allocation.
Soft Real-Time
A reduction in system quality is acceptable; deadlines may be missed and can be recovered from.
Non Real-Time
No deadlines have to be met
Examples
Soft real time, e.g., "this flight is on time 98 times out of 100." Hard real time, e.g., "the emergency valve opens within 20 microseconds after this event occurs, irrespective of system load," or "this parachute automatically opens, without fail, when you reach 500 feet above ground."
RTOS Concepts
Critical Sections
Resource
A resource is any entity used by a task. A resource can be an I/O device such as a printer, a keyboard, or a display, or it can be a variable, a structure, an array, etc.
Shared Resource
A shared resource is a resource that can be used by more than one task. Each task should gain exclusive access to the shared resource to prevent data corruption. This is called Mutual Exclusion
Multitasking
Multitasking is the process of scheduling and switching the CPU (Central Processing Unit) between several tasks; a single CPU switches its attention between several sequential tasks.
Task
A task, also called a thread, is a simple program that thinks it has the CPU all to itself. The design process for a real-time application involves splitting the work to be done into tasks which are responsible for a portion of the problem. Each task is assigned a priority, its own set of CPU registers, and its own stack area
State Diagram
Task states
[Figure: task state diagram - a task moves between the Dormant, Ready, Running, and Waiting/Blocked states via the task create, task delete, task resume, and interrupt/ISR transitions]
Priority
Task Priority
Static Priorities: Task priorities are said to be static when the priority of each task does not change during the application's execution. Each task is thus given a fixed priority at compile time. All the tasks and their timing constraints are known at compile time in a system where priorities are static.
Dynamic Priorities Task priorities are said to be dynamic if the priority of tasks can be changed during the application's execution; each task can change its priority at run-time. This is a desirable feature to have in a real-time kernel to avoid priority inversion problem.
Priority inversion example: T3 (low priority) locks semaphore S and enters its critical section. T1 (high priority) becomes ready and preempts T3; T1 tries to lock semaphore S, finds it locked by T3, and is blocked. T2 (medium priority) becomes ready and preempts T3 while T3 is still in its critical section. T1, the high-priority task, is thus blocked for a longer duration because T2 got executed in between.
Deadlock
If task T1 has exclusive access to resource R1 and task T2 has exclusive access to resource R2, then if T1 needs exclusive access to R2 and T2 needs exclusive access to R1, neither task can continue. They are deadlocked. The simplest way to avoid a deadlock is for tasks to: a) acquire all resources before proceeding, b) acquire the resources in the same order, and c) release the resources in the reverse order.
Avoiding deadlock conditions requires careful attention to the way in which multiple tasks share semaphores and other RTOS resources. Deadlock conditions don't always show up easily during software testing.
Memory
The fundamental requirement for memory in a real-time system is that its access time should be bounded (in other words, predictable). As a direct consequence, the use of demand paging (swapping pages to disk) is prohibited for real-time processes. This is why systems providing a virtual memory mechanism should have the ability to lock the process into main memory so swapping will not occur. (Swapping is a mechanism that cannot be made predictable.)
Virtual memory is another technique that cannot be made predictable, and therefore should not be used in real-time systems. A simple solution is to allocate all memory for all objects you need during the life of the system and never de-allocate it. Another solution is to always allocate and deallocate blocks of memory of a fixed size. (This introduces internal fragmentation: some parts of memory internal to the blocks are never used.)
Memory Management
Static memory allocation
All memory allocated to each process at system startup
Expensive, but the desirable solution for hard real-time systems.
In hard real-time systems, static memory allocation is used. In soft real-time systems you have the option of dynamic memory allocation, but no virtual memory and no compaction. In non-real-time systems you may want virtual memory and compaction.
Memory Requirements
A minimal kernel for an 8-bit CPU that provides only scheduling, context switching, semaphore management, delays, and timeouts should require about 1 to 3 Kbytes of code space. Code space needed when a kernel is used: application code size + kernel code size.
Context Switches
Example PIC24F
Kernel
The kernel is the part of a multitasking system responsible for the management of tasks (that is, for managing the CPU's time) and communication between tasks. The fundamental service provided by the kernel is context switching. The use of a real-time kernel will generally simplify the design of systems by allowing the application to be divided into multiple tasks managed by the kernel. A kernel can allow you to make better use of your CPU by providing you with indispensable services such as semaphore management, queues, time delays, etc.
Scheduling
Scheduling Tasks
Scheduler
The scheduler, also called the dispatcher, is the part of the kernel responsible for determining which task will run next. Most real-time kernels are priority based. Each task is assigned a priority based on its importance. The priority for each task is application specific. In a priority-based kernel, control of the CPU will always be given to the highest priority task ready-to-run. When the highest-priority task gets the CPU, however, is determined by the type of kernel used. There are two types of priority-based kernels: non-preemptive and preemptive.
Non-Preemptive Kernel
Non-preemptive kernels require that each task does something to explicitly give up control of the CPU. Non-preemptive scheduling is also called cooperative multitasking: tasks cooperate with each other to share the CPU. This is done to maintain the illusion of concurrency.
Co-Operative Multitasking
Tasks execute until they pause or yield, and must be written to explicitly yield. System performance can be optimized, and tasks can be given priorities.
Pre-emptive Multitasking
Non-Preemptive Kernel
Asynchronous events are still handled by ISRs An ISR can make a higher priority task ready to run, but the ISR always returns to the interrupted task. The new higher priority task will gain control of the CPU only when the current task gives up the CPU.
Preemptive Kernel
A preemptive kernel is used when system responsiveness is important The highest priority task ready to run is always given control of the CPU. If an ISR makes a higher priority task ready, when the ISR completes, the interrupted task is suspended and the new higher priority task is resumed. With a preemptive kernel, execution of the highest priority task is deterministic
Reentrancy
A reentrant function is a function that can be used by more than one task without fear of data corruption. A reentrant function can be interrupted at any time and resumed at a later time without loss of data. Reentrant functions either use local variables (i.e., CPU registers or variables on the stack) or protect data when global variables are used.
Because copies of the arguments to strcpy() are placed on the calling task's stack, strcpy() can be invoked by multiple tasks without fear that the tasks will corrupt each other's pointers.
Non-Reentrant function
The programmer intended to make swap() usable by any task. Figure 2-6 shows what could happen if a low-priority task is interrupted while swap() F2-6(1) is executing. Note that at this point Temp contains 1. The ISR makes the higher-priority task ready to run, and thus, at the completion of the ISR F2-6(2), the kernel (assuming µC/OS-II) is invoked to switch to this task F2-6(3). The high-priority task sets Temp to 3 and swaps the contents of its variables correctly (that is, z is 4 and t is 3). The high-priority task eventually relinquishes control to the low-priority task F2-6(4) by calling a kernel service to delay itself for 1 clock tick (described later). The lower-priority task is thus resumed F2-6(5). Note that at this point, Temp is still set to 3! When the low-priority task resumes execution, it sets y to 3 instead of 1.
RMS
Suppose we have n tasks, each with a fixed period Ti and computation time Ci. To meet all possible deadlines, the total processor utilization cannot exceed 1; that is the maximum utilization of the processor, achievable only by a perfect scheduling algorithm. For practical reasons this bound is less than 1; for RMS it is:
C1/T1 + ... + Cn/Tn ≤ U(n) = n(2^(1/n) − 1)
RMS
Theorem (RMA Bound): Any set of n periodic tasks is RM-schedulable if the processor utilization, U, is no greater than n(2^(1/n) − 1).
This means that whenever U is at or below the given utilization bound, a schedule can be constructed with RM. In the limit, as the number of tasks n → ∞, the maximum utilization limit is ln 2 ≈ 0.693.
RMS Example
Example: Task P1: C1 = 20, T1 = 100, U1 = 20/100 = 0.2. Task P2: C2 = 40, T2 = 150, U2 = 40/150 = 0.267. Task P3: C3 = 100, T3 = 350, U3 = 100/350 = 0.286. Total utilization: 0.753, which is below the RMS bound for three tasks, 3(2^(1/3) − 1) ≈ 0.780, so the task set is RM-schedulable.
Deadline Scheduling
Real time is not just about sheer speed. It is the completion of a task at a specified time: neither early nor late.
Types of RT Tasks
An aperiodic real-time task is one which has a deadline by which it must finish or start, or may have constraints on both start and finish times. A periodic real-time task is one which has a deadline once every period T, i.e., exactly T units apart.
Mutual Exclusion
The most common methods to obtain exclusive access to shared resources are: a) Disabling interrupts b) Test-And-Set c) Disabling scheduling d) Using semaphores
Synchronization
A task can be synchronized with an ISR, or with another task when no data is being exchanged, by using a semaphore. A task initiates an I/O operation and then waits on the semaphore. When the I/O operation is complete, an ISR (or another task) signals the semaphore and the task is resumed. The semaphore is drawn as a flag to indicate that it is used to signal the occurrence of an event.
When the first task reaches a certain point, it signals the second task and then waits for a signal from the second task. Similarly, when the second task reaches a certain point, it signals the first task and then waits for a signal from the first task At this point, both tasks are synchronized with each other.
Events
Event Flags
Event flags are used when a task needs to synchronize with the occurrence of multiple events. The task can be synchronized when any of the events have occurred. This is called disjunctive synchronization (logical OR). A task can also be synchronized when all events have occurred. This is called conjunctive synchronization (logical AND).
Intertask Communication
Message queues Pipes Fifos Mailboxes Semaphore Shared memory
Interrupts
An interrupt is a hardware mechanism used to inform the CPU that an asynchronous event has occurred. When an interrupt is recognized, the CPU saves part (or all) of its context (i.e. registers) and jumps to a special subroutine called an Interrupt Service Routine, or ISR.
RT systems respond to external events
External events are translated by the hardware and interrupts are introduced to the system
Interrupt Service Routines (ISRs) handle system interrupts
May be stand alone or part of a device driver
RTOSs should allow lower-level ISRs to be preempted by higher-level ISRs
Interrupt Dispatch Time
Time the hardware needs to bring the interrupt to the processor
Interrupt Routine
ISR execution time
Other Interrupt
Time needed for managing each simultaneous pending interrupt
Pre-emption
Time needed to execute critical code during which no preemption may happen
Scheduling
Time needed to make the decision on which thread to run
Context Switch
Time to switch from one context to another
Interrupt Latency
The most important specification of a real-time kernel is the amount of time interrupts are disabled. All real-time systems disable interrupts to manipulate critical sections of code and re-enable interrupts when the critical section has executed. The longer interrupts are disabled, the higher the interrupt latency.
Interrupt Latency
Interrupt latency = maximum amount of time interrupts are disabled + time to start executing the first instruction in the ISR
Interrupt Response
Interrupt response is defined as the time between the reception of the interrupt and the start of the user code which will handle the interrupt. The interrupt response time accounts for all the overhead involved in handling an interrupt. Typically, the processor's context (CPU registers) is saved on the stack before the user code is executed.
Interrupt Response
Interrupt response = interrupt latency + time to save the CPU's context
Interrupt Recovery
Interrupt recovery is defined as the time required for the processor to return to the interrupted code.
Data space (i.e., RAM) needed when a kernel is used, assuming the kernel does not support a separate interrupt stack: application data requirements + data space needed by the kernel + SUM(task stacks + MAX(ISR nesting))
To reduce the amount of RAM needed in an application, be careful about how you use each task's stack, in particular for: a) large arrays and structures declared locally to functions and ISRs, b) function (i.e., subroutine) nesting, c) interrupt nesting, d) library function stack usage, e) function calls with many arguments.
This is especially the case when running an RTOS and wanting to share relatively small amounts of RAM among two, three, or four tasks' stacks.
A multitasking system will require more code space (ROM) and data space (RAM). The amount of extra ROM depends only on the size of the kernel, and the amount of extra RAM depends on the number of tasks in your system.
RTOS Configuration
Example Configuration
RTOS Vendors