

Real Time Operating System

1. Real Time Operating System


In the last few chapters, we have seen mainly the hardware platforms available for embedded
system design. For small and simple embedded systems, it may be the case that the software
portion is minimal. The hardware may not need any special mechanism to control and coordinate
its components to realize the applications. However, for moderate to complex embedded
systems, there normally exist a good number of coordinating software processes. Depending
upon the properties of such processes (criticality, periodicity, deadlines, etc.), proper
scheduling is necessary. This ultimately leads to the requirement of software to control the
overall operation of the system. Such software is the Operating System (OS): the piece of
software responsible for managing the underlying hardware and providing an efficient platform
on which applications can be developed. While designing an embedded application to run on
some host processor, it is therefore necessary for the application developer to understand the
policies followed by the underlying operating system. Moreover, since embedded systems mostly
perform real-time functions with continuous interaction with the environment, it is very
important that the OS also support real-time tasks. In this chapter, we will look at the features of
real-time tasks and the policies followed by a real-time system to handle those tasks.

1.1 Types of Real Time Tasks: A task, also called a thread, is a simple program that thinks it
has the CPU all to itself. The design of a real time system involves splitting the work to be done
into tasks, each responsible for a portion of the problem.
Based upon their time criticality, real time tasks can be classified into three categories. These
are

• Hard real time tasks


• Firm real time tasks
• Soft real time tasks

a) Hard Real Time Task: Hard real time tasks are those which must produce their results
within a specified time limit or deadline. Missing the deadline implies that the task has failed.
An example of such a system is a plant consisting of several motors along with sensors and
actuators. The plant controller senses various conditions through the sensors and issues
commands through the actuators. It is necessary that the controller respond to the input
signals in a time-bound fashion. That is, on the occurrence of some event, the response from
the controller should come within a pre-specified time limit. Examples of such signals are
inputs from a fire sensor, a power sensor, etc.

Another example may be a single robot or a set of robots performing a set of activities under
the coordination of a central host. Depending upon the environment, a robot may need to take
some instantaneous decisions; for example, on detecting an obstacle in its path of movement,
the robot should find a way to avoid a collision. A delay in the detection and reaction process
may result in a collision, causing damage to the robot.

A very important feature of a hard real time system is its criticality: a failure of the task,
including a failure to meet its deadline, will have a catastrophic effect on the system. Many
hard real time systems are safety critical. This is particularly true for medical instruments
monitoring and controlling the health of a patient. However, not all hard real time systems
are safety critical. For example, in a video game, missing a deadline is not that severe.

As far as scheduling of hard real time tasks is concerned, we have to honour the deadline.
Unlike in ordinary operating systems, in which we try to complete tasks as early as possible to
maintain a high throughput, there is no gain in finishing a hard real time task early. As long
as the task completes within the specified time limit, the system runs fine.

b) Firm Real Time Task: A firm real time task also has an associated deadline within which
the task is expected to complete. However, if it does not complete within this time (that is,
the deadline is missed), the system does not fail altogether.

Fig. 4.1 Utility of result for firm real time tasks

 

Only some of the results may need to be discarded. For example, in video conferencing,
video frames are sent over a network. Depending upon the properties of the network, some
frames may arrive late, or may be lost. The effect is some degradation in video quality for
some time, which in most cases is tolerable.

The main feature here is that any result computed after the deadline is of no value and is thus
discarded. As shown in figure 4.1, after the event has occurred, the utility of the response is
100% if it arrives within the deadline. Beyond the deadline the utility drops to zero, and the
result is simply discarded.

c) Soft Real Time Task: The other category of real time tasks is the soft real time task. For
such a task there is a deadline, but it is only expected that the task completes within the
deadline. If the task does not complete within the deadline, the system still runs without
failure. Late arrival of results does not force a total discarding of them. However, as time
passes, the utility of the result drops, as shown in figure 4.2.

A typical example of a soft real time task is a railway reservation system, where it is only
expected that the average time needed to process a ticket request is small. If a particular
request takes a slightly longer time, the system is still acceptable, as nothing critical happens
to the system or the environment.

Another example of a soft real time system is Web browsing. After typing the URL, we expect
the page to arrive soon. However, we do not consider it an exception if the process takes a
slightly longer time.

Fig 4.2 Utility of result for soft real time task

 

1.2 Process State Diagram: A process is nothing but a program in execution. A program
residing in secondary memory is not a process; only when it comes into execution is it called a
process. Most processes share a typical characteristic: when you run a program, it does not
occupy the CPU for its entire duration. Rather, it executes on the CPU for some time and then
waits for some I/O operation, which may be input from the user or input from some device.

Similarly, during execution it may keep producing intermediate output to some output device or
to the user. The duration for which the process accesses an I/O device is called an I/O burst, and
the duration for which the program actually executes on the CPU is called a CPU burst. Almost
all programs consist of alternating CPU and I/O bursts.

So, when a user initiates a program, it is in the NEW state. When the program is loaded into
main memory and is ready for execution, the process enters the READY state. The CPU
scheduler then examines the ready jobs, selects one of them and hands it to the CPU for
execution; the process is now in the ACTIVE state. Finally, when the job is completed and all
its CPU bursts are over, it enters the dead or HALTED state. While in the active state the process
executes on the CPU; over its lifetime it alternates between CPU bursts and I/O bursts.

NEW → READY → ACTIVE → HALTED, with ACTIVE → WAITING → READY for each I/O burst

Fig 4.3 Process State Diagram

So, when the process switches from a CPU burst to an I/O burst, it leaves the active state and
enters the WAITING state: it waits for the I/O device, and the CPU becomes free. At this time
the CPU can execute another process. From the waiting state the process goes back to the ready
state. It could in principle go straight to the active state, but the CPU may be executing another
job, and the scheduler decides later when this job is transferred from the ready state to the active
state for its next CPU burst. This means several jobs can be executed in a time-multiplexed
fashion, which leads to a multitasking system. Some jobs take more CPU burst time and less I/O
burst time, and some the other way round; these are known as CPU-bound and I/O-bound jobs,
respectively. In the ready state, the jobs are neither all CPU bound nor all I/O bound. The
scheduler that moves a job from the ready state to the active state is called the short-term
scheduler, and the scheduler that moves a job from the new state to the ready state is known as
the long-term scheduler.
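The states and transitions described above can be captured in a small amount of code. Below is a minimal sketch in C; the type and function names are illustrative, not taken from any particular operating system.

    /* Minimal sketch of the process states described above.
     * Names are illustrative, not from any particular OS.   */
    typedef enum {
        STATE_NEW,       /* program initiated by the user            */
        STATE_READY,     /* loaded in main memory, awaiting the CPU  */
        STATE_ACTIVE,    /* currently executing a CPU burst          */
        STATE_WAITING,   /* blocked on an I/O burst                  */
        STATE_HALTED     /* all CPU bursts completed                 */
    } proc_state_t;

    typedef struct {
        int          pid;
        proc_state_t state;
    } process_t;

    void admit(process_t *p)       { p->state = STATE_READY;   }  /* long-term scheduler  */
    void dispatch(process_t *p)    { p->state = STATE_ACTIVE;  }  /* short-term scheduler */
    void block_on_io(process_t *p) { p->state = STATE_WAITING; }  /* I/O burst begins     */
    void io_complete(process_t *p) { p->state = STATE_READY;   }  /* back to ready queue  */
    void terminate(process_t *p)   { p->state = STATE_HALTED;  }  /* last CPU burst done  */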

1.3 Kernel and Scheduler: The kernel is the part of a multitasking system responsible for the
management of tasks and for communication between tasks. The fundamental service provided
by the kernel is context switching (task switching): when a multitasking kernel decides to run a
different task, it saves the current task's context (its CPU registers) in the current task's context
storage area and gives up the CPU; the new task's context is then restored from its storage area,
and execution resumes in the new task's code. This process is known as context switching.
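The idea can be sketched in C as below. The register file here is a plain array standing in for the real CPU registers, because on actual hardware saving and restoring the context is CPU-specific and is normally written in assembly.

    /* Conceptual sketch of a context switch. The cpu_regs array is a
     * stand-in for the real CPU registers; on actual hardware this
     * save/restore is CPU-specific and written in assembly.          */
    #include <string.h>

    #define NUM_REGS 16

    static unsigned long cpu_regs[NUM_REGS];        /* "the CPU registers"         */

    typedef struct task_control_block {
        unsigned long context[NUM_REGS];            /* task's context storage area */
    } tcb_t;

    void context_switch(tcb_t *current, tcb_t *next)
    {
        /* Save the current task's registers into its context area. */
        memcpy(current->context, cpu_regs, sizeof cpu_regs);
        /* Restore the new task's registers; it resumes where it left off. */
        memcpy(cpu_regs, next->context, sizeof cpu_regs);
    }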

A kernel also adds overhead to your system because it requires extra ROM (code space) and
additional RAM for the kernel data structures. However, a kernel allows you to make better use
of the CPU by providing indispensable services such as semaphore management, mailboxes,
queues, etc. Small single-chip microcontrollers are generally not able to run a real time kernel
because they have very little RAM.

Most real time kernels are priority based: each task is assigned a priority based on its importance.
In a priority-based kernel, control of the CPU is always given to the highest priority task ready to
run. However, exactly when the highest priority task gets the CPU depends upon the type of
kernel used.

There are generally two types of kernel: non-preemptive and preemptive.

a) Non-Preemptive Kernel: It requires that each task does something to explicitly give up
control of the CPU; the tasks cooperate with each other to share the CPU. A new high
priority task gains control of the CPU only when the current task gives up the CPU. The
most important drawback of a non-preemptive kernel is responsiveness: a higher priority task
that has been made ready to run may have to wait a long time, because the current task gives
up the CPU only when it is ready to do so.

[Figures: a low priority task is running when an ISR makes a high priority task ready. In the
non-preemptive case the low priority task continues and the high priority task runs only after it
gives up the CPU; in the preemptive case the high priority task runs as soon as the ISR completes.]

Fig. 4.4 Non-preemptive kernel          Fig. 4.5 Preemptive kernel

 

b) Preemptive Kernel: It is used when system responsiveness is important. A preemptive
kernel always executes the highest priority task that is ready to run. An interrupt can preempt
a task; upon completion of the ISR, the kernel resumes execution of the highest priority task
ready to run, which is not necessarily the interrupted task.
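At its core, priority-based dispatch simply means "run the highest priority ready task". A minimal sketch of that selection in C is shown below; the fixed task table and the convention that a lower number means a higher priority are illustrative assumptions.

    /* Sketch of priority-based dispatch: pick the highest priority ready task.
     * A small fixed task table is assumed; lower number = higher priority.     */
    #define MAX_TASKS 8

    typedef struct {
        int priority;   /* 0 is the highest priority            */
        int ready;      /* non-zero if the task is ready to run */
    } task_t;

    task_t task_table[MAX_TASKS];

    /* Called at every scheduling point, including on return from an ISR,
     * so a newly readied high priority task preempts a lower priority one.
     * Returns the index of the task to run, or -1 if none is ready.        */
    int highest_priority_ready(void)
    {
        int best = -1;
        for (int i = 0; i < MAX_TASKS; i++) {
            if (task_table[i].ready &&
                (best < 0 || task_table[i].priority < task_table[best].priority))
                best = i;
        }
        return best;
    }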

A scheduler, also called a dispatcher, is the part of the kernel responsible for determining which
task will run next. A schedule for a given set of tasks is thus an assignment of time frames and
resources to the individual tasks. Some terminology related to scheduling:

Valid schedule: At most one task runs on a processor at a time, no task is scheduled before its
arrival, and precedence and resource constraints are satisfied for all tasks.
Feasible schedule: A valid schedule in which all tasks meet their respective time constraints.
Scheduling points: The points on the time line at which the scheduler makes decisions
regarding the next task to run.
Preemptive schedule: A schedule in which a higher priority task, on arrival, suspends any lower
priority task that may be running.
Jitter: The deviation of a periodic task from its exact periodic behaviour, e.g., deviation in the
time of arrival.

1.4 Scheduling Algorithms: The job of a scheduling algorithm is to determine the order in
which the various tasks are taken up by the operating system, in other words, to assign task
priorities. Task priorities are of two types: static and dynamic. Task priorities are said to be
static if the priority of each task does not change during the application's execution; each task is
thus given a fixed priority at compile time. Task priorities are said to be dynamic if the priority
of a task can change during the application's execution; each task can change its priority at run
time.

Since most tasks in embedded systems are periodic in nature, real time task scheduling
algorithms mostly concentrate on periodic tasks. Sporadic and aperiodic tasks are handled as
they occur, mostly on a case-by-case basis, without disturbing the deadlines of the already
scheduled tasks.

A large number of algorithms have been proposed in the literature; we will discuss them shortly.
The quality of a scheduling algorithm is characterized by a measure called processor utilization.
The utilization due to a task is the average fraction of time for which it executes per unit time
interval, i.e., its execution time divided by its period (ei / pi). For a set of periodic tasks, the total
utilization is the sum of the individual utilizations. A good scheduling algorithm should lead to
100% processor utilization; however, practically this is not possible.
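As a worked illustration (the task parameters are made up), consider three periodic tasks with execution times e_i = 1, 2, 1 and periods p_i = 4, 8, 10:

    U = \sum_i e_i / p_i = 1/4 + 2/8 + 1/10 = 0.25 + 0.25 + 0.10 = 0.60

The processor is therefore busy with this task set for 60% of the time on average, leaving 40% of its capacity unused or available for other work.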

Based upon the scheduling points, scheduling algorithms are classified into the following
categories:

 

• Clock driven scheduling


• Event driven scheduling

a) Clock driven scheduling: In clock driven scheduling, the scheduling points are determined
by the interrupts received from a periodic clock. As the name suggests, a clock driven
scheduler works in synchrony with a clock signal: the timer periodically generates interrupts,
and on receiving an interrupt the scheduler is activated and decides which process to schedule
next. Since the set of tasks, their periods, required execution times and deadlines are known
beforehand, it is possible to precompute the process to be scheduled at each clock interrupt.
The major drawback of this type of scheduler is its inability to handle aperiodic and sporadic
tasks. These schedulers are also known as static schedulers. There are two types of clock
driven schedulers: table-driven and cyclic schedulers.
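A table-driven scheduler can be sketched as follows. The schedule table would normally be precomputed offline from the known periods, execution times and deadlines; the tasks, the table contents and the 10 ms frame length below are purely illustrative, and the clock tick is simulated with a sleep.

    /* Sketch of a table-driven (clock driven) scheduler. */
    #include <stddef.h>
    #include <time.h>

    typedef void (*task_fn)(void);

    static void task_A(void) { /* ... */ }
    static void task_B(void) { /* ... */ }
    static void task_C(void) { /* ... */ }

    /* One entry per frame: the task to run at that clock interrupt. */
    static const task_fn schedule_table[] = { task_A, task_B, task_A, task_C };
    #define NUM_FRAMES (sizeof schedule_table / sizeof schedule_table[0])

    /* Stand-in for the periodic timer interrupt (10 ms frame). */
    static void wait_for_clock_tick(void)
    {
        struct timespec frame = { 0, 10 * 1000 * 1000 };
        nanosleep(&frame, NULL);
    }

    void cyclic_scheduler(void)
    {
        size_t frame = 0;
        for (;;) {
            wait_for_clock_tick();             /* scheduling point         */
            schedule_table[frame]();           /* run the precomputed task */
            frame = (frame + 1) % NUM_FRAMES;  /* advance to next frame    */
        }
    }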

b) Event driven scheduling: A major drawback of cyclic schedulers is that it becomes too
complex to determine a suitable frame size, as well as a feasible schedule, when the number of
tasks increases. Moreover, in almost every frame some time is wasted running the scheduler.
These shortcomings have led to the development of event-driven schedulers, which can also
handle aperiodic and sporadic tasks more proficiently. Event-driven schedulers are less
efficient because they deploy more complex scheduling algorithms, and they are less suitable
for small embedded applications; however, they are used invariably in all moderate and
large-sized applications having many tasks. Three important schedulers in this category are
the foreground-background scheduler, the Rate Monotonic scheduler and the Earliest
Deadline First scheduler. A detailed discussion of these scheduling algorithms is outside the
scope of this book.

1.5 Shared Resource and Mutual Exclusion: A shared resource is a resource that can be used
by more than one task. Each task should gain exclusive access to the shared resource to prevent
data corruption; this requirement is known as mutual exclusion. The easiest way for tasks to
communicate with each other is through shared data structures. This is especially easy when all
tasks exist in a single address space and can reference global variables, pointers, etc. Although
sharing data simplifies the exchange of information, you must ensure that each task has exclusive
access to the data to avoid contention and data corruption. The most common methods of
obtaining exclusive access to a shared resource are: i) disabling interrupts, ii) performing
test-and-set operations, iii) disabling scheduling, and iv) using semaphores.
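As an illustration of option (ii), C11 atomics provide a test-and-set flag that can guard a critical section. This is only a sketch of the idea (a bare spin lock); the shared counter is an arbitrary stand-in for any shared data structure.

    /* Mutual exclusion via a test-and-set operation (C11 atomics). */
    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;
    static long shared_counter;                 /* the shared resource       */

    void increment_shared_counter(void)
    {
        while (atomic_flag_test_and_set(&lock)) /* spin until flag was clear */
            ;                                   /* busy wait                 */

        shared_counter++;                       /* critical section          */

        atomic_flag_clear(&lock);               /* release exclusive access  */
    }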

1.6 Semaphore: A semaphore is a protocol mechanism offered by most multitasking kernels.
Semaphores are used to

• Control access to a shared resource (mutual exclusion),
• Signal the occurrence of an event, and
• Allow two tasks to synchronize their activities.

A semaphore is a key that your code acquires in order to continue execution. If the semaphore is
already in use, the requesting task is suspended until the semaphore is released by its current
owner. In other words, the requesting task says: “Give me the key. If someone else is using it, I
am willing to wait for it.”

There are generally two types of semaphore: binary and counting. A binary semaphore takes
only two values, 0 or 1. A counting semaphore allows values between 0 and 255, 65535 or
4294967295, depending upon whether the semaphore mechanism is implemented using 8, 16 or
32 bits. Generally only three operations are performed on a semaphore: i) INITIALIZE (also
called CREATE), ii) WAIT (also called PEND), and iii) SIGNAL (also called POST).

Now let us see how we can access shared data by obtaining a semaphore. Take the example of a
printer accessed by two different tasks, Task1 and Task2. If the two tasks were allowed to send
characters to the printer at the same time, the printer would print interleaved data from each
task. For example, the printout from Task1 printing “I am task1” and Task2 printing “I am task2”
could result in “II aamm ttasasakk1122”. This problem is resolved by using a semaphore
initialized to 1. The rule is simple: to access the printer, each task must first obtain the
resource’s semaphore (key), as shown in figure 4.6; a code sketch follows the figure.

[Figure: Task1 and Task2 each acquire the semaphore (the key) before sending their data to the printer.]

Fig. 4.6 Accessing a shared resource by obtaining a semaphore
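The chapter does not assume any particular kernel API. As one concrete illustration, the printer example can be written with POSIX threads and semaphores, where sem_wait plays the role of WAIT/PEND and sem_post the role of SIGNAL/POST:

    /* The printer example using POSIX semaphores: sem_wait = WAIT/PEND,
     * sem_post = SIGNAL/POST. Compile with -pthread.                    */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t printer_sem;                  /* the "key" to the printer */

    static void print_line(const char *msg)
    {
        sem_wait(&printer_sem);                /* obtain the resource's semaphore */
        printf("%s\n", msg);                   /* exclusive use of the printer    */
        sem_post(&printer_sem);                /* release it for the other task   */
    }

    static void *task1(void *arg) { (void)arg; print_line("I am task1"); return NULL; }
    static void *task2(void *arg) { (void)arg; print_line("I am task2"); return NULL; }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&printer_sem, 0, 1);          /* binary semaphore, initial value 1 */
        pthread_create(&t1, NULL, task1, NULL);
        pthread_create(&t2, NULL, task2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&printer_sem);
        return 0;
    }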

Sometimes a situation arises in which two tasks are unknowingly waiting for resources held by
each other. Assume task T1 has exclusive access to resource R1 and task T2 has exclusive access
to resource R2. If T1 now needs exclusive access to R2 and T2 needs exclusive access to R1,
neither task can continue: they are deadlocked, as shown in figure 4.7. To avoid a deadlock
situation, each task has to (see the sketch after this list)

• Acquire all resources before proceeding,
• Acquire the resources in the same order, and
• Release the resources in the reverse order.
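A minimal sketch of the "same order" rule in C, using two POSIX mutexes standing in for R1 and R2 (the resource names follow the example above):

    /* Deadlock avoidance by acquiring resources in the same order.
     * Both T1 and T2 always take R1 before R2, so the circular wait
     * of figure 4.7 cannot arise.                                    */
    #include <pthread.h>

    static pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

    static void use_both_resources(void)
    {
        pthread_mutex_lock(&R1);        /* acquire in the agreed order  */
        pthread_mutex_lock(&R2);

        /* ... work needing exclusive access to both R1 and R2 ... */

        pthread_mutex_unlock(&R2);      /* release in the reverse order */
        pthread_mutex_unlock(&R1);
    }

    void *T1(void *arg) { (void)arg; use_both_resources(); return NULL; }
    void *T2(void *arg) { (void)arg; use_both_resources(); return NULL; }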

 

Most kernels allow you to specify a timeout when waiting on a semaphore. This feature allows a
deadlock to be broken: if the semaphore does not become available within a certain amount of
time, the task requesting the resource resumes execution.
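With POSIX semaphores, for instance, the timeout form looks like the sketch below; the one-second timeout is an arbitrary illustrative value.

    /* Acquiring a semaphore with a timeout so that a deadlock can be broken. */
    #include <errno.h>
    #include <semaphore.h>
    #include <time.h>

    int acquire_with_timeout(sem_t *sem)
    {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 1;                         /* wait at most 1 second */

        if (sem_timedwait(sem, &deadline) == -1 && errno == ETIMEDOUT)
            return -1;   /* not available in time: resume and report an error */

        return 0;        /* semaphore acquired */
    }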

[Figure: T1 holds R1 and requests R2 while T2 holds R2 and requests R1, so neither task can proceed.]

Fig. 4.7 Deadlock Situation

1.7 Message Mailbox: A message mailbox, also called a message exchange, is typically a
pointer-sized variable. Through a service provided by the kernel, a task or an ISR can deposit a
message (the pointer) into the mailbox. Similarly, one or more tasks can receive messages from
the mailbox through a service provided by the kernel. The mailbox services provided by the
kernel (see the sketch after figure 4.8) are:

• Initialize the content of the mailbox.


• Deposit a message into the mailbox (POST).
• Wait for a message to be deposited into the mailbox (PEND).
• Get a message from a mailbox if present; but do not suspend the caller if the mailbox is
empty (ACCEPT).
• A message in the mailbox indicates that the resource is available and an empty mailbox
indicates that the resource is already in use by another task.
• A waiting list is associated with each mailbox in case more than one task wants to receive
messages through the mailbox. A task desiring a message from an empty mailbox is
suspended and placed on the waiting list until a message arrives. Typically, the kernel
allows a task waiting for a message to specify a timeout; if no message is received before
the timeout expires, an error code is returned to the task.
• When a message is deposited into the mailbox, either the highest priority task waiting for
the message is given the message (priority based) or the first task to request a message is
given the message (FIFO). Figure 4.8 below shows a task depositing a message into a
mailbox. Note that the mailbox is represented by an “I” and the timeout by an hourglass;
the number next to the hourglass is the number of clock ticks the task will wait for a
message to arrive.

 

[Figure: one task POSTs a message into the mailbox (shown as “I”); another task PENDs on it with a 10-tick timeout.]

Fig. 4.8 Deposition of a message into a mailbox
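Conceptually, a mailbox is a single pointer-sized slot plus a waiting list. The sketch below expresses that idea with a POSIX mutex and condition variable; it is not the API of any particular kernel, and it omits the timeout and the priority-based wake-up described above.

    /* Conceptual one-message mailbox (a pointer-sized slot). */
    #include <pthread.h>
    #include <stddef.h>

    typedef struct {
        void            *msg;        /* the deposited pointer, or NULL */
        pthread_mutex_t  lock;
        pthread_cond_t   not_empty;
    } mailbox_t;

    void mbox_init(mailbox_t *mb)                /* INITIALIZE / CREATE */
    {
        mb->msg = NULL;
        pthread_mutex_init(&mb->lock, NULL);
        pthread_cond_init(&mb->not_empty, NULL);
    }

    void mbox_post(mailbox_t *mb, void *msg)     /* POST: deposit a message */
    {
        pthread_mutex_lock(&mb->lock);
        mb->msg = msg;
        pthread_cond_signal(&mb->not_empty);     /* wake one waiting task */
        pthread_mutex_unlock(&mb->lock);
    }

    void *mbox_pend(mailbox_t *mb)               /* PEND: wait for a message */
    {
        pthread_mutex_lock(&mb->lock);
        while (mb->msg == NULL)                  /* suspend until a message arrives */
            pthread_cond_wait(&mb->not_empty, &mb->lock);
        void *msg = mb->msg;
        mb->msg = NULL;                          /* mailbox becomes empty again */
        pthread_mutex_unlock(&mb->lock);
        return msg;
    }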

1.8 Message Queues: A message queue is basically an array of mailboxes. It is used to send one
or more messages to a task. A task or an ISR can deposit a message into a message queue
through a service provided by the kernel, and one or more tasks can receive messages from it.
Generally, the first message inserted into the queue is the first message extracted from the queue
(FIFO). As with the mailbox, a waiting list is associated with each message queue. The queue
services provided by the kernel (see the sketch after figure 4.9) are:

• Initialize the queue.


• Deposit a message into the queue (POST).
• Wait for a message to be deposited into the queue (PEND).
• Get a message from a queue if one is present, but do not suspend the caller if the queue is
empty (ACCEPT).
• Figure 4.9 below shows an ISR depositing a message into the queue. Note that the queue
is represented by a double “II”. The “10” indicates the number of messages that can
accumulate in the queue, and a “0” next to the hourglass indicates that the task will wait
forever for a message to arrive.

[Figure: an ISR POSTs messages into the queue (shown as “II”, capacity 10); a task PENDs on it with a timeout of 0, i.e. waits forever.]

Fig. 4.9 Deposition of a message into a message queue
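The chapter does not name a specific kernel, but as one concrete illustration POSIX message queues offer the same POST/PEND style of service. In the sketch below the queue name, message size and capacity are arbitrary; on Linux the program is linked with -lrt.

    /* Illustration of POST/PEND style queue services using POSIX message queues. */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { 0 };
        attr.mq_maxmsg  = 10;                 /* up to 10 messages may accumulate */
        attr.mq_msgsize = 64;                 /* each message up to 64 bytes      */

        /* Initialize the queue. */
        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1)
            return 1;

        /* POST: deposit a message into the queue. */
        const char *msg = "sensor reading";
        mq_send(q, msg, strlen(msg) + 1, 0);

        /* PEND: wait for a message (FIFO within equal priorities). */
        char buf[64];
        mq_receive(q, buf, sizeof buf, NULL);

        mq_close(q);
        mq_unlink("/demo_queue");
        return 0;
    }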

1.9 Operating System Services: An operating system acts as a resource manager: it manages
the resources of a computer system such as the CPU, main memory, secondary storage and I/O
devices. The services provided by the operating system in managing these resources are:

• The system must be able to load a program into memory and run it.
• A running program may require an I/O device or a file.
• It must be able to perform file manipulation, such as creating, reading, writing, deleting
and managing files.
• It must support communication: one process may need to exchange information with
another process on the same or on a different computer.
• It must provide error detection and correction facilities, i.e. the OS constantly needs to be
aware of possible errors that may occur in the CPU, memory, I/O devices or in the user
program.
