
Embedded Systems Purushotam Shrestha

Chapter 5: Real Time Operating Systems


An operating system (OS) is responsible for managing the hardware resources of a computer and hosting
the applications that run on it. An OS:

- Controls the execution of programs
- Schedules processes and tasks
- Manages the resources of the computer system
- Performs memory management during program execution
- Handles I/O
- Acts as the interface between the hardware of the computer system and the user

Executing a single program is straightforward and can be done directly on the hardware. When systems become
complex, require flexibility and execute multiple applications, a separate program, the operating system, is
required to manage the hardware and software resources of the computer system among the multiple programs.

A real time operating system, RTOS, differs from a standard OS. An embedded system is built for a dedicated
task, so the OS running on it is also specific to that task. The defining property of an RTOS is that it
responds and completes its tasks within a given deadline; that is, it acts in real time.

- Completes tasks in real time
- Deterministic and predictable execution
- Custom tailored to the job at hand; functions that are not required are not included
- Occupies less space
- Fast

An RTOS provides the following functionalities:

- Task Management
- Interrupt Handling
- Memory Management
- Task Synchronisation
- Task Scheduling
- Time Management

The kernel of an operating system is its central core, providing the most basic functionalities of the
system. Whenever a system is started, these basic functions are loaded into a special area of memory; they
constitute the kernel. The area in which the kernel is loaded is designed not to be overwritten. Core
functionalities such as task scheduling and memory management are provided by the kernel.

The RTOS is a concise OS, custom tailored to offer only the services needed by an embedded system. In some
scenarios the RTOS is just the kernel, a real time kernel (RTK). In the literature the two terms are even
used synonymously.

8.1 Processes, tasks and threads


A process is a program in execution. A program under execution and the data it uses are stored in main
memory, where the processor can read and write them easily; so a process resides in main memory. At any
time there may be multiple processes in memory, implying the execution of multiple programs. Once a program
is brought into main memory it becomes a process, but a process may not be executing all the time. The
processor cycles through multiple processes, executing a single process at a time and putting the others in
idle or wait states.

A thread is a distinct executable portion of a process; it is also known as a lightweight process. Whenever
a process is executed, a portion of it is dispatched to the processor. That distinct portion is the thread.
A process consists of multiple threads, and during execution of a program multiple threads are switched in
and out.
A task can mean a process. In an RTOS environment, a task is a block of instructions, like a subroutine,
together with other information required for their execution, such as the task number, task priority, and
the location of instructions and data. Whenever an application is run, the OS creates tasks for it.

8.2 OS tasks, task states and task scheduling

OS tasks:
In an RTOS environment, a task is a block of instructions, like a subroutine, together with other
information required for their execution, such as the task number, task priority, task state, and the
location of instructions and data. Whenever an application is run, the OS creates tasks for it; memory is
allocated for the processing of the tasks, and other information required for managing each task is
generated and put in its task control block. An application program can consist of a number of tasks. A
task is made ready and moved to the processor for execution; this is controlled by the scheduler.

A task may be:
- Periodic: arriving at regular intervals
- Aperiodic: arriving with an unknown rate of arrival
- Sporadic: arriving with an unknown rate of arrival, but with a minimum interval of time between two arrivals

- Hard: the task must be completed by its deadline
- Soft: the task may miss the deadline, to a certain extent

- Preemptive: the running task may be replaced by another, higher-priority task
- Non-preemptive: the task cannot be interrupted until its completion

- Dependent: a dependent task executes communicating with others, exchanging data etc.
- Independent: an independent task executes without relation to other tasks

Task States:
A task may be in one of the following states:
New
Ready
Running
Blocked
Terminated

New: Whenever a program is to be executed, a task is created; a task thus created is said to be in the new
state. The OS initializes the task: it brings the necessary instructions into memory, creates the TCB and
starts gathering required data if not already available. When everything is ready, the task goes to the
ready state.

Ready: A task in the ready state is standing by to be executed by the processor. While one task is being
executed, many tasks can be in the ready state. A running task may be replaced by a higher-priority task
and return to the ready state. Whenever the processor finishes the task at hand and becomes available, the
task with the highest priority, or the one chosen by the OS schedule, is called for execution.

Running: A task under execution is said to be in the running state. Unless the system has multiple
processors, only a single task can be executing at a time; in such systems one task is in the running state
while the others are in either the ready or the blocked state.

Blocked: A task is in the blocked state when it cannot be executed even if the processor is free. A task
may be blocked because it has not yet received any data to process or, if it is event triggered, because it
is waiting for some interrupt or user action; it goes into the blocked state when it has got nothing to do.
Once triggered, the task moves to the ready state and then goes for execution.

Terminated: The OS destroys a task that finishes its execution. The task is then terminated and no longer
exists; its TCB is destroyed and any memory it was using is freed.

Process states: 5-state model. The state diagram shows the following transitions:
- NEW → READY: initialization complete, task acquires data etc.
- READY → RUNNING: the task replaces the running task as a higher-priority task, or it is its turn
- RUNNING → READY: the running task is replaced by a higher-priority task, or times out
- RUNNING → BLOCKED: the task has nothing to do and needs to wait for data or another task
- BLOCKED → READY: the required event occurred
- RUNNING → TERMINATED: execution complete, task destroyed
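The 5-state model above can be sketched as a small validity check over the legal transitions. This is an illustrative sketch, not code from any particular RTOS; the enum and function names are assumptions.

```c
#include <stdbool.h>

/* The five task states of the model described above. */
typedef enum { TASK_NEW, TASK_READY, TASK_RUNNING,
               TASK_BLOCKED, TASK_TERMINATED } task_state_t;

/* Returns true if moving from 'from' to 'to' is a legal
   transition in the 5-state model. */
bool transition_valid(task_state_t from, task_state_t to) {
    switch (from) {
    case TASK_NEW:     return to == TASK_READY;          /* init complete  */
    case TASK_READY:   return to == TASK_RUNNING;        /* dispatched     */
    case TASK_RUNNING: return to == TASK_READY           /* preempted      */
                           || to == TASK_BLOCKED         /* waits for data */
                           || to == TASK_TERMINATED;     /* finished       */
    case TASK_BLOCKED: return to == TASK_READY;          /* event occurred */
    default:           return false;                     /* terminated     */
    }
}
```

Note that a blocked task can never go straight to running: it must first become ready and be chosen by the scheduler.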

Task Scheduling:
Task scheduling refers to the queuing of tasks in a certain order for their execution; it is the decision
that takes a task from the ready state to the running state. Scheduling helps optimize various parameters
of program execution, some of which are:
- CPU utilization: keep the CPU as busy as possible and make it do many things
- Throughput: maximize the number of processes executed per unit time
- Waiting time: decrease the amount of time a process in the ready state has to wait for the processor
- Turnaround time: minimize the gross time (actual execution time + waiting time) to execute a particular process

The scheduling may be based upon priorities assigned to the tasks, their frequency of execution, etc. The
RTOS contains a scheduler, which provides the scheduling function in order to time-share the processing
unit. The scheduler is responsible for keeping a record of the tasks in their various states and for
choosing a task for execution; the choice depends upon the method employed by the scheduler.

When a task runs out of necessary data, so that it has nothing to act upon, it goes to the blocked state.
There must be instructions in the task code that determine whether the task should go into the blocked
state or continue running; this is not determined by the RTOS. Once in the blocked state, there must be
some event or instruction that triggers the transition back to the ready state.
The job of the scheduler is to choose among the tasks that are in the ready state.

A scheduler may choose among the tasks using algorithms based upon the following:
- Priority: Tasks are assigned distinct priorities and the higher-priority task is run first (HPF: Highest
Priority First). The running task may be replaced by a higher-priority task (preemptive scheduling), or a
running task may be executed to completion even if a higher-priority task is available (non-preemptive method).
- Execution times: The task with the shortest execution time is run first; the goal is to minimize average
waiting time. If another task with a shorter time arrives and the preemptive method is employed, the
running task is replaced by the newcomer.
- Deadlines: The tasks with the tightest, earliest deadlines are run first (EDF: Earliest Deadline First).
- Arrival times: The task that comes to the ready state first is run first (FIFO method).

- Round robin: Each task is allowed to execute for a fixed time slice and, if not completed, it is sent to
the end of the queue, to execute again after the others in the queue have each run for the same time period.

Scheduling may also be classified as:
- Offline scheduling: the schedule is computed before execution of any task.
- Online scheduling: scheduling occurs dynamically while the tasks are being executed.

Two tasks with the same priority may be handled by:
- Assigning distinct priorities
- Time-slicing between the two tasks
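The priority-based (HPF) selection described above can be sketched as a scan of the ready list. This is a minimal illustration under assumed names (`task_t`, `pick_next`); a real scheduler would use a sorted queue or priority bitmap rather than a linear scan.

```c
#include <stddef.h>

/* Illustrative ready-list entry; larger number = higher priority. */
typedef struct { int id; int priority; } task_t;

/* Returns the index of the highest-priority ready task,
   or -1 if the ready list is empty. */
int pick_next(const task_t ready[], size_t n) {
    if (n == 0) return -1;
    size_t best = 0;
    for (size_t i = 1; i < n; i++)          /* scan the ready list */
        if (ready[i].priority > ready[best].priority)
            best = i;
    return (int)best;
}
```

Under preemptive scheduling this function would be re-run whenever a task becomes ready, so that a newly arrived higher-priority task immediately displaces the running one.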

8.3 Interrupt Handling

An interrupt is a signal generated by a peripheral device that interrupts the processor during its normal
program execution so that the processor responds to the device's request.
The interrupt service routine (ISR) is the block of instructions executed in response to the interrupt.
The interrupt address vector is the memory address location where the interrupt service routine resides.
Interrupt latency is the time period between the generation of an interrupt and the response to it.
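The relationship between interrupt vectors and service routines can be sketched as a vector table of function pointers. All names here are illustrative assumptions; on real hardware the table location and dispatch are fixed by the processor, not by user code.

```c
#include <stddef.h>

/* An ISR is just a routine the hardware jumps to. */
typedef void (*isr_t)(void);

#define NUM_VECTORS 8
static isr_t vector_table[NUM_VECTORS];   /* interrupt address vectors */

static volatile int uart_count = 0;       /* work done by the example ISR */

void uart_isr(void) { uart_count++; }     /* example interrupt service routine */

void register_isr(int vec, isr_t handler) {
    if (vec >= 0 && vec < NUM_VECTORS)
        vector_table[vec] = handler;      /* store the ISR's address */
}

/* Models the hardware looking up the vector and calling the ISR. */
void dispatch_interrupt(int vec) {
    if (vec >= 0 && vec < NUM_VECTORS && vector_table[vec] != NULL)
        vector_table[vec]();
}
```

The time between the hardware event and the first instruction of `uart_isr` executing is the interrupt latency defined above.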

8.4 Clocking Communication and Task Synchronization


When multiple tasks are executed, they need to access and use various types of data and resources, which
are limited in number. Data sharing is handled by inter-task communication, and conflict-free resource
sharing is managed by the process of task synchronization.

Producer-consumer situation: A producer must produce products before a consumer can consume them. The task
that provides data must write the data before the task that requires the data performs a read operation.

Intertask Communication
Intertask communication involves the sharing of data among tasks through shared memory space, transmission
of data, etc. A few of the mechanisms available for inter-task communication are:

Shared memory: Data exchange among tasks can be accomplished by shared use of the memory holding the data;
the communicating tasks read and write the same memory location. A memory space in a program is represented
by a variable declaration, and the memory location can be referred to through the variable. This method is
simple and easy to implement, but care must be taken while programming that modification of the shared
location does not produce invalid results.


Message passing: A sender task simply performs a send() operation and the receiver task gets the data by a
receive() operation. The data to be exchanged is passed via parameters such as memory address values,
sender and receiver IDs, etc.

Message passing schemes


Message queues: Using services provided by the OS, tasks and ISRs can send messages to and receive messages
from specially allocated locations in memory known as message queues. A task seeking a message from an
empty queue is blocked for a duration, or until a message arrives. The sending and receiving of messages to
and from the queue may follow
1) First In First Out (FIFO),
2) Last In First Out (LIFO) or
3) Priority (PRI) sequence.
Usually a message queue comprises an associated queue control block (QCB), a name, a unique ID, memory
buffers, a queue length, a maximum message length and one or more task waiting lists. The concept of a
mailbox can also be used: each task is provided with a certain space in memory, which can buffer some
number of messages for the task. Mailbox parameters such as the identifier, associated tasks, size etc. are
held by a separate data structure like the QCB.
Message queues are referred to by their addresses while mailboxes are accessed by their identifiers.
Message queues are asynchronous in nature, in that the sender and receiver are not required to exchange at
the same time.
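The FIFO variant of the message queue described above can be sketched as a fixed-capacity ring buffer. This is an illustrative sketch with assumed names; a real RTOS queue would additionally block the caller on full/empty conditions and maintain task waiting lists.

```c
#include <stdbool.h>

#define QUEUE_LEN 4

/* A minimal FIFO message queue holding int messages. */
typedef struct {
    int msgs[QUEUE_LEN];
    int head, tail, count;
} msg_queue_t;

bool mq_send(msg_queue_t *q, int msg) {
    if (q->count == QUEUE_LEN) return false;   /* queue full */
    q->msgs[q->tail] = msg;
    q->tail = (q->tail + 1) % QUEUE_LEN;       /* wrap around the buffer */
    q->count++;
    return true;
}

bool mq_receive(msg_queue_t *q, int *msg) {
    if (q->count == 0) return false;           /* empty: a real RTOS would
                                                  block the task here */
    *msg = q->msgs[q->head];
    q->head = (q->head + 1) % QUEUE_LEN;
    q->count--;
    return true;
}
```

Because sender and receiver only touch the buffer through these operations, they never need to run at the same time, which is the asynchrony noted above.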

Pipes: A pipe is an object providing a simple communication channel used for unstructured data exchange
among tasks. A pipe can be opened, closed, written to and read from. Traditionally a pipe is a
unidirectional data exchange facility, with two descriptors, one at each end, for reading and writing. Data
is written into the pipe as an unstructured byte stream via one descriptor and read from the pipe in FIFO
order via the other. Unlike a message queue, a pipe does not store multiple messages but a stream of bytes;
in addition, data flow through a pipe cannot be prioritized.

Remote procedure calls (RPC): A remote procedure call component permits distributed computing, where a task
can invoke the execution of another task on a remote computer as if that task ran on the same computer.

Task Synchronization

Semaphores
A semaphore allows a single, controlled access to data or any other resource. Semaphores are used on
railways to prevent two rail cars from being in the same section, avoiding any collision: when a rail car
enters a section, a semaphore is lowered; once it leaves the section, the semaphore is raised. This
prevents the entry of another rail car into the same section. Similarly, in an RTOS a task can access a
resource only if it has obtained the semaphore. The semaphore is taken by calling functions like
takesemaphore() and released by givesemaphore(); the function names may differ. A task without the
semaphore cannot access and modify the data. The use of invalid data is thus prevented, eliminating
undesirable results. Note that once a semaphore is taken it must be released, otherwise other tasks won't
be able to use the data.
Generally a semaphore is associated with the resource that tasks need to access. A semaphore maintains:
- The resource count: the availability of the resource. 1 represents a single unit; if there are N units of
the resource, the value is N. As the semaphore is granted, the count is decreased.
- A wait queue: the tasks that have requested the semaphore associated with a particular resource.

Semaphores may be:
- Binary semaphores: the semaphore value is either 0 or 1, indicating unavailability and availability
respectively. A binary semaphore taken by one task may be released by another.
- Counting semaphores: the semaphore value is a signed integer, which may be
  - Negative: tasks are queued to acquire the lock; the absolute value represents the number of tasks in
the queue
  - Zero: no tasks are waiting, but a requesting task would be put in the queue
  - Positive: the semaphore can be acquired without waiting; values of 0 or greater mean it can be
acquired/released multiple times
  Counting semaphores are generally used for a resource that exists in discrete quantity.
- Mutually exclusive semaphores (mutexes): the semaphore value is 0 or 1, but the lock count can be 0 or
greater for recursive locking. A mutex locked by a task must be unlocked by the same task; while a task
holds the mutex, other tasks cannot acquire it.
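The take/give behaviour of a counting semaphore can be sketched as follows. The names mirror the takesemaphore()/givesemaphore() calls mentioned above but are assumptions; a real RTOS would block the task and place it on the wait queue instead of returning false.

```c
#include <stdbool.h>

/* A counting semaphore tracking available units of a resource. */
typedef struct { int count; } semaphore_t;

bool take_semaphore(semaphore_t *s) {
    if (s->count > 0) {          /* resource available */
        s->count--;              /* grant one unit */
        return true;
    }
    return false;                /* a real RTOS would enqueue the
                                    task on the wait queue here */
}

void give_semaphore(semaphore_t *s) {
    s->count++;                  /* a waiting task, if any,
                                    would be made ready */
}
```

Initializing the count to 1 gives binary-semaphore behaviour; initializing it to N models a resource with N units.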

Atomic code and Critical Sections

The problems of shared data, such as unwanted modification and conflicting accesses, can be solved by
allowing a particular segment of a task to finish its execution without interruption. One way is to disable
interrupts while such a segment is running. A segment of a program that cannot be interrupted is said to be
atomic. The section containing the atomic code, together with the instructions that make it atomic, is
called the critical section. Critical sections are used when a task has to access shared memory, preventing
other tasks from executing and hence from accessing the same section of memory. No two tasks can be in
their critical sections at the same time.
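The disable-interrupts technique above can be sketched as shown. On real hardware the two macros would map to instructions such as CPSID/CPSIE on ARM or cli/sti on x86; here they are simulated with a flag so the sketch is self-contained, and all names are illustrative.

```c
/* Simulated interrupt-enable flag standing in for the
   processor's real interrupt mask. */
static volatile int interrupts_enabled = 1;

#define DISABLE_INTERRUPTS() (interrupts_enabled = 0)
#define ENABLE_INTERRUPTS()  (interrupts_enabled = 1)

static volatile int shared_counter = 0;   /* data shared with an ISR */

/* The read-modify-write below is the atomic code; together with
   the disable/enable instructions it forms the critical section. */
void atomic_increment(void) {
    DISABLE_INTERRUPTS();    /* enter critical section */
    shared_counter++;        /* cannot be interrupted mid-update */
    ENABLE_INTERRUPTS();     /* leave critical section */
}
```

The critical section should be kept as short as possible, since interrupts arriving while it runs are delayed, adding to interrupt latency.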

Disabling task switching

When task switching is disabled, the task in the running state has sole authority over the data being
processed. No other task can execute, and hence none can act upon the data, avoiding undesired results.

8.5 Control Blocks

The task (or process) control block is a data structure containing the information required for managing a
task. The Task Control Block (TCB) specifies all the parameters necessary to schedule and execute a routine.

Generally a TCB contains:
- The identifier of the task (identification)
- The type of task: hard, soft, etc.
- Register values for the task, including the program counter and stack pointer values (status)
- The state of the task: new, ready, running, ...
- The address space of the task
- The priority value of the task
- Task accounting information, such as when the task was last run, how much CPU time it has accumulated, etc.
- I/O information (I/O devices allocated to the task, list of opened files, etc.)
- Pointers to other related tasks, and to code and data locations

Each task has its own TCB.
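A TCB holding the fields listed above might be declared as follows. Field names and widths are assumptions for illustration; real RTOS TCB layouts differ between kernels.

```c
#include <stdint.h>

/* Task states recorded in the TCB. */
typedef enum { T_NEW, T_READY, T_RUNNING, T_BLOCKED } tstate_t;

/* Illustrative Task Control Block. */
typedef struct tcb {
    uint32_t   task_id;        /* identifier                    */
    uint8_t    priority;       /* scheduling priority           */
    tstate_t   state;          /* new, ready, running, blocked  */
    uint32_t   pc;             /* saved program counter         */
    uint32_t   sp;             /* saved stack pointer           */
    uint32_t   cpu_time_used;  /* accounting information        */
    struct tcb *next;          /* link to related tasks         */
} tcb_t;
```

On a context switch the kernel saves the outgoing task's registers into its TCB (`pc`, `sp`) and restores them from the incoming task's TCB, which is why the TCB must live in protected kernel memory.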


When a task is created, a TCB is created alongside it. The TCB is maintained by the OS and is accessible
only to the OS or the kernel, which uses TCBs for scheduling, memory and I/O resource access, and
performance monitoring. The TCB is deleted when the task terminates.

The location of the TCB must be a protected area, normally alongside the kernel, in order to protect it from
overwriting and deletion.

Typically a TCB is 6-10 words long and is logically divided into two parts:
- Task-independent parameters: the first four (32-bit) words of the TCB are task-independent and simply
specify the scheduling parameters to the scheduler.
- Task-dependent parameters: these specify the routine to be executed and the parameters of its execution;
their number and format are routine dependent.


8.6 Memory Requirements and control kernel services

An embedded RTOS usually tries to use as little memory as possible by including only the functionality
needed for the user's applications. There are two types of memory management in RTOSs: stack management and
heap management.
In a multi-tasking RTOS, each task needs to be allocated an amount of memory for storing its context (i.e.
volatile information such as register contents, the program counter, etc.) for context switching. This
allocation is done using the task-control-block model. This set of memory is commonly known as the kernel
stack, and the management process is termed stack management.

Upon completion of program initialization, the physical memory of the MCU or MPU is usually occupied by
program code, program data and the system stack. The remaining physical memory is called the heap. Heap
memory is typically used by the kernel for dynamic allocation of data space for tasks: the memory is
divided into fixed-size memory blocks, which can be requested by tasks. When a task finishes using a memory
block it must return it to the pool. This process of managing the heap memory is known as heap management.
In general, a memory management facility maintains internal information for a heap in a reserved memory
area called the control block. Typical internal information includes:
- the starting address of the physical memory block used for dynamic memory allocation,
- the overall size of this physical memory block, and
- the allocation table, which indicates which memory areas are in use, which are free, and the size of each
free region.

Memory blocks are assigned by functions like malloc() and, when no longer required, freed by the free()
function.
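The fixed-size block scheme described above can be sketched as a small pool allocator with an allocation table. Sizes and names are illustrative assumptions; a real RTOS pool would also handle alignment and blocking.

```c
#include <stddef.h>

#define NUM_BLOCKS 4
#define BLOCK_SIZE 32

static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];  /* the heap region   */
static int free_table[NUM_BLOCKS];                  /* allocation table:
                                                       1 = block is free */
static int initialized = 0;

void *block_alloc(void) {
    if (!initialized) {                 /* lazily mark all blocks free */
        for (int i = 0; i < NUM_BLOCKS; i++) free_table[i] = 1;
        initialized = 1;
    }
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (free_table[i]) {            /* first free block wins */
            free_table[i] = 0;
            return pool[i];
        }
    return NULL;                        /* pool exhausted */
}

void block_free(void *p) {
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (p == pool[i]) {             /* return block to the pool */
            free_table[i] = 1;
            return;
        }
}
```

Because every block has the same size, allocation and freeing are constant cost per block and the pool never fragments, which is why RTOS kernels prefer this scheme over a general-purpose malloc().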

In practice, a well-designed memory allocation function should allow for allocation that permits blocking.
A blocking memory allocation function can be implemented using both a counting semaphore and a mutex lock.
These synchronization objects are created for each memory pool and are kept in its control structure. The
counting semaphore is initialized with the total number of available memory blocks at the creation of the
memory pool. Multiple tasks can access the free-block list of the memory pool via the counting semaphore:
once a task acquires the counting semaphore, it has reserved a memory block; it then takes the mutex, and
if it locks the mutex it gets to use the memory block, otherwise it waits.
The control block is updated each time memory is allocated or freed.
A task might wait for a block to become available, acquire the block, and then continue its execution.

Memory Management Unit


Virtual memory is a technique in which mass storage (for example, a hard disk) is made to appear to an
application as if it were RAM. The virtual memory address space (also called the logical address space) is
larger than the actual physical memory space; this allows a program larger than the physical memory to
execute. The memory management unit (MMU) provides several functions. First, the MMU translates the virtual
address to a physical address on each memory access. Second, the MMU provides memory protection.
If an MMU is enabled on an embedded system, the physical memory is typically divided into pages, and a set
of attributes is associated with each memory page. The attribute information can include:
- whether the page contains code (i.e. executable instructions) or data,
- whether the page is readable, writable, executable, or a combination of these, and
- whether the page can be accessed at all.

When the MMU is enabled, all memory access goes through it; the hardware therefore enforces memory access
according to the page attributes. For example, if a task tries to write to a memory region that allows only
read access, the operation is considered illegal and the MMU does not allow it; the operation triggers a
memory access exception.
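The per-page permission check described above can be sketched as a bitmask test. The attribute bit names and the `mmu_check` function are illustrative assumptions; in hardware this check happens on every access, in parallel with address translation.

```c
#include <stdbool.h>

/* Illustrative page-attribute bits. */
#define PAGE_READ  0x1u
#define PAGE_WRITE 0x2u
#define PAGE_EXEC  0x4u

typedef struct { unsigned attrs; } page_t;

/* Returns true if the requested access is permitted by the
   page attributes; a false return models the MMU raising a
   memory-access exception. */
bool mmu_check(const page_t *pg, unsigned access) {
    return (pg->attrs & access) == access;   /* all requested bits set */
}
```

A data page would typically carry read/write but not execute permission, so a stray jump into it would be rejected just like the illegal write in the example above.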

