Executing a single program can be simple: it may run directly on the hardware. When a system becomes complex, requires flexibility, and executes multiple applications, a separate program, the operating system, is required to manage the hardware and software resources of the computer system among those programs.
A real-time operating system (RTOS) differs from a standard OS. An embedded system is built for a dedicated task, so the OS running on it is likewise specialized. The defining property of an RTOS is that it responds to events and completes its tasks within given deadlines, that is, it acts in real time.
The kernel of an operating system is its central core, providing the most basic functions of the system. Whenever the system starts, these basic functions are loaded into a special area of memory that is designed not to be overwritten; the functions loaded there constitute the kernel. Core functionality such as task scheduling and memory management is provided by the kernel.
An RTOS is a concise OS, custom-tailored to offer only the services an embedded system needs. In some cases the RTOS is just the kernel, the real-time kernel (RTK); in the literature the two terms are even used synonymously.
A thread is a distinct executable portion of a process; it is also known as a lightweight process. Whenever a process executes, a portion of it is dispatched to the processor, and that distinct portion is the thread. A process can consist of multiple threads, and during execution of a program multiple threads are switched in and out.
Chapter 5 1
Embedded Systems Purushotam Shrestha
A task can mean a process. In an RTOS environment, however, the term has a more specific meaning, described next.
OS tasks:
In an RTOS environment, a task is a block of instructions, like a subroutine, together with the other information required for executing those instructions: task number, task priority, task state, location of instructions and data, etc. Whenever an application is run, the OS creates tasks for it; memory is allocated for processing the tasks, and the other information required for managing each task is generated and put in its task control block. An application program can consist of a number of tasks. A task is made ready and dispatched to the processor for execution, under the control of the scheduler.
A task may be
Periodic: arriving at regular intervals
Aperiodic: arriving with an unknown rate of arrival
Sporadic: arriving with an unknown rate of arrival but there is a minimum interval of time between two tasks
Preemptive: The running task may be replaced by another high priority task
Non-Preemptive: A non-preemptive task cannot be interrupted until its completion
Dependent: A dependent task executes communicating with others, exchanging data etc
Independent: An independent task executes without relation to other tasks.
Task States:
A task may be in one of the following states:
New
Ready
Running
Blocked
Terminated
New: Whenever a program is to be executed, a task is created; such a task is said to be in the new state. The OS initializes the task: it brings the necessary instructions into memory, creates the TCB, and starts gathering required data if it is not yet available. When everything is ready, the task moves to the ready state.
Ready: A task in the ready state is standing by to be executed by the processor. While one task is being executed, many tasks can be in the ready state. A running task may be preempted by a higher-priority task and return to the ready state. Whenever the processor finishes the task at hand and becomes available, the task with the highest priority, or the next one as scheduled by the OS, is selected for execution.
Running: A task under execution is said to be in the running state. Unless the system has multiple processors, a single processor can execute only one task at a time; in such systems, one task is in the running state while the others are in the ready or blocked states.
Blocked: A task is in the blocked state when it cannot proceed even though the processor may be free: it may have no data to process, or, if it is event-triggered, it may be waiting for an interrupt or a user action. In short, a task goes into the blocked state when it has nothing to do. Once triggered, the task moves to the ready state and then goes for execution.
Terminated: The OS destroys a task that finishes its execution. The task is then terminated and no longer exists: its TCB is destroyed and any memory it was using is freed.
Task Scheduling:
Task scheduling refers to queuing tasks in a certain order for their execution; it is the decision that takes a task from the ready state to the running state. Scheduling helps optimize various parameters of program execution, some of which are:
CPU utilization: keep the CPU as busy as possible and make it do as much as possible
Throughput: maximize the number of processes executed per unit time
Waiting time: decrease the amount of time a process in the ready state has to wait for the processor
Turnaround time: minimize the gross time (actual execution time + waiting time) to execute a particular process
Scheduling may be based upon priorities assigned to the tasks, their frequency of execution, etc. The RTOS contains a scheduler, which provides the scheduling function in order to time-share the processing unit. The scheduler is responsible for keeping track of the tasks in their various states and choosing a task for execution; the choice depends upon the method the scheduler employs.
When a task runs out of necessary data, so that it has nothing to act upon, it goes to the blocked state. The task's own code must determine whether it should go into the blocked state or continue running; this is not decided by the RTOS. Once in the blocked state, some event or instruction must trigger the transition back to the ready state.
The job of the scheduler is to choose among the tasks that are in ready state.
A scheduler may use algorithms based upon the following to choose among the tasks:
Priority: Tasks are assigned distinct priorities and the higher-priority task is run first (HPF: highest priority first). The running task may be replaced by a higher-priority task (preemptive scheduling), or a running task may be executed to completion even if a higher-priority task is available (non-preemptive scheduling).
Execution times: The task with the shortest execution time is run first; the goal is to minimize average waiting time. If another task with a shorter time arrives and the preemptive method is employed, the running task is replaced by the newly arrived task.
Deadlines: The tasks with the tightest and earliest deadlines are run first (EDF: earliest deadline first).
Arrival times: The task that reaches the ready state first is run first, the FIFO method.
Round Robin: Each task is allowed to execute for a fixed time and if not completed it is sent to the end of the
queue for execution after others in the queue have executed, each for the same time period.
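The round-robin idea above can be sketched in a few lines of C. This is a minimal illustration, not a real RTOS scheduler: each "task" is reduced to a remaining-execution-time counter, and the quantum and task set are made up for the example.

```c
/* Round-robin sketch: hand out fixed time slices in turn until every
 * task's remaining execution time reaches zero.  Returns the number of
 * slices dispatched in total. */
int round_robin(int remaining[], int n, int quantum) {
    int slices = 0, done = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;        /* already finished */
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            remaining[i] -= run;                    /* "execute" one slice */
            slices++;
            if (remaining[i] == 0) done++;          /* task completed */
        }
    }
    return slices;
}
```

For example, three tasks needing 5, 3 and 1 time units with a quantum of 2 finish after 6 slices: each unfinished task gets one slice per pass, and a task needing less than a full quantum uses only what it needs.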
An interrupt is a signal generated by a peripheral device that interrupts the processor during its normal program execution so that the processor can respond to the device's request.
An interrupt service routine (ISR) is the block of instructions executed in response to the interrupt.
The interrupt vector is the memory location that holds the address of (or a jump to) the interrupt service routine.
Interrupt latency is the time between the generation of an interrupt and the response to it.
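A common coding pattern follows from these definitions: keep the ISR short (to keep latency low for other interrupts) and defer the real work to the main loop. The sketch below illustrates this; the ISR name, the flag, and the "processing" are all invented for the example, and on real hardware the ISR would be registered in the interrupt vector table.

```c
#include <stdbool.h>

/* Shared between ISR and main loop, hence volatile. */
static volatile bool data_ready = false;
static volatile int  latest_sample = 0;

/* Imagined ISR: runs when the peripheral raises an interrupt.
 * It only captures the data and sets a flag, nothing more. */
void adc_isr(int sample) {
    latest_sample = sample;
    data_ready = true;
}

/* Main-loop side: process the data when the flag is set.
 * Returns the processed value, or -1 if nothing is pending. */
int poll_and_process(void) {
    if (data_ready) {
        data_ready = false;
        return latest_sample * 2;   /* placeholder processing */
    }
    return -1;
}
```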
Producer-consumer situation: A producer must produce a product before a consumer can consume it. Likewise, the task that supplies data must write the data before the task that requires the data performs a read operation.
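The ordering constraint can be made concrete with a one-slot buffer: the consumer refuses to read until the producer has written, and the producer refuses to overwrite unconsumed data. The names and the single-slot design are illustrative only.

```c
#include <stdbool.h>

static int  slot;            /* the shared data item */
static bool full = false;    /* has the producer written yet? */

/* Producer: write only if the slot is empty. Returns true on success. */
bool produce(int value) {
    if (full) return false;
    slot = value;
    full = true;
    return true;
}

/* Consumer: read only if the slot is full. Returns true on success. */
bool consume(int *out) {
    if (!full) return false;
    *out = slot;
    full = false;
    return true;
}
```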
Intertask Communication
Intertask communication involves sharing data among tasks through shared memory space, transmission of data, and so on. A few of the mechanisms available for intertask communication are:
Shared memory: Data exchange among tasks can be accomplished by shared use of the memory holding the data: the communicating tasks read and write the same memory location. A memory space in a program is represented by a variable declaration, and the memory location can be referred to through the variable. This method is simple and easy to implement, but care must be taken while programming that concurrent modification of the shared location does not produce invalid results.
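One common precaution is to guard every access to the shared location with a mutual-exclusion lock, so a read-modify-write by one task cannot interleave with another's. The sketch below uses POSIX pthread mutexes as a stand-in for whatever lock primitive a particular RTOS provides; the variable and function names are illustrative.

```c
#include <pthread.h>

/* Two tasks share `shared_count`; all access goes through the mutex. */
static int shared_count = 0;
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

void task_increment(void) {
    pthread_mutex_lock(&count_lock);    /* enter critical section */
    shared_count++;                     /* safe read-modify-write */
    pthread_mutex_unlock(&count_lock);
}

int task_read(void) {
    pthread_mutex_lock(&count_lock);
    int v = shared_count;
    pthread_mutex_unlock(&count_lock);
    return v;
}
```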
4 Chapter 5
Embedded Systems Purushotam Shrestha
Message passing: A sender task simply performs a send() operation and the receiver task gets the data with a receive() operation. The data to be exchanged is passed along with parameters such as memory address values, sender and receiver IDs, etc.
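A message queue behind such send()/receive() operations is often just a small ring buffer. The sketch below is a minimal single-queue version; the capacity, the integer message type, and the function names are assumptions for illustration, not any particular RTOS API.

```c
#include <stdbool.h>

#define QCAP 4                      /* illustrative queue capacity */
static int  queue[QCAP];
static int  head = 0, tail = 0, count = 0;

/* Enqueue a message; fails (returns false) when the queue is full. */
bool send_msg(int msg) {
    if (count == QCAP) return false;
    queue[tail] = msg;
    tail = (tail + 1) % QCAP;
    count++;
    return true;
}

/* Dequeue the oldest message; fails when the queue is empty. */
bool receive_msg(int *msg) {
    if (count == 0) return false;
    *msg = queue[head];
    head = (head + 1) % QCAP;
    count--;
    return true;
}
```

Messages come out in FIFO order, matching the usual message-queue semantics.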
Pipes: A pipe is an object that provides a simple communication channel for unstructured data exchange among tasks. A pipe can be opened, closed, written to and read from. Traditionally, a pipe is a unidirectional data-exchange facility. There are two descriptors, one at each end of the pipe, for reading and writing respectively. Data is written into the pipe as an unstructured byte stream via one descriptor and read from the other in FIFO order. Unlike a message queue, a pipe does not store multiple discrete messages but a stream of bytes. In addition, data flow through a pipe cannot be prioritized.
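On a POSIX system this maps directly onto the pipe() call: fd[0] is the read descriptor and fd[1] the write descriptor. The helper below (its name is made up for the example) writes a string into a pipe and reads it back out, showing the two-descriptor, byte-stream behavior.

```c
#include <string.h>
#include <unistd.h>

/* Write msg into a fresh pipe, read it back into buf, and return the
 * number of bytes read (or -1 on failure). */
long pipe_roundtrip(const char *msg, char *buf, size_t cap) {
    int fd[2];
    if (pipe(fd) != 0) return -1;

    write(fd[1], msg, strlen(msg));        /* writer descriptor fd[1] */
    long n = read(fd[0], buf, cap - 1);    /* reader descriptor fd[0] */
    if (n >= 0) buf[n] = '\0';

    close(fd[0]);
    close(fd[1]);
    return n;
}
```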
Remote procedure calls (RPC): An RPC component permits distributed computing, where a task can invoke the execution of another task on a remote computer as if that task ran on the same computer.
Task Synchronization
Semaphores
A semaphore allows single, controlled access to data or any other resource. Semaphores were used on railways to prevent two trains from occupying the same track section and colliding: when a train enters a section, a semaphore is lowered, and once the train leaves the section, the semaphore is raised, preventing the entry of another train into that section. Similarly, in an RTOS a task can access a resource only if it holds the semaphore. The semaphore is taken by calling a function like takesemaphore() and released by givesemaphore(); the function names vary between systems. A task without the semaphore cannot access and modify the data, so the use of invalid data is prevented, eliminating undesirable results. Note that once a semaphore is taken it must be released; otherwise other tasks will never be able to use the data.
Generally a semaphore is associated with the resource that tasks need to access. A semaphore maintains:
A resource count: it shows the availability of the resource. A value of 1 represents a single instance; if there are N instances of the resource, the value is N. As the semaphore is granted, the count is decreased.
A wait queue: it queues the tasks that request the semaphore for a particular resource.
Semaphores may be:
Binary semaphores: the semaphore value is either 0 or 1, indicating unavailability and availability respectively. A semaphore taken by one task may be released by another.
Counting semaphores: the semaphore value is a signed integer, which may be
Negative: tasks are queued waiting to acquire the lock, the absolute value representing the number of tasks in the queue
Zero or positive: no task is waiting; a positive value gives the number of resource instances still available
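Counting-semaphore usage can be sketched with the POSIX semaphore API, where take and give correspond to sem_wait()/sem_trywait() and sem_post(). The pool of two buffers and the wrapper names below are invented for the example.

```c
#include <semaphore.h>

#define NUM_BUFFERS 2          /* illustrative: two identical resources */
static sem_t buffer_sem;

/* Initialize the counting semaphore to the number of free buffers. */
void buffers_init(void) { sem_init(&buffer_sem, 0, NUM_BUFFERS); }

/* Non-blocking take: returns 0 if a buffer was reserved, nonzero if
 * none is available (a blocking taker would use sem_wait instead). */
int take_buffer(void) { return sem_trywait(&buffer_sem); }

/* Give a buffer back, incrementing the count. */
void give_buffer(void) { sem_post(&buffer_sem); }
```

With two buffers, two takes succeed, a third fails until some task gives a buffer back.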
The task (or process) control block is a data structure containing the information required for managing a task: the Task Control Block (TCB) specifies all the parameters necessary to schedule and execute a routine. The TCB must be located in a protected area, normally alongside the kernel, to protect it from overwriting and deletion.
Typically, a TCB is 6-10 words long and is logically divided into two parts:
Task-Independent Parameters - The first four words (32-bit) of the TCB are task-independent and simply
specify the scheduling parameters to the scheduler.
Task-Dependent Parameters - These parameters specify the routine to be executed and the parameters of
execution. The number and format of these parameters is routine dependent.
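A TCB can be pictured as a C struct with the same two-part split. The field names, widths and ordering below are assumptions made for illustration; real kernels each define their own layout.

```c
#include <stdint.h>

typedef enum { TASK_NEW, TASK_READY, TASK_RUNNING,
               TASK_BLOCKED, TASK_TERMINATED } task_state_t;

typedef struct {
    /* task-independent scheduling parameters */
    uint32_t     task_id;
    uint8_t      priority;       /* higher value = higher priority (assumed) */
    task_state_t state;
    uint32_t    *stack_ptr;      /* saved stack pointer for context switching */
    /* task-dependent parameters */
    void       (*entry)(void *); /* the routine to be executed */
    void        *arg;            /* its argument */
} tcb_t;
```

The scheduler only needs the first group to make its decision; the second group tells the dispatcher what to actually run.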
An embedded RTOS usually tries to use as little memory as possible by including only the functionality needed for the user's applications. There are two kinds of memory management in an RTOS: stack management and heap management.
In a multitasking RTOS, each task needs to be allocated an amount of memory for storing its context (volatile information such as register contents, the program counter, etc.) for context switching. This allocation is done using the task control block model. This memory is commonly known as the kernel stack, and the management process is termed stack management.
Upon completion of program initialization, the physical memory of the MCU or MPU is typically occupied by program code, program data and the system stack. The remaining physical memory is called the heap. Heap memory is typically used by the kernel for dynamic allocation of data space for tasks: the memory is divided into fixed-size blocks, which tasks can request. When a task finishes using a memory block it must return it to the pool. This process of managing the heap memory is known as heap management.
In general, a memory management facility maintains internal information for a heap in a reserved memory area called the control block. Typical internal information includes:
the starting address of the physical memory block used for dynamic memory allocation,
the overall size of this physical memory block, and
the allocation table, which indicates which memory areas are in use, which are free, and the size of each free region.
Memory blocks are assigned by functions like malloc(), and when no longer required the memory is freed by the free() function.
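A fixed-block pool, as described above, can be sketched in a few lines: the heap area is carved into equal blocks and a simple allocation table records which are free. The block size, block count and function names are illustrative; real RTOS pool APIs differ.

```c
#include <stddef.h>

#define BLOCK_SIZE 32
#define NUM_BLOCKS 4

static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
static int free_table[NUM_BLOCKS];     /* 1 = free, 0 = in use */

void pool_init(void) {
    for (int i = 0; i < NUM_BLOCKS; i++) free_table[i] = 1;
}

/* Return a free block, or NULL when the pool is exhausted. */
void *pool_alloc(void) {
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (free_table[i]) { free_table[i] = 0; return pool[i]; }
    }
    return NULL;
}

/* Return a block to the pool so other tasks can request it. */
void pool_free(void *p) {
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (p == (void *)pool[i]) { free_table[i] = 1; return; }
    }
}
```

Fixed-size blocks make allocation fast and free of fragmentation, at the cost of wasting space when a task needs less than a full block.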
In practice, a well-designed memory allocation function should allow for allocations that block. A blocking memory allocation function can be implemented using both a counting semaphore and a mutex lock. These synchronization objects are created for each memory pool and are kept in its control structure. The counting semaphore is initialized with the total number of available memory blocks when the memory pool is created. Multiple tasks can access the free-blocks list of the memory pool through the counting semaphore: once a task acquires the counting semaphore, it has reserved a memory block. It then attempts the mutex; if it locks the mutex, it gets to use the memory block, otherwise it waits.
The control block is updated each time memory is allocated or freed.
A task might thus wait for a block to become available, acquire the block, and then continue its execution.
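The counting-semaphore-plus-mutex scheme described above can be sketched with POSIX primitives: the semaphore counts free blocks (so a taker blocks until one exists), and the mutex protects the free-block list itself. The pool sizes and function names are invented for the example.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

#define NBLOCKS 2
static unsigned char storage[NBLOCKS][64];
static void *blocks[NBLOCKS];
static int   in_use[NBLOCKS];
static sem_t free_count;                 /* counts free blocks */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

void mempool_init(void) {
    sem_init(&free_count, 0, NBLOCKS);   /* all blocks start free */
    for (int i = 0; i < NBLOCKS; i++) { blocks[i] = storage[i]; in_use[i] = 0; }
}

void *mempool_alloc(void) {
    sem_wait(&free_count);               /* block until a block is free */
    pthread_mutex_lock(&list_lock);      /* then update the list safely */
    void *p = NULL;
    for (int i = 0; i < NBLOCKS; i++)
        if (!in_use[i]) { in_use[i] = 1; p = blocks[i]; break; }
    pthread_mutex_unlock(&list_lock);
    return p;
}

void mempool_free(void *p) {
    pthread_mutex_lock(&list_lock);
    for (int i = 0; i < NBLOCKS; i++)
        if (blocks[i] == p) in_use[i] = 0;
    pthread_mutex_unlock(&list_lock);
    sem_post(&free_count);               /* announce a newly free block */
}
```

Because sem_wait() blocks when the count is zero, a task asking for a block while the pool is empty simply waits until some other task calls mempool_free().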
When the MMU is enabled, all memory access goes through it, so the hardware enforces memory access according to page attributes. For example, if a task tries to write to a memory region that allows only read access, the operation is considered illegal and the MMU does not allow it; instead, the operation triggers a memory access exception.