

Embedded Systems Programming


Understanding Semaphores

Multitasking real-time software is a mixed blessing at best. The adoption of a real-time kernel or its equivalent allows you to quickly create a system that can do many things at the same time. From a design perspective, breaking an application into multiple tasks results in applications that are simpler to create and maintain. You must, however, pay a price for these benefits: you now have to grapple with yet another layer of software. In addition to your original application, you have to comprehend all the features and functions of a small operating system.

A few months ago, I tackled this problem of real-time complexity by offering readers uCOS, a freeware real-time multitasking kernel.1,2 While the response to uCOS was gratifying, I soon realized that a number of readers were unsure how to use a real-time operating system. To help dispel some of the confusion, this month I'll describe semaphores, one of the most common features of a real-time operating system. You don't need a specific kernel to follow the discussion, but I will assume that the kernel you're using supports preemptive multitasking.

WHY SEMAPHORES?

As soon as you introduce the idea of multiple tasks, you introduce competition for resources. When several tasks can use the same portion of memory or another system resource, you have to find a way to keep the tasks out of each other's way. Work on this problem dates back to the early days of computer science,3 and the semaphore was introduced by Edsger Dijkstra back in 1965.4 Today, you'll find that most kernels and operating systems offer semaphores among their features.

So what is a semaphore? Basically, a semaphore is a protocol mechanism for task communication. Specifically, semaphores are used to:

* Control access to a shared resource (mutual exclusion).
* Signal the occurrence of an event.
* Allow two tasks to synchronize their activities.

A semaphore is basically a key that your code acquires to continue execution.
If the semaphore is already in use, the requesting task is suspended until the semaphore is released by its current owner. In other words, the requesting task says: "Give me the key. If you don't have it, I'm willing to wait for it."

There are two types of semaphores: binary semaphores and counting semaphores. As its name implies, a binary semaphore can only take two values: zero or one. A counting semaphore, however, allows values between zero and 255, 65,535, or 4,294,967,295, depending on whether it is implemented using eight, 16, or 32 bits, respectively. The size depends on the kernel used; in practice, 32-bit semaphores are pretty rare. Along with the semaphore's value, the kernel needs to keep track of any tasks that are waiting for it.

Generally, only three operations can be performed on a semaphore: initialize (also called create), wait (also called pend), and signal (also called post). The initial value of the semaphore must be provided when it is initialized; the waiting list of tasks is always initially empty.

A task desiring the semaphore performs a wait operation. If the semaphore is available (its value is greater than zero), the semaphore value is decremented and the task continues execution. If the semaphore's value is zero, the task performing a wait on the semaphore is placed in a waiting list. Most kernels allow you to specify a timeout: if the semaphore is not available within a certain amount of time, the requesting task is made ready to run and an error code (indicating that a timeout occurred) is returned to it.

A task releases a semaphore by performing a signal operation. If no task is waiting for the semaphore, its value is incremented. However, if a task is waiting for the semaphore, one of the waiting tasks is made ready to run and the semaphore value is not incremented; the key is given directly to a waiting task. Depending on the kernel used, the task that receives the semaphore is either the highest priority task waiting for the semaphore or the first task that requested it. Some kernels allow you to choose either method through an option when the semaphore is initialized. (uCOS always readies the highest priority task that is waiting for the semaphore.)
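To make the wait and signal semantics concrete, here is a minimal single-threaded C simulation of a counting semaphore. The struct, the function names, and the would-block return code are invented for this sketch, not any particular kernel's API; a real kernel would also maintain the waiting list and suspend the caller instead of returning an error.

```c
#include <assert.h>

/* Illustrative counting semaphore; all names are invented for this sketch. */
typedef struct {
    unsigned int count;   /* current value of the semaphore */
} Sem;

void sem_init_sim(Sem *s, unsigned int initial) {
    s->count = initial;             /* value supplied at creation */
}

/* Wait (pend): returns 0 on success, -1 where a real kernel
 * would suspend the calling task (or eventually time out). */
int sem_wait_sim(Sem *s) {
    if (s->count > 0) {
        s->count--;                 /* key taken, task continues */
        return 0;
    }
    return -1;                      /* value is zero: caller would block */
}

/* Signal (post): increment the value. A real kernel would first check
 * the waiting list and hand the key directly to a waiting task. */
void sem_signal_sim(Sem *s) {
    s->count++;
}
```

For instance, a semaphore created with a value of one admits one successful wait; a second wait would block until a signal occurs.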
If the readied task has a higher priority than the current task (the task that is releasing the semaphore), a context switch occurs and the higher priority task resumes execution. The current task is suspended until it again becomes the highest priority task that is ready to run.

MUTUAL EXCLUSION

Imagine what would happen if two tasks were allowed to send characters to a printer at the same time. The printer would contain interleaved data from each task. For instance, if task number one tried to print "I am task #1!" and task number two tried to print "I am task #2!" the printout could look like: "I Ia amm t tasask k#1 #!2!" To prevent this situation, each task needs exclusive access to the printer until its print job completes. In this case, a binary semaphore is used to provide mutual exclusion, and the semaphore is initialized to one. The rule is simple: to access the resource, you must first obtain the resource's semaphore. Figure 1 shows the tasks competing for a semaphore to gain exclusive access to the printer. The semaphore is represented by a flag.
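On a hosted system the same pattern can be demonstrated with POSIX threads, using a pthread mutex in the role of the binary semaphore (initialized to "available"). The run_print_demo() function, the string buffer standing in for the printer, and the message text are invented for this sketch; an RTOS would use its own semaphore calls instead.

```c
#include <pthread.h>
#include <string.h>
#include <assert.h>

static pthread_mutex_t printer_sem = PTHREAD_MUTEX_INITIALIZER; /* "binary semaphore", initially free */
static char printer_output[128];   /* stands in for the printer */

/* Each task must hold the semaphore for its whole print job. */
static void *print_task(void *arg) {
    const char *msg = (const char *)arg;
    pthread_mutex_lock(&printer_sem);      /* wait for the key */
    strcat(printer_output, msg);           /* whole message goes out unbroken */
    pthread_mutex_unlock(&printer_sem);    /* give the key back */
    return NULL;
}

/* Run two competing tasks and return what "printed". */
const char *run_print_demo(void) {
    pthread_t t1, t2;
    printer_output[0] = '\0';
    pthread_create(&t1, NULL, print_task, "I am task #1!");
    pthread_create(&t2, NULL, print_task, "I am task #2!");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return printer_output;
}
```

Because each task holds the semaphore for the duration of its output, the two messages can appear in either order but never interleaved.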

Figure 1 implies that each task must know about the existence of the semaphore in order to access the resource. In some situations, it is better to encapsulate the semaphore; each task then does not know that it is actually acquiring a semaphore when accessing the resource. For example, suppose a serial port is used by multiple tasks to send commands to and receive responses from a device connected at the other end of the port. A flow diagram is shown in Figure 2.

The function CommSendCmd() is called with three arguments: the ASCII string containing the command, a pointer to the response string from the device, and a timeout (in case the device doesn't respond within a certain amount of time). Here is the pseudocode for this function:

    UBYTE CommSendCmd(char *cmd, char *response, UWORD timeout)
    {
        Acquire port's semaphore;
        Send command to device;
        Wait for response (with timeout);
        if (timed out) {
            Release semaphore;
            return (error code);
        } else {
            Release semaphore;
            return (no error);
        }
    }

Each task that needs to send a command to the device calls this function. The semaphore is assumed to be initialized to one (indicating that it is available) by the communication driver's initialization routine. The first task that calls CommSendCmd() will acquire the semaphore, send the command, and wait for a response. If another task attempts to send a command while the port is busy, this second task is suspended until the semaphore is released. From the second task's perspective, it simply made a call to a normal function that does not return until the function has performed its duty. When the semaphore is released by the first task, the second task acquires the semaphore and is allowed to use the serial port.

A more general type of semaphore, the counting semaphore, allows the semaphore value to take values greater than one. A counting semaphore is used when a resource can be used by more than one task at the same time.
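As a sketch of the encapsulation idea on a hosted system, the following C module hides a mutex (playing the binary semaphore's role) behind comm_send_cmd(), and the device itself is stubbed out. All names here (comm_send_cmd, device_exchange, the error codes) are invented for illustration; they are not uCOS or POSIX APIs.

```c
#include <pthread.h>
#include <string.h>
#include <assert.h>

#define COMM_OK       0
#define COMM_TIMEOUT  1

/* The port's semaphore is private to this module: callers never see it. */
static pthread_mutex_t port_sem = PTHREAD_MUTEX_INITIALIZER;

/* Stub for "send command, wait for response (with timeout)".
 * A real driver would talk to the UART here; the stub just echoes. */
static int device_exchange(const char *cmd, char *response, unsigned timeout_ms) {
    (void)timeout_ms;                 /* stub never times out */
    strcpy(response, "OK:");
    strcat(response, cmd);
    return COMM_OK;
}

int comm_send_cmd(const char *cmd, char *response, unsigned timeout_ms) {
    int err;
    pthread_mutex_lock(&port_sem);    /* acquire port's semaphore */
    err = device_exchange(cmd, response, timeout_ms);
    pthread_mutex_unlock(&port_sem);  /* release it on every return path */
    return err;
}
```

A caller simply invokes comm_send_cmd() and never learns that a semaphore mediated its access to the port.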
For example, a counting semaphore is used in the management of a buffer pool, as in Figure 3. Let's assume that the buffer pool initially contains 10 buffers. A task obtains a buffer from the buffer manager by calling BufReq(). When the buffer is no longer needed, the task returns the buffer to the buffer manager by calling BufRel(). Here is the pseudocode for these functions:

    BUF *BufReq(void)
    {
        BUF *ptr;

        Acquire a semaphore;
        Disable interrupts;
        ptr = BufFreeList;
        BufFreeList = ptr->BufNext;
        Enable interrupts;
        return (ptr);
    }

    void BufRel(BUF *ptr)
    {
        Disable interrupts;
        ptr->BufNext = BufFreeList;
        BufFreeList = ptr;
        Enable interrupts;
        Release semaphore;
    }

The buffer manager will satisfy the first 10 buffer requests (since there are 10 keys). When all of the buffers are in use, a task requesting a buffer is suspended until one becomes available. Interrupts are disabled to gain exclusive access to the linked list (this operation is very fast). When a task is finished with a buffer, it calls BufRel() to return the buffer to the buffer manager; the buffer is inserted into the linked list before the semaphore is released. By encapsulating the interface to the buffer manager, the caller doesn't need to be concerned with implementation details.

ENOUGH IS ENOUGH

Semaphores are a useful tool in real-time systems, but be careful not to overuse them. For example, using a semaphore to access a simple shared variable is overkill in many situations. The overhead involved in acquiring and releasing the semaphore can become expensive, and disabling and enabling interrupts could just as well do the job. All real-time kernels disable interrupts during critical sections of code, so you can disable interrupts for as much time as the kernel does without affecting interrupt latency. With this approach, you need to know how long the kernel disables interrupts, as well as the worst-case requirement for your application.

For example, suppose that two tasks are sharing a 16-bit integer variable: the first task increments the variable and the other task clears it. If you consider how little time a processor takes to perform either operation, you'll see it would be foolish to use a semaphore to gain exclusive access to the variable. Each task simply disables interrupts before performing its operation on the variable and enables interrupts when it is finished.
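Here is a single-threaded C simulation of the buffer pool above. The counting semaphore is reduced to a plain counter, and "would suspend" is modeled as returning NULL; the structure and function names mirror the pseudocode, but the implementation details are my own illustration.

```c
#include <stddef.h>
#include <assert.h>

#define NUM_BUFS 10

typedef struct buf {
    struct buf *BufNext;
    char data[32];
} BUF;

static BUF pool[NUM_BUFS];
static BUF *BufFreeList;           /* singly linked free list */
static unsigned buf_sem;           /* counting semaphore, value = free buffers */

void BufInit(void) {
    int i;
    BufFreeList = NULL;
    for (i = 0; i < NUM_BUFS; i++) {   /* link all buffers onto the free list */
        pool[i].BufNext = BufFreeList;
        BufFreeList = &pool[i];
    }
    buf_sem = NUM_BUFS;                /* one key per buffer */
}

BUF *BufReq(void) {
    BUF *ptr;
    if (buf_sem == 0)                  /* a real kernel would suspend here */
        return NULL;
    buf_sem--;                         /* wait on the counting semaphore */
    /* interrupts would be disabled around the list update */
    ptr = BufFreeList;
    BufFreeList = ptr->BufNext;
    return ptr;
}

void BufRel(BUF *ptr) {
    /* interrupts would be disabled around the list update */
    ptr->BufNext = BufFreeList;
    BufFreeList = ptr;
    buf_sem++;                         /* signal: one more buffer available */
}
```

The first 10 requests succeed; the 11th would block until some task releases a buffer.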
However, a semaphore should be used if you are incrementing and clearing a floating-point variable on a microprocessor that doesn't support floating point in hardware. In this case, disabling interrupts for the time it takes to increment the floating-point variable could affect interrupt latency.
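On a hosted platform, the "make the operation indivisible" option described above can be sketched with C11 atomics standing in for the disable/enable-interrupts pair. This is an analogy for illustration, not what an embedded kernel literally does; the function names are invented.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <assert.h>

/* Shared 16-bit counter; one task increments it, another reads and clears it. */
static _Atomic uint16_t shared_count;

void count_event(void) {                 /* task #1: increment */
    atomic_fetch_add(&shared_count, 1);
}

uint16_t take_counts(void) {             /* task #2: read and clear */
    /* both steps happen as one indivisible operation */
    return atomic_exchange(&shared_count, 0);
}
```

Each access is a single indivisible step, so no semaphore (and no long interrupt-off window) is needed for this variable.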

Another danger for real-time developers is the deadlock, also known as the "deadly embrace." In this situation, two tasks are unknowingly waiting for resources held by each other. For example, if task T1 has exclusive access to resource R1 and task T2 has exclusive access to resource R2, and T1 now needs exclusive access to R2 while T2 needs exclusive access to R1, neither task can continue--they are deadlocked. The simplest way to avoid a deadlock is for both tasks to acquire all the resources they need before proceeding, and to acquire those resources in the same order.

GETTING IN SYNCH

A task can be synchronized with an interrupt-service routine (ISR) or another task by using a semaphore, as shown in Figure 4. Semaphores are used to synchronize a task to an ISR or another task when no data is being exchanged. When used as a synchronization mechanism, the semaphore is initialized to zero. Using a semaphore for this type of synchronization is called a unilateral rendezvous. For example, a task initiates an I/O operation and then waits for the semaphore. When the I/O operation is complete, an ISR (or another task) signals the semaphore and the task is resumed. If the kernel supports counting semaphores, the semaphore accumulates events that have not yet been processed.

Keep in mind that more than one task can be waiting for the event to occur. In this case, the kernel could signal the occurrence of the event either to the highest priority task waiting for the event or to the first task that began waiting. Depending on the application, more than one ISR or task could signal the occurrence of the event.

Two tasks can synchronize their activities by using two semaphores, as shown in Figure 5. This activity is called a bilateral rendezvous. A bilateral rendezvous is similar to a unilateral rendezvous except that the tasks must be synchronized with each other before proceeding.
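The ordered-acquisition rule can be sketched with POSIX threads: both tasks take R1 before R2, so the deadly embrace cannot form. The function names and the two-mutex scenario are invented for this illustration.

```c
#include <pthread.h>
#include <assert.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource R1 */
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource R2 */
static int work_done;

/* Both tasks follow the same rule: acquire R1, then R2.
 * If one task took R1-then-R2 and the other R2-then-R1, each could
 * grab one resource and wait forever on the other--a deadlock. */
static void *task(void *arg) {
    (void)arg;
    pthread_mutex_lock(&r1);       /* always first */
    pthread_mutex_lock(&r2);       /* always second */
    work_done++;                   /* use both resources */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

int run_ordered_demo(void) {
    pthread_t t1, t2;
    work_done = 0;
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);        /* joins complete only if no deadlock */
    pthread_join(t2, NULL);
    return work_done;
}
```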
Imagine that two tasks are executing:

    Task1()
    {
        while (1) {
            Perform operation;
            Signal task #2;
            Wait for signal from task #2;
            Continue operation;
        }
    }

    Task2()
    {
        while (1) {
            Perform operation;
            Signal task #1;
            Wait for signal from task #1;
            Continue operation;
        }
    }

When the first task reaches a certain point, it signals the second task, then waits for a signal from the second task. Similarly, when the second task reaches a certain point, it signals the first task, then waits for a signal from the first task. At this point, the tasks are synchronized. A bilateral rendezvous cannot be performed between a task and an ISR, because an ISR cannot wait on a semaphore.

EVENT FLAGS

Event flags are used when a task needs to be synchronized with multiple events. The task can be synchronized when any of the events has occurred; this is called disjunctive synchronization (or logical OR). A task can also be synchronized when all events have occurred; this is called conjunctive synchronization (or logical AND). Disjunctive and conjunctive synchronization are shown in Figure 6. Common events can be used to signal multiple tasks, as shown in Figure 7.

Events are usually grouped. A group can consist of eight, 16, or 32 events. Tasks and ISRs can set or clear any event in a group. A task is resumed when all the events it requires are satisfied, and the decision about which task to resume is made each time a new set of events occurs.

Coordinating tasks in a real-time system can be a confusing matter for even the most experienced developer. For beginners in the field of real-time programming, it can seem impossible. I hope this brief tour has helped you over some of the stumbling blocks by giving you an understanding of the strengths and weaknesses of semaphores.

BY JEAN J. LABROSSE

Jean J. Labrosse has a Master's degree in Electrical Engineering from the University of Sherbrooke in Quebec, Canada, and has been developing real-time software for over 10 years. He is currently employed by Dynalco Controls in Fort Lauderdale, Fla., where he designs control software for industrial reciprocating engines. He is currently working on a book describing the uCOS real-time kernel.

References

1. Labrosse, Jean J. "A Real-Time Kernel in C."
Embedded Systems Programming, May 1992, pp. 40-53.
2. Labrosse, Jean J. "Implementing a Real-Time Kernel." Embedded Systems Programming, June 1992, pp. 44-49.

3. Tanenbaum, Andrew S. Modern Operating Systems. Englewood Cliffs, N.J.: Prentice-Hall, 1992.
4. Dijkstra, E.W. "Cooperating Sequential Processes." In Programming Languages, Genuys, F. (Ed.). London, U.K.: Academic Press, 1965.
5. Allworth, S.T. Introduction to Real-Time Software Design. New York, N.Y.: Springer-Verlag, 1981.
6. Savitzky, Stephen. Real-Time Microprocessor Systems. New York, N.Y.: Van Nostrand Reinhold Company, 1985.
