
Operating Systems

================ Start Lecture #4 ================

2.2.2: Thread Usage


Often, when a process A is blocked (say for I/O) there is still computation that can be done. Another process B can't do this computation since it doesn't have access to A's memory. But two threads in the same process do share memory, so that problem doesn't occur.

An important modern example is a multithreaded web server. Each thread responds to a single WWW connection. While one thread is blocked on I/O, another thread can be processing another WWW connection.

Question: Why not use separate processes, i.e., what is the shared memory?
Answer: The cache of frequently referenced pages.

A common organization is to have a dispatcher thread that fields requests and then passes each request on to an idle worker thread; a sketch appears below.

Another example is a producer-consumer problem (cf. below) in which we have 3 threads in a pipeline. One thread reads data from an I/O device into an input buffer, the second thread performs computation on the input buffer and places results in an output buffer, and the third thread outputs the data found in the output buffer. Again, while one thread is blocked the others can execute.

Question: Why does each thread block?
Answer:

1. The first thread blocks waiting for the device to finish reading the data. It also blocks if the input buffer is full.
2. The second thread blocks when either the input buffer is empty or the output buffer is full.
3. The third thread blocks when the output device is busy (it might also block waiting for the output request to complete, but this is not necessary). It also blocks if the output buffer is empty.

Homework: 9.

A final (related) example is that an application that wishes to perform automatic backups can have a thread to do just this. In this way the thread that interfaces with the user is not blocked during the backup. However, some coordination between threads may be needed so that the backup is of a consistent state.
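Here is a minimal C sketch of the dispatcher/worker organization mentioned above, using POSIX threads. The request type and the simulated work are stand-ins I made up for illustration; a real server would obtain requests from accept() and do real I/O.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Stand-in for a WWW request; a real server would get these from accept(). */
typedef struct { int id; } request_t;

static void *worker(void *arg) {
    request_t *r = arg;
    sleep(1);                  /* simulate blocking I/O: other threads keep running */
    printf("served request %d\n", r->id);
    free(r);
    return NULL;
}

int main(void) {
    /* The dispatcher fields requests and hands each to a worker thread. */
    for (int i = 0; i < 3; i++) {
        request_t *r = malloc(sizeof *r);
        r->id = i;
        pthread_t t;
        pthread_create(&t, NULL, worker, r);
        pthread_detach(t);     /* no join: the worker cleans up after itself */
    }
    sleep(2);                  /* crude wait so the workers finish before exit */
    return 0;
}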

2.2.3: Implementing threads in user space


Write a (threads) library that acts as a mini-scheduler and implements thread_create, thread_exit, thread_wait, thread_yield, etc. The central data structure maintained and used by this library is the thread table, the analogue of the process table in the operating system itself.

Advantages

- Requires no OS modification. Requires no OS modification. Requires no OS modification.
- Very fast since no context switching.
- Can customize the scheduler for each application.

Disadvantages

- Blocking system calls can't be executed directly since that would block the entire process. For example, if the producer-consumer problem were implemented in the natural manner, it would not work well: whenever an I/O was issued that caused the process to block, all the threads would be unable to run (but see just below).
- Similarly, a page fault blocks the entire process (i.e., all the threads).
- A thread with an infinite loop prevents all other threads in this process from running.
- Re-doing the effort of writing a scheduler.

Possible methods of dealing with blocking system calls


- Perhaps the OS supplies a non-blocking version of the system call, e.g., a non-blocking read.
- Perhaps the OS supplies another system call that tells whether the blocking system call would in fact block. For example, the unix select() can be used to tell if a read would block (sketched below). It might not block if
  o The requested disk block is in the buffer cache (see the I/O chapter).
  o The request was for a keyboard, mouse, or network event that has already happened.
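A small sketch of the select() technique: with a zero timeout, select() itself returns immediately, so the library can check whether a read would block before issuing it. This is an illustration of mine, not code from the text.

#include <sys/select.h>
#include <stdio.h>

/* Returns 1 if a read on fd would not block, 0 otherwise.
   The zero timeout makes select() itself non-blocking. */
int read_would_not_block(int fd) {
    fd_set rfds;
    struct timeval tv = {0, 0};
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    return select(fd + 1, &rfds, NULL, NULL, &tv) > 0;
}

int main(void) {
    if (read_would_not_block(0))   /* fd 0 = standard input */
        printf("read(0, ...) would return immediately\n");
    else
        printf("read(0, ...) would block; run another thread instead\n");
    return 0;
}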

2.2.4: Implementing Threads in the Kernel


Move the thread operations into the operating system itself. This naturally requires that the operating system itself be (significantly) modified and is thus not a trivial undertaking.

- Thread-create and friends are now system calls and hence much slower than with user-mode threads. They are, however, still much faster than creating/switching/etc. processes since there is so much shared state that does not need to be recreated.
- A thread that blocks causes no particular problem. The kernel can run another thread from this process or can run another process.
- Similarly, a page fault or an infinite loop in one thread does not automatically block the other threads in the process.

2.2.5: Hybrid Implementations


One can write a (user-level) thread library even if the kernel also has threads. This is sometimes called the M:N model, since M user-mode threads are multiplexed over N kernel threads. Each kernel thread can then switch between the user-level threads assigned to it. Thus switching between user-level threads within one kernel thread is very fast (no context switch), and we retain the advantage that a blocking system call or page fault does not block the entire multithreaded application, since user-level threads running on the other kernel threads of this application are still runnable.

2.2.6: Scheduler Activations


Skipped

2.2.7: Popup Threads


The idea is to automatically issue a thread-create system call upon message arrival. (The alternative is to have a thread or process blocked on a receive system call.) If implemented well, the latency between message arrival and thread execution can be very small since the new thread does not have state to restore.

Making Single-threaded Code Multithreaded


Definitely NOT for the faint of heart.

- There often is state that should not be shared. A well-cited example is the unix errno variable, which contains the error number (zero means no error) of the error encountered by the last system call. Errno is hardly elegant (even in normal, single-threaded, applications), but its use is widespread. If multiple threads issue faulty system calls, the errno value set by the second overwrites the first, and thus the first errno value may be lost (a sketch follows this list).
- Much existing code, including many libraries, is not re-entrant.
- Managing the shared memory inherent in multi-threaded applications opens up the possibility of race conditions that we will be studying next.
- What should be done with a signal sent to a process? Does it go to all threads or just one?
- How should stack growth be managed? Normally the kernel grows the (single) stack automatically when needed. What if there are multiple stacks?
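The errno hazard, sketched. Note the hedge: the race described above applies to the historical single, global errno; modern POSIX systems make errno thread-local precisely to avoid it, so this code illustrates the fragile pattern rather than reproducing the bug.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

/* With one global errno (the historical situation), another thread's
   failing system call could overwrite errno between the open() below
   and the test of its result.  Modern POSIX makes errno thread-local
   for exactly this reason. */
int main(void) {
    int fd = open("/no/such/file", O_RDONLY);   /* fails and sets errno */
    if (fd < 0) {
        int saved = errno;      /* copy at once; don't re-read errno later */
        fprintf(stderr, "open failed, errno = %d\n", saved);
    }
    return 0;
}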

2.3: Interprocess Communication (IPC) and Coordination/Synchronization


2.3.1: Race Conditions

A race condition occurs when two (or more) processes are about to perform some action. Depending on the exact timing, one or the other goes first. If one goes first, everything works; if the other goes first, an error, possibly fatal, occurs. Imagine two processes both accessing x, which is initially 10.

- One process is to execute x <-- x+1.
- The other is to execute x <-- x-1.
- When both are finished, x should be 10.
- But we might get 9 and might get 11!
- Show how this can happen (x <-- x+1 is not atomic).
- Tanenbaum shows how this can lead to disaster for a printer spooler.
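A minimal sketch making the non-atomicity concrete, using POSIX threads. Formally this program contains a data race (undefined behavior in C); the point is only to show that x <-- x+1 is a load, an add, and a store, and that interleaving those steps loses updates.

#include <pthread.h>
#include <stdio.h>

#define N 100000
static volatile int x = 10;    /* volatile only to keep the race visible */

/* The two threads can interleave between the load and the store of
   x = x + 1 (or x = x - 1), so updates can be lost. */
static void *inc(void *a) { for (int i = 0; i < N; i++) x = x + 1; return NULL; }
static void *dec(void *a) { for (int i = 0; i < N; i++) x = x - 1; return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc, NULL);
    pthread_create(&t2, NULL, dec, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("x = %d (10 if no updates were lost)\n", x);
    return 0;
}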

Homework: 18.

2.3.2: Critical sections


We must prevent the interleaving of sections of code that need to be atomic with respect to each other. That is, the conflicting sections need mutual exclusion. If process A is executing its critical section, it excludes process B from executing its critical section. Conversely, if process B is executing its critical section, it excludes process A from executing its critical section.

Requirements for a critical section implementation:

1. No two processes may be simultaneously inside their critical section.
2. No assumption may be made about the speeds or the number of CPUs.
3. No process outside its critical section (including the entry and exit code) may block other processes.
4. No process should have to wait forever to enter its critical section.
   o I do NOT make this last requirement.
   o I just require that the system as a whole make progress (so not all processes are blocked).
   o I refer to solutions that do not satisfy Tanenbaum's last condition as unfair, but nonetheless correct, solutions.
   o Stronger fairness conditions have also been defined.

2.3.3: Mutual exclusion with busy waiting


The operating system can choose not to preempt itself. That is, we do not preempt system processes (if the OS is client-server) or processes running in system mode (if the OS is self-service). Forbidding preemption for system processes would prevent the problem above, where the non-atomicity of x <-- x+1 crashed the printer spooler, provided the spooler is part of the OS.

But simply forbidding preemption while in system mode is not sufficient.


- Does not work for user-mode programs. So the Unix print spooler would not be helped.
- Does not prevent conflicts between the main line OS and interrupt handlers.
  o This conflict could be prevented by disabling interrupts while the main line is in its critical section.
  o Indeed, disabling (a.k.a. temporarily preventing) interrupts is often done for exactly this reason.
  o But one does not want to block interrupts for too long or the system will seem unresponsive.
- Does not work if the system has several processors.
  o Both main lines can conflict.
  o One processor cannot block interrupts on the other.

Software solutions for two processes


Initially P1wants=P2wants=false

Code for P1
Loop forever {
    P1wants <-- true        (ENTRY)
    while (P2wants) {}      (ENTRY)
    critical-section
    P1wants <-- false       (EXIT)
    non-critical-section
}

Code for P2
Loop forever {
    P2wants <-- true        (ENTRY)
    while (P1wants) {}      (ENTRY)
    critical-section
    P2wants <-- false       (EXIT)
    non-critical-section
}

Explain why this works. But it is wrong! Why? Let's try again. The trouble was that setting wants before the loop permitted us to get stuck. We had them in the wrong order!
Initially P1wants=P2wants=false

Code for P1
Loop forever {
    while (P2wants) {}      (ENTRY)
    P1wants <-- true        (ENTRY)
    critical-section
    P1wants <-- false       (EXIT)
    non-critical-section
}

Code for P2
Loop forever {
    while (P1wants) {}      (ENTRY)
    P2wants <-- true        (ENTRY)
    critical-section
    P2wants <-- false       (EXIT)
    non-critical-section
}

Explain why this works.

But it is wrong again! Why? So let's be polite and really take turns. None of this wanting stuff.
Initially turn=1

Code for P1
Loop forever {
    while (turn = 2) {}
    critical-section
    turn <-- 2
    non-critical-section
}

Code for P2
Loop forever {
    while (turn = 1) {}
    critical-section
    turn <-- 1
    non-critical-section
}

This one forces alternation, so it is not general enough. Specifically, it does not satisfy condition three, which requires that no process in its non-critical section can stop another process from entering its critical section. With alternation, if one process is in its non-critical section (NCS), the other can enter the CS once but not again.

To summarize: the first example violated rule 4 (the whole system blocked). The second example violated rule 1 (both processes could be in the critical section). The third example violated rule 3 (one process in its NCS stopped another from entering its CS).

In fact, it took years (way back when) to find a correct solution. Many earlier solutions were found and several were published, but all were wrong. The first correct solution was found by a mathematician named Dekker, who combined the ideas of turn and wants. The basic idea is that you take turns when there is contention, but when there is no contention, the requesting process can enter. It is very clever, but I am skipping it (I cover it when I teach distributed operating systems in V22.0480 or G22.2251). Subsequently, algorithms with better fairness properties were found (e.g., no task has to wait for another task to enter the CS twice).

What follows is Peterson's solution, which also combines turn and wants to force alternation only when there is contention. When Peterson's solution was published, it was a surprise to see such a simple solution. In fact Peterson gave a solution for any number of processes. A proof that the algorithm satisfies our properties (including a strong fairness condition) for any number of processes can be found in Operating Systems Review, Jan. 1990, pp. 18-22.
Initially P1wants=P2wants=false and turn=1

Code for P1
Loop forever {
    P1wants <-- true
    turn <-- 2
    while (P2wants and turn=2) {}
    critical-section
    P1wants <-- false
    non-critical-section
}

Code for P2
Loop forever {
    P2wants <-- true
    turn <-- 1
    while (P1wants and turn=1) {}
    critical-section
    P2wants <-- false
    non-critical-section
}
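Below is a C11 translation of Peterson's algorithm, offered as a sketch of mine (the lecture itself stays with pseudocode). Threads are numbered 0 and 1 rather than 1 and 2, and sequentially consistent atomics stand in for the pseudocode's implicit assumption that memory operations occur in program order.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Peterson's algorithm for two threads.  The seq_cst atomics matter:
   with plain variables the compiler and CPU may reorder the stores
   and loads and break mutual exclusion. */
static atomic_bool wants[2];
static atomic_int  turn;
static int counter;                 /* protected by the critical section */

static void lock(int self) {
    int other = 1 - self;
    atomic_store(&wants[self], true);
    atomic_store(&turn, other);     /* defer: let the other go first */
    while (atomic_load(&wants[other]) && atomic_load(&turn) == other)
        ;                           /* busy wait */
}

static void unlock(int self) {
    atomic_store(&wants[self], false);
}

static void *run(void *arg) {
    int self = (int)(long)arg;
    for (int i = 0; i < 100000; i++) {
        lock(self);
        counter++;                  /* critical section */
        unlock(self);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, run, (void *)0L);
    pthread_create(&t1, NULL, run, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expect 200000)\n", counter);
    return 0;
}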

Hardware assist (test and set)

TAS(b), where b is a binary variable, ATOMICALLY sets b <-- true and returns the OLD value of b. Of course it would be silly to return the new value of b since we know the new value is true. The word atomically means that the two actions performed by TAS(x) (testing, i.e., returning the old value of x, and setting, i.e., assigning true to x) are inseparable. Specifically, it is not possible for two concurrent TAS(x) operations to both return false (unless there is also another concurrent statement that sets x to false).

With TAS available, implementing a critical section for any number of processes is trivial.
loop forever {
    while (TAS(s)) {}       (ENTRY)
    CS
    s <-- false             (EXIT)
    NCS
}
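C11's atomic_flag_test_and_set is exactly TAS: it atomically sets the flag and returns the old value, so the loop above can be written almost verbatim. A minimal sketch (the variable names are mine):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* The flag being clear plays the role of s = false (gate open). */
static atomic_flag s = ATOMIC_FLAG_INIT;
static int counter;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        while (atomic_flag_test_and_set(&s))   /* ENTRY: spin while lock held */
            ;
        counter++;                             /* critical section */
        atomic_flag_clear(&s);                 /* EXIT: s <-- false */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %d (expect 400000)\n", counter);
    return 0;
}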

2.3.4: Sleep and Wakeup


Remark: Tanenbaum does both busy waiting (as above) and blocking (process switching) solutions. We will only do busy waiting, which is easier. Sleep and Wakeup are the simplest blocking primitives. Sleep voluntarily blocks the process and wakeup unblocks a sleeping process. We will not cover these.

Homework: Explain the difference between busy waiting and blocking process synchronization.

2.3.5: Semaphores
Remark: Tanenbaum uses the term semaphore only for blocking solutions. I will use the term for our busy waiting solutions as well. Others call our solutions spin locks.

P and V and Semaphores

The entry code is often called P and the exit code V. Thus the critical section problem is to write P and V so that
loop forever
    P
    critical-section
    V
    non-critical-section

satisfies

1. Mutual exclusion.
2. No speed assumptions.
3. No blocking by processes in the NCS.
4. Forward progress (my weakened version of Tanenbaum's last condition).

Note that I use indenting carefully and hence do not need (and sometimes omit) the braces {} used in languages like C or Java.

A binary semaphore abstracts the TAS solution we gave for the critical section problem.

A binary semaphore S takes on two possible values, open and closed. Two operations are supported.

P(S) is

    while (S=closed) {}
    S <-- closed        <== This is NOT the body of the while

where finding S=open and setting S<--closed is atomic


That is, wait until the gate is open, then run through and atomically close the gate. Said another way, it is not possible for two processes doing P(S) simultaneously to both see S=open (unless a V(S) is also simultaneous with both of them).

V(S) is simply

    S <-- open

The above code is not real, i.e., it is not an implementation of P. It is, instead, a definition of the effect P is to have. To repeat: for any number of processes, the critical section problem can be solved by
loop forever
    P(S)
    CS
    V(S)
    NCS

The only specific solution we have seen for an arbitrary number of processes is the one just above, with P(S) implemented via test and set.

Remark: Peterson's solution requires each process to know its process number; the TAS solution does not. Moreover, the definition of P and V does not permit use of the process number. Thus, strictly speaking, Peterson did not provide an implementation of P and V. He did solve the critical section problem.

To solve other coordination problems we want to extend binary semaphores.

- With binary semaphores, two consecutive Vs do not permit two subsequent Ps to succeed (the gate cannot be doubly opened).
- We might want to limit the number of processes in the section to 3 or 4, not always just 1.

Both of these shortcomings can be overcome by not restricting ourselves to a binary variable, but instead defining a generalized or counting semaphore.

A counting semaphore S takes on non-negative integer values. Two operations are supported.

P(S) is

    while (S=0) {}
    S--

where finding S>0 and decrementing S is atomic


That is, wait until the gate is open (positive), then run through and atomically close the gate one unit. Another way to describe this atomicity is to say that it is not possible for the decrement to occur when S=0, and it is also not possible for two processes executing P(S) simultaneously to both see the same (necessarily positive) value of S unless a V(S) is also simultaneous.

V(S) is simply

    S++

These counting semaphores can solve what I call the semi-critical-section problem, where you permit up to k processes in the section. When k=1 we have the original critical-section problem.
initially S=k

loop forever
    P(S)
    SCS    <== semi-critical-section
    V(S)
    NCS
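A busy-waiting counting semaphore can be sketched in C with a compare-and-swap loop that makes "find S>0 and decrement S" atomic, as the definition requires. The names sem, P, V, and task are mine; this is an illustration, not a library implementation.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

typedef atomic_int sem;

static void P(sem *S) {
    for (;;) {
        int v = atomic_load(S);
        if (v > 0 && atomic_compare_exchange_weak(S, &v, v - 1))
            return;            /* saw S>0 and decremented, atomically */
        /* otherwise S was 0 or changed under us: spin and retry */
    }
}
static void V(sem *S) { atomic_fetch_add(S, 1); }

static sem S;
static atomic_int inside;      /* how many threads are in the SCS right now */

static void *task(void *arg) {
    P(&S);
    int n = atomic_fetch_add(&inside, 1) + 1;
    printf("in SCS with %d thread(s) inside\n", n);   /* never exceeds k */
    atomic_fetch_sub(&inside, 1);
    V(&S);
    return NULL;
}

int main(void) {
    int k = 3;                 /* at most k threads in the semi-critical section */
    atomic_init(&S, k);
    pthread_t t[8];
    for (int i = 0; i < 8; i++) pthread_create(&t[i], NULL, task, NULL);
    for (int i = 0; i < 8; i++) pthread_join(t[i], NULL);
    return 0;
}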

Producer-consumer problem

- Two classes of processes:
  o Producers, which produce items and insert them into a buffer.
  o Consumers, which remove items and consume them.
- What if the producer encounters a full buffer? Answer: It waits for the buffer to become non-full.
- What if the consumer encounters an empty buffer? Answer: It waits for the buffer to become non-empty.
- The problem is also called the bounded buffer problem.
  o Another example of active entities being replaced by a data structure when viewed at a lower level (Finkel's level principle).

Initially e=k, f=0 (counting semaphores); b=open (binary semaphore)

Producer
loop forever
    produce-item
    P(e)
    P(b); add item to buf; V(b)
    V(f)

Consumer
loop forever
    P(f)
    P(b); take item from buf; V(b)
    V(e)
    consume-item

- k is the size of the buffer.
- e represents the number of empty buffer slots.
- f represents the number of full buffer slots.
- We assume the buffer itself is only serially accessible. That is, only one operation at a time.
  o This explains the P(b) V(b) around buffer operations.
  o I use ; and put three statements on one line to suggest that a buffer insertion or removal is viewed as one atomic operation.
  o Of course this writing style is only a convention; the enforcement of atomicity is done by the P/V.
- The P(e), V(f) motif is used to force bounded alternation. If k=1 it gives strict alternation.
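The pseudocode above translates into C almost line for line. A sketch of mine, reusing the busy-waiting counting semaphore from the previous example and modeling the binary semaphore b as a counting semaphore initialized to 1:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

typedef atomic_int sem;
static void P(sem *S) {
    for (;;) {
        int v = atomic_load(S);
        if (v > 0 && atomic_compare_exchange_weak(S, &v, v - 1)) return;
    }
}
static void V(sem *S) { atomic_fetch_add(S, 1); }

#define K 4                          /* buffer size, the k above */
static int buf[K], in = 0, out = 0;
static sem e, f, b;                  /* empty slots, full slots, buffer lock */

static void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        P(&e);                                    /* wait for an empty slot */
        P(&b); buf[in] = item; in = (in + 1) % K; V(&b);
        V(&f);                                    /* announce a full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        P(&f);                                    /* wait for a full slot */
        P(&b); int item = buf[out]; out = (out + 1) % K; V(&b);
        V(&e);                                    /* announce an empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    atomic_init(&e, K); atomic_init(&f, 0); atomic_init(&b, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}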

2.3.6: Mutexes
Remark: Whereas we use the term semaphore to mean binary semaphore and explicitly say generalized or counting semaphore for the positive integer version, Tanenbaum uses semaphore for the positive integer solution and mutex for the binary version. Also, as indicated above, for Tanenbaum semaphore/mutex implies a blocking primitive, whereas I use binary/counting semaphore for both busy-waiting and blocking implementations. Finally, remember that in this course we are studying only busy-waiting solutions.

My Terminology
                  Busy wait             block/switch
critical          (binary) semaphore    (binary) semaphore
semi-critical     counting semaphore    counting semaphore

Tanenbaum's Terminology
                  Busy wait             block/switch
critical          enter/leave region    mutex
semi-critical     no name               semaphore

2.3.7: Monitors
Skipped.

2.3.8: Message Passing


Skipped. You can find some information on barriers in my lecture notes for a follow-on course (see in particular lecture #16).
