
Flow Control Protocols

The simplest form of a flow control scheme merely adjusts the rate at which a sender produces data to the rate at which the receiver can absorb it. More elaborate schemes can protect against the deletion, insertion, duplication, and reordering of data as well. But let us first look at the simpler version of the problem. Flow control is used:

- To make sure that data are not sent faster than they can be processed.
- To optimize channel utilization.
- To avoid data clogging transmission links.

The second and the third goals are complementary: sending the data too slowly is wasteful, but sending data too fast can cause congestion. The data path between sender and receiver may contain transfer points with a limited capacity for storing messages that is shared between several sender-receiver pairs. A prudent flow control scheme prevents one such pair from hogging all the available storage space. Figure 4.1 illustrates a protocol without any form of flow control. Note that it is a simplex protocol: it can be used for transfer of data in only one direction.

The protocol in Figure 4.1 only works reliably if the receiver process is guaranteed to be faster than the sender. If this assumption is false, the sender can overflow the input queue of the receiver. The protocol violates a basic law of program design for concurrent systems: Never make assumptions about the relative speeds of concurrent processes. The relative speed of concurrent processes depends on too many factors to base any design decisions on it. Apart from that, the assumption about the relative speed of sender and receiver is often not just dangerous but also invalid. Receiving data is generally a more time-consuming process than sending data. The receiver must interpret the data, decide what to do with it,

allocate memory for it, and perhaps forward it to the appropriate recipient. The sender need not find a provider for the data it is transmitting: it does not run unless there are data to transfer. And, instead of allocating memory, the sender may have to free memory after the data are transmitted, usually a less time-consuming task. Therefore, the bottleneck in the protocol is likely to be the receiver process. It is bad planning to assume that it can always keep up with the sender.

ACKNOWLEDGMENTS

So far, we have used acknowledgments as a method of flow control, not of error control. If a message is lost or damaged beyond recognition, the absence of a positive acknowledgment would cause the sender eventually to time out and retransmit the message. If the probability of error is high enough, this can degrade the efficiency of the protocol, forcing the sender to be idle until it can be certain that an acknowledgment is not merely delayed, but is positively lost. The problem can be alleviated, though not avoided completely, with the introduction of negative acknowledgments. The negative acknowledgment is used by the receiver whenever it receives a message that is damaged on the transmission channel. When the sender receives a negative acknowledgment, it knows immediately that it must retransmit the corresponding message, without having to wait for a timeout. The timeout itself is still needed, of course, to allow for recovery from messages that disappear on the channel.

TERMINOLOGY

The method of using acknowledgments to control the retransmission of messages is usually referred to as an ARQ method, where ARQ stands for Automatic Repeat Request. There are three main variants:

- Selective repeat ARQ
- Stop-and-wait ARQ
- Go-back-N ARQ

Selective repeat ARQ

Selective Repeat ARQ / Selective Reject ARQ is a specific instance of the Automatic Repeat-Request (ARQ) protocol used for communications. It may be used as a protocol for the delivery and acknowledgement of message units, or it may be used as a protocol for the delivery of subdivided message sub-units. When used as the protocol for the delivery of messages, the sending process continues to send a number of frames specified by a window size even after a frame loss. Unlike Go-Back-N ARQ,

the receiving process will continue to accept and acknowledge frames sent after an initial error; this is the general case of the sliding window protocol with both transmit and receive window sizes greater than 1. The receiver process keeps track of the sequence number of the earliest frame it has not received, and sends that number with every acknowledgement (ACK) it sends. If a frame from the sender does not reach the receiver, the sender continues to send subsequent frames until it has emptied its window. The receiver continues to fill its receiving window with the subsequent frames, replying each time with an ACK containing the sequence number of the earliest missing frame. Once the sender has sent all the frames in its window, it re-sends the frame indicated by the ACKs, and then continues where it left off.

The size of the sending and receiving windows must be equal, and at most half the maximum sequence number (assuming that sequence numbers are numbered from 0 to n-1), to avoid miscommunication in all cases of packets being dropped. To understand this, consider the case when all ACKs are destroyed. If the receiving window is larger than half the maximum sequence number, some, possibly even all, of the packets that are resent after timeouts are duplicates that are not recognized as such. The sender moves its window for every packet that is acknowledged.

In most channel models with variable-length messages, the probability of error-free reception diminishes with increasing message length; in other words, it is easier to receive a short message than a longer one. Therefore, standard ARQ techniques involving variable-length messages have increased difficulty delivering longer messages, as each repeat is the full length. Selective retransmission applied to variable-length messages eliminates this difficulty, as successfully delivered sub-blocks are retained after each transmission, and the number of outstanding sub-blocks in the following transmissions diminishes.

Examples

The Transmission Control Protocol uses a variant of Go-Back-N ARQ to ensure reliable transmission of data over the Internet Protocol, which does not provide guaranteed delivery of packets; with Selective Acknowledgement (SACK), it uses Selective Repeat ARQ. The ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 Gigabit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables), uses Selective Repeat ARQ to ensure reliable transmission over noisy media. G.hn employs packet segmentation to subdivide messages into smaller units, to increase the probability that each one is received correctly.
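To make the receiver-side bookkeeping concrete, here is a minimal Python sketch of a selective-repeat receiver that buffers out-of-order frames and always acknowledges the earliest frame it has not yet received, as described above. The class and method names, the sequence-number space of 8, and the print-based delivery are illustrative assumptions, not part of any particular standard.

# Minimal sketch of a selective-repeat receiver (illustrative only).

class SelectiveRepeatReceiver:
    def __init__(self, seq_space=8):
        # The receive window must be at most half the sequence-number space.
        self.seq_space = seq_space
        self.window = seq_space // 2
        self.expected = 0          # earliest frame not yet received
        self.buffer = {}           # out-of-order frames, keyed by sequence number

    def in_window(self, seq):
        # True if seq lies within [expected, expected + window) modulo seq_space.
        return (seq - self.expected) % self.seq_space < self.window

    def receive(self, seq, data):
        """Accept a frame; return the ACK (earliest missing sequence number)."""
        if self.in_window(seq):
            self.buffer[seq] = data
            # Slide the window forward over every contiguous frame now present.
            while self.expected in self.buffer:
                self.deliver(self.buffer.pop(self.expected))
                self.expected = (self.expected + 1) % self.seq_space
        # Duplicates and out-of-window frames still trigger an ACK.
        return self.expected

    def deliver(self, data):
        print("delivered:", data)


if __name__ == "__main__":
    rx = SelectiveRepeatReceiver()
    for seq in (0, 2, 3, 1):       # frame 1 arrives late
        print("ACK", rx.receive(seq, f"frame-{seq}"))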

Stop-and-wait ARQ

Stop-and-wait ARQ is a method used in telecommunications to send information between two connected devices. It ensures that information is not lost due to dropped packets and that packets are received in the correct order. It is the simplest kind of automatic repeat-request (ARQ) method. A stop-and-wait ARQ sender sends one frame at a time; it is a special case of the general sliding window protocol with both transmit and receive window sizes equal to 1. After sending each frame, the sender does not send any further frames until it receives an acknowledgement (ACK) signal. After receiving a good frame, the receiver sends an ACK. If the ACK does not reach the sender before a certain time, known as the timeout, the sender sends the same frame again.

The above behavior is the simplest stop-and-wait implementation. However, in a real-life implementation there are problems to be addressed. Typically the transmitter adds a redundancy check number to the end of each frame. The receiver uses the redundancy check number to check for possible damage. If the receiver sees that the frame is good, it sends an ACK. If the receiver sees that the frame is damaged, the receiver discards it and does not send an ACK -- pretending that the frame was completely lost, not merely damaged.

One problem is where the ACK sent by the receiver is damaged or lost. In this case, the sender does not receive the ACK, times out, and sends the frame again. Now the receiver has two copies of the same frame, and does not know whether the second one is a duplicate frame or the next frame of the sequence carrying identical data. Another problem is when the transmission medium has such a long latency that the sender's timeout runs out before the frame reaches the receiver. In this case the sender resends the same packet. Eventually the receiver gets two copies of the same frame and sends an ACK for each one. The sender, waiting for a single ACK, receives two ACKs, which may cause problems if it assumes that the second ACK is for the next frame in the sequence.

To avoid these problems, the most common solution is to define a 1-bit sequence number in the header of the frame. This sequence number alternates (from 0 to 1) in subsequent frames. When the receiver sends an ACK, it includes the sequence number of the next packet it expects. This way, the receiver can detect duplicated frames by checking whether the frame sequence numbers alternate. If two subsequent frames have the same sequence number, they are duplicates, and the second frame is discarded. Similarly, if two subsequent ACKs reference the same sequence number, they are acknowledging the same frame.
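The alternating-bit scheme can be sketched as follows. This is a simplified illustration (the function names and print-based delivery are assumptions), showing only how the receiver uses the 1-bit sequence number to detect duplicates and what it places in each ACK.

# Minimal sketch of the alternating-bit check on the receiving side of
# stop-and-wait ARQ (illustrative; assumes frames carry a 1-bit seq field).

expected_seq = 0   # the 1-bit sequence number the receiver expects next

def on_frame(seq, data):
    """Return the ACK to send; deliver data only if the frame is not a duplicate."""
    global expected_seq
    if seq == expected_seq:
        deliver(data)            # new frame: hand it to the user
        expected_seq ^= 1        # flip the expected bit
    # A repeated sequence number means a duplicate: discard the data but still
    # acknowledge it, otherwise the sender would keep retransmitting.
    return expected_seq          # the ACK carries the next expected sequence number

def deliver(data):
    print("delivered:", data)

if __name__ == "__main__":
    print("ACK", on_frame(0, "A"))   # new frame: delivered, ACK 1
    print("ACK", on_frame(0, "A"))   # retransmitted duplicate: discarded, ACK 1
    print("ACK", on_frame(1, "B"))   # next frame: delivered, ACK 0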

Stop-and-wait ARQ is inefficient compared to other ARQs, because the time between packets, if the ACK and the data are received successfully, is twice the transit time (assuming the turnaround time can be zero). The throughput on the channel is a fraction of what it could be. To solve this problem, one can send more than one packet at a time, with a larger sequence number space, and use one ACK for a set. This is what is done in Go-Back-N ARQ and Selective Repeat ARQ.

Go-Back-N ARQ

Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol, in which the sending process continues to send a number of frames specified by a window size even without receiving an acknowledgement (ACK) packet from the receiver. It is a special case of the general sliding window protocol with a transmit window size of N and a receive window size of 1. The receiver process keeps track of the sequence number of the next frame it expects to receive, and sends that number with every ACK it sends. The receiver will ignore any frame that does not have the exact sequence number it expects, whether that frame is a "past" duplicate of a frame it has already ACK'ed [1] or a "future" frame beyond the last packet it is waiting for. Once the sender has sent all of the frames in its window, it will detect that all of the frames since the first lost frame are outstanding, go back to the sequence number of the last ACK it received from the receiver process, fill its window starting with that frame, and continue the process over again.

Go-Back-N ARQ makes more efficient use of a connection than Stop-and-wait ARQ: instead of waiting for an acknowledgement for each packet, the connection is still being utilized as packets are being sent. In other words, during the time that would otherwise be spent waiting, more packets are being sent. However, this method also results in sending frames multiple times: if any frame is lost or damaged, or the ACK acknowledging it is lost or damaged, then that frame and all following frames in the window (even if they were received without error) will be re-sent. To avoid this, Selective Repeat ARQ can be used.

Choosing the window size (N)

There are a few things to keep in mind when choosing a value for N:

1: The sender must not transmit too fast. N should be bounded by the receiver's ability to process packets.

2: N must be smaller than the number of sequence numbers (if they are numbered from zero to N) to verify transmission in cases of any packet (any data or ACK packet) being dropped.
3: Given the bounds presented in (1) and (2), choose N to be the largest number possible (see the sketch below).
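The following sketch, under the assumption of an 8-value sequence-number space and cumulative ACKs carrying the next expected sequence number, illustrates a Go-Back-N sender that respects the window bound from rule 2 and retransmits every outstanding frame on timeout. The names (GoBackNSender, transmit, and so on) are hypothetical.

# Minimal sketch of a Go-Back-N sender's window and timeout handling.

SEQ_SPACE = 8
N = SEQ_SPACE - 1          # the window must be strictly smaller than the seq space

class GoBackNSender:
    def __init__(self):
        self.base = 0              # oldest unacknowledged frame
        self.next_seq = 0          # next new frame to send
        self.sent = {}             # frames currently in the window

    def outstanding(self):
        return (self.next_seq - self.base) % SEQ_SPACE

    def send(self, data):
        if self.outstanding() >= N:
            return False                   # window full: sender must wait
        self.sent[self.next_seq] = data
        transmit(self.next_seq, data)
        self.next_seq = (self.next_seq + 1) % SEQ_SPACE
        return True

    def on_ack(self, next_expected):
        # Cumulative ACK: everything before next_expected is confirmed.
        while self.base != next_expected:
            self.sent.pop(self.base, None)
            self.base = (self.base + 1) % SEQ_SPACE

    def on_timeout(self):
        # Go back N: retransmit every outstanding frame, starting at base.
        seq = self.base
        while seq != self.next_seq:
            transmit(seq, self.sent[seq])
            seq = (seq + 1) % SEQ_SPACE

def transmit(seq, data):
    print(f"frame {seq}: {data}")

if __name__ == "__main__":
    s = GoBackNSender()
    for i in range(3):
        s.send(f"frame-{i}")
    s.on_ack(1)          # frame 0 acknowledged
    s.on_timeout()       # frames 1 and 2 are retransmitted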

1.1 Performance Analysis

Performance is often a central issue in the design, development, and configuration of systems. It is not always enough to know that systems work properly; they must also work effectively. There are numerous studies, e.g. in the areas of computer and telecommunication systems, manufacturing, military, health care, and transportation, that have shown that time, money, and even lives can be saved if the performance of a system is improved. Performance analysis studies are conducted to evaluate existing or planned systems, to compare alternative configurations, or to find an optimal configuration of a system.

There are three alternative techniques for analysing the performance of a system: measurement, analytical models, and simulation models. There are advantages and drawbacks to each of these techniques. Measuring the performance of a system can provide exact answers regarding the performance of the system. The system in question is observed directly: no details are abstracted away, and no simplifying assumptions need to be made regarding the behaviour of the system. However, measurement is only an option if the system in question already exists. The measurements that are taken may or may not be accurate depending on the current state of the system. For example, if the utilization of a network is measured during an off-peak period, then no conclusions can be drawn about either the average utilization of the network or the utilization of the network during peak usage periods.

Performance analysis is both an art and a science. One of the arts of performance analysis is knowing which of these three analysis techniques to use in which situation. Measurement can obviously not be used if the system in question does not exist. Simulation should probably not be used if the system consists of a few servers and queues; in this case queueing networks would be a more appropriate method. Simulation and analytic models are often complementary. Analytic models are excellent for smaller systems that fulfil certain requirements, such as exponentially distributed interarrival periods and processing times. Simulation models are more appropriate for large and complex systems with characteristics that render them intractable for analytic models. Performance analysts need to be familiar with a variety of different techniques, models, formalisms and tools.

Creating models that contain an appropriate level of detail is also an art. It is important to include enough information to be able to make a reasonable representation of the system; however, it is equally important to be able to determine which details are irrelevant and unnecessary. In some studies, the scenarios may be given, and the purpose of the study may be to compare the performance of the given configurations with a standard or to choose the best of the configurations. If the scenarios are not predetermined, then the purpose of the simulation study may be to locate the parameters that have the most impact on a particular performance measure or to locate important parameters in the system.

1.2 Petri Nets

Petri nets were introduced in 1962 by Dr. Carl Adam Petri. Petri nets are a powerful modeling formalism in computer science, system engineering and many other disciplines. Petri nets combine a well-defined mathematical theory with a graphical representation of the dynamic behavior of systems. The theoretical aspect of Petri nets allows precise modeling and analysis of system behavior, while the graphical representation of Petri nets enables visualization of the modeled system's state changes. This combination is the main reason for the great success of Petri nets. Consequently, Petri nets have been used to model various kinds of dynamic event-driven systems such as computer networks (Ajmone Marsan, Balbo and Conte 1986), communication systems (Merlin and Farber 1976; Wang 2006), manufacturing plants (Venkatesh, Zhou and Caudill 1994; Zhou and DiCesare 1989; Desrochers and Al-Jaar 1995), real-time computing systems (Mandrioli and Morzenti 1996; Tsai, Yang and Chang 1995), logistic networks (Landeghem and Bobeanu 2002), and workflows (Aalst and Hee 2000; Lin, Tian and Wei 2002), to mention only a few important examples. This wide spectrum of applications is accompanied by a wide spectrum of different aspects which have been considered in the research on Petri nets.

This report focuses on the use of Petri nets / colored Petri nets (CP-nets or CPN) for performance analysis. CP-nets are a graphical modelling language that models both the states of a system and the events that change the system from one state to another. CP-nets combine the strengths of Petri nets (PN) and programming languages. The formalism of Petri nets is well suited for describing concurrent and synchronizing actions in distributed systems. Programming languages can be used to define data types and the manipulation of data. In timed CP-nets, which are described in more detail below, a global clock models the passage of time. Large and complex models can be built using hierarchical CP-nets, in which modules, which are called pages in CPN terminology, are related to each other in a well-defined way. Without the hierarchical structuring mechanism, it would be difficult to create understandable CP-nets of real-world systems.

Petri nets provide a framework for modelling and analysing both the performance and the functionality of distributed and concurrent systems. A distinction is generally made between high-level Petri nets and low-level Petri nets. CP-nets are an example of high-level Petri nets, which combine Petri nets and programming languages and which are aimed at modelling and analyzing realistically-sized systems. Low-level Petri nets have a simpler graphical representation, and they are well suited as a theoretical model for concurrency. The Petri Net World web site contains extensive descriptions of the many kinds of Petri nets and of PN tools.

Petri Net Definition


A Petri net is a particular kind of bipartite directed graph populated by three types of objects. These objects are places, transitions, and directed arcs. Directed arcs connect places to transitions or transitions to places. In its simplest form, a Petri net can be represented by a transition together with an input place and an output place. This elementary net may be used to represent various aspects of the modeled systems. For example, a transition and its input place and output place can be used to represent a data processing event, its input data and output data, respectively, in a data processing system. In order to study the dynamic behavior of a Petri net modeled system in terms of its states and state changes, each place may potentially hold either none or a positive number of tokens. Tokens are a primitive concept of Petri nets, in addition to places and transitions. The presence or absence of a token in a place can indicate whether a condition associated with this place is true or false, for instance.

A Petri net is formally defined as a 5-tuple N = (P, T, I, O, M0), where

(1) P = {p1, p2, ..., pm} is a finite set of places;
(2) T = {t1, t2, ..., tn} is a finite set of transitions, P ∪ T ≠ ∅, and P ∩ T = ∅;
(3) I: P × T → N is an input function that defines directed arcs from places to transitions, where N is the set of nonnegative integers;
(4) O: T × P → N is an output function that defines directed arcs from transitions to places; and
(5) M0: P → N is the initial marking.

A marking in a Petri net is an assignment of tokens to the places of a Petri net. Tokens reside in the places of a Petri net. The number and position of tokens may change during the execution of a Petri net. The tokens are used to define the execution of a Petri net.

Most theoretical work on Petri nets is based on the formal definition of Petri net structures. However, a graphical representation of a Petri net structure is much more useful for illustrating the concepts of Petri net theory. A Petri net graph represents a Petri net structure as a bipartite directed multigraph. Corresponding to the definition of Petri nets, a Petri net graph has two types of nodes: a circle represents a place, and a bar or a box represents a transition. Directed arcs (arrows) connect places and transitions, with some arcs directed from places to transitions and other arcs directed from transitions to places. An arc directed from a place pj to a transition ti defines pj to be an input place of ti, denoted by I(ti, pj) = 1. An arc directed from a transition ti to a place pj defines pj to be an output place of ti, denoted by O(ti, pj) = 1. If I(ti, pj) = k (or O(ti, pj) = k), then there exist k directed (parallel) arcs connecting place pj to transition ti (or connecting transition ti to place pj). Usually, in the graphical representation, parallel arcs connecting a place (transition) to a transition (place) are represented by a single directed arc labeled with its multiplicity, or weight, k. A circle containing a dot represents a place containing a token.

Example 1: A simple Petri net.

Figure 1 shows a simple Petri net. In this Petri net, we have

P = {p1, p2, p3, p4}; T = {t1, t2, t3};
I(t1, p1) = 2, I(t1, pi) = 0 for i = 2, 3, 4;
I(t2, p2) = 1, I(t2, pi) = 0 for i = 1, 3, 4;
I(t3, p3) = 1, I(t3, pi) = 0 for i = 1, 2, 4;
O(t1, p2) = 2, O(t1, p3) = 1, O(t1, pi) = 0 for i = 1, 4;
O(t2, p4) = 1, O(t2, pi) = 0 for i = 1, 2, 3;
O(t3, p4) = 1, O(t3, pi) = 0 for i = 1, 2, 3;
M0 = (2 0 0 0).

Transition Firing

The execution of a Petri net is controlled by the number and distribution of tokens in the Petri net. By changing the distribution of tokens in places, which may reflect the occurrence of events or the execution of operations, for instance, one can study the dynamic behavior of the modeled system. A Petri net executes by firing transitions. We now introduce the enabling rule and the firing rule of a transition, which govern the flow of tokens:

(1) Enabling Rule: A transition t is said to be enabled if each input place p of t contains at least the number of tokens equal to the weight of the directed arc connecting p to t, i.e., M(p) ≥ I(t, p) for any p in P.

(2) Firing Rule: Only enabled transitions can fire. The firing of an enabled transition t removes from each input place p the number of tokens equal to the weight of the directed arc connecting p to t. It also deposits in each output place p the number of tokens equal to the weight of the directed arc connecting t to p. Mathematically, firing t at M yields a new marking

M'(p) = M(p) - I(t, p) + O(t, p) for any p in P.

Notice that since only enabled transitions can fire, the number of tokens in each place always remains non-negative when a transition is fired. A firing transition can never try to remove a token that is not there. A transition without any input place is called a source transition, and one without any output place is called a sink transition. Note that a source transition is unconditionally enabled, and that the firing of a sink transition consumes tokens but does not produce tokens. A pair of a place p and a transition t is called a self-loop if p is both an input place and an output place of t. A Petri net is said to be pure if it has no self-loops.

Example 2: Transition firing.

Consider the simple Petri net shown in Figure 1. Under the initial marking M0 = (2 0 0 0), only t1 is enabled. Firing of t1 results in a new marking, say M1. It follows from the firing rule that M1 = (0 2 1 0). The new token distribution of this Petri net is shown in Figure 2. In marking M1, both transitions t2 and t3 are enabled. If t2 fires, the new marking, say M2, is M2 = (0 1 1 1). If t3 fires, the new marking, say M3, is M3 = (0 2 0 1).
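To make the enabling and firing rules concrete, the following minimal Python sketch encodes the net of Example 1 as arc-weight functions and reproduces the firings of Example 2. The dictionary encoding and function names are illustrative choices, not a standard tool interface.

# Ordinary Petri net of Example 1, encoded as input/output arc-weight functions.
# Places are indexed 0..3 for p1..p4; a marking is a tuple of token counts.

I = {"t1": {0: 2}, "t2": {1: 1}, "t3": {2: 1}}          # I(t, p): place -> weight
O = {"t1": {1: 2, 2: 1}, "t2": {3: 1}, "t3": {3: 1}}    # O(t, p): place -> weight
M0 = (2, 0, 0, 0)

def enabled(M, t):
    # Enabling rule: M(p) >= I(t, p) for every input place p of t.
    return all(M[p] >= w for p, w in I[t].items())

def fire(M, t):
    # Firing rule: M'(p) = M(p) - I(t, p) + O(t, p) for every place p.
    assert enabled(M, t), f"{t} is not enabled in {M}"
    return tuple(M[p] - I[t].get(p, 0) + O[t].get(p, 0) for p in range(len(M)))

if __name__ == "__main__":
    print([t for t in I if enabled(M0, t)])   # ['t1']
    M1 = fire(M0, "t1"); print(M1)            # (0, 2, 1, 0)
    M2 = fire(M1, "t2"); print(M2)            # (0, 1, 1, 1)
    M3 = fire(M1, "t3"); print(M3)            # (0, 2, 0, 1)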

Modeling Power

The typical characteristics exhibited by the activities in a dynamic event-driven system, such as concurrency, decision making, synchronization and priorities, can be modeled effectively by Petri nets.

1. Sequential Execution. In Figure 3(a), transition t2 can fire only after the firing of t1. This imposes the precedence constraint "t2 after t1". Such precedence constraints are typical of the execution of the parts in a dynamic system. This Petri net construct also models the causal relationship among activities.

2. Conflict. Transitions t1 and t2 are in conflict in Figure 3(b). Both are enabled, but the firing of either transition leads to the disabling of the other. Such a situation arises, for example, when a machine has to choose among part types or a part has to choose among several machines. The resulting conflict may be resolved in a purely non-deterministic way or in a probabilistic way, by assigning appropriate probabilities to the conflicting transitions.

3. Concurrency. In Figure 3(c), the transitions t1 and t2 are concurrent. Concurrency is an important attribute of system interactions. Note that a necessary condition for transitions to be concurrent is the existence of a forking transition that deposits a token in two or more output places.

4. Synchronization. It is quite normal in a dynamic system that an event requires multiple resources. The resulting synchronization of resources can be captured by transitions of the type shown in Figure 3(d). Here, t1 is enabled only when each of p1 and p2 contains a token. The arrival of a token in each of the two places could be the result of a possibly complex sequence of operations elsewhere in the Petri net model. Essentially, transition t1 models a joining operation.

5. Mutual exclusion. Two processes are mutually exclusive if they cannot be performed at the same time due to constraints on the usage of shared resources. Figure 3(e) shows this structure. For example, a robot may be shared by two machines for loading and unloading. Two such structures are parallel mutual exclusion and sequential mutual exclusion.

6. Priorities. The classical Petri nets discussed so far have no mechanism to represent priorities. Such modeling power can be achieved by introducing an inhibitor arc. An inhibitor arc connects an input place to a transition, and is pictorially represented by an arc terminated with a small circle. The presence of an inhibitor arc connecting an input place to a transition changes the transition enabling conditions: in the presence of the inhibitor arc, a transition is regarded as enabled if each input place connected to the transition by a normal arc (an arc terminated with an arrow) contains at least the number of tokens equal to the weight of the arc, and no tokens are present in any input place connected to the transition by an inhibitor arc. The transition firing rule is the same for normally connected places; the firing, however, does not change the marking in the places connected by inhibitor arcs. A Petri net with an inhibitor arc is shown in Figure 3(f): t1 is enabled if p1 contains a token, while t2 is enabled if p2 contains a token and p1 has no token. This gives priority to t1 over t2.
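A minimal sketch of the modified enabling rule in the presence of inhibitor arcs might look as follows (the function signature and place names are assumptions); it reproduces the priority of t1 over t2 from Figure 3(f).

# Illustrative sketch of the enabling rule with inhibitor arcs: a transition is
# enabled only if its normal input places carry enough tokens and every place
# connected to it by an inhibitor arc is empty.

def enabled(marking, normal_in, inhibitor_in):
    """normal_in maps place -> arc weight; inhibitor_in is a set of inhibitor places."""
    return (all(marking[p] >= w for p, w in normal_in.items()) and
            all(marking[p] == 0 for p in inhibitor_in))

if __name__ == "__main__":
    # t1: input p1; t2: input p2, inhibited by p1 -> t1 has priority over t2.
    marking = {"p1": 1, "p2": 1}
    print("t1 enabled:", enabled(marking, {"p1": 1}, set()))      # True
    print("t2 enabled:", enabled(marking, {"p2": 1}, {"p1"}))     # False while p1 is marked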

Petri Net Properties

As a mathematical tool, Petri nets possess a number of properties. These properties, when interpreted in the context of the modeled system, allow the system designer to identify the presence or absence of the application-domain-specific functional properties of the system under design. Two types of properties can be distinguished: behavioral and structural ones. The behavioral properties are those which depend on the initial state, or marking, of a Petri net. The structural properties, on the other hand, do not depend on the initial marking of a Petri net; they depend on the topology, or net structure, of the Petri net. Here we provide an overview of some of the behavioral properties that are most important from the practical point of view: reachability, safeness, and liveness.

Reachability

An important issue in designing event-driven systems is whether a system can reach a specific state, or exhibit a particular functional behavior. In general, the question is whether the system modeled with a Petri net exhibits all desirable properties as specified in the requirement specification, and no undesirable ones. In order to find out whether the modeled system can reach a specific state as a result of a required functional behavior, it is necessary to find a transition firing sequence which would transform a marking M0 to Mi, where Mi represents the specific state and the firing sequence represents the required functional behavior. It should be noted that a real system may reach a given state as a result of exhibiting different permissible patterns of functional behavior, each of which would transform M0 to the required Mi. The existence in the Petri net model of additional sequences of transition firings which transform M0 to Mi indicates that the Petri net model may not exactly reflect the structure and dynamics of the underlying system. It may also indicate the presence of unanticipated facets of the functional behavior of the real system, provided that the Petri net model accurately reflects the underlying system requirement specification. A marking Mi is said to be reachable from a marking M0 if there exists a sequence of transition firings which transforms M0 to Mi. A marking M1 is said to be immediately reachable from M0 if firing an enabled transition in M0 results in M1.

Safeness

In a Petri net, places are often used to represent information storage areas in communication and computer systems, product and tool storage areas in manufacturing systems, etc. It is important to be able to determine whether proposed control strategies prevent overflows of these storage areas. The Petri net property which helps to identify the existence of overflows in the modeled system is the concept of boundedness. A place p is said to be k-bounded if the number of tokens in p is always less than or equal to k (k is a nonnegative integer) for every marking M reachable from the initial marking M0, i.e., for every M ∈ R(M0). It is safe if it is 1-bounded. A Petri net N = (P, T, I, O, M0) is k-bounded (safe) if each place in P is k-bounded (safe).
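As a rough illustration of reachability and boundedness, the following sketch (a toy breadth-first exploration, not a Petri net tool) reuses the net of Example 1, computes the set of reachable markings, and reports the smallest k for which the net is k-bounded.

# Minimal reachability analysis for an ordinary Petri net (illustrative sketch).
from collections import deque

# The net of Example 1: arc weights per transition, places indexed 0..3 (p1..p4).
I = {"t1": {0: 2}, "t2": {1: 1}, "t3": {2: 1}}
O = {"t1": {1: 2, 2: 1}, "t2": {3: 1}, "t3": {3: 1}}
M0 = (2, 0, 0, 0)

def enabled(M, t):
    return all(M[p] >= w for p, w in I[t].items())

def fire(M, t):
    return tuple(M[p] - I[t].get(p, 0) + O[t].get(p, 0) for p in range(len(M)))

def reachability_set(M0):
    """All markings reachable from M0, found by breadth-first search over firings."""
    seen, queue = {M0}, deque([M0])
    while queue:
        M = queue.popleft()
        for t in I:
            if enabled(M, t):
                M2 = fire(M, t)
                if M2 not in seen:
                    seen.add(M2)
                    queue.append(M2)
    return seen

if __name__ == "__main__":
    R = reachability_set(M0)
    bound = max(max(M) for M in R)     # smallest k for which the net is k-bounded
    print(len(R), "reachable markings; the net is", bound, "-bounded")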

Liveness

The concept of liveness is closely related to the deadlock situation, which has been studied extensively in the context of computer operating systems. A Petri net modeling a deadlock-free system must be live. This implies that for any reachable marking M, it is ultimately possible to fire any transition in the net by progressing through some firing sequence. This requirement, however, might be too strict to represent some real systems or scenarios that exhibit deadlock-free behavior. For instance, the initialization of a system can be modeled by a transition (or a set of transitions) which fires a finite number of times. After initialization, the system may exhibit deadlock-free behavior, although the Petri net representing this system is no longer live as specified above. For this reason, different levels of liveness for a transition t and a marking M0 have been defined.

High-Level Petri Nets

In a standard Petri net, tokens are indistinguishable. Because of this, Petri nets have the distinct disadvantage of producing very large and unstructured specifications for the systems being modeled. To tackle this issue, high-level Petri nets were developed to allow compact system representation.

Colored Petri nets

Introduced by Kurt Jensen (Jensen 1981), a Colored Petri Net (CPN) attaches a color to each token, indicating the identity of the token. Moreover, each place and each transition has an attached set of colors. A transition can fire with respect to each of its colors. By firing a transition, tokens are removed from the input places and added to the output places in the same way as in original Petri nets, except that a functional dependency is specified between the color of the transition firing and the colors of the involved tokens. The color attached to a token may be changed by a transition firing, and it often represents a complex data value. CPNs lead to compact net models by using the concept of colors. This is illustrated by Example 4.
Example 4: A manufacturing system.

Consider a simple manufacturing system comprising two machines M1 and M2, which process three different types of raw parts. Each type of part goes through one stage of operation, which can be performed on either M1 or M2. After the completion of processing of a part, the part is unloaded from the system and a fresh part of the same type is loaded into the system. Figure 5 shows the (uncolored) Petri net model of the system. The places and transitions in the model are as follows:

p1 (p2): Machine M1 (M2) available
p3 (p4, p5): A raw part of type 1 (type 2, type 3) available
p6 (p7, p8): M1 processing a raw part of type 1 (type 2, type 3)
p9 (p10, p11): M2 processing a raw part of type 1 (type 2, type 3)
t1 (t2, t3): M1 begins processing a raw part of type 1 (type 2, type 3)
t4 (t5, t6): M2 begins processing a raw part of type 1 (type 2, type 3)
t7 (t8, t9): M1 ends processing a raw part of type 1 (type 2, type 3)
t10 (t11, t12): M2 ends processing a raw part of type 1 (type 2, type 3)

Now let us take a look at the CPN model of this manufacturing system, which is shown in Figure 6. As we can see, there are only 3 places and 2 transitions in the CPN model, compared with 11 places and 12 transitions in Fig. 5. In this CPN model, p1 means machines are available, p2 means parts are available, p3 means processing is in progress, t1 means processing starts, and t2 means processing ends. There are three color sets: SM, SP and SM × SP, where SM = {M1, M2} and SP = {J1, J2, J3}. The color of each node is as follows:

C(p1) = {M1, M2}
C(p2) = {J1, J2, J3}
C(p3) = SM × SP
C(t1) = C(t2) = SM × SP
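A hedged sketch of the colored model of Figure 6 is shown below: tokens carry machine or part colors, and a single pair of transitions replaces the twelve transitions of the uncolored net. The place names, the multiset representation as Python lists, and the helper functions are assumptions for illustration only.

# Illustrative sketch of the CPN model of Example 4.

SM = {"M1", "M2"}                 # machine colors
SP = {"J1", "J2", "J3"}           # part-type colors

# Marking: a multiset of colored tokens per place (represented here as lists).
marking = {
    "p1_machines_available": list(SM),
    "p2_parts_available":    list(SP),
    "p3_processing":         [],          # tokens are (machine, part) color pairs
}

def start_processing(machine, part):
    """Fire t1 with colors (machine, part): bind a free machine to a waiting part."""
    m = marking
    if machine in m["p1_machines_available"] and part in m["p2_parts_available"]:
        m["p1_machines_available"].remove(machine)
        m["p2_parts_available"].remove(part)
        m["p3_processing"].append((machine, part))
        return True
    return False

def end_processing(machine, part):
    """Fire t2 with colors (machine, part): release the machine and load a fresh part."""
    m = marking
    if (machine, part) in m["p3_processing"]:
        m["p3_processing"].remove((machine, part))
        m["p1_machines_available"].append(machine)
        m["p2_parts_available"].append(part)   # a fresh part of the same type is loaded
        return True
    return False

if __name__ == "__main__":
    start_processing("M1", "J2")
    start_processing("M2", "J1")
    print(marking)
    end_processing("M1", "J2")
    print(marking)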

CPN models can be analyzed through reachability analysis. As for ordinary Petri nets, the basic idea behind reachability analysis is to construct a reachability graph. Obviously, such a graph may become very large, even for small CPNs. However, it can be constructed and analyzed fully automatically, and there exist techniques which make it possible to work with condensed occurrence graphs without losing analytic power. These techniques build upon equivalence classes. Another option for analysing CPN models is simulation.

Modeling Stop-and-Wait Flow Control Protocol


Stop-and-Wait is an elementary form of flow control between a sender and a receiver. In this chapter, we begin by presenting an introduction to the Stop-and-Wait Protocol in Section 5.1, along with a narrative description of the SWP as modelled in this thesis. This is followed by a formal definition of the service that the SWP should provide to its users in Section 5.2. Sections 5.3 and 5.4 present the modelling assumptions we have made with respect to our SWP CPN model and the parameterization of this model, which is presented in Section 5.5 and formalised as a High-level Petri Net Graph in Section 5.6. Finally, Section 5.7 presents the properties that we wish to verify.

5.1 Introduction to the Stop-and-Wait Protocol

As the name suggests, a Stop-and-Wait Protocol can be considered to be any data transfer protocol in which the sending entity stops after transmitting a message and waits until it receives an acknowledgement indicating that the receiver is ready to receive the next message. The Stop-and-Wait Protocol is a data transfer protocol, and hence does not include any connection management procedures (e.g. establishment and tear-down of connections). As mentioned in Section 4.3, we do not consider explicit communication with the users, but instead we rely on the send and non-duplicate receive actions being considered as synchronised communication with the user. We do not consider the reporting of errors to the user.

We consider that the sender and receiver entities can each be in one of two states. For the sender, this is one state in which the sender is ready to send a new message, and another in which the sender is waiting for an acknowledgement of the currently outstanding message. For the receiver, this is one state in which it is ready to receive a message, and another in which it is processing a message and generating the appropriate acknowledgement with which to reply. Both the sender and receiver alternate between their two respective states as protocol execution proceeds.

Both the sender and the receiver maintain a sequence number. In the case of the sender, this sequence number (the sender sequence number) records the sequence number of the message to send next or, if a message is currently outstanding, the sequence number of the message that is currently outstanding. In the case of the receiver, this sequence number (the receiver sequence number) records the sequence number of the next message expected by the receiver.

The basic operation of the sender and receiver protocol entities can be described as follows. Both the sender and receiver start with a sequence number of 0, so that the first message sent by the sender will have sequence number 0, and the first message expected by the receiver is a message with sequence number 0. When the sender sends a message, it enters a state in which it cannot send another new message until it receives an acknowledgement of the message it just sent (the currently outstanding message). When the receiver receives this message, the receiver checks the sequence number of this message against the sequence number it is expecting. If they match, this means that the message received is the correct message, and the receiver will increment its sequence number to reflect

that it is now expecting the next message. The receiver responds to the sender with an acknowledgement containing the next sequence number expected. Once the sender receives this acknowledgement, it compares its own sequence number with that received in the acknowledgement. If the received sequence number is one greater than the sequence number of the currently outstanding message, the sender knows that the receiver has successfully received the message. The sender increments its sequence number and can now proceed to send the next message.

Stop-and-Wait Protocols often operate over noisy channels and combine flow control with error recovery using a timeout and retransmission scheme, known as Automatic Repeat Request (ARQ). When a message is sent, a timer is started, which will expire after some finite timeout period. A checksum is included in the message to detect transmission errors. Messages that pass the checksum are acknowledged as received correctly. A message that fails the checksum is discarded by the receiver. Acknowledgements are also protected by a checksum. In the case of a checksum failure of either the message or the acknowledgement, the sender of the message will not receive an acknowledgement before the expiration of the retransmission timer, and thus will retransmit the message. The retransmission has the same sequence number as the original message.

Retransmissions may have two causes: transmission errors and delay. In the case of delay and acknowledgement transmission errors, retransmission introduces unnecessary duplicates of the original message into the system. When the acknowledgement is not received by the sender before the timer expires, a retransmission occurs even though the first data message has been received correctly and, in the case of delay, also acknowledged correctly. Appending a sequence number to each message, as described above, prevents duplicate messages from being accepted as new messages.

In practice, however, neither sequence numbers nor the number of retransmissions can be unbounded. To circumvent the issue of unbounded sequence numbers in a practical system, the Stop-and-Wait Protocol has a maximum sequence number beyond which sequence numbers cannot increase. This defines a finite sequence number space ranging from 0 to the maximum sequence number. When the sequence number of the sender or receiver reaches this maximum value and is incremented, the sequence number wraps back to 0. Hence, sequence numbers are incremented using modulo arithmetic, modulo the maximum sequence number plus one. To prevent unbounded retransmissions, the Stop-and-Wait Protocol has a maximum number of retransmissions. The sender entity maintains a retransmission counter, which records the number of times the currently outstanding message has been retransmitted, and is reset to zero when the currently outstanding message is acknowledged. When the maximum number of retransmission attempts has been reached, and the outstanding message has still not been successfully acknowledged, the sender gives up retransmitting on the assumption that a severe problem has occurred with the link (such as a cable being cut). This problem is dealt with by a management entity, and hence is not considered part of the Stop-and-Wait Protocol itself.

Note that the receiver will send an acknowledgement regardless of whether the message received is the next expected message or a duplicate of a previously received message. This is because if

only one acknowledgement is sent per new message, then the protocol will enter a deadlock if this one acknowledgement is lost.

5.1.1 Sender Procedures

The following narrative pseudo-code summarises the procedures of the Sender. The key points from these procedures are illustrated in Fig. 5.1.

1. Initial state:
- Sender sequence number = 0
- Retransmission counter = 0
- Sender state = ready to send a message

2. The sender is in a state in which it is ready to send a message. Two things may happen while the sender is in this state:
(a) the sender sends a message. In this case, the sender goes to step 3.
(b) the sender receives an acknowledgement for a message. Because there are no messages currently outstanding, the sender discards this acknowledgement and takes no further action. The sender returns to the start of step 2.

3. A Send Message event occurs. The sender:
(a) receives a message to send to the receiver from the sender user;
(b) sends this message, with its sequence number equal to the current sender sequence number, into the communication channel;
(c) places a copy of this message in a retransmission buffer;
(d) starts a retransmission timer; and
(e) changes state to be waiting for an acknowledgement.
The sender now moves on to step 4.

4. While the sender is waiting for an acknowledgement, three things may happen. Either:
(a) the retransmission timer expires because an acknowledgement of the currently outstanding message has not yet been received. In this case, the sender goes to step 5.
(b) the sender receives an acknowledgement for the currently outstanding message. In this case, the sender goes to step 6.
(c) the sender receives an old acknowledgement, i.e. an acknowledgement for a message that has already been successfully acknowledged. In this case, the sender discards the acknowledgement and takes no further action. The sender returns to the start of step 4.

5. If the number of retransmissions of the currently outstanding message has not yet reached the maximum, then this event triggers a retransmission, in which:
(a) a copy of the currently outstanding message, with the same sequence number as the original message, is sent into the communication channel;
(b) the retransmission counter is incremented by one;
(c) the retransmission timer is reset; and
(d) the sender remains in a state waiting for an acknowledgement.
The sender returns to step 4. If the number of retransmissions of the currently outstanding message has reached the maximum

limit, then the sender gives up and notifies the user that there is a problem. This pseudo-code terminates.

6. When an acknowledgement of the currently outstanding message is received, the sender:
(a) stops the retransmission timer;
(b) resets the retransmission counter to 0;
(c) removes the message from the retransmission buffer;
(d) increments its sequence number, wrapping the sequence number back to 0 if the maximum sequence number has been reached; and
(e) returns to a state in which it is ready to send a message.
The sender returns to step 2.

5.1.2 Receiver Procedures

The following narrative pseudo-code summarises the procedures of the Receiver. The key points from these procedures are illustrated in Fig. 5.2.

1. Initial state:
- Receiver sequence number = 0
- Receiver state = ready to receive a message

2. The receiver is in a state in which it is ready to receive a message. It is expecting a message with the next expected sequence number. Two things may happen while the receiver is in this state:
(a) the receiver receives a message with the expected sequence number. In this case, the receiver goes to step 3.
(b) the receiver receives a duplicate message (its sequence number does not equal the next expected sequence number). In this case, the duplicate message is discarded and the receiver goes to step 4.

3. A Receive Message event occurs. The receiver:
- passes this message to the receiver user; and
- increments its sequence number, wrapping the sequence number back to 0 if the maximum sequence number has been reached.
The receiver now moves to step 4.

4. The receiver sends an acknowledgement, containing the next sequence number expected. The receiver:
- sends this acknowledgement into the communication channel; and

- returns to a state in which it is ready to receive a message.
The receiver now returns to step 2.
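As a rough illustration of these procedures, the following Python sketch encodes the sender and receiver state machines described above. The channel interface (send_msg, send_ack), the constants MAX_SEQ and MAX_RETRANS, and the loss-free loop-back channel in the usage example are hypothetical stand-ins, and timers are only indicated by comments.

# Illustrative sketch of the Stop-and-Wait sender and receiver procedures.

MAX_SEQ = 1          # sequence numbers 0..MAX_SEQ, incremented modulo MAX_SEQ + 1
MAX_RETRANS = 3      # assumed maximum number of retransmissions

class Sender:
    def __init__(self, channel):
        self.channel = channel
        self.seq = 0                 # sender sequence number
        self.retrans = 0             # retransmission counter
        self.outstanding = None      # copy of the currently outstanding message
        self.waiting = False         # False: ready to send, True: waiting for an ACK

    def send(self, data):                       # step 3
        assert not self.waiting
        self.outstanding = (self.seq, data)
        self.waiting = True                     # retransmission timer assumed started here
        self.channel.send_msg(self.seq, data)

    def on_timeout(self):                       # step 5
        if self.retrans < MAX_RETRANS:
            self.channel.send_msg(*self.outstanding)
            self.retrans += 1
        else:
            raise RuntimeError("give up: problem reported to a management entity")

    def on_ack(self, ack_seq):                  # steps 2(b), 4(b), 4(c) and 6
        expected = (self.seq + 1) % (MAX_SEQ + 1)
        if self.waiting and ack_seq == expected:
            self.seq = expected                 # acknowledged: advance and become ready
            self.retrans = 0
            self.outstanding = None
            self.waiting = False
        # otherwise the ACK is old or unexpected and is simply discarded

class Receiver:
    def __init__(self, channel):
        self.channel = channel
        self.seq = 0                 # receiver sequence number (next expected)

    def on_msg(self, seq, data):                # steps 2, 3 and 4
        if seq == self.seq:
            print("deliver to user:", data)
            self.seq = (self.seq + 1) % (MAX_SEQ + 1)
        self.channel.send_ack(self.seq)         # duplicates are acknowledged too

if __name__ == "__main__":
    # Hypothetical loss-free loop-back channel, just to exercise the two entities.
    class PerfectChannel:
        def send_msg(self, seq, data): rx.on_msg(seq, data)
        def send_ack(self, seq): tx.on_ack(seq)
    channel = PerfectChannel()
    tx, rx = Sender(channel), Receiver(channel)
    tx.send("hello")
    tx.send("world")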

Modeling Protocols Using Petri Nets (Stop and Wait Protocol)


- The transmitter sends a frame and then stops, waiting for an acknowledgement (ACK) from the receiver.
- Once the receiver correctly receives the expected packet, it sends an acknowledgement to let the transmitter send the next frame.
- When the transmitter does not receive an ACK within a specified period of time (the timer expires), it retransmits the packet.

Stop and Wait Protocol Petri net models

[Figure: Stop-and-Wait Protocol: Normal Operation. Petri net showing the transmitter states (Send Pkt 0, Wait Ack 0, Send Pkt 1, Wait Ack 1), the channel states (pkt 0, Ack 0, pkt 1, Ack 1), and the receiver states (Get Pkt 0, Wait Pkt 1, Get Pkt 1, Wait Pkt 0).]
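As a rough sketch of the normal-operation model (the place and transition names follow the figure labels, but the encoding and the firing order in the example are assumptions), the cycle can be exercised with simple enabling and firing functions:

# Ordinary-Petri-net sketch of Stop-and-Wait normal operation.
# Places: transmitter states, channel states, receiver states (from the figure labels).
PLACES = ["SendPkt0", "WaitAck0", "SendPkt1", "WaitAck1",   # transmitter
          "pkt0", "ack0", "pkt1", "ack1",                   # channel
          "WaitPkt0", "WaitPkt1"]                           # receiver

# Each transition: (input places, output places), all arc weights 1.
TRANSITIONS = {
    "send_pkt0": (["SendPkt0"],           ["WaitAck0", "pkt0"]),
    "get_pkt0":  (["pkt0", "WaitPkt0"],   ["ack0", "WaitPkt1"]),
    "get_ack0":  (["ack0", "WaitAck0"],   ["SendPkt1"]),
    "send_pkt1": (["SendPkt1"],           ["WaitAck1", "pkt1"]),
    "get_pkt1":  (["pkt1", "WaitPkt1"],   ["ack1", "WaitPkt0"]),
    "get_ack1":  (["ack1", "WaitAck1"],   ["SendPkt0"]),
}

def enabled(M, t):
    return all(M[p] >= 1 for p in TRANSITIONS[t][0])

def fire(M, t):
    ins, outs = TRANSITIONS[t]
    M = dict(M)
    for p in ins:  M[p] -= 1
    for p in outs: M[p] += 1
    return M

if __name__ == "__main__":
    M = {p: 0 for p in PLACES}
    M["SendPkt0"] = M["WaitPkt0"] = 1          # initial marking
    for t in ["send_pkt0", "get_pkt0", "get_ack0",
              "send_pkt1", "get_pkt1", "get_ack1"]:
        assert enabled(M, t)
        M = fire(M, t)
    print(M["SendPkt0"], M["WaitPkt0"])        # back to the initial marking: 1 1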


[Figure: Stop-and-Wait Protocol: Deadlock. The same Petri net extended with "loss" transitions on the channel places (pkt 0, Ack 0, pkt 1, Ack 1); if a packet or an acknowledgement is consumed by a loss transition, the transmitter and receiver remain in their wait states with no transition enabled, so the net deadlocks.]

[Figure: Stop-and-Wait Protocol: Loss of Acknowledgements. The model extended with "timer" transitions at the transmitter (modelling timeout and retransmission) and "Reject 0" / "Reject 1" transitions at the receiver, so that lost packets or acknowledgements are recovered by retransmission and duplicate packets are rejected rather than delivered twice.]
