
Optimized Jitter Routing Algorithm (OJRA) for Mobile Ad Hoc Networks

Sungil Hong, Ben Miller


{sungil, bamiller}@cs.umn.edu
Department of Computer Science, University of Minnesota, Twin Cities

Abstract: This paper considers the problem of delay jitter when offering QoS routing in MANETs using TDMA. The major cause of delay jitter is assumed to be link or node failures due to movement of the nodes in the environment. A local repair algorithm named the Optimized Jitter Routing Algorithm (OJRA) is devised to significantly reduce the delay for reestablishing a route after such a failure. OJRA has two components: the Reverse Route Repair Algorithm (3RA) and an active buffering algorithm named Smart Buffering (SB). OJRA is designed for use with TDMA, but it can be used with any QoS routing protocol. Our ns-2 simulations show that OJRA can lead to better delay jitter performance.

1. INTRODUCTION The increase in bandwidth available to home users is leading to more multimedia-rich services being offered over the Internet. These services require some quality of service (QoS) guarantee to ensure delivery of the multimedia-rich content in a timely fashion. QoS guarantees often proposed include bandwidth, delay, and delay jitter, i.e., the variance of the delay. These QoS guarantees are also applicable to mobile ad hoc networks (MANETs). MANETs are a special case of wireless networks that do not have a fixed infrastructure or access point to a wired network. Some popular configurations of MANETs are military troops in operation, sensor networks, and video surveillance systems. MANETs have many constraints, including battery life, mobility, communication range, and computing resources. These networks are typically set up quickly, and nodes move freely within a given communication range, leading to many problems with routing information between nodes. Multiple routing methods have been developed, which roughly fall into two categories: table-based and on-demand. Many of the recent schemes are described and compared in [1]. In general, the on-demand methods require less power, since routes are only set up and maintained when needed between nodes, so this paper will focus on these methods. QoS routing for MANETs has also been developed to guarantee bandwidth [2]. This protocol is built on top of the Ad hoc On-Demand Distance Vector routing protocol (AODV) [3], which in turn runs on top of the Evolutionary-TDMA (E-TDMA) [4] medium access control layer. It provides a bandwidth guarantee using TDMA in a MANET environment, but it performs poorly when there is a link or node failure. Bandwidth and delay jitter are two main components of QoS in networks, so in this paper we aim to improve delay jitter while maintaining the bandwidth guarantee using TDMA. The layout of this paper is as follows.
In Section 2 we define some new terms to be used in this paper, and Section 3 examines AODV and related prior work on QoS routing in MANETs. Following that, Section 4 defines the problem to be solved, and Section 5 describes in detail our assumptions and proposed solution. Section 6 explains the simulation setup and results, and Section 7 discusses the results. Section 8 concludes this paper by describing future work.

2. TERMINOLOGY Here is some terminology used in the discussion and explanation:
SOURCE NODE: the node that initiates the data transmission.
DESTINATION NODE: the node that the data transmission is sent to.
PATH FAILURE: a link failure or a node failure.
SOURCE FRONT NODE: the node immediately upstream of the path failure.
DESTINATION FRONT NODE: the node immediately downstream of the path failure.
SOURCE PATH: the path from the source front node to the source node.
DESTINATION PATH: the path from the destination front node to the destination node.
SOURCE PATH NODES: the nodes on the source path.
DESTINATION PATH NODES: the nodes on the destination path.
In this paper, we will consider only link failures, for simplicity, but an extension to handle node failures should be possible with little effort. It is assumed that the source node buffers every packet until it has received an acknowledgement that the destination received the packet. This assumption does not describe real-time traffic, which is usually dropped if congestion occurs, but this can be handled with slight modifications.

3. RELATED WORK AODV was first introduced in 1997, but since then a working group for MANET has formed and made many revisions, which are posted in [5]. One revision added local repair, which is very similar to the algorithm developed in this project. However, combined with the QoS routing from [2], that method is suboptimal. The original routing scheme in the AODV paper [3], the local repair in the Internet draft version [6], and the QoS routing paper will be introduced and compared. Also, the Forwarding Algorithm (FA) used in the QoS paper will be explained. In the original AODV paper [3], when a path failure occurs, the source front node notifies the source node through the current path, and then the source node reinitiates another route discovery, ignoring the previous path.
When the propagation delay from the source node to the destination node is t, it takes on average 3.5*t after a failure is detected for the next packet to be delivered to the destination node (0.5*t for the notification message from the source front node to the source node, 1*t for the broadcast message to reach the destination node, 1*t for the reply message from the destination node to the source node, and finally 1*t for the packet to be delivered to the destination node). But since the destination node will continue to receive packets from the destination path, it will see a delay of 3*t. A later section of the same paper discusses the potential for a local repair scheme. Local repair is an attractive enhancement because the source front node may be close to the destination front node, and the destination path may still be available. But if there is no path from the source front node to any destination path node, then the delay will increase, since the source front node does not send the failure notification until after the local repair times out. Also, since only the source front node knows about the path failure, all the source path nodes will keep transmitting packets, which the source front node needs to buffer. If a local repair fails, the power used to transmit these packets is wasted. The latest AODV draft [6] discusses a local repair scheme. It depends on the position of the path failure: if it is close enough to the destination node, then the source node is not notified and the local repair scheme is used. The source front

node sends a route request message, but with a reduced TTL value and timer interval for a fast decision on the existence of a new local path. Because of the many parameters available, it is hard to determine the average time to repair a broken route, but roughly it will take 0.5*t for the broadcast and 0.5*t for the reply, then 0.75*t for the next packet to be delivered, for a total of 1.75*t. As before, the last packet delivered before the path failure takes 0.5*t on average, so the gap seen by the destination becomes 1.25*t. If this option is disabled or does not apply, then the behavior is the same as in the original AODV paper [3]. In the QoS routing paper [2], the authors mention the local reply scheme, which is better than nothing but worse than local repair. In the local reply scheme, an intermediate node that receives a route request (RREQ) and has an active path to the destination generates the reply (RREP) instead of forwarding the RREQ to the destination. Considering only this scheme: the source front node notifies the source node of the path failure, and the source node reinitiates the route search, but any destination path node can reply if its route to the destination has not timed out. The paper also considers the case in which a broken link recovers on its own after a short time interval. Their local reply takes on average 0.5*t for the notification, 0.75*t for the broadcast, 0.75*t for the reply, and 1*t for the first packet after the failure to reach the destination, for a total of 3*t; because of the packets still on the destination path, 2.5*t is the delay the destination node will experience between packets. These timing results are summarized in Table 1. The QoS routing paper [2] introduced a very good mechanism for bandwidth guarantee. Because collisions caused by overlapping transmission signals are common in the wireless environment, the QoS routing problem has many constraints, and they showed mathematically that it is NP-hard.
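The delay arithmetic above, all in units of t (the one-way source-to-destination propagation delay), can be sanity-checked with a short script. The function names are ours, for illustration only.

```python
# Average repair delays in units of t, following the breakdowns in the text.
# Helper names are illustrative, not part of any protocol.

def original_aodv_delay():
    # 0.5*t notification + 1*t RREQ broadcast + 1*t RREP + 1*t first data packet
    return 0.5 + 1.0 + 1.0 + 1.0          # 3.5*t after the failure is detected

def draft_local_repair_delay():
    # 0.5*t reduced-TTL broadcast + 0.5*t reply + 0.75*t first data packet
    return 0.5 + 0.5 + 0.75               # 1.75*t

def qos_local_reply_delay():
    # 0.5*t notification + 0.75*t broadcast + 0.75*t reply + 1*t first packet
    return 0.5 + 0.75 + 0.75 + 1.0        # 3*t

def destination_gap(repair_delay):
    # Packets already on the destination path keep arriving for 0.5*t on
    # average, so the gap the destination actually sees is 0.5*t smaller.
    return repair_delay - 0.5

print(destination_gap(original_aodv_delay()))       # 3.0
print(destination_gap(draft_local_repair_delay()))  # 1.25
print(destination_gap(qos_local_reply_delay()))     # 2.5
```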
So, a simplified heuristic algorithm called the Forwarding Algorithm (FA) was developed, which is very efficient and optimized for the wireless environment. It does not consider future time slot constraints; it just records the time slots of the two previous links and determines the next link's time slots from among those that are available. A node doesn't need to negotiate with upstream nodes, so this can be done in one direction. This algorithm will be used without modification, since it works very well.
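The QoS paper's pseudocode for FA is not reproduced here, but the slot-selection idea just described can be sketched roughly as follows; the slot model, the number of slots needed per link, and the function signature are our assumptions.

```python
# A minimal sketch of FA's slot selection: when extending a path by one hop,
# pick free TDMA slots that do not collide with the slots used on the two
# previous links of the path (to avoid overlapping transmissions).

def next_link_slots(free_slots, prev_link_slots, prev_prev_link_slots, slots_needed):
    """Return a set of slots for the next link, or None if infeasible."""
    forbidden = set(prev_link_slots) | set(prev_prev_link_slots)
    candidates = [s for s in free_slots if s not in forbidden]
    if len(candidates) < slots_needed:
        return None  # FA gives up here; no negotiation with upstream nodes
    return set(candidates[:slots_needed])

# Example: slots 0-7 free at the node; previous two links used {0,1} and {2,3}.
print(next_link_slots(range(8), {0, 1}, {2, 3}, 2))  # {4, 5}
```

Because only the two previous links constrain the choice, the decision is purely local and one-directional, which is what makes FA cheap.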
                 Original AODV        Internet Draft       QoS routing
Best case        2*t                  Small                Small
On average       3*t                  1.25*t               2.5*t
Worst case       4*t                  More than 4*t        4*t
Etc.             Does not work well   Does not work well   Works well
                 with FA              with FA              with FA

Table 1. Average delay after a path failure

Associativity-Based Routing (ABR) [7] and Signal Stability-based Adaptive Routing (SSA) [8] take a different approach to reducing path failures. ABR tries to reduce path failures by carefully selecting the nodes that will be used as next hops. Associativity means that if a node is present within the range of another node, then the former will be acknowledged by the latter, which records its presence. When the latter node becomes a sender, it chooses among the candidates within its range based on associativity. SSA is a variant of ABR in which the sender measures the signal strengths of all the nodes within its range and decides which neighbor to route through according to the changes in signal strength. These two methods can help select good routes in the MANET environment, which improves delay jitter by reducing path failures. However, they depend on the nature of the network, especially the pattern of node

movement. Simply put, in the case of ABR, if the pattern of node movement in a network is periodic (nodes move for some fixed time, then stay for another fixed time, and so on), then the nodes that have been within range for only a short time should be preferred. But what if nodes don't move according to this pattern? Then this method breaks down, since no prediction of stability can be made from past information. Rather, it might even be more probable that the longer a node has stayed in range in the past, the sooner it will leave in the near future. A similar argument applies to SSA: a strong signal doesn't mean that a node will stay in range longer than nodes with weak signals. If the movement of nodes in a network is so random that they don't go far before changing direction, then SSA will not work well.

4. PROBLEM DEFINITION The QoS routing algorithm obtains better performance than the original AODV scheme by reserving time slots in the TDMA scheme. But the QoS routing algorithm does not address the delay jitter caused by path failures. Even though local reply from the QoS routing algorithm is implemented in the recent Internet draft of AODV, that AODV version does not consider the bandwidth reservation scheme used in the QoS routing protocol, so it does not work well in the TDMA environment. Even setting the TDMA environment aside, we think there is still room for improvement. AODV's local repair tends to fail because FA imposes very severe constraints on finding routes to the destination: the new route must also satisfy the bandwidth guarantees that the previous route maintained. It is very probable that a new route exists but the source front node can't find it within a short time. Therefore AODV local repair will tend to fail, which is inefficient, since the source front node then has to send a delayed notification to the source to set up a new path. So applying this scheme to FA in the QoS routing scheme will not help much at all.
The local repair approach presented in this paper is optimized for FA. At the same time, it enhances AODV to a significant degree.

5. OPTIMIZED JITTER REPAIR ALGORITHM (OJRA). In short, we propose two ideas. One is initiation of path repair from the destination path nodes, which we name the Reverse Route Repair Algorithm (3RA); the other is exchange of buffered data packets between source path nodes for prompt transmission of the next data packet, which we name Smart Buffering (SB). These two work together to give the minimum delay jitter. Even though this work can be applied to any AODV variant and possibly all MANET routing protocols, the focus here will be on application with the FA given in [2]. Here are some simplifying assumptions in our scheme. i) The source node will hold all the content to be sent to the destination until the destination notifies it that the content has been received. ii) There are some reserved time slots for backward transmission of control packets. These can be shared by multiple paths or occupied by only one path; in the latter case, time slots will be more constrained. Also, routes can be reversed temporarily, i.e., these time slots can be used for downstream transmission while the original downstream time slots are used for upstream data transmission. Control messages or flags in data packets will handle this. iii) When a path failure occurs, not only the source front node but also the destination front node will be notified. This is possible through the use of hello messages, which maintain a list of current neighbors, the nodes that can be reached in a

single hop. iv) Links are assumed to be bi-directional. Some types of messages are defined for this new protocol in addition to the ones in the AODV protocol:
Path search message: sent by destination path nodes when a failure occurs. In AODV, when a failure occurs, the source front node sends a route request (RREQ) message; we give this message a new name to differentiate it from the one in AODV.
Path found message: sent by a source path node that received a path search message, back to the initiator of that message.
Path invalidation message: sent by a source path node that sent a path found message, toward the initiator of the path search message, to cancel the new local path.
Notification message: sent by the source front node and the destination front node to all the source path nodes and destination path nodes, respectively. In AODV, the source front node sends a route error (RERR) message to the source node when it can't find a local route to the destination side; we also give this a new name to emphasize its extended functionality.
Connected message: this is unique to our algorithm. Because multiple local routes can be set up between the source path and the destination path, the decision is made on the source side by exchanging connected messages. For prompt data transmission, each source path node will send the next data packet whenever a local path is formed, and eventually only one will be selected as the local route.
3RA and SB work together to reach the optimized delay jitter, but for better understanding, they will be explained separately.

5-1 REVERSE ROUTE REPAIR ALGORITHM (3RA) In short, 3RA's most distinctive feature is the initiation of the new (local) route search by the destination path nodes. If a link failure occurs, the destination path nodes broadcast path search messages. To the best of our knowledge, this method has not been devised before.
When a link failure occurs, the source and destination front nodes will be notified of it. The destination front node will send a path search message immediately, and this will automatically notify the next downstream node on the destination path that there is a link failure. This broadcast message will contain the location of the link failure so that it can be used to identify the failure later. The next downstream node will do the same, until the message reaches the destination node. Initiating path repair from the destination side is a benefit because it reduces the repair delay by 0.5*t on average compared with routing protocols that use source-side initiation. This path search message does the job of both the route request and the route reply from AODV's local repair method. One addition, to reduce the number of messages, is setting a TTL value for these messages; this is also in the Internet Draft version of AODV. Determining values for the TTL can be left to implementers.
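The destination-side search just described can be sketched as a TTL-limited flood; the graph, node names, and TTL policy below are illustrative assumptions, not the paper's exact message format.

```python
# A sketch of 3RA's destination-side initiation: a destination path node
# broadcasts a TTL-limited path search message, and the first source path
# node it reaches answers with a path found message.

from collections import deque

def path_search(graph, start, source_path_nodes, ttl):
    """TTL-limited BFS flood of a path search message; returns the first
    source path node reached, or None if none is within the TTL."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if node in source_path_nodes:
            return node          # this node sends a path found message back
        if hops == ttl:
            continue             # TTL exhausted; stop flooding here
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, hops + 1))
    return None

# Broken link C-D; destination front node D searches for the source path A-B-C.
graph = {'D': ['E', 'X'], 'X': ['B'], 'E': ['D'], 'B': ['X', 'A', 'C']}
print(path_search(graph, 'D', {'A', 'B', 'C'}, ttl=2))  # 'B'
```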

5-2 SMART BUFFERING (SB) If a link failure occurs, then the source front node will notify its upstream node of the failure, and this notification is transmitted along the path until it reaches the source node. The notification uses specially reserved time slots for control messages; the detailed implementation can be one of several candidates, as explained later. As soon as a node receives this failure notification, it will stop sending packets to its downstream node, while still receiving the next packet (or several packets within one cycle) from its upstream node. Smart Buffering makes every source path node ready to accept a path search message, so that it can react as soon as possible. The second assumption made previously will be important in Smart Buffering, as we will see. Note that we only consider the nodes that are on the source path. The notification from the source front node up to the source node is the first message propagated after the failure, but it also acts as a control message by which the time slots used for downstream transmission of data packets are switched to time slots for upstream transmission of buffered packets from source path nodes. Following this notification message, buffered packets in source path nodes are transferred to their upstream nodes, up to the node just before the source node. To help show the performance of SB, a naive buffering scheme will be compared.

Figure 1

One thing to note is that because we assume there is only one link failure, all the packets that are on the destination path will eventually be delivered to the destination node without any problem. That is why we do not need to care about the packets on the destination path.

The ultimate goal is for every source path node to have all the packets that have not yet been transferred to the destination path, so that when any source path node receives a path search message, it can send the next packet right away. Node E is the source front node. It will notify node D about the failure; this notification also acts as a control message, and node E will send packets 5, 6, and 7 using the time slots that were previously used by node D to send packets to node E. Node D will forward packets 5, 6, and 7 to its upstream node, node C, and so on. When packet 7 is forwarded, node D will send packet 8 of its own. Here we see that only the source front node will have 3 packets, while all the others will initially have 2 packets buffered because of the failure. These are transferred up to node B. Node A here is the source node. One might then expect node B to hold the maximum number of buffered packets, from 5 to 13. But this is not efficient, because when node B receives a path search message, it can request the packets again from the source node. Since the source node holds all the packets, even those already sent, node B needs only 2 packets, namely 5 and 6, for a prompt response to a path search message. The result is the triangle shape shown in Figure 2.
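The triangle distribution can be illustrated with a toy model; the redistribution rule below is an invented simplification (two in-flight packets per link, source first, source front node last), not the paper's exact procedure, and its point is only the shape: the middle of the source path ends up holding the most packets.

```python
# A toy model of the "triangle" buffer distribution Smart Buffering aims for.
# Nodes near the source need few buffered packets because the source retains
# every packet anyway; nodes near the failure have already forwarded theirs.

def triangle_buffers(n_path_nodes, per_link_in_flight=2):
    """Buffered-packet count per source path node after redistribution
    (index 0 = source node, last index = source front node)."""
    n = n_path_nodes
    return [min(i, n - 1 - i) * per_link_in_flight for i in range(n)]

buffers = triangle_buffers(7)
print(buffers)  # [0, 2, 4, 6, 4, 2, 0]
```

Whatever the exact per-link constant, the middle node of the source path holds the maximum, which is the property the next paragraph relies on.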

Figure 2

Clearly, the middle node of the source path will have the maximum number of buffered packets. In this way, when a path search message is received, a data packet can be sent toward the destination without delay. But things get complicated when a path search message arrives before this smart buffering is done. This is handled in the next step, together with the competition among multiple receptions of path search messages. When multiple path search messages arrive, several prompt replies with the first data packet will be made, but the following algorithm will select only one new path.
a. In each node, only the first path search message will be considered for the same failure.
b. A connected message will be sent to the node's downstream and upstream nodes so that all the source path nodes learn that there is a new path.
c. If a connected message is received at the same time as, or before, any path search message at a node, then all path search messages will be ignored.
d. If a connected message is received after a path search message was received from one of the node's upstream nodes, then the connected message is simply ignored.
e. If a connected message is received after a path search message was received from one of the node's downstream nodes, then the node stops sending data packets through the new path, sends the path invalidation message, and ignores the new path.
f. If a node did not receive the notification message about the failure that caused a path search message that just arrived, the node will not reply or send a connected message. If that node later receives the notification message and the message does not carry a connected message, then it follows this algorithm. But if there is a connected message in the

notification message, then it will ignore all the path search messages it has received and will receive for this failure.

5-3 PROOF OF OJRA The proof of completeness of this algorithm is very simple. We can categorize all possible cases into three: i) competition among nodes that received the notification first; ii) competition among nodes that did not receive the notification first; iii) mixed competition between the two groups. For i), the node that received a path search message earliest, or the one that is closer to the failure, will be selected. For ii), only the node that is closer to the failure will be selected. For iii), a node from the notification-not-received group and the nodes from the notification-received group compete, and the competition is resolved by the rules above. This algorithm achieves not only decreased delay but also a preference for short, local paths. This is a very desirable property, because frequent resource updates (additions and deletions) should be suppressed for reliable transmission. However, several local restorations will lengthen the path; this is addressed by the handling of multiple link failures, explained below. There is one exception for the path search message. We limit the TTL of each path search message to triple the length of the previously found path from that destination path node to the source node, but the path search message from the destination node itself is not limited by a TTL value. Whenever the source node receives a path search message, it checks whether it has received a notification message about the failure; if not, it selects the new path, ignoring the existing one. If the path search message is received before the notification message at the source node, it means the new path is shorter than before. If the initiator of the path search message is the destination node, it means the new path is much shorter than the existing one.
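The arbitration rules a-f above can be sketched as state kept per node; the class, message encoding, and return strings below are illustrative assumptions, not the paper's message format.

```python
# A sketch of the arbitration between competing path search and connected
# messages at a single source path node (rules a, c, d, e from the text).

class Node:
    def __init__(self):
        self.path_searches = {}  # failure_id -> "upstream" or "downstream"
        self.ignored = set()     # failures whose path searches are now ignored
        self.sent = []           # outgoing control messages (for inspection)

    def on_path_search(self, failure_id, direction):
        # Rule a: only the first path search per failure is considered.
        if failure_id in self.ignored or failure_id in self.path_searches:
            return "ignored"
        self.path_searches[failure_id] = direction
        return "reply with first data packet"

    def on_connected(self, failure_id):
        seen = self.path_searches.get(failure_id)
        if seen is None:         # Rule c: connected message arrived first.
            self.ignored.add(failure_id)
            return "ignore all path searches for this failure"
        if seen == "upstream":   # Rule d: our search came from upstream.
            return "ignore connected message"
        # Rule e: our candidate path loses; cancel it.
        self.sent.append(("path invalidation", failure_id))
        return "cancel new path"

n = Node()
print(n.on_path_search("f1", "downstream"))  # reply with first data packet
print(n.on_connected("f1"))                  # cancel new path
print(n.on_connected("f2"))                  # ignore all path searches for this failure
print(n.on_path_search("f2", "upstream"))    # ignored
```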
If there are multiple failures, the notification message will not reach the source node at the right time, so using the new path is preferable in terms of delay as well as resource usage efficiency. Smart Buffering does not add to the amount of energy consumed: because every source path node must get the next packet from the source node or the source front node anyway, the total is on average the same as before. Also, because of the TTL scheme, far fewer messages are propagated than in the QoS routing paper. This new routing algorithm is also useful for non-TDMA mobile ad hoc routing protocols. The initiation of the path search message from the destination side is, as far as we know, a new idea, and the smart buffering mechanism works well with it while consuming the same amount of energy.

6. SIMULATION The ns-2 simulation package does not provide a complete implementation of TDMA for wireless networks. Since our effort is an extension of the one in the QoS paper, we tried but could not get the source code from the authors of that paper, and due to time constraints, we were not able to implement TDMA ourselves. We regret this because our algorithm was designed to perform very well with TDMA or any similar protocol. So instead, our simulation compares the delay jitter of our algorithm and the current AODV local repair algorithm in ns-2. Smart buffering also was not implemented in this simulation, but a

simple theoretical calculation will help compare the performance. We tried to simulate the same environment as in the QoS paper: the network area is 1000m by 1000m, and each node has a 250m signal range. There are 25 nodes in the network, with session counts from 5 to 20 in intervals of 5 sessions, and from 20 to 300 in intervals of 40 sessions. Each session lasts up to 30 sec, and in a session, 25 packets are sent per second, where one packet is 64 Kbytes. We simulated 5 different session patterns, which are averaged in the results, and 5 different node speeds: 2, 4, 6, 8, and 10 meters per second. In the original AODV, when a failure occurs, the source front node sends an RREQ message and one destination path node replies; we recorded the time taken for the RREQ message to be received by a destination path node. In the code modified for our algorithm, we let the destination front node send the RREQ message and had the first source path node that receives it print out the time, giving us the delay of the path search message transmission. Tables 2 and 3 show the simulation results. One interesting point is that from 60 sessions to 300 sessions, the results are all the same, so here we only show the case of 180 sessions.

Speed (m/s):    2              4             6             8             10
Sessions 5:     0.005263242    0.10777122    0.013581276   0.01610976    0.015259743
Sessions 10:    0.0033471524   0.07152995    0.42946625    0.06530752    0.95167685
Sessions 15:    0.019318495    0.25576824    0.035485066   0.04782942    0.28306365
Sessions 20:    0.0018997192   0.13000202    1.8106096     8.6851845     3.3382335
Sessions 180:   0.0028839111   0.004964492   1.8105954     6.6844773     3.3382335

Table 2. Average Delay in case of AODV

Speed (m/s):    2              4             6             8             10
Sessions 5:     NaN            0.0012381418  12.850208     2.5402777     4.2341857
Sessions 10:    NaN            6.9138613     6.5532846     2.09405       4.083391
Sessions 15:    0.0028839111   0.003111267   1.9618453     1.3661891     3.291221
Sessions 20:    0.0018997192   0.13000202    1.8106096     8.6851845     3.3382335
Sessions 180:   0.0028839111   0.004964492   1.8105954     6.6844773     3.3382335

Table 3. Average Delay in case of OJRA

7. DISCUSSION For 20 sessions, 180 sessions, and all session counts not shown here, the data for the original AODV protocol and for our algorithm have the same average delay. This is because, almost always, the path from the source front node to the destination front node is used in both cases. The actual delay that the destination node experiences in the original AODV protocol is three times the delay shown in Table 2, while the actual delay for our algorithm is two times the delay shown in Table 3. So, even without a TDMA implementation, we can say that our algorithm performs 50% better than the original AODV protocol in delay jitter when the number of sessions is not small. If TDMA is used, then this performance enhancement will increase a lot, as explained below. For sessions from 5 to 15, OJRA appears to perform worse than AODV as speed increases. Theoretically, in all

cases the values should be very close. One big difference is that for AODV we used the link layer's failure notification, while in our algorithm we used hello messages to be notified of the failure. We think this difference played a role in the case of a small number of sessions. When 3RA is used with FA, it creates a big improvement in performance. Because one set of time slots on a link of a path must not be used by the two nearby links in one direction, at least four consecutive links will have different time slots for the path. Then, if we want to reuse the currently reserved time slots as much as possible, the new path should be short, and it must meet some constraints in selecting the time slots to be used, depending on which nodes the new path connects. Because of this restriction, there might not be many possible new paths available. We will show that initiation from the destination path nodes is the best approach by examining the remaining candidate, in which both sides try to make a new path. FA is not a complete solution for this two-side-initiation case, because it chooses randomly when there are more available time slots than needed, so a middle node of a possible new path must decide whether the time slot constraints from both sides are met. This method often causes routes that are valid in our case to become invalid, since more constraints are imposed on the decision. In our algorithm, only two positions need to be considered: one is the destination path node that initiates the path search message, and the other is the source path node that receives that message. Note that at the receiver node, we have some flexibility to choose the right time slots among the free ones available there. But in the two-side-initiation case, there are three positions, one more than ours.
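The intuition that fewer constraint sources leave more feasible repairs can be checked with a hedged Monte Carlo sketch; the slot model here (8 slots, 2 needed, random occupancy probabilities) is entirely invented for illustration, and the paper's 8-out-of-10 versus 3-out-of-10 figures come from its own configurations, not from this code.

```python
# A toy comparison: a repair constrained by slot sets from one side versus
# slot sets from both sides. The two-side case can only be harder to satisfy.

import random

def feasible(free, constraint_sets, needed=2):
    """Can we pick `needed` slots from `free` avoiding every constraint set?"""
    forbidden = set().union(*constraint_sets)
    return len(set(free) - forbidden) >= needed

def trial(rng, n_slots=8):
    slots = range(n_slots)
    free = {s for s in slots if rng.random() < 0.7}
    side_a = {s for s in slots if rng.random() < 0.3}  # one side's constraints
    side_b = {s for s in slots if rng.random() < 0.3}  # the other side's
    one_side = feasible(free, [side_a])           # destination-side initiation
    two_side = feasible(free, [side_a, side_b])   # both sides meet in the middle
    return one_side, two_side

rng = random.Random(1)
results = [trial(rng) for _ in range(1000)]
ones = sum(a for a, b in results)
twos = sum(b for a, b in results)
print(ones >= twos)  # True: one-side initiation is never harder to satisfy
```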
Several simple simulations based on randomly chosen time slot configurations show that this is true: our algorithm worked for 8 cases out of 10, while only 3 were possible in the both-sides-initiation case. One might suspect that this scheme produces too many broadcast messages, which would contradict the power saving scheme. When it is not used with FA, it is exactly the same as the original AODV, by letting nodes broadcast only the first message for a link failure. When used with FA, it is still the same as the original AODV scheme, because when the original AODV scheme also uses a TDMA MAC protocol, its number of broadcast messages likewise differs from the case without FA. The number can be bigger with FA: even if two packets come from the same source node with the same sequence number, the time slot conditions in the packets received by a node will differ depending on the path traversed, so we must let a node forward a packet even if it already received another one with the same source node and sequence number. But the number can also be smaller, because of the restrictions of FA. So one cannot say whether using FA causes more broadcast messages or not; and the number of broadcast messages generated by our algorithm with FA is the same as with the original AODV. Our algorithm differs only in which nodes initiate. Generally speaking, if broadcast messages from different destination path nodes are considered separately, then the number of broadcast messages will be huge, but there will be a better chance of finding a shorter path. This also depends on whether broadcast messages reserve time slots or merely mark them. In our scheme, reservation is assumed; but because of the huge number of broadcast messages, reservation may not be the best scheme.
When marking is used, only broadcast messages from the same path failure can share time slots, and the source side must choose exactly one path before sending any data packets; this adds a little more delay, but it makes a new path shorter or more probable. Related to this, a backup path scheme was considered while searching for the best algorithm. If we can assume that link failures are not frequent, or usually occur because of the movement of one node, then we can make every node broadcast its existence up to 3 or 4 hops away, together with information about which nodes it can reach. Used in a TDMA

environment, if link failure occurs, then source side front end node can react to the failure immediately by using another path that already known by this periodic messages which also holds free time slots information. Explicit backup path reservation can be done, and we can make reserved time slots for one backup path available for other backup paths too, and when it is switched to the actual used time slots, then notify neighbors about the changes, so that they can make another backup path. This scheme consumes more energy for signaling, but this will reduce the delay a lot depending on the nature of the MANET. Sequence numbers in AODV are used for prevention of route loop. And also, one node which has a more recent sequence number about the destination can send back a reply to the source. But in wireless ad hoc networks, no one can guarantee that a node having a more recent sequence number is closer to the destination. It only means that the node was on a path to the destination. In our simulation, smart buffering is not used. Smart buffering will add some delay, but we believe it will be small it will be very rare that a source path node receives path search message from a destination path node before it receives notification message from the source front node. If the destination path node is not the destination front node, then it will take some time for the notification of the failure to be propagated to the destination path node, and if it is, then the destination path node is still around the source path node, so it will not be much better in the timing sense than the notification message from the source front node. Once a source path node receives path search message from the source front node, then the next time cycle, it will receive the next data packet to be sent, so the delay is not so significant. 
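The backup path idea above — hellos propagated 3 or 4 hops that let the node upstream of a failure switch to an already known alternate sub-path — might be sketched as follows. The data shapes (`adj` as a locally learned adjacency map) and the name `local_repair_path` are our assumptions, not part of the paper's protocol, and free time slot bookkeeping is omitted for brevity.

```python
# Sketch (assumed data shapes) of local fallback routing over a topology
# learned from periodic K-hop hello messages. On a link failure, the
# upstream node searches its local view for a replacement sub-path
# instead of triggering a full route discovery.
from collections import deque

K = 3  # hello propagation radius suggested in the text (3 or 4 hops)

def local_repair_path(adj, src, dst, failed_link, max_hops=K):
    """BFS over the locally learned topology, skipping the failed link."""
    bad = {failed_link, failed_link[::-1]}  # treat the link as bidirectional
    queue = deque([(src, [src])])
    visited = {src}
    while queue:
        node, path = queue.popleft()
        if len(path) - 1 > max_hops:
            continue  # beyond the hello radius: topology unknown
        if node == dst:
            return path
        for nxt in adj.get(node, ()):
            if (node, nxt) in bad or nxt in visited:
                continue
            visited.add(nxt)
            queue.append((nxt, path + [nxt]))
    return None  # no backup within K hops: fall back to 3RA route repair

# Topology learned from hellos: the A-B link fails, but A-C-B survives.
adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(local_repair_path(adj, "A", "B", ("A", "B")))  # ['A', 'C', 'B']
```

In the full scheme, each candidate sub-path would additionally be filtered by the free time slot information carried in the hellos before it is used.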
So we claim here that smart buffering works well with 3RA, and that the delay improvement is roughly a further 50% beyond 3RA alone, without consuming more energy, especially in the TDMA case. In the original scheme, when there is no local path from the source front node to any destination path node, including the destination node itself, restoration takes far too long; in our scheme it still works fine. Table 4 compares our algorithm with the schemes explained in the related work section. In OJRA, the path search message takes 0.5 * t on average, and the next data packet is sent right away, taking 0.75 * t; the total is therefore only 1.25 * t. From the destination node's point of view, the previous packet took 0.5 * t, so subtraction gives a delay of 0.75 * t.
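The delay figures can be checked with a short calculation; the constants simply restate the assumptions in the paragraph above (path search 0.5 * t on average, next data packet at 0.75 * t, previous packet seen 0.5 * t after transmission), with t the paper's time unit.

```python
# Worked check of the OJRA delay figure. The constants restate the
# text's assumptions; they are not an independent measurement.
t = 1.0

path_search = 0.5 * t    # path search message: 0.5 * t on average
next_packet = 0.75 * t   # next data packet over the repaired path

total = path_search + next_packet   # time until the next packet: 1.25 * t

prev_packet = 0.5 * t               # last packet before the failure,
                                    # as seen at the destination
jitter_at_dst = total - prev_packet # delay seen by the destination: 0.75 * t

print(total, jitter_at_dst)  # 1.25 0.75
```

These are the "On average" entries for OJRA in Table 4.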
                  Best case       On average   Worst case       Etc.
Original AODV     2*t             3*t          4*t with FA      Does not work well
Internet Draft    Small           1.25 * t     More than 4*t    Works well with FA
QoS routing       Small           2.5 * t      4*t              Does not work well
OJRA              Half of Small   0.75 * t     2*t with FA      Works well with FA

Table 4. Average delay from a path failure

8. FUTURE WORK

A path failure can be a link failure or a node failure; only link failures are considered in this paper, and the node failure case should be studied further. Obviously, our simulation did not include a TDMA environment, so adding one will be the first thing to be done. In the current ns-2
package, TDMA is not fully supported. But whether the MAC is TDMA, CDMA, or any other scheme designed to distribute resources in real time among multiple parties, it can use the FA together with our algorithms to improve quality of service and reliability. Networks that mingle wired and wireless devices could also apply our algorithm to boost their performance. These areas should be investigated further.

Beyond the simulation environment, more detailed specifications of our protocol should be defined. For example, we did not use TDMA, and we did not describe in detail how time slots can be allotted for control messages or how much contention to expect. It may also happen that, because of node movement, a portion of the time slots becomes unusable; this is not the same as a link failure, and it can be handled by searching for a new path with as many free time slots as were lost. Another idea is to keep multiple paths for one flow, useful not only for time slot conflicts but also for ensuring minimal delay, though with reduced bandwidth.

Security issues are not considered here. Some traffic may be required to always pass through specific nodes in the network; AODV would then need to be modified, and our algorithm would face a new optimization challenge. For different service classes, we could make some nodes forward more traffic while others forward little. More actively, if we could control the movement and direction of each node, a whole new kind of MANET protocol could be devised; it could be very helpful, although the protocol might be very complex. In a battle environment, for example, it is often the case that officers order the positions of the soldiers. Not every direction or movement is possible, but such control can still help: when few alternate routes are available, a node can be told not to move much.

9. REFERENCES

[1] Elizabeth M. Royer, "A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks," IEEE, 1999.
[2] Chenxi Zhu et al., "QoS Routing for Mobile Ad Hoc Networks," IEEE, 2002.
[3] Charles E. Perkins et al., "Ad-hoc On-Demand Distance Vector Routing."
[4] Chenxi Zhu, "Medium Access Control and Quality-of-Service Routing for Mobile Ad Hoc Networks," PhD thesis, University of Maryland, College Park, 2001.
[5] AODV Home Page, http://moment.cs.ucsb.edu/AODV/aodv.html
[6] Charles E. Perkins et al., "Ad-hoc On-Demand Distance Vector Routing," draft-ietf-manet-aodv-13.txt, February 2003.
[7] Chai-Keong Toh, "A Novel Distributed Routing Protocol To Support Ad-Hoc Mobile Computing," IEEE, 1996.
[8] Rohit Dube et al., "Signal Stability based Adaptive Routing (SSA) for Ad-Hoc Mobile Networks," 1996.

