
Homework #14 Solutions

cs349 -- Networks
Chapter 5
1) If a UDP datagram is sent from host A, port P to host B, port Q,
but at host B there is no process listening to port Q, then B is to
send back an ICMP Port Unreachable message to A. Like all ICMP
messages, this is addressed to A as a whole, not to port P on A.
(a) Give an example of when an application might want to receive such
ICMP messages.
If you are trying to connect to a server and you get no reply
and no error message, then you cannot tell whether the network
is down or the host running the server is down. If you get a
Port Unreachable message then you know the host is up but the
server is not running.
Two advantages: getting an ICMP message back is probably faster
than waiting for a timeout, and if you are scanning for a port
that is available, you want to receive the error messages.
(b) Find out what an application has to do, on the OS of your choice,
to receive such messages.
In UNIX, a process that is owned by root can open a "raw socket,"
which allows it to receive all incoming packets, including ICMP
packets addressed to the host as a whole. In theory, such a process
could figure out which user process caused the ICMP
error (the ICMP packet contains the header of the offending packet,
which contains the port number of the sender) and send it a signal.
In practice, though, user processes seldom get informed when they
cause an ICMP error.
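The decoding step described above can be sketched in Python. This is an illustrative helper (the function name is mine, not a system API); the layout follows RFC 792: an ICMP Destination Unreachable message carries the IP header and first 8 bytes of the offending datagram, which for UDP includes both port numbers.

```python
import struct

def parse_port_unreachable(icmp: bytes):
    """Parse an ICMP Destination Unreachable message (RFC 792).

    Returns (src_port, dst_port) of the offending UDP datagram,
    or None if this is not a Port Unreachable (type 3, code 3).
    """
    icmp_type, code = icmp[0], icmp[1]
    if icmp_type != 3 or code != 3:
        return None
    inner = icmp[8:]                    # skip type/code/checksum/unused
    ihl = (inner[0] & 0x0F) * 4         # inner IP header length in bytes
    udp = inner[ihl:ihl + 8]            # first 8 bytes = original UDP header
    src_port, dst_port = struct.unpack("!HH", udp[:4])
    return src_port, dst_port
```

A process reading these messages from a raw socket could use the recovered source port to find and signal the user process that sent the offending datagram.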
(c) Why might it not be a good idea to send such messages directly
back to the originating port P on A?
The biggest problem is that the sending process may no longer be
active, in which case the attempt to deliver the ICMP packet might
cause a new Port Unreachable message to be sent back, and so on
indefinitely.
8) The sequence number field in the TCP header is 32 bits long,
which is big enough to cover over 4 billion bytes of data. Even if
this many bytes were never transferred over a single connection,
why might the sequence number still wrap around from 2^32-1 to 0?
When a new connection starts, the sequence number is set to a random
value to help avoid interference between successive incarnations of
the same connection. As a result, the sequence number can wrap after
only a few packets, if you happen to start with a value close to
ffff ffff.
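A quick sketch of how modulo-2^32 sequence arithmetic behaves (the helper names are mine): starting from an unlucky initial sequence number near ffff ffff, the counter wraps almost immediately, yet ordering comparisons still work across the wrap.

```python
SEQ_MOD = 2 ** 32

def seq_add(seq, n):
    """Advance a 32-bit sequence number, wrapping modulo 2^32."""
    return (seq + n) % SEQ_MOD

def seq_after(a, b):
    """True if a is logically later than b, even across a wrap."""
    return 0 < (a - b) % SEQ_MOD < SEQ_MOD // 2

isn = 0xFFFFFFF0              # an unlucky random ISN near ffff ffff
wrapped = seq_add(isn, 100)   # wraps around to a small value
```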

9) You are hired to design a reliable byte-stream protocol that uses a
sliding window (like TCP). This protocol will run over a 100-Mbps
network. The RTT of the network is 100 ms, and the maximum segment
lifetime is 60 seconds.
(a) How many bits would you include in the AdvertisedWindow and
SequenceNum fields of your protocol header?
The advertised window should be big enough to keep the pipe full. We
want to be able to have one delay-bandwidth product in flight at a
time. That's (100 ms * 100 Mbps) = 10 Mb = 1.25 MB.
We need 21 bits to address 1.25 MB.
The sequence number should be big enough that it doesn't wrap around
during a maximum segment lifetime. In 60 seconds we can send
60 s * 100 Mbps = 6000 Mb = 750 MB of data. Here the sequence number
counts packets, not bytes, so we need to guess what the minimum packet
size is. Assuming it is 40 bytes, 750 MB is about 18.75 million
minimum-sized packets, so we want sequence numbers that are at least
25 bits long. (If the sequence number counted bytes, as TCP's does,
we would need 30 bits to cover 750 MB.)
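The sizing arithmetic can be checked mechanically. A sketch under the same assumptions as the text (100 Mbps, 100 ms RTT, 60 s MSL, 40-byte minimum packets):

```python
import math

BANDWIDTH_BPS = 100e6        # 100 Mbps
RTT_S = 0.100                # 100 ms
MSL_S = 60                   # maximum segment lifetime
MIN_PACKET_BYTES = 40        # assumed: IP header + TCP header

# Delay-bandwidth product: bytes in flight to keep the pipe full.
window_bytes = BANDWIDTH_BPS * RTT_S / 8            # 1.25 MB
adv_window_bits = math.ceil(math.log2(window_bytes))

# Data sendable in one maximum segment lifetime.
msl_bytes = BANDWIDTH_BPS * MSL_S / 8               # 750 MB
msl_packets = msl_bytes / MIN_PACKET_BYTES          # ~18.75 million
seq_bits_packets = math.ceil(math.log2(msl_packets))
seq_bits_bytes = math.ceil(math.log2(msl_bytes))    # TCP-style byte counting
```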
(b) How would you determine the numbers given above, and which values
might be less certain?
The bandwidth of the network depends on the hardware and can
be determined exactly.
The RTT of the network can be measured easily with a few test
packets (maybe with a range of sizes), but you have to keep
in mind that it varies a lot over time.
The maximum segment lifetime is a more or less arbitrary number
left to the discretion of the protocol designer.
I chose my own value for the minimum packet size. 40 bytes is
the size of an IP header + a TCP header.
11) Suppose TCP operates over a 1-Gbps link.
(a) How long would it take for the TCP sequence numbers to wrap
around completely?
TCP sequence numbers count bytes, so the sequence space covers
2^32 bytes.
2^32 bytes * 8 bits per byte = 3.4 * 10^10 bits
3.4 * 10^10 bits / 1 * 10^9 bits per second = 34 seconds
About half a minute.
(b) Suppose an added 32-bit timestamp field increments 1000 times
during the wraparound you found above. How long would it take for
the timestamp to wrap around?
If we were able to use all 32 bits of the timestamp, we would
multiply the wraparound time by about 4 billion. But since the
timestamp ticks 1000 times per cycle, we only get a factor of
about 4 million (2^32 / 1000), so the new wraparound time is
about 1.5 * 10^8 seconds, which is almost 5 years.
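Both wraparound times can be checked with a few lines, using the constants from the problem (2^32-byte sequence space, 1-Gbps link, timestamp ticking 1000 times per sequence wraparound):

```python
SEQ_SPACE_BYTES = 2 ** 32
LINK_BPS = 1e9

# (a) Time to send 2^32 bytes at 1 Gbps.
seq_wrap_s = SEQ_SPACE_BYTES * 8 / LINK_BPS       # ~34 seconds

# (b) The timestamp ticks 1000 times per sequence wraparound
#     and itself wraps after 2^32 ticks.
tick_s = seq_wrap_s / 1000
ts_wrap_s = SEQ_SPACE_BYTES * tick_s
ts_wrap_years = ts_wrap_s / (365 * 24 * 3600)     # ~4.7 years
```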