
A Guide to Bandwidth and Throughput

January 27th, 2008

Bandwidth and throughput are two networking concepts that are commonly misunderstood. System administrators regularly use these two concepts to help plan, design, and build new networks. Networking exams also include a few bandwidth and throughput questions, so brushing up on these two subjects is a good idea before exam day.
What is Bandwidth?

You probably already have a fairly good idea of what bandwidth is. It is technically defined as the amount of information that can flow through a network in a given period of time. This figure is theoretical, however: the actual bandwidth available to a particular device on the network is referred to as throughput (which we'll discuss further on in this section). Bandwidth can be compared to a highway in many respects. A highway can only carry a certain number of vehicles before traffic becomes congested. Likewise, bandwidth is finite: it has a limit to its capacity. If we widen the highway with additional lanes, more traffic can get through. The same applies to networks: we could, for example, upgrade a 56K modem to a DSL modem and get much higher transfer rates. Bandwidth is measured in bits per second (bps). This basic unit of measurement is fairly small, however, so you'll more often see bandwidth expressed in kilobits, megabits, or gigabits per second.

Make sure you understand the distinction between bits and bytes. A megabyte is certainly not the same as a megabit, although the two are abbreviated quite similarly (MB versus Mb). Since there are 8 bits in a byte, you can simply divide a number of bits by 8 to find the byte equivalent (or, to convert from bytes to bits, multiply by 8).
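
As a quick illustration of the conversion, here is a minimal sketch; the helper names and example figures are mine, not part of any standard library.

```python
# Minimal sketch: converting between bits and bytes (8 bits per byte).

def bits_to_bytes(bits):
    """Convert a quantity in bits to bytes."""
    return bits / 8

def bytes_to_bits(num_bytes):
    """Convert a quantity in bytes to bits."""
    return num_bytes * 8

# A "1 megabit" figure (10**6 bits) is only 125,000 bytes, i.e. 0.125 MB.
print(bits_to_bytes(1_000_000))   # 125000.0 bytes
print(bytes_to_bits(1_000_000))   # 8000000 bits in a (decimal) megabyte
```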

Lastly, it's important to make the distinction between speed and bandwidth. Bandwidth is simply how many bits we can transmit per second, not the speed at which they travel. The water-pipe analogy helps here: more water can be moved by installing a larger pipe, but the speed at which the water flows is largely unaffected.
The Difference between Throughput and Bandwidth

Although bandwidth tells us how much information a network can move in a given period of time, you'll find that actual network speeds are much lower. We use the term throughput to refer to the actual bandwidth available to a network, as opposed to its theoretical bandwidth. Several different factors affect the actual bandwidth a device gets: the number of users accessing the network, the physical media, the network topology, hardware capability, and many other aspects. To calculate a data transfer time, we use the equation Time = Size / Theoretical Bandwidth.

Keep in mind that the above equation gives the best-case download time. It assumes optimal network speeds and conditions, since it uses theoretical bandwidth. To get a better idea of the typical download time, we use a different equation: Time = Size / Actual Throughput.
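
To make the two equations concrete, here is a small sketch; the function name and the example figures are mine, chosen only for illustration.

```python
# Minimal sketch: best-case vs. typical transfer time.
# Time = Size / Bandwidth (best case) and Time = Size / Throughput (typical).

def transfer_time_seconds(size_bytes, rate_bps):
    """Return the time to move size_bytes at rate_bps (bits per second)."""
    return (size_bytes * 8) / rate_bps

size = 100 * 10**6            # a 100 MB file (assumed example)
theoretical = 100 * 10**6     # a "100 Mbps" link
actual = 40 * 10**6           # 40 Mbps of real-world throughput (assumed)

print(transfer_time_seconds(size, theoretical))  # 8.0 s, best case
print(transfer_time_seconds(size, actual))       # 20.0 s, more realistic
```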

Closing Comments

Essentially, you only need to know the basics of bandwidth and throughput for most exams. Try to commit the units of bandwidth to memory; you'll see them on exam day. Cisco also expects CCNA 1 students to know what determines throughput. (Reading back, you'll note that network topology, the number of users on the network, and other factors will indeed bring throughput down.) Lastly, remember the simple equation T = S / P to calculate file transfer time, where T is time, S is file size, and P is actual throughput.

Measuring throughput in a TCP/IP network


By Matthew Lehman, CCNA
May 9, 2001, 7:00am PDT

When you're making decisions about how to meet your organization's networking needs, it's essential that you have an accurate measure of throughput. This article will explain how you can estimate the maximum throughput in a TCP/IP network. Throughout the article, we will be using examples based on a fictitious video content company with offices in Seattle, San Francisco, and New York. Our goal will be to evaluate data transfer times and cost options for the organization so that we can select the appropriate telecommunications technology. In order to estimate throughput, we will need three pieces of data:

- A characterization of the traffic flows that will be moving across the network
- An estimate of network path delay and jitter
- The maximum size of the TCP window

Additional resources on this topic

- W. R. Stevens, TCP/IP Illustrated, Volume 1: The Protocols, Addison-Wesley, 1994
- G. R. Wright and W. R. Stevens, TCP/IP Illustrated, Volume 2: The Implementation, Addison-Wesley, 1995

Understanding traffic flows

The key to predicting network performance, and TCP performance in particular, is to have a healthy understanding of the type of traffic flows that occur on your network. A flow commonly consists of a source and destination address, a source and destination port, and the protocol type. However, we're going to modify that slightly to include:

- Source and destination
- Application protocol
- Symmetry of flow
- Sensitivity to time
- Either the total amount of data to be moved or a sustained data rate that needs to be achieved

You can use a variety of methods to obtain this type of information: network management software, network traffic traces, user interviews, other network engineers' assessments, and so on. In a large network, you will probably need to use all of these methods to get an accurate picture of traffic flows. Let's assume our video-editing company has a hub-and-spoke network that connects the three offices through San Francisco (see Figure A). The content producers in Seattle need to move a 10-GB package of video streams via FTP to the editors in New York. They want to do this several times a day. Currently, they can move this package only to San Francisco, so the editing department has been temporarily moved into the corporate offices, and no one is happy about the arrangement. A second group of content producers in San Francisco has the same requirements as the group in Seattle.

Figure A: hub-and-spoke network connecting the Seattle, San Francisco, and New York offices (diagram not reproduced)
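
As an aside, the flow characterization above lends itself to a small record type. Here is a sketch; the field names are mine, and the values are simply drawn from the scenario described in this article.

```python
# Minimal sketch: recording a traffic flow using the five attributes listed above.
from dataclasses import dataclass

@dataclass
class TrafficFlow:
    source: str
    destination: str
    application_protocol: str
    symmetry: str             # e.g. "highly asymmetric"
    time_sensitivity: str     # e.g. "non-real-time" vs. "real-time"
    data_volume_gb: float     # total data to move (could instead be a sustained rate)

# The Seattle-to-New-York video transfer described in the text:
video_push = TrafficFlow(
    source="Seattle",
    destination="New York",
    application_protocol="FTP",
    symmetry="highly asymmetric",
    time_sensitivity="non-real-time",
    data_volume_gb=10.0,
)
print(video_push)
```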

We know the source and destination of the flow, the application protocol (FTP), the symmetry (very asymmetrical), the sensitivity to time (non-real time), and the overall amount of data to move between the source and destination (10 GB). Armed with this data, we can now start looking at the network path delay.

Network path delay

Network path delay is the total time it takes a packet to be delivered between two points on a network. It is made up of many components, but for the purposes of estimation it can be simplified down to these few:

- Serialization delay of the individual circuits or networks
- Transmission delay of the individual circuits or networks
- Processing delay of the network devices

All circuits have a common characteristic known as serialization delay, which is the time it takes some unit of data to be serialized onto the circuit. It is directly related to the bandwidth of the circuit and the technology employed. For instance, if I have a DS3, a DS1, and a DS0, and I want to send a 1,500-byte packet (the maximum payload on a standard Ethernet link), the approximate serialization delay will be:

DS3: (1500 bytes * 8 bits/byte) / 44040192 bits/sec = 0.27 ms (approx)
DS1: (1500 bytes * 8 bits/byte) / 1572864 bits/sec = 8 ms (approx)
DS0: (1500 bytes * 8 bits/byte) / 65536 bits/sec = 183 ms (approx)

With some technologies, satellite in particular, the transmission delay of the circuit can be a factor as well. In fact, all circuits have some transmission delay, which varies with both distance and technology. A T1 leased line across town will likely not have any measurable transmission delay, while a T1 frame relay circuit across the continent will have delay on the order of 50 to 80 ms. Where you have measurable transmission delay, you will want to add it to the serialization delay to get a more accurate picture.

We also need to factor in the delay for any network devices that our traffic must go through to reach its final destination. This varies depending on the type of device and how busy the device is. There are really no hard-and-fast rules about how much delay a class of device will add. The safe assumption is that anywhere serious buffering occurs, there will also be noticeable delay. Examples of devices that do a significant amount of buffering are busy routers, traffic-shaping devices, firewalls, and so on.

To help illustrate how network path delay is estimated, we'll go back to our imaginary video content company network. We will assume that the average packet size on our flow from Seattle to New York is 1,500 bytes and that the average packet size on the return path is 64 bytes (analogous to the acknowledgment traffic of an FTP session). We will also add a millisecond of delay for each router, as they are not very busy devices. Our path delay looks like this:

Seattle to New York: Router + SD + Router + SD + TD + Router
1 ms + 0.27 ms + 1 ms + 8 ms + 60 ms + 1 ms = 71 ms (approx)

New York to Seattle: Router + SD + TD + Router + SD + Router
1 ms + 1 ms + 60 ms + 1 ms + 0 ms + 1 ms = 64 ms

Total path delay = 64 ms + 71 ms = 135 ms

If the traffic flow were real-time in nature, we would need to look at jitter as well. Jitter is the variation in network delay, and it has a substantial impact on the performance of real-time flows. Although it is difficult to estimate, it is relatively easy to predict areas of the network where jitter can occur and to engineer around those problem spots by using technologies with very stable delay characteristics.
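
The serialization figures above are easy to reproduce. Here is a small sketch using the same circuit rates the article uses; the function and dictionary names are mine and nothing here comes from a networking library.

```python
# Minimal sketch: serialization delay = packet size in bits / circuit rate in bits per second.
# Circuit rates below follow the article's approximations, not the nominal DS3/DS1/DS0 rates.

CIRCUIT_BPS = {
    "DS3": 44_040_192,   # ~44 Mb/s as used in the article
    "DS1": 1_572_864,    # ~1.5 Mb/s
    "DS0": 65_536,       # 64 kb/s
}

def serialization_delay_ms(packet_bytes, circuit_bps):
    """Time to clock packet_bytes onto a circuit of the given rate, in milliseconds."""
    return (packet_bytes * 8) / circuit_bps * 1000

for name, bps in CIRCUIT_BPS.items():
    print(f"{name}: {serialization_delay_ms(1500, bps):.2f} ms")
# DS3: 0.27 ms, DS1: 7.63 ms, DS0: 183.11 ms -- matching the rounded figures above

# One-way path delay is then just the sum of the per-hop components, e.g. Seattle to New York:
seattle_to_ny = (
    1 + serialization_delay_ms(1500, CIRCUIT_BPS["DS3"])    # router + DS3 serialization
    + 1 + serialization_delay_ms(1500, CIRCUIT_BPS["DS1"])  # router + T1 serialization
    + 60                                                    # T1 frame relay transmission delay
    + 1                                                     # final router
)
print(f"Seattle to New York: {seattle_to_ny:.0f} ms")       # ~71 ms
```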

Now that we have a reasonable estimate of path delay, we can look at TCP behavior in more depth.

TCP behavior

TCP has a number of features to compensate for network congestion and errors. The feature we are going to concentrate on is called windowing, which TCP uses to perform flow control. The TCP window is the amount of unacknowledged data, in bytes, that the sender can have outstanding. The window can be as large as 1 GB, but in practice it is usually much smaller, and the actual value is machine-specific. Many operating systems come with a default setting, which is usually a good value for estimating throughput. During the TCP connection request, the smaller of the sender's offered window size and the receiver's maximum window size is chosen for the connection. While actual throughput in a TCP connection is affected by a number of factors, the maximum possible throughput is determined by the window size and the network path delay. To calculate the maximum throughput of a TCP connection, we simply use the following equation:

(window size * 8 bits/byte) / (2 * delay) = maximum throughput

At our example company, we use Windows 2000 servers to transfer the data, and these servers have a default configuration of 64-KB windows. With a 64-KB window and a path delay of 135 ms, maximum throughput looks like this:

(64 KB * 8 b/B) / (2 * 0.135 s) = 1.90 Mb/s

So we can keep the frame relay circuit busy, but adding much more bandwidth without either decreasing the path delay or increasing the window size will not help the situation. Changing the maximum window size to 1 MB results in this throughput value:

(1 MB * 8 b/B) / (2 * 0.135 s) = 30 Mb/s

Let's assume that our CIO wants us to look into fractional DS3 service and a particular 10-Mb/s managed VPN service to connect San Francisco and New York. Preliminary testing shows that the managed VPN service increases the transmission delay to 210 ms, and during midday peak periods the delay climbs to 410 ms. Our path delay has now increased by about 300 ms off-peak and by about 700 ms during peak periods. We recalculate the throughput like this:

(1 MB * 8 b/B) / (2 * 0.435 s) = 9.195 Mb/s
(1 MB * 8 b/B) / (2 * 0.835 s) = 4.790 Mb/s

We've made a substantial increase in performance over the 1.90-Mb/s throughput we had before, but we are still not able to make optimal use of the VPN service. We could further increase the TCP window size settings on the two machines doing the transfer, but this might adversely affect other applications running on those machines.
Replacing the T1 frame relay circuit with a fractional DS3 running at 10.5 Mb/s (seven DS1s) would decrease both the serialization and transmission delay. This would push the window-limited throughput ceiling above 30 Mb/s, leaving the 10.5-Mb/s circuit itself as the limiting factor, and it would eliminate the peak-hour issue, albeit at an increase in cost.

Conclusion

Back to our example. The organization now has an accurate estimate of the transfer times for the 10-GB file between Seattle and New York. Based on our estimates, the transfer takes approximately 2.5 hours during off-peak times and five hours during peak times with the VPN solution, and just over two hours with the fractional DS3 solution. Had we not taken a look at delay, it's very possible the organization would have assumed that the two solutions were equal in terms of performance and made its selection solely on the basis of cost. If our goal had been to construct a network supporting an intranet consisting of mostly SMTP, HTTP, and occasional database transactions, the delay in our example would have been manageable and most of the end users of the network would not have been affected by the increased delay incurred in the VPN-based solution.
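
To tie the article's numbers together, here is a small sketch that reproduces its window-limited throughput formula and transfer-time estimates. The formula and figures come from the text above; the function names and the use of decimal megabytes are my assumptions.

```python
# Minimal sketch of the article's estimate:
# maximum throughput = (window size in bits) / (2 * path delay in seconds)

def max_tcp_throughput_bps(window_bytes, path_delay_s):
    """Window-limited throughput ceiling, per the article's formula."""
    return (window_bytes * 8) / (2 * path_delay_s)

def transfer_hours(size_bytes, throughput_bps):
    """Hours needed to move size_bytes at the given throughput."""
    return (size_bytes * 8) / throughput_bps / 3600

MB = 1_000_000               # decimal units, matching the article's arithmetic
window = 1 * MB              # 1-MB TCP window
ten_gb = 10 * 1_000 * MB     # the 10-GB video package

for label, delay in [("off-peak VPN", 0.435), ("peak VPN", 0.835)]:
    rate = max_tcp_throughput_bps(window, delay)
    print(f"{label}: {rate / 1e6:.2f} Mb/s, {transfer_hours(ten_gb, rate):.1f} h")
# off-peak VPN: 9.20 Mb/s, 2.4 h
# peak VPN:     4.79 Mb/s, 4.6 h   -- matching the ~2.5 and ~5 hours quoted above
```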

Test it yourself: Networks


By Allen Fear and Julie Rivera

Network testing and troubleshooting can be complicated, but it doesn't have to be if you have the right tools. One tool that we recommend at CNET Labs is Qcheck, a free software utility available from IXIA. CNET Labs uses IXIA's IxChariot for some of its network testing, and the same endpoint architecture that is built into IxChariot is also a part of Qcheck. In fact, Qcheck and IxChariot share the same endpoint software. You can run throughput and response time tests with Qcheck that should yield results comparable to those of our tests at CNET Labs. If you are considering an upgrade and would like to know how your current networking solution stacks up against the newest technologies, test your network using Qcheck, and compare your results with CNET Labs' performance tests.

How Qcheck works


Qcheck consists of two components: a console that allows you to initiate and monitor performance tests, and a performance endpoint that runs in the background on each system in the test. The console is used to instruct the first endpoint of an endpoint pair to execute a particular test. Upon completion of the test run, endpoint 1 passes the result back to the console, which displays it. You can export a detailed record of the test to an HTML document. Qcheck is capable of running several different kinds of tests, including throughput, response time, and streaming. It can also help you check traffic patterns via traceroute.

Installation

To run a Qcheck throughput test, you will need to download and install the Qcheck console. To use Qcheck, you also need IXIA's Performance Endpoint software installed on each networked computer you intend to test. As part of the installation process, the endpoint for Windows operating systems is automatically installed on the computer where you install Qcheck. Additional endpoints for other operating systems can be downloaded from IXIA's site. Both the endpoint and the console are free downloads. The console runs only on Windows; however, the endpoints run on a wide variety of platforms. This means you can use Qcheck to test networks consisting of machines running Windows, Mac, and Linux operating systems, as long as one of the machines is running Windows for the console.

Qcheck and firewalls


If you have a firewall filtering traffic on your network, you may have to take certain steps to get Qcheck to successfully run tests between endpoints. Software firewalls located on any of the systems running the console or endpoints can prevent Qcheck tests from completing. Qcheck uses fixed port numbers to communicate with the endpoints, making it easy to configure your firewall to allow Qcheck's data through. These ports are referenced in the Qcheck user guide.

Running a throughput test


After you have installed the console and the performance endpoints on networked machines, you are ready to test your network. Enter the IP addresses of the two machines on your network that you would like to test in the endpoint boxes at the top of the console's graphical user interface. The boxes are labeled "From Endpoint 1" and "To Endpoint 2." If you don't know the IP address of a Windows machine you would like to include in the test, open a command prompt on that machine and enter ipconfig. This calls up IP address information for that machine. A cluster of radio buttons in the center of the console allows you to select the network protocol and type of test you would like to run.
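
If you prefer a cross-platform way to look up a machine's primary IP address, here is a small Python sketch. It is not part of Qcheck; the 8.8.8.8 target is just an arbitrary routable address, and no packets are actually sent by a UDP connect.

```python
# Minimal sketch: discover the local machine's primary IP address.
# "Connecting" a UDP socket to a routable address sends no traffic, but it
# forces the OS to pick the outgoing interface, whose address we then read.
import socket

def primary_ip_address():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))      # any routable address works; nothing is transmitted
        return s.getsockname()[0]
    finally:
        s.close()

print(primary_ip_address())
```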


Select TCP as your protocol and Throughput as the test you would like to perform. You can also enter the number of iterations of the test you've selected and the data size to be transferred between endpoints. The default settings are appropriate for most home network scenarios, but on fast networks, such as 100-Mbps Ethernet, you may find that you need to increase the data size in order to get a reliable timing result. Increasing the default value by a multiple of 10 will generally do the trick; in the test described here, this would change the data size from 100 bytes to 1,000 bytes, but you can increase the data size up to 32,000 bytes. Larger data sizes increase the duration of the test.

Once you have selected your endpoints and chosen your test and protocol, you can begin a test by clicking the Run button at the bottom of the console. At this point, the console forwards your test instructions to endpoint 1, which extracts its instructions, sends the necessary test information to endpoint 2, and begins the test. During the test run, endpoint 1 collects the results and forwards that information to the console. The results appear in a dialog box at the bottom of the console. You can get more detailed information, including a log of your test configuration, by clicking the Details button to the right of the Run button. Run the test a few times to make sure that the results are consistent and accurate.

Next, select the UDP protocol and run the test again. Use the average of the TCP and UDP results to compare the throughput of your own network with CNET Labs' performance results. You may want to try running the test both in the absence of additional network traffic and during a simulated high-traffic situation with multiple simultaneous downloads. This will give you an idea of how well your network bears heavy traffic loads.
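
If you would like an independent, scripted sanity check alongside the Qcheck GUI, a rough TCP throughput measurement can be done with plain Python sockets. This is not Qcheck and does not use its endpoints; the port number and transfer size are arbitrary choices, and the sender-side timing is only a rough approximation.

```python
# Minimal sketch: a rough TCP payload-throughput check between two machines.
# Run "python tput.py recv" on machine A, then "python tput.py send <A's IP>"
# on machine B. Port 50007 is arbitrary; open it in any firewall first.
import socket
import sys
import time

PORT = 50007
CHUNK = 64 * 1024            # 64-KB writes
TOTAL = 100 * 1024 * 1024    # send 100 MB in total

def receiver():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):   # drain the stream until the sender closes it
                pass

def sender(host):
    payload = b"\x00" * CHUNK
    sent = 0
    start = time.time()
    with socket.create_connection((host, PORT)) as s:
        while sent < TOTAL:
            s.sendall(payload)
            sent += CHUNK
    elapsed = time.time() - start
    print(f"{sent * 8 / elapsed / 1e6:.1f} Mb/s payload throughput (rough)")

if __name__ == "__main__":
    receiver() if sys.argv[1] == "recv" else sender(sys.argv[2])
```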

Interpreting throughput measurements

You may have noticed that vendors of networking products typically advertise throughput speeds that are substantially higher than the data-transfer speeds you experience at home or read about in CNET's reviews. This is because vendors use theoretical throughput ratings that include the sum total of data transferred across the network under optimal conditions. The sum total of data in a file transfer is always more than the data that makes up the file. When you transfer a file from one computer to another, the data is broken up into packets, each of which includes not only the data of the file you are transferring, also called payload data, but also data associated with connection setup, packet headers, trailers, flow control, and packet retransmissions. In some ways, a data packet is like a letter you send through the postal system: in addition to the letter, you also send the envelope and the information on the envelope. But the letter analogy applies only to packets containing payload data. Some of the packets sent in the course of a file transfer don't include any payload data at all and are more like envelopes without letters. For example, a packet may carry information associated with more technical aspects of the file transfer, such as connection maintenance.

All of the data sent over the network that is not payload data is referred to as protocol overhead. When the manufacturer of an 802.11g system claims it offers 54-Mbps throughput, it's projecting the sum total of bits that the network is capable of transferring under optimal conditions, including all of the protocol overhead. At CNET Labs, we filter the protocol overhead out of our performance tests and report throughput in terms of the payload data only. The payload throughput seen at the application layer is often substantially lower than the total data throughput through the entire protocol stack. Payload throughput offers a much more realistic reflection of the speed of the network as you experience it. Qcheck's throughput test delivers this information as well, so it offers an ideal comparison to CNET Labs' tests. In the case of some networking products, the actual payload throughput can be less than 50 percent of the bandwidth advertised by the manufacturer.
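
To get a feel for how headers alone eat into an advertised rate, here is a rough back-of-the-envelope sketch. The header sizes are typical TCP/IPv4-over-Ethernet values, and real overhead also includes connection setup, ACK packets, and retransmissions, so actual efficiency is lower than this best case.

```python
# Minimal sketch: payload efficiency of a full-size TCP/IPv4 segment on Ethernet.
ETHERNET_HEADER = 14 + 4      # header plus frame check sequence, bytes
IP_HEADER = 20                # IPv4, no options
TCP_HEADER = 20               # no options
MTU = 1500                    # standard Ethernet payload

payload = MTU - IP_HEADER - TCP_HEADER
frame = MTU + ETHERNET_HEADER
efficiency = payload / frame
print(f"{payload} payload bytes per {frame}-byte frame "
      f"({efficiency:.1%} of the bits on the wire are payload)")
# About 96% in the best case; preamble, inter-frame gaps, ACKs, and
# retransmissions push real-world payload throughput lower still.
```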

Testing response time


Throughput is the most relevant indicator of performance for large file transfers, but another useful performance indicator for a network is response time, which refers to the amount of time it takes to send a request and receive a response over a network. Qcheck allows you to capture highly accurate response-time measurements because it uses the same protocols used by most applications, namely TCP and UDP.

Another common tool for measuring response time is ping. If you haven't already tried ping, you can do so on any Windows or Unix/Linux system (including Mac OS X systems) by opening a command prompt or a console and typing ping followed by a space and the IP address of any connected system. If you're not connected to a network, you can use the IP address of your local machine. Ping sends out a series of requests and displays the time of each response. But ping doesn't use TCP or UDP to measure response time; it uses the Internet Control Message Protocol (ICMP). The problem with using ICMP to measure response time is that this protocol is rarely used by TCP/IP applications, so the test is a bit artificial and can lead to inaccurate results. Across a small network, the difference in response time between ICMP and TCP may be negligible, but in some cases the gap can be significant. ICMP packets are smaller than TCP and UDP packets, so depending on the configuration of your networking equipment, ICMP and TCP may be handled differently by connecting devices such as routers. Running response tests using TCP or UDP gives you the opportunity to collect highly reliable measurements that yield results more in line with those of real-world applications and actual network use.

The parameters you can manipulate for a Qcheck response-time test are Protocol, Data Size, and the number of test Iterations. Another advantage of Qcheck is that it allows you to measure response time remotely from the console between any two endpoints on the network, whereas ping is executed from the local machine.
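
For a simple TCP-based alternative to ICMP ping, here is a small sketch that times a TCP connect handshake. This is not Qcheck's method, and the host address in the example is purely hypothetical.

```python
# Minimal sketch: measure TCP "response time" as the time to complete a connect
# handshake to a host and port, averaged over a few iterations.
import socket
import time

def tcp_response_time_ms(host, port=80, iterations=5):
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass                      # connection established, then closed immediately
        samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples)

# Example with a hypothetical local web server:
print(f"{tcp_response_time_ms('192.168.1.10', 80):.1f} ms average connect time")
```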

Running traceroute and streaming tests


You can do much more with a network than share files and printers. With the right equipment, you can build a network that can sustain real-time streaming media applications such as games, Voice over IP, or video broadcasts. One of the requirements for successful streaming is a minimum of latency on the network. Latency is the amount of time it takes for a packet to reach its destination after it is sent, and it increases when you have a bottleneck on your network, for example, a hub that is regularly pounded by heavy traffic on multiple ports or an access point operating at a low connection speed that is slow to process information. Dividing a connection's response time by two generally gives you a good idea of the latency, assuming the response retraces the path of the request, but this is not always the case. If the paths of the request and response differ, they may have different latencies, and you may need to consider options for reducing the latency of the slow path or redirecting the transmission to the faster path.

For streaming applications, data must be delivered quickly and reliably. If data is lost, you may experience a break in the stream, generally in the form of choppy video or poor sound quality. Qcheck's streaming tests measure the amount of lost data between endpoints. You can determine the actual path of a stream with Qcheck's traceroute tool, which shows you the number and order of hops a packet takes across the network, the round-trip time to each hop in the series, and the node name of each hop. Together, Qcheck's streaming test and traceroute features can help you diagnose and find solutions for latency-related network problems.

The right choice


Qcheck lets you examine the performance of your own network in a variety of ways and with a number of different protocols. It has a graphical user interface that is extremely easy to use, and it can tell you a lot about the condition of your network. More detailed information on Qcheck is available in a white paper by Jeffrey T. Hicks and John Q. Walker. Whether you are trying to troubleshoot a problem or just want to compare your own network throughput with CNET Labs' results for the latest networking solutions, Qcheck is the right choice for throughput and response-time testing on your network.

