iPerf - The ultimate speed test tool for TCP, UDP and SCTP
Test the limits of your network + Internet neutrality test

Table of contents:
1. Change between iPerf 2.0, iPerf 3.0 and iPerf 3.1
2. iPerf 3 user documentation
3. Change between iPerf 2.0.6, iPerf 2.0.7 and iPerf 2.0.8
4. iPerf 2 user documentation

Change between iPerf 2.0, iPerf 3.0 and iPerf 3.1

+ iPerf2 features currently supported by iPerf3:
  - TCP and UDP tests
  - Set port (-p)
  - Setting TCP options: no delay, MSS, etc.
  - Setting UDP bandwidth (-b)
  - Setting socket buffer size (-w)
  - Reporting intervals (-i)
  - Setting the iPerf buffer (-l)
  - Bind to specific interfaces (-B)
  - IPv6 tests (-6)
  - Number of bytes to transmit (-n)
  - Length of test (-t)
  - Parallel streams (-P)
  - Setting DSCP/TOS bit vectors (-S)
  - Change number output format (-f)

+ New features in iPerf 3.0:
  - Dynamic server (client/server parameter exchange) - most server options from iPerf2 can now be dynamically set by the client
  - Client/server results exchange
  - An iPerf3 server accepts a single client simultaneously (iPerf2 accepts multiple clients simultaneously)
  - iPerf API (libiperf) - provides an easy way to use, customize and extend iPerf functionality
  - -R : reverse test mode - server sends, client receives
  - -O, --omit N : omit the first n seconds (to ignore TCP slow start)
  - -b, --bandwidth n[KM] for TCP (UDP only in iPerf2) : set target bandwidth to n bits/sec (default 1 Mbit/sec for UDP, unlimited for TCP)
  - -V, --verbose : more detailed output than before
  - -J, --json : output in JSON format
  - -Z, --zerocopy : use a "zero copy" sendfile() method of sending data; this uses much less CPU
  - -T, --title str : prefix every output line with this string
  - -F, --file name : xmit/recv the specified file
  - -A, --affinity n/n,m : set CPU affinity (cores are numbered from 0 - Linux and FreeBSD only)
  - -k, --blockcount #[KMG] : number of blocks (packets) to transmit (instead of -t or -n)
  - -4, --version4 : only use IPv4
  - -6, --version6 : only use IPv6
  - -L, --flowlabel : set IPv6 flow label (Linux only)
  - -C, --linux-congestion : set congestion control algorithm (Linux and FreeBSD only) (-Z in iPerf2)
  - -d, --debug : emit debugging output. Primarily (perhaps exclusively) of use to developers.
  - -s, --server : iPerf2 can handle multiple client requests; iPerf3 will only allow one iperf connection at a time

+ New features in iPerf 3.1:
  - -I, --pidfile file : write a file with the process ID; most useful when running as a daemon
  - --cport port : specify the client-side port
  - --sctp : use SCTP rather than TCP (Linux, FreeBSD and Solaris)
  - --udp-counters-64bit : support very long-running UDP tests, which could otherwise cause a counter to overflow
  - --logfile file : send output to a log file

+ iPerf2 features not supported by iPerf3:
  - Bidirectional testing (-d / -r)
  - Data transmitted from stdin (-I)
  - TTL: time-to-live, for multicast (-T)
  - Exclude C(connection) D(data) M(multicast) S(settings) V(server) reports (-x)
  - Report as Comma-Separated Values (-y)
  - Compatibility mode for use with older versions of iPerf (-C)

iPerf 3 user documentation

GENERAL OPTIONS

-p, --port n : The server port for the server to listen on and the client to connect to. This should be the same in both client and server. Default is 5201.
--cport n : Option to specify the client-side port. (new in iPerf 3.1)
-f, --format [kmKM] : A letter specifying the format to print bandwidth numbers in. Supported formats are: 'k' = Kbits/sec, 'K' = KBytes/sec, 'm' = Mbits/sec, 'M' = MBytes/sec. The adaptive formats choose between kilo- and mega- as appropriate.
-i, --interval n : Sets the interval time in seconds between periodic bandwidth, jitter, and loss reports. If non-zero, a report is made every interval seconds of the bandwidth since the last report.
If zero, no periodic reports are printed. Default is zero.
-F, --file name : Client-side: read from the file and write to the network, instead of using random data. Server-side: read from the network and write to the file, instead of throwing the data away.
-A, --affinity n/n,m : Set the CPU affinity, if possible (Linux and FreeBSD only). On both the client and server you can set the local affinity by using the n form of this argument (where n is a CPU number). In addition, on the client side you can override the server's affinity for just that one test, using the n,m form of argument. Note that when using this feature, a process will only be bound to a single CPU (as opposed to a set containing potentially multiple CPUs).
-B, --bind host : Bind to host, one of this machine's addresses. For the client this sets the outbound interface. For a server this sets the incoming interface. This is only useful on multihomed hosts, which have multiple network interfaces.
-V, --verbose : Give more detailed output.
-J, --json : Output in JSON format.
--logfile file : Send output to a log file. (new in iPerf 3.1)
-d, --debug : Emit debugging output. Primarily (perhaps exclusively) of use to developers.
-v, --version : Show version information and quit.
-h, --help : Show a help synopsis and quit.

SERVER SPECIFIC OPTIONS

-s, --server : Run iPerf in server mode. (This will only allow one iperf connection at a time.)
-D, --daemon : Run the server in background as a daemon.
-I, --pidfile file : Write a file with the process ID; most useful when running as a daemon. (new in iPerf 3.1)

CLIENT SPECIFIC OPTIONS

-c, --client host : Run iPerf in client mode, connecting to an iPerf server running on host.
--sctp : Use SCTP rather than TCP (Linux, FreeBSD and Solaris). (new in iPerf 3.1)
-u, --udp : Use UDP rather than TCP. See also the -b option.
-b, --bandwidth n[KM] : Set target bandwidth to n bits/sec (default 1 Mbit/sec for UDP, unlimited for TCP).
If there are multiple streams (-P flag), the bandwidth limit is applied separately to each stream. You can also add a '/' and a number to the bandwidth specifier. This is called "burst mode". It will send the given number of packets without pausing, even if that temporarily exceeds the specified bandwidth limit.
-t, --time n : The time in seconds to transmit for. iPerf normally works by repeatedly sending an array of len bytes for time seconds. Default is 10 seconds. See also the -l, -k and -n options.
-n, --num n[KM] : The number of buffers to transmit. Normally, iPerf sends for 10 seconds. The -n option overrides this and sends an array of len bytes num times, no matter how long that takes. See also the -l, -k and -t options.
-k, --blockcount n[KM] : The number of blocks (packets) to transmit (instead of -t or -n). See also the -t, -l and -n options.
-l, --length n[KM] : The length of buffers to read or write. iPerf works by writing an array of len bytes a number of times. Default is 128 KB for TCP, 8 KB for UDP. See also the -n, -k and -t options.
-P, --parallel n : The number of simultaneous connections to make to the server. Default is 1.
-R, --reverse : Run in reverse mode (server sends, client receives).
-w, --window n[KM] : Sets the socket buffer sizes to the specified value. For TCP, this sets the TCP window size. (This gets sent to the server and used on that side too.)
-M, --set-mss n : Attempt to set the TCP maximum segment size (MSS). The MSS is usually the MTU - 40 bytes for the TCP/IP header. For ethernet, the MSS is 1460 bytes (1500 byte MTU).
-N, --no-delay : Set the TCP no delay option, disabling Nagle's algorithm. Normally this is only disabled for interactive applications like telnet.
-4, --version4 : Only use IPv4.
-6, --version6 : Only use IPv6.
-S, --tos n : The type-of-service for outgoing packets. (Many routers ignore the TOS field.) You may specify the value in hex with a '0x' prefix, in octal with a '0' prefix, or in decimal. For example, '0x10' hex = '020' octal = '16' decimal.
The TOS numbers specified in RFC 1349 are:

    IPTOS_LOWDELAY     minimize delay        0x10
    IPTOS_THROUGHPUT   maximize throughput   0x08
    IPTOS_RELIABILITY  maximize reliability  0x04
    IPTOS_LOWCOST      minimize cost         0x02

-L, --flowlabel n : Set the IPv6 flow label (currently only supported on Linux).
-Z, --zerocopy : Use a "zero copy" method of sending data, such as sendfile(2), instead of the usual write(2). This uses much less CPU.
-O, --omit n : Omit the first n seconds of the test, to skip past the TCP slow-start period.
-T, --title str : Prefix every output line with this string.
-C, --congestion algo : Set the congestion control algorithm (Linux only for iPerf 3.0, Linux and FreeBSD for iPerf 3.1).

See also https://github.com/esnet/iperf

Change between iPerf 2.0.6, iPerf 2.0.7 and iPerf 2.0.8

2.0.6 change set (rjmcmahon@rjmcmahon.com), March 2014:
- Increase the shared memory for report headers, reducing mutex contention. Needed to increase performance. Minor code change that should be platform/OS independent.

2.0.7 change set (rjmcmahon@rjmcmahon.com), August 2014:
- Linux only version which supports end/end latency (assumes clocks synched)
- Support for smaller report intervals (5 milliseconds or greater)
- End/end latency with UDP (mean/min/max), displayed in milliseconds with a resolution of microseconds
- Socket read timeouts (server only) so iperf reports occur regardless of no received packets
- Report timestamps now display millisecond resolution
- Local bind supports a port value using a colon as delimiter (-B 10.10.10.1:60001)
- Use the Linux realtime scheduler and packet level timestamps for improved latency accuracy
- Suggest PTP on client and server to synch clocks to the microsecond
- Suggest a quality reference for the PTP grandmaster, such as a GPS disciplined oscillator from companies like Spectracom

2.0.8 change set (as of 12 January 2015):
- Fix portability; compile and test with Linux, Win10, Win7, WinXP, MacOS and Android
- Client now requires -u for UDP (no longer defaults to UDP with -b)
- Maintain legacy report formats
- Support for -e to get enhanced reports
- Support TCP rate limited streams (via -b) using a token bucket
- Support packets per second (UDP) via pps as units (e.g. -b 1000pps)
- Display PPS in both client and server reports (UDP)
- Support the realtime scheduler as a command line option (--realtime or -z)
- Improve the client tx code path so the actual tx offered rate will converge to the -b value
- Improve accuracy of microsecond delay calls (in a platform independent manner) (use of a Kalman filter to predict delay errors and adjust delays per predicted error)
- Display target loop time in the initial client header (UDP)
- Fix the final latency report sent from server to client (UDP)
- Include standard deviation in latency output
- Suppress unrealistic latency output (-/-/-/-)
- Support SO_SNDTIMEO on send so a socket write won't block beyond -t (TCP)
- Use clock_gettime if available (preferred over gettimeofday())
- TCP write and error counts (TCP retries and CWND for Linux)
- TCP read count, TCP read histogram (8 bins)
- Server will close the socket after -t seconds of no traffic

See also https://sourceforge.net/projects/iperf2/

iPerf 2 user documentation

- Tuning a TCP connection
- Tuning a UDP connection
- Running multicast servers and clients
- IPv6 Mode
- Representative Streams
- Running the server as a daemon
- Running iPerf as a Windows Service
- Adaptive window sizes
- Compiling

GENERAL OPTIONS

-f, --format [bkmaBKMA] ($IPERF_FORMAT) : A letter specifying the format to print bandwidth numbers in. Supported formats are:
    'b' = bits/sec              'B' = Bytes/sec
    'k' = Kbits/sec             'K' = KBytes/sec
    'm' = Mbits/sec             'M' = MBytes/sec
    'a' = adaptive bits/sec     'A' = adaptive Bytes/sec
The adaptive formats choose between kilo- and mega- as appropriate. Fields other than bandwidth always print bytes, but otherwise follow the requested format. Default is 'a'.
NOTE: here Kilo = 1024, Mega = 1024^2 and Giga = 1024^3 when dealing with bytes.
Commonly in networking, Kilo = 1000, Mega = 1000^2, and Giga = 1000^3, so we use these when dealing with bits. If this really bothers you, use -f b and do the math.
-i, --interval # ($IPERF_INTERVAL) : Sets the interval time in seconds between periodic bandwidth, jitter, and loss reports. If non-zero, a report is made every interval seconds of the bandwidth since the last report. If zero, no periodic reports are printed. Default is zero.
-l, --len #[KM] ($IPERF_LEN) : The length of buffers to read or write. iPerf works by writing an array of len bytes a number of times. Default is 8 KB for TCP, 1470 bytes for UDP. Note: for UDP, this is the datagram size and needs to be lowered to 1450 or less when using IPv6 addressing, to avoid fragmentation. See also the -n and -t options.
-m, --print_mss ($IPERF_PRINT_MSS) : Print the reported TCP MSS size (via the TCP_MAXSEG option) and the observed read sizes, which often correlate with the MSS. The MSS is usually the MTU - 40 bytes for the TCP/IP header. Often a slightly smaller MSS is reported because of extra header space from IP options. The interface type corresponding to the MTU is also printed (ethernet, FDDI, etc.). This option is not implemented on many OSes, but the read sizes may still indicate the MSS.
-p, --port # ($IPERF_PORT) : The server port for the server to listen on and the client to connect to. This should be the same in both client and server. Default is 5001, the same as ttcp.
-u, --udp ($IPERF_UDP) : Use UDP rather than TCP. See also the -b option.
-w, --window #[KM] ($TCP_WINDOW_SIZE) : Sets the socket buffer sizes to the specified value. For TCP, this sets the TCP window size. For UDP it is just the buffer which datagrams are received in, and so limits the largest receivable datagram size.
-B, --bind host ($IPERF_BIND) : Bind to host, one of this machine's addresses. For the client this sets the outbound interface. For a server this sets the incoming interface. This is only useful on multihomed hosts, which have multiple network interfaces.
For iPerf in UDP server mode, this is also used to bind and join to a multicast group. Use addresses in the range 224.0.0.0 to 239.255.255.255 for multicast. See also the -T option.
-C, --compatibility ($IPERF_COMPAT) : Compatibility mode allows for use with older versions of iPerf. This mode is not required for interoperability, but it is highly recommended. In some cases when using representative streaming you could cause a 1.7 server to crash or cause undesired connection attempts.
-M, --mss #[KM] ($IPERF_MSS) : Attempt to set the TCP maximum segment size (MSS) via the TCP_MAXSEG option. The MSS is usually the MTU - 40 bytes for the TCP/IP header. For ethernet, the MSS is 1460 bytes (1500 byte MTU). This option is not implemented on many OSes.
-N, --nodelay ($IPERF_NODELAY) : Set the TCP no delay option, disabling Nagle's algorithm. Normally this is only disabled for interactive applications like telnet.
-V (from v1.6 or higher) : Bind to an IPv6 address.
    Server side: $ iperf -s -V
    Client side: $ iperf -c <server IPv6 address> -V
    Note: On version 1.6.3 and later a specific IPv6 address does not need to be bound with the -B option; previous 1.6 versions do. Also, on most OSes, using this option will also respond to IPv4 clients using IPv4-mapped addresses.
-h, --help : Print out a summary of commands and quit.
-v, --version : Print version information and quit. Prints 'pthreads' if compiled with POSIX threads, 'win32 threads' if compiled with Microsoft Win32 threads, or 'single threaded' if compiled without threads.

SERVER SPECIFIC OPTIONS

-s, --server ($IPERF_SERVER) : Run iPerf in server mode. (iPerf2 can handle multiple client requests.)
-D (from v1.2 or higher) : Run the server as a daemon (Unix platforms). On Win32 platforms where services are available, iPerf will start running as a service.
-R (Windows only, from v1.2 or higher) : Remove the iPerf service (if it's running).
-o (Windows only, from v1.2 or higher) : Redirect output to the given file.
-c, --client host ($IPERF_CLIENT) : If iPerf is in server mode, then specifying a host with -c will limit the connections that iPerf will accept to the host specified. Does not work well for UDP.
-P, --parallel # ($IPERF_PARALLEL) : The number of connections to handle by the server before closing. Default is 0 (which means to accept connections forever).

CLIENT SPECIFIC OPTIONS

-b, --bandwidth #[KM] ($IPERF_BANDWIDTH) : The UDP bandwidth to send at, in bits/sec. This implies the -u option. Default is 1 Mbit/sec.
-c, --client host ($IPERF_CLIENT) : Run iPerf in client mode, connecting to an iPerf server running on host.
-d, --dualtest ($IPERF_DUALTEST) : Run iPerf in dual testing mode. This will cause the server to connect back to the client, on the port specified in the -L option (or, by default, the port the client connected to the server on). This is done immediately, therefore running the tests simultaneously. If you want an alternating test, try -r.
-n, --num #[KM] ($IPERF_NUM) : The number of buffers to transmit. Normally, iPerf sends for 10 seconds. The -n option overrides this and sends an array of len bytes num times, no matter how long that takes. See also the -l and -t options.
-r, --tradeoff ($IPERF_TRADEOFF) : Run iPerf in tradeoff testing mode. This will cause the server to connect back to the client, on the port specified in the -L option (or, by default, the port the client connected to the server on). This is done following the client connection termination, therefore running the tests alternating. If you want a simultaneous test, try -d.
-t, --time # ($IPERF_TIME) : The time in seconds to transmit for. iPerf normally works by repeatedly sending an array of len bytes for time seconds. Default is 10 seconds. See also the -l and -n options.
-L, --listenport # ($IPERF_LISTENPORT) : This specifies the port that the server will connect back to the client on. It defaults to the port used to connect to the server from the client.
-P, --parallel # ($IPERF_PARALLEL) : The number of simultaneous connections to make to the server. Default is 1. Requires thread support on both the client and server.
-S, --tos # ($IPERF_TOS) : The type-of-service for outgoing packets. (Many routers ignore the TOS field.) You may specify the value in hex with a '0x' prefix, in octal with a '0' prefix, or in decimal. For example, '0x10' hex = '020' octal = '16' decimal. The TOS numbers specified in RFC 1349 are:

    IPTOS_LOWDELAY     minimize delay        0x10
    IPTOS_THROUGHPUT   maximize throughput   0x08
    IPTOS_RELIABILITY  maximize reliability  0x04
    IPTOS_LOWCOST      minimize cost         0x02

-T, --ttl # ($IPERF_TTL) : The time-to-live for outgoing multicast packets. This is essentially the number of router hops to go through, and is also used for scoping. Default is 1, link-local.
-F (from v1.2 or higher) : Use a representative stream to measure bandwidth, e.g. $ iperf -c <server address> -F <file-name>
-I (from v1.2 or higher) : Same as -F, input from stdin.

Tuning a TCP connection

The primary goal of iPerf is to help in tuning TCP connections over a particular path. The most fundamental tuning issue for TCP is the TCP window size, which controls how much data can be in the network at any one point. If it is too small, the sender will be idle at times and get poor performance. The theoretical value to use for the TCP window size is the bandwidth-delay product:

    bottleneck bandwidth * round trip time

In the below modi4/cyclops example, the bottleneck link is a 45 Mbit/sec DS3 link and the round trip time measured with ping is 42 ms. The bandwidth-delay product is

    45 Mbit/sec * 42 ms = (45e6) * (42e-3) = 1890000 bits = 230 KByte

That is a starting point for figuring the best window size; setting it higher or lower may produce better results. In our example, buffer sizes over 130K did not improve the performance, despite the bandwidth-delay product of 230K. Note that many OSes and hosts have upper limits on the TCP window size. These may be as low as 64 KB, or as high as several MB.
iPerf tries to detect when these occur and gives a warning that the actual and requested window sizes are not equal (as below, though that is due to rounding in IRIX). For more information on TCP window sizes, see Lafibre.info.

Here is an example session, between node1 in Illinois and node2 in North Carolina. These are connected via the vBNS backbone and a 45 Mbit/sec DS3 link. Notice we improve bandwidth performance by a factor of 3 using proper TCP window sizes. Use the adaptive window sizes feature on platforms which allow setting window sizes in the granularity of bytes.

    node2> iperf -s
    Server listening on TCP port 5001
    TCP window size: 60.0 KByte (default)
    [ 4] local <IP Addr node2> port 5001 connected with <IP Addr node1> port 2357
    [ ID] Interval       Transfer     Bandwidth
    [ 4]  0.0-10.1 sec   6.5 MBytes   5.2 Mbits/sec

    node1> iperf -c node2
    Client connecting to node2, TCP port 5001
    TCP window size: 59.9 KByte (default)
    [ 3] local <IP Addr node1> port 2357 connected with <IP Addr node2> port 5001
    [ ID] Interval       Transfer     Bandwidth
    [ 3]  0.0-10.0 sec   6.5 MBytes   5.2 Mbits/sec

    node2> iperf -s -w 130k
    Server listening on TCP port 5001
    TCP window size: 130 KByte
    [ 4] local <IP Addr node2> port 5001 connected with <IP Addr node1> port 2530
    [ ID] Interval       Transfer     Bandwidth
    [ 4]  0.0-10.1 sec  19.7 MBytes  15.7 Mbits/sec

    node1> iperf -c node2 -w 130k
    Client connecting to node2, TCP port 5001
    TCP window size: 129 KByte (WARNING: requested 130 KByte)
    [ 3] local <IP Addr node1> port 2530 connected with <IP Addr node2> port 5001
    [ ID] Interval       Transfer     Bandwidth
    [ 3]  0.0-10.0 sec  19.7 MBytes  15.8 Mbits/sec

Another test to do is run parallel TCP streams. If the total aggregate bandwidth is more than what an individual stream gets, something is wrong. Either the TCP window size is too small, or the OS's TCP implementation has bugs, or the network itself has deficiencies. See above for TCP window sizes; otherwise diagnosing which it is can be somewhat difficult. If iPerf is compiled with pthreads, a single client and server can test this; otherwise set up multiple clients and servers on different ports.
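The bandwidth-delay product arithmetic above (45 Mbit/sec bottleneck, 42 ms round trip) can be sketched as a quick shell calculation. This is just an illustration of the formula, not part of iPerf itself:

```shell
# Bandwidth-delay product: bottleneck bandwidth * round trip time.
# Figures taken from the DS3 example: 45 Mbit/sec and 42 ms.
awk 'BEGIN {
    bw_bits_per_sec = 45e6    # bottleneck bandwidth in bits/sec
    rtt_sec         = 42e-3   # round trip time in seconds
    bdp_bits        = bw_bits_per_sec * rtt_sec
    # iPerf window sizes are in bytes; KByte here means 1024 bytes
    printf "BDP = %d bits = %d KByte\n", bdp_bits, bdp_bits / 8 / 1024
}'
```

This prints `BDP = 1890000 bits = 230 KByte`, matching the 230 KByte starting point computed above.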
Here's an example where a single stream gets 16.5 Mbit/sec, but two parallel streams together get 16.7 + 9.4 = 26.1 Mbit/sec, even when using large TCP window sizes:

    node2> iperf -s -w 300k
    Server listening on TCP port 5001
    TCP window size: 300 KByte
    [ 4] local <IP Addr node2> port 5001 connected with <IP Addr node1> port 6902
    [ ID] Interval       Transfer     Bandwidth
    [ 4]  0.0-10.2 sec  20.9 MBytes  16.5 Mbits/sec
    [ 4] local <IP Addr node2> port 5001 connected with <IP Addr node1> port 6911
    [ 5] local <IP Addr node2> port 5001 connected with <IP Addr node1> port 6912
    [ ID] Interval       Transfer     Bandwidth
    [ 5]  0.0-10.1 sec  21.0 MBytes  16.7 Mbits/sec
    [ 4]  0.0-10.3 sec  12.0 MBytes   9.4 Mbits/sec

    node1> ./iperf -c node2 -w 300k
    Client connecting to node2, TCP port 5001
    TCP window size: 299 KByte (WARNING: requested 300 KByte)
    [ 3] local <IP Addr node1> port 6902 connected with <IP Addr node2> port 5001
    [ ID] Interval       Transfer     Bandwidth
    [ 3]  0.0-10.2 sec  20.9 MBytes  16.4 Mbits/sec

    node1> iperf -c node2 -w 300k -P 2
    Client connecting to node2, TCP port 5001
    TCP window size: 299 KByte (WARNING: requested 300 KByte)
    [ 4] local <IP Addr node1> port 6912 connected with <IP Addr node2> port 5001
    [ 3] local <IP Addr node1> port 6911 connected with <IP Addr node2> port 5001
    [ ID] Interval       Transfer     Bandwidth
    [ 4]  0.0-10.1 sec  21.0 MBytes  16.6 Mbits/sec
    [ 3]  0.0-10.2 sec  12.0 MBytes   9.4 Mbits/sec

A secondary tuning issue for TCP is the maximum transmission unit (MTU). To be most effective, both hosts should support Path MTU Discovery. Hosts without Path MTU Discovery often use 536 as the MSS, which wastes bandwidth and processing time. Use the -m option to display what MSS is being used, and see if it matches what you expect. Often it is around 1460 bytes for ethernet.
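The MSS rule of thumb above (MSS = MTU minus 40 bytes for the IPv4 and TCP headers, with no options) reproduces both common cases, ethernet and the 576-byte minimum MTU; a one-line shell sketch:

```shell
# MSS = MTU - 40 (20-byte IPv4 header + 20-byte TCP header, no options)
for mtu in 1500 576; do
    echo "MTU $mtu -> MSS $((mtu - 40))"
done
```

This prints `MTU 1500 -> MSS 1460` and `MTU 576 -> MSS 536`, the two MSS values that appear in the example reports.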
Here is an example:

    node3> iperf -s -m
    Server listening on TCP port 5001
    TCP window size: 60.0 KByte (default)
    [ 4] local <IP Addr node3> port 5001 connected with port 1096
    [ ID] Interval       Transfer     Bandwidth
    [ 4]  0.0- 2.0 sec   1.8 MBytes   6.9 Mbits/sec
    [ 4] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
    [ 4] Read lengths occurring in more than 5% of reads:
    [ 4]  952 bytes read  219 times (16.2%)
    [ 4] 1448 bytes read 1128 times (83.6%)

Here is a host that doesn't support Path MTU Discovery. It will only send and receive small 576 byte packets.

    node4> iperf -s -m
    Server listening on TCP port 5001
    TCP window size: 32.0 KByte (default)
    [ 4] local <IP Addr node4> port 5001 connected with port 13914
    [ ID] Interval       Transfer     Bandwidth
    [ 4]  0.0- 2.3 sec   632 KBytes   2.1 Mbits/sec
    WARNING: Path MTU Discovery may not be enabled.
    [ 4] MSS size 536 bytes (MTU 576 bytes, minimum)
    [ 4] Read lengths occurring in more than 5% of reads:
    [ 4]  536 bytes read 308 times (58.4%)
    [ 4] 1072 bytes read  91 times (17.3%)
    [ 4] 1608 bytes read  29 times

iPerf supports other tuning options, which were added for exceptional network situations like HIPPI-to-HIPPI over ATM.

Tuning a UDP connection

iPerf creates a constant bit rate UDP stream. This is a very artificial stream, similar to voice communication but not much else.

You will want to adjust the datagram size (-l) to the size your application uses.

The server detects UDP datagram loss by ID numbers in the datagrams. Usually a UDP datagram becomes several IP packets. Losing a single IP packet will lose the entire datagram. To measure packet loss instead of datagram loss, make the datagrams small enough to fit into a single packet, using the -l option. The default size of 1470 bytes works for ethernet. Out-of-order packets are also detected.
(Out-of-order packets cause some ambiguity in the lost packet count; iPerf assumes they are not duplicate packets, so they are excluded from the lost packet count.) Since TCP does not report loss to the user, I find UDP tests helpful to see packet loss along a path.

Jitter calculations are continuously computed by the server, as specified by RTP in RFC 1889. The client records a 64 bit second/microsecond timestamp in the packet. The server computes the relative transit time as (server's receive time - client's send time). The client's and server's clocks do not need to be synchronized; any difference is subtracted out in the jitter calculation. Jitter is the smoothed mean of differences between consecutive transit times.

    node2> iperf -s -u -i 1
    Server listening on UDP port 5001
    Receiving 1470 byte datagrams
    UDP buffer size: 60.0 KByte (default)
    [ 4] local <IP Addr node2> port 5001 connected with <IP Addr node1> port 9726
    [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
    [ 4]  0.0- 1.0 sec   1.3 MBytes  10.0 Mbits/sec   ...       1/  894 (0.11%)
    [ 4]  1.0- 2.0 sec   1.3 MBytes  10.0 Mbits/sec   ...       0/  892 (0%)
    ...
    [ 4]  0.0-10.0 sec  12.5 MBytes  10.0 Mbits/sec   ...       1/ ...  (0.011%)

    node1> iperf -c node2 -u -b 10m
    Client connecting to node2, UDP port 5001
    Sending 1470 byte datagrams
    UDP buffer size: 60.0 KByte (default)
    [ 3] local <IP Addr node1> port 9726 connected with <IP Addr node2> port 5001

Notice the higher jitter due to datagram reassembly when using larger 32 KB datagrams, each split into 23 packets of 1500 bytes. The higher datagram loss seen here may be due to the burstiness of the traffic, which is 23 back-to-back packets and then a long pause, rather than evenly spaced individual packets.
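The RFC 1889 jitter estimate described above is an exponentially smoothed mean: for each packet, the absolute difference D between consecutive relative transit times updates the estimate as J = J + (|D| - J)/16. A small awk sketch of the estimator (the transit times below are made-up illustrative values, not taken from the session output):

```shell
# RFC 1889 jitter estimator: J += (|D| - J) / 16, where D is the
# difference between consecutive relative transit times.
# The transit times (ms) below are hypothetical illustration values.
echo "10.0 10.5 10.1 10.4 10.2" | awk '{
    j = 0
    for (i = 2; i <= NF; i++) {
        d = $i - $(i - 1)          # change in transit time
        if (d < 0) d = -d          # jitter uses |D|
        j += (d - j) / 16          # smoothing gain of 1/16 per RFC 1889
    }
    printf "jitter = %.3f ms\n", j
}'
```

For these inputs it prints `jitter = 0.078 ms`; each new transit-time difference only nudges the estimate by 1/16 of the gap, which is why a single reordered or delayed packet barely moves the reported jitter.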
    node2> iperf -s -u -l 32k -w 128k -i 1
    Server listening on UDP port 5001
    Receiving 32768 byte datagrams
    UDP buffer size: 128 KByte
    [ 3] local <IP Addr node2> port 5001 connected with <IP Addr node1> port 11303
    [ ID] Interval       Bandwidth       Jitter    Lost/Total Datagrams
    [ 3]  0.0- 1.0 sec  10.0 Mbits/sec  0.430 ms    0/  41 (0%)
    [ 3]  1.0- 2.0 sec   8.5 Mbits/sec  5.996 ms    6/  40 (15%)
    [ 3]  2.0- 3.0 sec   9.7 Mbits/sec  0.796 ms    1/  40 (2.5%)
    [ 3]  3.0- 4.0 sec  10.0 Mbits/sec  0.403 ms    0/  40 (0%)
    [ 3]  4.0- 5.0 sec  10.0 Mbits/sec  0.448 ms    0/  40 (0%)
    [ 3]  5.0- 6.0 sec  10.0 Mbits/sec  0.464 ms    0/  40 (0%)
    [ 3]  6.0- 7.0 sec  10.0 Mbits/sec  0.442 ms    0/  40 (0%)
    [ 3]  7.0- 8.0 sec  10.0 Mbits/sec  0.342 ms    0/  40 (0%)
    [ 3]  8.0- 9.0 sec  10.0 Mbits/sec  0.431 ms    0/  40 (0%)
    [ 3]  9.0-10.0 sec  10.0 Mbits/sec  0.407 ms    0/  40 (0%)
    [ 3]  0.0-10.0 sec   9.8 Mbits/sec  0.407 ms    7/ 401 (1.7%)

    node1> iperf -c node2 -b 10m -l 32k -w 128k
    Client connecting to node2, UDP port 5001
    Sending 32768 byte datagrams
    UDP buffer size: 128 KByte
    [ 3] local <IP Addr node1> port 11303 connected with <IP Addr node2> port 5001
    [ ID] Interval       Transfer     Bandwidth
    [ 3]  0.0-10.0 sec  12.5 MBytes  10.0 Mbits/sec
    [ 3] Sent 401 datagrams

Multicast

To test multicast, run several servers with the bind option (-B, --bind) set to the multicast group address. Run the client, connecting to the multicast group address and setting the TTL (-T, --ttl) as needed. Unlike normal TCP and UDP tests, multicast servers may be started after the client. In that case, datagrams sent before the server started show up as losses in the first periodic report (61 datagrams on node6 below).
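The Lost/Total percentages in these reports are simply lost/total. For instance, the 61 datagrams that a late-starting multicast server misses out of 447 sent come out at about 14% (a plain arithmetic sketch, not iPerf code):

```shell
# Loss percentage as printed in the Lost/Total Datagrams column
awk 'BEGIN {
    lost = 61; total = 447   # figures from the multicast example
    printf "%d/%d (%.0f%%)\n", lost, total, 100 * lost / total
}'
```

This prints `61/447 (14%)`, the figure shown in the late-starting server's final summary line.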
    node5> iperf -c 224.0.67.67 -u --ttl 5 -t 5
    Client connecting to 224.0.67.67, UDP port 5001
    Sending 1470 byte datagrams
    Setting multicast TTL to 5
    UDP buffer size: 32.0 KByte (default)
    [ 3] local <IP Addr node5> port 1025 connected with 224.0.67.67 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [ 3]  0.0- 5.0 sec   642 KBytes   1.0 Mbits/sec
    [ 3] Sent 447 datagrams

    node5> iperf -s -u -B 224.0.67.67 -i 1
    Server listening on UDP port 5001
    Binding to local address 224.0.67.67
    Joining multicast group 224.0.67.67
    Receiving 1470 byte datagrams
    UDP buffer size: 32.0 KByte (default)
    [ 3] local 224.0.67.67 port 5001 connected with <IP Addr node5> port 1025
    [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
    [ 3]  0.0- 1.0 sec   131 KBytes   1.0 Mbits/sec  0.007 ms    0/  91 (0%)
    [ 3]  1.0- 2.0 sec   128 KBytes   1.0 Mbits/sec  0.008 ms    0/  89 (0%)
    [ 3]  2.0- 3.0 sec   128 KBytes   1.0 Mbits/sec  0.010 ms    0/  89 (0%)
    [ 3]  3.0- 4.0 sec   128 KBytes   1.0 Mbits/sec  0.013 ms    0/  89 (0%)
    [ 3]  4.0- 5.0 sec   128 KBytes   1.0 Mbits/sec  0.008 ms    0/  89 (0%)
    [ 3]  0.0- 5.0 sec   642 KBytes   1.0 Mbits/sec  0.008 ms    0/ 447 (0%)

    node6> iperf -s -u -B 224.0.67.67 -i 1
    Server listening on UDP port 5001
    Binding to local address 224.0.67.67
    Joining multicast group 224.0.67.67
    Receiving 1470 byte datagrams
    UDP buffer size: 60.0 KByte (default)
    [ 3] local 224.0.67.67 port 5001 connected with <IP Addr node5> port 1025
    [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
    [ 3]  0.0- 1.0 sec   129 KBytes   1.0 Mbits/sec  0.778 ms   61/ 151 (40%)
    [ 3]  1.0- 2.0 sec   128 KBytes   1.0 Mbits/sec   ...        0/  89 (0%)
    [ 3]  2.0- 3.0 sec   128 KBytes   1.0 Mbits/sec  0.264 ms    0/  89 (0%)
    [ 3]  3.0- 4.0 sec   128 KBytes   1.0 Mbits/sec   ...        0/  89 (0%)
    [ 3]  0.0- ... sec   554 KBytes   1.0 Mbits/sec   ...       61/ 447 (14%)

Start multiple clients or servers as explained above, sending data to the same multicast server. (If you have multiple servers listening on the multicast address, each of the servers will be getting the data.)

IPv6 Mode

Get the IPv6 address of the node using the 'ifconfig' command. Use the -V option to indicate that you are using an IPv6 address. Please note that we need to explicitly bind the server address also.
Server side:

    $ iperf -s -V

Client side:

    $ iperf -c <server IPv6 address> -V

Note: iPerf version 1.6.2 and earlier require an IPv6 address to be explicitly bound with the -B option for the server.

Using Representative Streams to measure bandwidth

Use the -F or -I option. If you want to test how your network performs with compressed / uncompressed streams, just create representative streams and use the -F option to test it. (This is usually due to the link layer compressing data.)

The -F option is for file input. The -I option is for input from stdin. E.g.

    Client: $ iperf -c <server address> -F <file-name>
    Client: $ iperf -c <server address> -I

Running the server as a daemon

Use the -D command line option to run the server as a daemon. Redirect the output to a file. E.g. iperf -s -D > iperfLog. This will have the iPerf server running as a daemon and the server messages will be logged in the file iperfLog.

Using iPerf as a Service under Win32

There are three options for Win32:

    -o outputfilename : output the messages into the specified file
    -s -D : install iPerf as a service and run it
    -s -R : uninstall the iPerf service

Examples:

    iperf -s -D -o iperflog.txt : install the iPerf service and run it. Messages will be reported into '%windir%\system32\iperflog.txt'.
    iperf -s -R : uninstall the iPerf service if it is installed.

Note: If you want to restart the iPerf service after having killed it with the Microsoft Management Console or the Windows Task Manager, make sure to use the proper option in the service properties dialog.

Adaptive window sizes (under development)

Use the -W option on the client to run the client with an adaptive window size. Ensure that the server window size is fairly big for this option. E.g. if the server TCP window size is 8 KB, it does not help having a client TCP window size of 256 KB. A 256 KB server TCP window size should suffice for most high bandwidth networks. The client changes the TCP window size using a binary exponential algorithm.
This means that the TCP window size suggested may vary according to the traffic in the network; iPerf will suggest the best window size for the current network scenario.

Compiling

Once you have the distribution, on UNIX, unpack it using gzip and tar. That will create a new directory 'iperf-<version>' with the source files and documentation. iPerf compiles cleanly on many systems including Linux, SGI IRIX, HP-UX, Solaris, AIX, and Cray UNICOS. Use 'make' to configure for your OS and compile the source code.

    gunzip -c iperf-<version>.tar.gz | tar -xvf -
    cd iperf-<version>
    ./configure
    make

To install iPerf, use 'make install', which will ask you where to install it. To recompile, the easiest way is to start over. Do 'make distclean' then './configure; make'. See the Makefile for more options.
