
Spirent Journal of Benchmark PASS Test Methodologies

August 2011 Edition



Introduction

Today's Devices Under Test (DUTs) represent complex, multi-protocol network elements with an emphasis on Quality of Service (QoS) and Quality of Experience (QoE) that scale to terabits of bandwidth across the switch fabric. The Spirent Catalogue of Test Methodologies is an element of the Spirent test ecosystem that helps answer the most critical Performance, Availability, Security and Scale (PASS) test cases. The Spirent test ecosystem and the Spirent Catalogue of Test Methodologies are intended to help development engineers and product verification engineers rapidly develop and test complex test scenarios.

How to Use This Journal

This journal provides test engineers with a battery of test cases for the Spirent test ecosystem. The journal is divided into sections by technology. Each test case has a Test Case ID that is universally unique across the ecosystem.

Tester Requirements

To determine the true capabilities and limitations of a DUT, the tests in this journal require a test tool that can measure router performance under realistic Internet conditions. It must be able to simultaneously generate wire-speed traffic, emulate the requisite protocols, and make real-time comparative performance measurements. High port density for cost-effective performance and stress testing is important to fully load switching fabrics and determine device and network scalability limits. In addition to these features, some tests require more advanced capabilities, such as:

- Integrated traffic, routing, and MPLS protocols (e.g., BGP, OSPF, IS-IS, RSVP-TE, LDP/CR-LDP) to advertise route topologies for large simulated networks with LSP tunnels while simultaneously sending traffic over those tunnels. Further, the tester should emulate the interrelationships between protocols through a topology.
- Emulation of service protocols (e.g., IGMPv3, PIM-SM, MP-iBGP).
- Correct single-pass testing with measurement of 41+ metrics per pass of a packet.
- Tunneling protocol emulation (L2TP) and protocol stacking.
- True stateful layer 2-7 traffic.
- Ability to over-subscribe traffic dynamically and observe the effects.

Finally, the tester should provide conformance test suites for ensuring protocol conformance and interoperability, and automated applications for rapidly executing the test cases in this journal.

Further Resources

Additional resources are available on our website at http://www.spirent.com


Table of Contents
Testing Industry Standards
  BNCH_001 RFC 3918 Multicast Join/Leave Latency Test
  BNCH_002 CloudMark Virtualized Performance Benchmark
  BNCH_003 EtherSAM (ITU-T Y.1564) EBS and CBS Burst Test with TurboQoS
  BNCH_004 EtherSAM (ITU-T Y.1564) Service Configuration Ramp Test with TurboQoS
  BNCH_005 EtherSAM (ITU-T Y.1564) Service Performance Test with TurboQoS

Appendix A Telecommunications Definitions


Testing Industry Standards


To provide a standard way of measuring and evaluating a Device Under Test (DUT), the industry has created a series of benchmarks. These benchmarks establish a uniform procedure for the generation of traffic to and from the DUT with a normalized procedure of analysis and reporting. The goal of a benchmark is to generate metrics in a reproducible and unbiased fashion for comparability. Benchmarking standards can come from any organization for potential industry-wide adoption. Two key organizations, the IEEE and the IETF, help set standards in the industry by coordinating recommendations. Through the RFC (Request for Comments) and WG (Working Group) system, sub-groups like the BMWG (Benchmarking Methodology Working Group) help coordinate and refine recommendations. Key recommendations such as RFC 2544, RFC 2889, and RFC 3918 have come from this process. In order to execute benchmarks, test and measurement equipment like Spirent TestCenter, Spirent Avalanche, and Spirent Landslide helps the user systematically generate, analyze, and report based on these industry-derived standards. Spirent TestCenter is especially architected to rapidly test and measure industry standards thanks to its microkernel-architected, high-port-density design, allowing up to 32 automated processes to execute in parallel per chassis. This document describes the methodologies associated with testing key industry standards. These constructive generic frameworks help users understand and execute key standards and reduce the time associated with benchmark testing.


BNCH_001 RFC 3918 Multicast Join/Leave Latency Test


Abstract
This test case determines the time it takes the DUT to start/stop forwarding multicast frames once it receives a successful IGMP group membership report/leave group message.

Description
This test is part of the RFC 3918 test suite; in this case it determines the time it takes a DUT to start/stop forwarding multicast frames from the time a successful IGMP group membership report/leave group message is sent. In this test, Spirent TestCenter ports act as both multicast clients and sources, with the DUT in between. The DUT shouldn't forward multicast traffic until it receives a request from the client, and it should process multicast Join/Leave requests from the client with the minimum possible latency. This is critical for presenting the user with a good quality of experience, especially with video applications.
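The two latencies reduce to simple differences between packet timestamps. A minimal sketch in Python, assuming hypothetical per-group timestamp fields (join_sent, first_frame_rx, leave_sent, last_frame_rx); Spirent TestCenter derives these internally, so this only illustrates the arithmetic:

```python
from dataclasses import dataclass

@dataclass
class GroupTimestamps:
    """Hypothetical per-group timestamps, in microseconds."""
    join_sent: float       # IGMP membership report transmitted
    first_frame_rx: float  # first multicast frame received after the join
    leave_sent: float      # IGMP leave group message transmitted
    last_frame_rx: float   # last multicast frame received after the leave

def join_latency_us(t: GroupTimestamps) -> float:
    # Time from the join message until the DUT starts forwarding.
    return t.first_frame_rx - t.join_sent

def leave_latency_us(t: GroupTimestamps) -> float:
    # Time from the leave message until the DUT stops forwarding.
    return t.last_frame_rx - t.leave_sent

g = GroupTimestamps(join_sent=0.0, first_frame_rx=850.0,
                    leave_sent=60_000_000.0, last_frame_rx=60_001_200.0)
print(f"Join latency:  {join_latency_us(g):.0f} us")   # 850 us
print(f"Leave latency: {leave_latency_us(g):.0f} us")  # 1200 us
```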

Target Users
All NEMs and service providers

Target Device Under Test (DUT)


Core Equipment

Reference
RFC 3918

Relevance
This test case showcases the DUT's capability to handle Join/Leave requests from the multicast clients.

Version
1.0

Test Category
Testing Benchmarking Standards

PASS
[x] Performance [x] Availability [ ] Security [ ] Scale


Required Tester Capabilities


The tester must support:
- RFC 3918
- Multicast protocols (IGMP/MLD)
- Results reporting capabilities in a template format with key metrics

Topology Diagram

Multicast Clients <-> DUT <-> Multicast Server

Test Procedure
1. Launch the RFC 3918 Wizard in the Spirent TestCenter GUI.
2. Select the Multicast Join/Leave Test.
3. Select the ports that will be used in the test.
   a. Configure multicast hosts and the group automatically with the wizard, or manually.
4. Select the endpoint mapping and the multicast source and client ports.
5. Configure the following parameters:
   a. Multicast client version (IGMP version).
   b. Join group delay.
   c. Leave group delay.
   d. Multicast message Tx rate.
   e. Multicast group base IP address, step, and how much it should increment in each step.
   f. Layer 4 header: None, TCP, or UDP.
      i. If TCP/UDP is selected, give a port number range.
   g. TOS or class of service.
   h. VLAN P-bit, if any.
   i. TTL.
   j. Latency type.
      i. Selection from LILO, FIFO, FILO, or LIFO.
   k. The multicast group distribution mode (Even or Weighted) between client ports, if more than 1.


6. Configure the test options.
   a. Number of trials to be run.
   b. Duration in seconds.
   c. Test start delay, so the DUT is able to ramp up.
   d. Frame size.
      i. Option to have Fixed, Random, Step, Custom, or iMIX.
   e. ARP/learning parameters.
   f. Results collection delay: after the test has finished, the analyzer waits this amount of time before it calculates the results.
7. Finish the wizard. It automatically creates a sequence of steps in the Command Sequencer.
8. Run the Command Sequencer and allow the Results Reporter to open when the first iteration has finished.
9. The Results Reporter tool launches when the first iteration is complete and displays the results in a pre-defined template with all the necessary information.
10. Create a PDF, HTML, or Excel report from the template if desired.
11. Run the test for IPv6/MLD if desired.
12. End of test case.

Control Variables & Relevance


Variable | Relevance | Default Value
Number of IGMP Groups | The more IGMP groups, the more processing required on the DUT | 1
IGMP Version | Normally Version 2 or 3 | 2
MLD Version | Normally Version 2 or 3 | 2
Number of Multicast Hosts | Defines the number of multicast clients per port; usually the number of multicast clients and groups are in a 1:1 ratio | 1
IGMP/MLD Group Addresses | The starting group address should be a Class D address; some Class D addresses are private and should not be used | 225.0.0.1
Multicast Group Address Step | Which octet to increment if there is more than one group | /8
TOS | Class of service for the multicast packets | 0
IP TTL | Time to live for the multicast packets | 10
Multicast Group Distribution Mode | How the groups are distributed among the ports, if more than one | Even
Latency Type | The way latency is calculated | FIFO


Key Measured Metrics


Metric | Relevance | Metric Unit
Join Latency | The time it takes for the first multicast packet to arrive on the client port from the moment a Join message is sent for the particular group | Microseconds
Leave Latency | The time it takes for the DUT to stop forwarding multicast packets for a particular group after a Leave has been sent | Microseconds

Desired Result
The DUT should be able to process the multicast Join/Leaves as fast as possible.

Analysis
The DUT should be able to process multicast Joins/Leaves as fast as possible, even when multiple Joins and Leaves happen together. Typically, Join latency will be lower than Leave latency, but there shouldn't be too large a gap between them. A high Join/Leave latency indicates a significant issue with the DUT's processing engine as well as its buffers.


BNCH_002 CloudMark Virtualized Performance Benchmark


Abstract
This benchmark tests the elements and systems of a cloud and ranks their performance. Cloud infrastructure is a complex, overlapping, and inter-related set of protocols and systems that work together as one system. The results provide an understanding of the performance of the cloud service that is independent of implementation.

Description
The cloud is composed of switch fabrics, virtual switch fabrics, physical network elements, virtual networking elements, physical servers, virtual servers, and client endpoints. A benchmark is required to measure the performance of the cloud infrastructure in a comparable, independent fashion. This benchmark measures that performance using the following test cases.

Cloud Infrastructure Reliability (CiR) is the failure rate of the cloud infrastructure in providing the environment that allows cloud protocols to operate without infrastructure-related errors. The generalized goal of 99.999% uptime means CiR <= 0.001%.

Cloud Infrastructure Quality of Experience (CiQoE) is the ratio of the Quality of Experience of the protocols flowing over the cloud infrastructure to that of a client connected to a server through a back-to-back cable. By expressing QoE as a set normalized against the back-to-back measurement, the local VM operating system, server implementation, etc. are factored out.

Cloud Infrastructure Quality of Experience Variance (CiQoE Variance) is the variance over time of the user experience. As a general rule, a variance measurement should be taken from a sample of 12 hours or longer. Further, this measurement determines the reliability of the cloud to act in a deterministic fashion.

Cloud Infrastructure Goodput (CiGoodput) measures the ability of the cloud to deliver a minimum bitrate across TCP.
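All four metrics reduce to simple ratios or dispersion statistics over measured samples. A minimal sketch of the arithmetic, with every input a hypothetical measurement supplied by the tester:

```python
from statistics import pstdev, pvariance

def cir_ratio(failures: int, cumulative_simusers: int) -> float:
    """CiR: infrastructure failure rate, as a percentage."""
    return 100.0 * failures / cumulative_simusers

def ciqoe(measured_qoe: float, baseline_qoe: float) -> float:
    """CiQoE: QoE through the cloud, normalized to the back-to-back baseline."""
    return measured_qoe / baseline_qoe

def ciqoe_variance(interval_impacts: list[float]) -> tuple[float, float]:
    """CiQoE Variance: variance and standard deviation of the per-interval impact."""
    return pvariance(interval_impacts), pstdev(interval_impacts)

# Example: 1 failure across 200,000 cumulative SimUsers -> 0.0005%,
# which satisfies the five-nines goal of CiR <= 0.001%.
print(cir_ratio(1, 200_000))
```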


The following traffic distribution, by cloud class and IP QoS DiffServ marking, is used as background traffic (Table 1):

Enterprise Campus Apps
- DiffServ EF (Real-Time): VoIP 15% (SIP+RTP+G.729A), Unicast Web Conference 2-Way (MPEG2-TS, VBR), SIP 5%
- DiffServ 0x31 (Critical): Routing 3% (OSPF Routing Updates 2%, BGP Updates 1%), Database 17% (Oracle SQLNet Updates), Corporate Web 2%, IMAP4 5%
- DiffServ 0x20 (General): Multicast Video 13% (480i, MPEG-2, IGMPv2, 5 Multicast Channels), Telnet/SSH 2%, CIFS 10% (1:1:3 Small/Medium/Large ratio)
- DiffServ 0x00 (Best Effort): Internet Web 5% HTTP (1024 Byte index.html, 30x 500 Byte JPEG, 5x 1K JPEG, 1x 100k JPEG), BitTorrent 11%

Higher Education
- DiffServ EF (Real-Time): Network Administration 2% (SSH)
- DiffServ 0x31 (Critical): SQL 7% (SQLNet SQL Table Updates), HTTPS University Admin 3% (64 Byte index.html, 5x 1K JPEG images), Video Conference 5% (MPEG2-TS, VBR, 480i), VoIP 5% (G.729A CODEC)
- DiffServ 0x20 (General): FTP 7% (Large Files), HTTPS Student Services, HTTP 3%, POP3/SMTP 9%, CIFS 8% (1:1:3 Small/Medium/Large objects, bidirectional), Multicast Video 5% (480i)
- DiffServ 0x00 (Best Effort): IM 12% (AIM), BitTorrent 24%, HTTP 3% (1024 Byte index.html, 30x 500 Byte JPEG, 5x 1K JPEG, 1x 100k JPEG), HTTPS 1% (64 Byte index.html, 5x 1K JPEG images), Mail 5%, FTP 1% (Large Files), Telnet/SSH 3%

Service Providers
- DiffServ EF (Real-Time): Telnet/SSH 1%
- DiffServ 0x31 (Critical): BGP Route Updates 1%
- DiffServ 0x20 (General): N/A
- DiffServ 0x00 (Best Effort): 50% P2P (BitTorrent, 5% Peer-to-Tracker, 95% Peer-to-Peer), 30% HTTP (1024 Byte index.html, 30x 500 Byte JPEG, 5x 1K JPEG, 1x 100k JPEG), 5% DNS, Video (MPEG2-TS 5%), SIP (G.729A 3%), Gaming (WoW 5%), RAW TCP 2% (No Payload)

Small/Medium Business Apps (1G/10G Max Bandwidth)
- DiffServ EF (Real-Time): Network Control 5% (Windows Domain Controller Updates), Network Logins
- DiffServ 0x31 (Critical): POP3/SMTP 15% (5:2:1 Small/Medium/Large ratio), HTTPS 20% (64 Byte index.html, 5x 1K JPEG images)
- DiffServ 0x20 (General): CIFS 30% (1:1:3 Small/Medium/Large objects, bidirectional), BitTorrent 10%
- DiffServ 0x00 (Best Effort): Internet Web 25% HTTP (1024 Byte index.html, 30x 500 Byte JPEG, 5x 1K JPEG, 1x 100k JPEG), BitTorrent 10%

WAN Accelerator
- DiffServ 0x31 (Critical): CIFS 40% (1:1:3 Small/Medium/Large files), Exchange 35% (5:2:1 Small/Medium/Large ratio)
- DiffServ 0x20 (General): HTTPS 10% (64 Byte index.html, 5x 1K JPEG images)

Internet AppMix 2011
- DiffServ 0x00 (Best Effort): 50% P2P (BitTorrent, 5% Peer-to-Tracker, 95% Peer-to-Peer), 30% HTTP (1024 Byte index.html, 30x 500 Byte JPEG, 5x 1K JPEG, 1x 100k JPEG), 5% DNS, Video (MPEG2-TS 5%), SIP (G.729A 3%), Gaming (WoW 5%), 2% RAW TCP

Table 1: Background traffic by class

Target Users
Network equipment providers, service providers, product verification

Target Device Under Test (DUT)


The DUT is a mix of physical and virtual elements, forming the cloud infrastructure.

Relevance
Measuring the cloud infrastructure allows the user to evaluate the performance of the cloud in real-world environments in an independent fashion.

Version
1.0

Test Category
BNCH

PASS
[x] Performance [x] Availability [x] Security [x] Scale

Required Tester Capabilities


The tester must have the ability to measure non-simple, real-world application-layer traffic, including live baselining off of a real Internet server. The tester must support Layers 2-7 and both physical and virtual ports.



Topology Diagram

General Preparation
1. Tester ports.
   a. Tester ports shall be connected to three locations in the topology.
      i. Client cloud access emulation.
         1. Physical test ports emulate the client cloud.
         2. The client cloud is differentiated by user profiles. In general, user profiles are evenly spread across all physical test interfaces.
         3. For the purpose of baselining the virtual switch fabric, the virtual switch ports are divided into client and server endpoints.
         4. Physical user profiles should be tested with the following attributes:
            a. Business Class User: 0.1% packet loss, 30 ms delay, /16 subnet mask.
            b. Fixed Internet User Class: 3% packet loss, 150 ms delay, /16 subnet mask.
            c. Smartphone / Tablet Class: 1% packet loss, 200 ms delay, /16 subnet mask.
      ii. Servers.
         1. Servers are emulated on both the physical and the virtual switch fabrics.
         2. All servers serve all protocols.
   b. Background traffic.
      i. Between the client and the server, the background traffic patterns in Table 1 are continuously emulated at 50% of the line rate of the link speed.
   c. Service flows.
      i. Service flows represent the measured traffic in the test bed.
      ii. Start Table 1 traffic.
      iii. User actions:
         1. The client opens a page from the server with an 8-level pipelining request. The server responds with a 1 KB HTML page, one image that is 50 KB, 20 images that are 1 KB, 5 Java applets of 50 KB each, and an embedded 300 kbps adaptive streaming video.
         2. The client spends 20 seconds on the page.
         3. The client POSTs form data to the server with 5 fields. The server responds with a continuing 1 KB HTML page and five 5 KB JPEG objects.
         4. The user closes the HTTP session.
         5. The QoE metric is measured as either a pass or a fail (all objects loaded without error) and a page load time (the time it takes to transfer all objects on a page from the server to the client).
         6. The user opens the FTP server and logs in with a unique username and password.
         7. The user transfers 1 KB, 1 MB, and 10 MB files.
         8. The user closes the FTP session.
         9. The QoE metric for this step is a pass (no errors) or fail (some errors) and the total time transferring the 3 files with FTP.
         10. The user streams a 4 Mbps video for 60 seconds. The QoE metric for this step is a pass (no errors) or fail (some errors) and the MOS-AV score.
         11. The user places a VoIP call using G.729 for 30 seconds. The QoE metric for this step is a pass (no errors) or fail (some errors) and the MOS-LQ score.
         12. The user session ends.
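The user-action sequence above yields a fixed set of QoE measurements per session. A minimal sketch of a per-session record, with hypothetical field names; a session passes only if every step completed without error:

```python
from dataclasses import dataclass

@dataclass
class SessionQoE:
    """QoE measurements for one service-flow user session (hypothetical fields)."""
    page_load_s: float   # HTTP: time to transfer all page objects
    ftp_total_s: float   # FTP: total time for the 1 KB, 1 MB, 10 MB transfers
    mos_av: float        # video: MOS-AV score for the 60 s stream
    mos_lq: float        # voice: MOS-LQ score for the 30 s G.729 call
    errors: int          # object/transfer/stream errors across all steps

    @property
    def passed(self) -> bool:
        # Pass requires every object and transfer to complete without error.
        return self.errors == 0

session = SessionQoE(page_load_s=2.4, ftp_total_s=11.8,
                     mos_av=4.1, mos_lq=3.9, errors=0)
print(session.passed)  # True
```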

Test Procedure Cloud Infrastructure Reliability Test (CiR)


1. Set up and bring online all physical and virtual test interfaces.
2. Begin servers on physical and virtual endpoints.
3. Set the loading profile to a 45-degree ramp up to a value in excess of the DUT's capacity.
4. Start Table 1 traffic on client and server. Client traffic should be evenly split between physical-server and virtual-server endpoints.
5. Start test traffic on the client.
6. Stop once either a failed transaction or a failed connection occurs.
7. Set a new loading profile to ramp up to the measured failure point minus one connection or transaction. In the case of multiple failures, use the SimUser count for the lowest value minus 1.
8. In the sustaining portion of the load profile, run the duration in excess of 12 hours.
9. Start traffic.
10. Stop traffic if and when a failed connection or transaction is detected.
11. Calculate the CiR as the ratio of 1 failure divided by the cumulative number of SimUsers.
12. In the case of no failure, keep doubling the time until a failure is reached or until the CiR ratio becomes less than 0.001%.
13. The CiR is reported as X% reliability at Y concurrent user sessions.
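Steps 10-12 amount to a simple loop: double the sustain time until either a failure occurs or the implied ratio drops below the five-nines target. A sketch, assuming a hypothetical run_soak() callable that returns the cumulative SimUser count and whether a failure occurred:

```python
def measure_cir(run_soak, initial_hours: float = 12.0, target_pct: float = 0.001) -> float:
    """Return CiR as a percentage (steps 10-12 of the procedure).

    run_soak(hours) is a hypothetical callable returning
    (cumulative_simusers, failed) for a soak of the given duration.
    """
    hours = initial_hours
    while True:
        simusers, failed = run_soak(hours)
        ratio_pct = 100.0 / simusers  # ratio of 1 failure to cumulative SimUsers
        if failed:
            return ratio_pct          # step 11: first failure observed
        if ratio_pct < target_pct:
            return ratio_pct          # step 12: five-nines demonstrated without failure
        hours *= 2                    # step 12: no failure yet, so double the time
```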


Control Variables & Relevance


Variable | Relevance | Default Value
Number of Open Users | Peak level of reliability | Measured
Cumulative Successful SimUser Sessions Before Failure | Used to build the ratio of reliability | Measured
Cumulative Users at Failure Minus One | Upper level of measurement | Measured
Test Duration | Time of steady-state phase | 60 minutes

Key Measured Metrics


Metric | Relevance | Metric Unit
CiR | Ratio of the first failure to the number of successful open SimUser sessions | Percent

Test Procedure Cloud Infrastructure Quality of Experience (CiQoE)


1. Calculate the virtual server QoE baseline.
   a. Turn off all background traffic.
   b. Using the service flow described above, set up a single virtual endpoint as a client and a single virtual endpoint as a server. The pathway should traverse the virtual switch fabric.
   c. The virtual client should run one user session to the virtual server.
   d. Measure the QoE metrics as described above. These become the baseline for the virtual servers.
   e. Reset all virtual endpoints as virtual servers.
2. Calculate the physical server QoE baseline.
   a. Turn off all background traffic.
   b. Using the service flow described above, set up a single physical endpoint as a client and a single physical endpoint as a server. The pathway should traverse the virtual switch fabric.
   c. The physical client should run one user session to the physical server.
   d. Measure the QoE metrics as described above. These become the baseline for the physical servers.
3. Use the loading profile from the CiR test (error minus one). Set the load profile to send 50% of the traffic to the virtual servers and 50% to the physical servers. Ramp up to the peak value and sustain for the desired duration of the test (minimum of 60 minutes).
4. Start traffic.
5. If any QoE metric failed, determine the number of concurrent SimUsers at failure and adjust the ramp down to that level. Go to Step 4 until no QoE failures are detected.
6. Measure the maximum impact values (longest page load time, slowest FTP transfer, smallest MOS-AV and MOS-LQ scores).
7. Divide each measured QoE number by its baseline equivalent. This is the percent impact of the infrastructure on traffic.
8. CiQoE is this calculated percent impact, by protocol, at a peak concurrent SimUser count.
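Steps 4-8 form a simple search loop: run at the current load, back the load off to the failing SimUser count, and repeat until a clean run, then normalize against the baseline. A sketch, assuming a hypothetical run_at_load() callable that returns the failing SimUser count (or None on a clean run) plus the worst measured QoE values:

```python
def measure_ciqoe(run_at_load, start_load: int, baselines: dict[str, float]) -> dict[str, float]:
    """Steps 4-8: back off the load until no QoE failures, then normalize.

    run_at_load(load) is a hypothetical callable returning
    (failed_simuser_count_or_None, {protocol: worst_measured_qoe}).
    """
    load = start_load
    while True:
        failed_at, worst = run_at_load(load)   # step 4: run at the current load
        if failed_at is None:
            break                              # step 5: no QoE failures detected
        load = failed_at                       # adjust the ramp down to the failure level
    # steps 7-8: percent impact of the infrastructure, by protocol
    return {proto: worst[proto] / baselines[proto] for proto in baselines}
```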

Control Variables & Relevance


Variable | Relevance | Default Value
Peak Concurrent SimUsers with No QoE Errors | Upper limit of users | Measured
Baseline QoE Values | Perfect-case values | Measured
Measured Infrastructure QoE Values | Cloud-impacted QoE metrics | Measured
Test Duration | Time of steady-state phase | 60 minutes

Key Measured Metrics


Metric | Relevance | Metric Unit
CiQoE | Quality of Experience impact | Percent

Test Procedure Cloud Infrastructure Quality of Experience Variance (CiQoEv)


1. Calculate the virtual server QoE baseline.
   a. Turn off all background traffic.
   b. Using the service flow described above, set up a single virtual endpoint as a client and a single virtual endpoint as a server. The pathway should traverse the virtual switch fabric.
   c. The virtual client should run one user session to the virtual server.
   d. Measure the QoE metrics as described above. These become the baseline for the virtual servers.
   e. Reset all virtual endpoints as virtual servers.
2. Calculate the physical server QoE baseline.
   a. Turn off all background traffic.
   b. Using the service flow described above, set up a single physical endpoint as a client and a single physical endpoint as a server. The pathway should traverse the virtual switch fabric.
   c. The physical client should run one user session to the physical server.
   d. Measure the QoE metrics as described above. These become the baseline for the physical servers.
3. Use the loading profile calculated in the CiR test (error minus one). Set the load profile to send 50% of the traffic to the virtual servers and 50% to the physical servers. Ramp up to the peak value and sustain for the desired duration of the test (minimum of 60 minutes).
4. Start traffic.
5. If any QoE metric failed, determine the number of concurrent SimUsers at failure and adjust the ramp down to that level. Go to Step 4 until no QoE failures are detected.
6. Measure the maximum impact values (longest page load time, slowest FTP transfer, smallest MOS-AV and MOS-LQ scores) every 4 seconds during the duration of the test.
7. By protocol, divide the measured QoE by the baseline for each 4-second interval. This is the instantaneous cloud infrastructure impact percent.
8. With the set calculated, determine the standard deviation and variance.
9. The CiQoEv value is presented as a variance at a measured standard deviation.
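Steps 7-9 compute an instantaneous impact ratio every 4 seconds and then summarize the resulting set. A sketch, assuming the per-interval QoE samples and their back-to-back baseline are already available for one protocol:

```python
from statistics import pstdev, pvariance

def ciqoev_from_samples(samples: list[float], baseline: float) -> tuple[float, float]:
    """Variance and standard deviation of the per-interval infrastructure impact.

    samples: one QoE measurement per 4-second interval for one protocol.
    baseline: the back-to-back QoE value for the same protocol.
    """
    impacts = [s / baseline for s in samples]  # instantaneous impact per interval
    return pvariance(impacts), pstdev(impacts)

# Example: page load times sampled every 4 s against a 2.0 s baseline.
variance, stddev = ciqoev_from_samples([2.1, 2.3, 2.0, 2.6, 2.2], baseline=2.0)
print(f"CiQoEv variance={variance:.4f}, std dev={stddev:.4f}")
```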

Control Variables & Relevance


Variable | Relevance | Default Value
Peak Concurrent SimUsers with No QoE Errors | Upper limit of users | Measured
Baseline QoE Values | Perfect-case values | Measured
Measured Infrastructure QoE Values | Cloud-impacted QoE metrics | Measured
Test Duration | Time of steady-state phase | 60 minutes

Key Measured Metrics


Metric | Relevance | Metric Unit
CiQoEv Variance | Variance of change | -
CiQoEv Std. Dev. | Deviation of change | -

Test Procedure Cloud Infrastructure Goodput (CiGoodput)


1. Start the traffic in Table 1.
2. Set up client and server traffic in a full mesh. All clients should talk evenly to both virtual and physical servers.
3. Use the load profile calculated in the CiR test case.
4. Generate traffic for the desired duration.
5. Measure the minimum goodput achieved after the ramping phase, by protocol.
6. Report minimum, average, and maximum goodput by protocol.
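Steps 5-6 reduce to three aggregates per protocol over the post-ramp samples. A sketch, with the samples as hypothetical per-interval goodput readings in Mbps:

```python
def goodput_summary(samples_by_protocol: dict[str, list[float]]) -> dict[str, tuple[float, float, float]]:
    """Minimum / average / maximum goodput per protocol (post-ramp samples, Mbps)."""
    return {proto: (min(s), sum(s) / len(s), max(s))
            for proto, s in samples_by_protocol.items()}

report = goodput_summary({
    "HTTP": [812.0, 845.5, 790.2],
    "FTP":  [921.7, 930.1, 915.4],
})
for proto, (lo, avg, hi) in report.items():
    print(f"{proto}: min={lo:.1f} avg={avg:.1f} max={hi:.1f} Mbps")
```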


Control Variables & Relevance


Variable | Relevance | Default Value
Peak Concurrent SimUsers with No QoE Errors | Upper limit of users | Measured
Test Duration | Time of steady-state phase | 60 minutes

Key Measured Metrics


Metric | Relevance | Metric Unit
Minimum / Average / Maximum Goodput | Achievable goodput by protocol | Bandwidth

Desired Result
The CiR ratio should be less than 0.001%. The CiQoE ratio should be greater than 0.999. The CiQoEv deviation and variance should be less than 0.001. The CiGoodput should be as high as possible. Results are only comparable if the number of concurrent SimUsers is the same.

Analysis
When presenting values, present the calculated ratio with the number of peak concurrent users. In addition, document the number and scope of failed transactions, connections, and users. Document all calculations performed as appendix information to the report.



BNCH_003 EtherSAM (ITU-T Y.1564) EBS and CBS Burst Test with TurboQoS
Abstract
EtherSAM (ITU-T Y.1564) is an industry standard for independently measuring Ethernet service QoS across a Device Under Test (DUT). This test measures the impact of EtherSAM CBS and EBS bursting on the QoS worthiness of the pathway for Ethernet services across the DUT.

Description
EtherSAM (ITU-T Y.1564) is a successor to RFC 2544, focusing on Ethernet service level agreements (SLAs). Key terms for EtherSAM include:

- Committed burst size (CBS): Number of allocated bytes available for bursts of ingress service frames transmitted at temporary rates above the CIR while meeting the SLA guarantees provided at the CIR.
- Committed information rate (CIR): Average rate in bits/s of service frames up to which the network delivers service frames and meets the performance objectives defined by the class of service attribute.
- Excess burst size (EBS): Number of allocated bytes available for bursts of ingress service frames sent at temporary rates above the CIR + EIR while remaining EIR conformant.
- Excess information rate (EIR): Average rate in bits/s of service frames up to which the network may deliver service frames but without any performance objectives.

ITU-T Y.156sam defines test streams with service attributes linked to the Metro Ethernet Forum (MEF) 10.2 definitions. Services are traffic streams with specific attributes identified by different classifiers, such as 802.1q VLAN, 802.1ad, and class of service (CoS) profiles. These services are defined at the UNI level, with different frame and bandwidth profiles, such as the service's maximum transmission unit (MTU) or frame size, committed information rate (CIR), and excess information rate (EIR).

ITU Y.156sam defines three key test rates based on the MEF service attributes for Ethernet virtual circuit (EVC) and user-to-network interface (UNI) bandwidth profiles:

- CIR defines the maximum transmission rate for a service where the service is guaranteed certain performance objectives. These objectives are typically defined and enforced via SLAs.
- EIR defines the maximum transmission rate above the committed information rate, which is considered excess traffic. This excess traffic is forwarded as capacity allows and is not subject to meeting guaranteed performance objectives (best-effort forwarding).
- Overshoot rate defines a testing transmission rate above CIR or EIR and is used to ensure that the DUT or network under test (NUT) does not forward more traffic than specified by the CIR or EIR of the service.
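The three rates partition a service's measured throughput into the color classes described next. A minimal sketch of that mapping, assuming rates in Mbps; it mirrors the green/yellow/red classification only, not a full MEF 10.2 bandwidth-profile algorithm:

```python
def classify_rate(rate_mbps: float, cir: float, eir: float) -> str:
    """Map a service's transmission rate onto the Y.1564 color classes."""
    if rate_mbps <= cir:
        return "green"   # committed: SLA performance objectives apply
    if rate_mbps <= cir + eir:
        return "yellow"  # excess: forwarded as capacity allows, best effort
    return "red"         # overshoot: expected to be discarded / rate limited

for rate in (80.0, 110.0, 140.0):
    print(rate, classify_rate(rate, cir=100.0, eir=20.0))
```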


These rates can be associated with color markings:

- Green traffic is equivalent to CIR
- Yellow traffic is equivalent to EIR
- Red traffic represents discarded traffic (overshoot of CIR or EIR)

ITU-T Y.156sam is built around two key subtests, the service configuration test and the service performance test, which are performed sequentially.

Forwarding devices, such as switches, routers, bridges, and network interface units, are the basis of any network, as they interconnect segments. If a service is not correctly configured on any one of these devices within the end-to-end path, network performance can be greatly affected, leading to potential service outages and network-wide issues such as congestion and link failures. The service configuration test measures the ability of the DUT or NUT to properly forward in three different states:

- In the CIR phase, where performance metrics for the service are measured and compared to the SLA performance objectives
- In the EIR phase, where performance is not guaranteed and the service's transfer rate is measured to ensure that CIR is the minimum bandwidth
- In the discard phase, where the service is generated at the overshoot rate and the expected forwarded rate is not greater than the committed information rate or excess rate (when configured)

As network devices come under load, they must prioritize one traffic flow over another to meet the KPIs set for each traffic class. With only one traffic class, no prioritization is performed by the network devices, since there is only one set of KPIs. As the number of traffic flows increases, prioritization becomes necessary and performance failures may occur. The service performance test measures the ability of the DUT or NUT to forward multiple services while maintaining SLA conformance for each service. Services are generated at the CIR, where performance is guaranteed, and pass/fail assessment is performed on the KPI values for each service according to its SLA. Service performance assessment must also be maintained for a medium- to long-term period, as performance degradation will likely occur while the network is under stress for longer periods of time. The service performance test is designed to soak the network under a full committed load for all services, and to measure performance over medium and long test times.

Y.156sam focuses on the following KPIs for service quality:

- Bandwidth: Bit rate of available or consumed data communication resources, expressed in bits/second or multiples (kilobits/s, megabits/s).
- Frame transfer delay (FTD): Also known as latency, this is a measurement of the delay between the transmission and the reception of a frame. Typically this is a round-trip measurement, meaning that the calculation measures both the near-end to far-end and far-end to near-end directions simultaneously.
- Packet jitter: A measurement of the variation in the time delay between packet deliveries. As packets travel through a network to their destination, they are often queued and sent in bursts to the next hop. Prioritization may occur at random moments, also resulting in packets being sent at random rates. Packets are consequently received at irregular intervals. The direct result of jitter is stress on the receiving buffers of the end nodes, where buffers can be overused or underused when there are large swings of jitter.
- Frame loss: Typically expressed as a ratio, this is the number of packets lost over the total number of packets sent. Frame loss can result from a number of issues, such as network congestion or errors during transmission.
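Two of these KPIs are directly computable from per-frame records. A sketch, assuming one-way delay samples in microseconds; jitter here is taken as the variation between consecutive deliveries, which is one common reading of the definition above:

```python
def frame_loss_ratio(sent: int, received: int) -> float:
    """Frame loss: lost frames over total frames sent."""
    return (sent - received) / sent

def packet_jitter_us(delays_us: list[float]) -> list[float]:
    """Per-packet jitter: variation in delay between consecutive deliveries."""
    return [abs(b - a) for a, b in zip(delays_us, delays_us[1:])]

delays = [510.0, 505.0, 540.0, 512.0]  # measured one-way FTD samples
print(frame_loss_ratio(sent=10_000, received=9_998))  # 0.0002
print(max(packet_jitter_us(delays)))                  # worst-case jitter: 35.0 us
```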

Target Users
NEMS, service providers

Target Device Under Test (DUT)


Any router or switch using Ethernet services

Reference
ITU-T Y.1564, MEF 10.2

Relevance
This test measures Ethernet SLAs in the network and the worthiness of a path to maintain full SLA traffic.

Version
1.0.

Test Category
BNCH

PASS
[ ] Performance [x] Availability [ ] Security [ ] Scale



Required Tester Capabilities


To properly measure EtherSAM KPIs, the tester must have the following attributes:

- FPGA-based architecture. The precision and scalability of FPGAs permit deep scale while measuring precise timing.
- Ultra-low latency. The tester must be able to test and measure down to 2.5 ns.
- Support for 10 Mbps to 100 Gbps Ethernet, including virtual ports.
- Ability to select RFC 4814 MAC addressing to ensure the longest CAM lookups.
- True jitter. Variable TX simulating such services as video, while isolating the impact of the DUT on EVCs.
- True sequencing. Measure and differentiate loss from late, duplicate, reordered, and out-of-order packets.
- Simultaneous measurement of KPIs. Measure everything in one pass of a packet.

Topology Diagram

Test Procedure
1. Select and reserve the left and right side ports, including virtual machine endpoints.
2. Set all ports to Latency+Jitter mode, clear all counters, and turn off latency compensation on all ports.
3. Define multiple EVC Ethernet services. For each service specify:
   a. Service name and color.
   b. Service bandwidth units (Frames/Second, % Load, Kbps, or Mbps).
   c. Whether the service is VLAN tagged.
      i. If VLAN tagged, specify the VLAN ID and PRI level. (Default PRI should be zero.)
      ii. Indicate whether to use RFC 4814 MAC addressing.
   d. Whether the service is IPv4 only, IPv6 only, or both.
      i. Specify the starting IP address, subnet mask, and default gateway.
      ii. Specify the IPv4 DiffServ codepoint and/or IPv6 traffic class.
   e. The Layer 4 header:
      i. UDP.
      ii. Stateless TCP.
      iii. Stateful TCP.
      iv. Specify source and destination ports.
   f. The orientation of the service to be tested:
      i. Loopback (generated and terminated on the same port).
      ii. Bi-directional (generated and terminated on different ports).
   g. The service KPIs; unless a KPI is indicated as required, ask the user whether to measure it with a checkbox:
      i. CIR (required).
      ii. EIR (required).
      iii. Overshoot (required).
      iv. Maximum true packet loss count (required).
      v. Latency and jitter units:
         1. uSec (default).
         2. nSec.
      vi. RFC 4689 absolute average jitter.
      vii. RFC 4689 max jitter (required).
      viii. Maximum latency (required).
      ix. Average latency.
      x. Maximum out-of-order packet count.
      xi. Maximum delayed packet count.
      xii. Maximum late packet count.
      xiii. An AND or an OR (default is AND) to combine the KPIs.
4. Map EVC services to test ports.
5. Define test case constants.
   a. Specify the frame sizes to test across.
   b. Specify the host count iteration.
   c. Specify a per-iteration unit (frames or time) and value.
   d. Specify a starting bandwidth.
   e. Specify the number of steps to reach the CIR.
6. Run the CBS Burst Test.
   a. For each EVC service across all test ports (disable the non-current EVC service StreamBlocks):
      i. For each frame size:
         1. For each host count per EVC service:
            a. Clear all counters and dynamic views.
            b. Pause for the pre-burst duration.
            c. Set the StreamBlock rate to BURST and set the transmission mode to burst for the burst duration.
            d. Start traffic.
            e. Pause for the post-burst duration.
            f. Set the rate to CIR.
            g. Burst traffic.
            h. Measure KPIs and record.
7. Run the EBS (CIR=0) Burst Test.
   a. For each EVC service across all test ports (disable the non-current EVC service StreamBlocks):
      i. For each frame size:
         1. For each host count per EVC service:
            a. Clear all counters and dynamic views.
            b. Pause for the pre-burst duration.
            c. Set the StreamBlock rate to BURST and set the transmission mode to burst for the burst duration.
            d. Start traffic.
            e. Pause for the post-burst duration.
            f. Set the rate to EIR.
            g. Burst traffic.
            h. Measure KPIs and record.
8. Run the EBS (CIR>0) Burst Test.
   a. For each EVC service across all test ports (disable the non-current EVC service StreamBlocks):
      i. For each frame size:
         1. For each host count per EVC service:
            a. Clear all counters and dynamic views.
            b. Pause for the pre-burst duration.
            c. Set the StreamBlock rate to BURST and set the transmission mode to burst for the burst duration.
            d. Start traffic.
            e. Set the rate to CIR.
            f. Start traffic.
            g. Set the rate to EIR.
            h. Start traffic.
            i. Measure KPIs and record.
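The three burst tests in steps 6-8 above share one inner loop over services, frame sizes, and host counts; mainly the post-burst rate sequence differs. A Python sketch of that control flow, in which every tester call (clear_counters, pause, set_rate, transmit, measure_kpis) is a hypothetical stand-in for the equivalent tester operation, not a real Spirent TestCenter API:

```python
def run_burst_test(services, frame_sizes, host_counts, tester, post_burst_rates):
    """Generic burst-test loop (steps 6-8). post_burst_rates is e.g. ["CIR"]
    for the CBS test, ["EIR"] for EBS (CIR=0), or ["CIR", "EIR"] for EBS (CIR>0).
    All tester methods are hypothetical stand-ins for real tester operations."""
    results = []
    for service in services:                       # non-current StreamBlocks disabled
        for frame_size in frame_sizes:
            for hosts in host_counts:
                tester.clear_counters()
                tester.pause("pre_burst")
                tester.set_rate(service, "BURST")  # burst transmission mode
                tester.transmit(service)
                tester.pause("post_burst")
                for rate in post_burst_rates:      # CIR and/or EIR phases
                    tester.set_rate(service, rate)
                    tester.transmit(service)
                results.append(tester.measure_kpis(service, frame_size, hosts))
    return results
```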

Control Variables & Relevance


Variable | Relevance | Default Value
Left and Right Side Ports | Pick the physical or virtual port to be used | N/A
Per EVC Service Name and Color | Name of up to 64 characters and a color picker to pick the service color | Name "EVC Service 1", color Blue
Per EVC Bandwidth | Ask the rate unit, CIR, EIR, and RFC 4814 MAC addressing | Rate unit Mbps; default rate 1 Mbps CIR, with an EIR of 100 kbps
Per EVC Service VLAN Tag | Whether the service is VLAN tagged | Yes, starting at VLAN 100, PRI 0; RFC 4814 (Yes)
Per EVC Service IP Information | IP type, starting IP address, mask, and gateway | IPv4; starting address 10.x.y.2 /24; default gateway 10.x.y.1
Per EVC Service QoS Codepoint | QoS level | BE (0x00)
Per EVC Service Layer 4 Header | Layer 4 header type and source/destination ports | UDP (source port 53, destination port 1024)
Per EVC Traffic Orientation | Type of generation and termination | Bi-directional
Per EVC KPIs (Key Performance Indicators) | Sequence of KPIs determines PASS/FAIL for the iteration | CIR (1 Mbps); EIR (100 kbps); Overshoot (100 kbps); Maximum Packet Loss Count (0 packets); Absolute Average Jitter (50 nSec); Maximum Average Jitter (200 nSec); Maximum Latency (1 uSec); Average Latency (700 nSec); Out of Order / Delayed / Late Maximum Packet Count (0 packets); AND between each operator
Frame Size Iterations | Ask the user what frame sizes to test across, with the option of iMIX | If UDP only: 64->128->256->512->768->1024->1280->1518->9022; if some TCP: 72->128->256->512->768->1024->1280->1518->9022. The user should be able to pick an iMIX pattern or define one, but if TCP is used, 72 should be validated.
Hosts per EVC Service Iteration | Iterates a list of host counts | Default (List, Count=254); validate a minimum of 1 host per service
Pre-Burst Time | Time to pause before the burst | 30 seconds
Burst Rate and Unit | Burst rate value | 1.2 Mbps
Burst Duration | Time to burst | 120 seconds
Post-Burst Duration | Time to wait after the burst | 30 seconds

Key Measured Metrics


Metric | Relevance | Metric Unit
KPIs | KPI adherence up to CIR | Pass if the KPI regular expression is TRUE

Desired Result
The DUT passes each EVC Ethernet service KPI up to its respective CIR, allows traffic with no guarantee of KPI adherence between CIR and CIR+EIR, and rate limits the EVC Ethernet Service at CIR+EIR.



Analysis
For each frame size used, a table displays the frame size and hosts per service on the Y-axis, with three columns (CBS Burst, EBS (CIR=0), and EBS (CIR>0)) showing PASS or FAIL and the measured KPI values.



BNCH_004 EtherSAM (ITU-T Y.1564) Service Configuration Ramp Test with TurboQoS
Abstract
EtherSAM (ITU-T Y.1564) is an industry standard for independently measuring Ethernet service QoS across a Device Under Test (DUT). EtherSAM requires simultaneous inspection of CIR, frame loss, jitter, and latency, and ensures the QoS worthiness of the pathway for Ethernet services across a DUT.

Description
EtherSAM (ITU-T Y.1564) was designed as a successor to RFC 2544. Focusing on Ethernet service SLAs, EtherSAM requires the measurement of many QoS KPIs (Key Performance Indicators) per service per pass. The key objectives of EtherSAM are:

1) To serve as a network service level agreement (SLA) validation tool, ensuring that a service meets its guaranteed performance settings in a controlled test time
2) To ensure that all services carried by the network meet their SLA objectives at their maximum committed rate, proving that under maximum load network devices and paths can support all the traffic as designed

ITU-T Y.156sam defines test streams with service attributes linked to the Metro Ethernet Forum (MEF) 10.2 definitions. Services are traffic streams with specific attributes identified by different classifiers, such as 802.1q VLAN, 802.1ad, and class of service (CoS) profiles. These services are defined at the UNI level, with different frame and bandwidth profiles, such as the service's maximum transmission unit (MTU) or frame size, committed information rate (CIR), and excess information rate (EIR).

ITU Y.156sam defines three key test rates based on the MEF service attributes for Ethernet virtual circuit (EVC) and user-to-network interface (UNI) bandwidth profiles:

- CIR defines the maximum transmission rate for a service where the service is guaranteed certain performance objectives. These objectives are typically defined and enforced via SLAs.
- EIR defines the maximum transmission rate above the committed information rate, which is considered excess traffic. This excess traffic is forwarded as capacity allows and is not subject to meeting guaranteed performance objectives (best-effort forwarding).
- Overshoot rate defines a testing transmission rate above CIR or EIR and is used to ensure that the DUT or network under test (NUT) does not forward more traffic than specified by the CIR or EIR of the service.



These rates can be associated with color markings:

- Green traffic is equivalent to CIR
- Yellow traffic is equivalent to EIR
- Red traffic represents discarded traffic (overshoot of CIR or EIR)

ITU-T Y.156sam is built around two key subtests, the service configuration test and the service performance test, which are performed sequentially.

Forwarding devices, such as switches, routers, bridges, and network interface units, are the basis of any network, as they interconnect segments. If a service is not correctly configured on any one of these devices within the end-to-end path, network performance can be greatly affected, leading to potential service outages and network-wide issues such as congestion and link failures. The service configuration test measures the ability of the DUT or NUT to properly forward in three different states:

- In the CIR phase, where performance metrics for the service are measured and compared to the SLA performance objectives
- In the EIR phase, where performance is not guaranteed and the service's transfer rate is measured to ensure that CIR is the minimum bandwidth
- In the discard phase, where the service is generated at the overshoot rate and the expected forwarded rate is not greater than the committed information rate or excess rate (when configured)

As network devices come under load, they must prioritize one traffic flow over another to meet the KPIs set for each traffic class. With only one traffic class, no prioritization is performed by the network devices, since there is only one set of KPIs. As the number of traffic flows increases, prioritization becomes necessary and performance failures may occur. The service performance test measures the ability of the DUT or NUT to forward multiple services while maintaining SLA conformance for each service. Services are generated at the CIR, where performance is guaranteed, and pass/fail assessment is performed on the KPI values for each service according to its SLA. Service performance assessment must also be maintained for a medium- to long-term period, as performance degradation will likely occur while the network is under stress for longer periods of time. The service performance test is designed to soak the network under a full committed load for all services, and to measure performance over medium and long test times.

Y.156sam focuses on the following KPIs for service quality:

- Bandwidth: Bit rate of available or consumed data communication resources, expressed in bits/second or multiples (kilobits/s, megabits/s).
- Frame transfer delay (FTD): Also known as latency, this is a measurement of the delay between the transmission and the reception of a frame. Typically this is a round-trip measurement, meaning that the calculation measures both the near-end to far-end and far-end to near-end directions simultaneously.
- Packet jitter: A measurement of the variation in the time delay between packet deliveries. As packets travel through a network to their destination, they are often queued and sent in bursts to the next hop. Prioritization may occur at random moments, also resulting in packets being sent at random rates. Packets are consequently received at irregular intervals. The direct result of jitter is stress on the receiving buffers of the end nodes, where buffers can be overused or underused when there are large swings of jitter.
- Frame loss: Typically expressed as a ratio, this is the number of packets lost over the total number of packets sent. Frame loss can result from a number of issues, such as network congestion or errors during transmission.

Target Users
NEMS, service providers

Target Device Under Test (DUT)


Any router or switch using Ethernet services

Reference
ITU-T Y.1564, MEF 10.2

Relevance
This test measures Ethernet SLAs in the network and the worthiness of a path to maintain full SLA traffic.

Version
1.0.

Test Category
BNCH

PASS
[ ] Performance [x] Availability [ ] Security [ ] Scale

Required Tester Capabilities


To properly measure EtherSAM KPIs, the tester must have the following attributes:

- FPGA-based architecture. The precision and scalability of FPGAs permit deep scale while measuring precise timing.
- Ultra-low latency. The tester must be able to test and measure down to 2.5 ns.
- Support for 10 Mbps to 100 Gbps Ethernet, including virtual ports.
- Ability to select RFC 4814 MAC addressing to ensure the longest CAM lookups.
- True jitter. Variable TX simulating such services as video, while isolating the impact of the DUT on EVCs.
- True sequencing. Measure and differentiate loss from late, duplicate, reordered, and out-of-order packets.
- Simultaneous measurement of KPIs. Measure everything in one pass of a packet.

Topology Diagram

Test Procedure
1. Select and reserve the left and right side ports, including virtual machine endpoints.
2. Set all ports to Latency+Jitter mode, clear all counters, and turn off latency compensation on all ports.
3. Define multiple EVC Ethernet services. For each service specify:
   a. Service name and color.
   b. Service bandwidth units (Frames/Second, % Load, Kbps, or Mbps).
   c. Whether the service is VLAN tagged.
      i. If VLAN tagged, specify the VLAN ID and PRI level. (Default PRI should be zero.)
      ii. Indicate whether to use RFC 4814 MAC addressing.
   d. Whether the service is IPv4 only, IPv6 only, or both.
      i. Specify the starting IP address, subnet mask, and default gateway.
      ii. Specify the IPv4 DiffServ codepoint and/or IPv6 traffic class.
   e. The Layer 4 header:
      i. UDP.
      ii. Stateless TCP.
      iii. Stateful TCP.
      iv. Specify source and destination ports.
   f. The orientation of the service to be tested:
      i. Loopback (generated and terminated on the same port).
      ii. Bi-directional (generated and terminated on different ports).
   g. Which service KPIs to measure:
      i. CIR (required).
      ii. EIR (required).
      iii. Overshoot (required).
      iv. Maximum true packet loss count (required).
      v. Latency and jitter units:
         1. uSec (default).
         2. nSec.
      vi. RFC 4689 absolute average jitter.
      vii. RFC 4689 max jitter (required).
      viii. Maximum latency (required).
      ix. Average latency.
      x. Maximum out-of-order packet count.
      xi. Maximum delayed packet count.
      xii. Maximum late packet count.
      xiii. An AND or an OR (default is AND) to combine the KPIs.
4. Map EVC services to test ports.
5. Define test case constants.
   a. Specify the frame sizes to test across.
   b. Specify the host count iteration.
   c. Specify a per-iteration unit (frames or time) and value.
   d. Specify a starting bandwidth.
   e. Specify the number of steps to reach the CIR.
6. Run the Service Configuration Ramp Test.
   a. For each EVC service across all test ports (disable the non-current EVC service StreamBlocks):
      i. For each frame size:
         1. For each host count per EVC service:
            a. For each ramping step from the starting bandwidth to the CIR:
               i. Phase 1: Clear all counters and dynamic views.
               ii. Calculate the current bandwidth per port.
               iii. Set the host and frame size.
               iv. Set the burst to time or packets with the correct rate.
               v. Transmit traffic.
               vi. Determine failed KPI streams; record source and destination.
               vii. Record all KPI counts and verify that TX bandwidth per flow equals RX bandwidth per flow.
               viii. Pass = all streams passed the KPI rules, else fail. Record pass or fail.
               ix. Phase 2: Clear counters and dynamic views.
               x. Add the EIR rate to the CIR.
               xi. Transmit for 1 burst.
               xii. Pass if CIR <= RX rate <= CIR+EIR; fail if RX rate < CIR.
               xiii. KPIs are not guaranteed at CIR+EIR; grab the dynamic view and counters and record.
               xiv. Phase 3: Clear counters and dynamic views.
               xv. Add the overshoot rate to CIR+EIR.
               xvi. Transmit for 1 burst.
               xvii. Pass means that RX rate = CIR+EIR (rate limiting); fail means that RX rate > CIR+EIR.
               xviii. Record pass/fail and counters.
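The per-step pass/fail rules of the three phases reduce to simple rate comparisons. A sketch of the verdict logic, with measured RX rates as plain numbers in the same unit as the configured rates:

```python
def phase1_pass(kpi_results: list[bool]) -> bool:
    """Phase 1: at or below CIR, every stream must pass all of its KPI rules."""
    return all(kpi_results)

def phase2_pass(rx_rate: float, cir: float, eir: float) -> bool:
    """Phase 2: at CIR+EIR the DUT must forward at least the CIR (KPIs not guaranteed)."""
    return cir <= rx_rate <= cir + eir

def phase3_pass(rx_rate: float, cir: float, eir: float) -> bool:
    """Phase 3: at the overshoot rate the DUT must rate limit to CIR+EIR."""
    return rx_rate <= cir + eir

print(phase2_pass(rx_rate=1.05, cir=1.0, eir=0.1))  # True: within [CIR, CIR+EIR]
print(phase3_pass(rx_rate=1.25, cir=1.0, eir=0.1))  # False: not rate limited
```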

Control Variables & Relevance


Variable | Relevance | Default Value
Left and Right Side Ports | Pick the physical or virtual port to be used | N/A
Per EVC Service Name and Color | Name of up to 64 characters and a color picker to pick the service color | Name "EVC Service 1", color Blue
Per EVC Bandwidth | Ask the rate unit, CIR, and EIR | Rate unit Mbps; default rate 1 Mbps CIR, with an EIR of 100 kbps
Per EVC Service VLAN Tag | Whether the service is VLAN tagged and uses RFC 4814 MAC addressing | Yes, starting at VLAN 100, PRI 0; RFC 4814 MAC addressing (Yes)
Per EVC Service IP Information | IP type, starting IP address, mask, and gateway | IPv4; starting address 10.x.y.2 /24; default gateway 10.x.y.1
Per EVC Service QoS Codepoint | QoS level | BE (0x00)
Per EVC Service Layer 4 Header | Layer 4 header type and source/destination ports | UDP (source port 53, destination port 1024)
Per EVC Traffic Orientation | Type of generation and termination | Bi-directional
Per EVC KPIs (Key Performance Indicators) | Sequence of KPIs determines PASS/FAIL for the iteration | CIR (1 Mbps); EIR (100 kbps); Overshoot (100 kbps); Maximum Packet Loss Count (0 packets); Absolute Average Jitter (50 nSec); Maximum Average Jitter (200 nSec); Maximum Latency (1 uSec); Average Latency (700 nSec); Out of Order / Delayed / Late Maximum Packet Count (0 packets); AND between each operator
Frame Size Iterations | Ask the user what frame sizes to test across, with the option of iMIX | If UDP only: 64->128->256->512->768->1024->1280->1518->9022; if some TCP: 72->128->256->512->768->1024->1280->1518->9022. The user should be able to pick an iMIX pattern or define one, but if TCP is used, 72 should be validated.
Hosts per EVC Service Iteration | Iterates a list of host counts | Default (List, Count=254); validate a minimum of 1 host per service
Per Iteration Duration | Ask the user for each iteration's duration unit (frames or time); this is the period where the KPIs are inspected for compliance | Units: time in seconds; duration 120 seconds
Starting Bandwidth | The beginning bandwidth of the test | 1 kbps
Number of Steps to Reach CIR | CIR / number of steps defines the rise in bandwidth from iteration to iteration | 5

Key Measured Metrics


Metric | Relevance | Metric Unit
Phase 1 | KPI adherence up to CIR | Pass if the KPI regular expression is TRUE
Phase 2 | Traffic forwarding between CIR and CIR+EIR | Pass if traffic is forwarded between CIR and CIR+EIR
Phase 3 | Rate limiting at CIR+EIR | Pass if rate limited to CIR+EIR

Desired Result
The DUT passes each EVC Ethernet service KPI up to its respective CIR, allows traffic with no guarantee of KPI adherence between CIR and CIR+EIR, and rate limits the EVC Ethernet Service at CIR+EIR.

Analysis
A bar chart represents service levels on the X-axis and bandwidth on the Y-axis; a green bar maps from 0 up to the last passing bandwidth, up to the CIR. A grid lists service levels on the X-axis and the KPI measurements of the last passing bandwidth for each EVC service level on the Y-axis. A table generated for each tested frame size (or iMIX pattern) has rows representing EVC Ethernet service levels and columns representing hosts per service per port pair. Individual cells indicate PASS or FAIL and the recorded KPI information.



BNCH_005 EtherSAM (ITU-T Y.1564) Service Performance Test with TurboQoS


Abstract
EtherSAM (ITU-T Y.1564) is an industry standard for independently measuring Ethernet service QoS across a Device Under Test (DUT). This test case measures the ability of the DUT to handle multiple services and KPIs simultaneously for a duration of time. EtherSAM ensures the SLA QoS worthiness of the pathway for Ethernet services across a DUT.

Description
EtherSAM (ITU-T Y.1564) was designed as a successor to RFC 2544. Focusing on Ethernet service SLAs, EtherSAM requires the measurement of many QoS KPIs (Key Performance Indicators) per service per pass. The key objectives of EtherSAM are:

1) To serve as a network service level agreement (SLA) validation tool, ensuring that a service meets its guaranteed performance settings in a controlled test time
2) To ensure that all services carried by the network meet their SLA objectives at their maximum committed rate, proving that under maximum load network devices and paths can support all the traffic as designed

ITU-T Y.156sam defines test streams with service attributes linked to the Metro Ethernet Forum (MEF) 10.2 definitions. Services are traffic streams with specific attributes identified by different classifiers, such as 802.1q VLAN, 802.1ad, and class of service (CoS) profiles. These services are defined at the UNI level, with different frame and bandwidth profiles, such as the service's maximum transmission unit (MTU) or frame size, committed information rate (CIR), and excess information rate (EIR).

ITU Y.156sam defines three key test rates based on the MEF service attributes for Ethernet virtual circuit (EVC) and user-to-network interface (UNI) bandwidth profiles:

- CIR defines the maximum transmission rate for a service where the service is guaranteed certain performance objectives. These objectives are typically defined and enforced via SLAs.
- EIR defines the maximum transmission rate above the committed information rate, which is considered excess traffic. This excess traffic is forwarded as capacity allows and is not subject to meeting guaranteed performance objectives (best-effort forwarding).
- Overshoot rate defines a testing transmission rate above CIR or EIR and is used to ensure that the DUT or network under test (NUT) does not forward more traffic than specified by the CIR or EIR of the service.



These rates can be associated with color markings: Green traffic is equivalent to CIR Yellow traffic is equivalent to EIR Red traffic represents discarded traffic (overshoot CIR or overshoot EIR) ITU-T Y.156sam is built around two key subtests, the service configuration test and the service performance test, which are performed sequentially. Forwarding devices, such as switches, routers, bridges and network interface units, are the basis of any network as they interconnect segments. If a service is not correctly configured on any one of these devices within the end-to-end path, network performance can be greatly affected, leading to potential service outages and network-wide issues such as congestion and link failures. The service configuration test measures the ability of DUT or NUT to properly forward in three different states. In the CIR phase, where performance metrics for the service are measured and compared to the SLA performance objectives In the EIR phase, where performance is not guaranteed and the services transfer rate is measured to ensure that CIR is the minimum bandwidth In the discard phase, where the service is generated at the overshoot rate and the expected forwarded rate is not greater than the committed information rate or excess rate (when configured) As network devices come under load, they must prioritize one traffic flow over another to meet the KPIs set for each traffic class. With only one traffic class, no prioritization is performed by the network devices since there is only one set of KPIs. As the number of traffic flows increase, prioritization is necessary and performance failures may occur. The service performance test measures the ability of the DUT or NUT to forward multiple services while maintaining SLA conformance for each service. Services are generated at the CIR, where performance is guaranteed, and pass/fail assessment is performed on the KPI values for each service according to its SLA. Service performance assessment must also be maintained for a medium- to long-term period, as performance degradation will likely occur as the network is under stress for longer periods of time. The service performance test is designed to soak the network under a full committed load for all services, and to measure performance over medium and long test time. Y.156sam focuses on the following KPIs for service quality: Bandwidth: Bit rate of available or consumed data communication resources expressed in bits/second or multiples (kilobits/s, megabits/s). Frame transfer delay (FTD): Also known as latency, this is a measurement of the delay between the transmission and the reception of a frame. Typically this is a round-trip measurement, meaning that the calculation measures both the near-end to far-end and far-end to near-end direction simultaneously.

- Packet jitter: A measurement of the variation in the time delay between packet deliveries. As packets travel through a network to their destination, they are often queued and sent in bursts to the next hop. Prioritization may occur at random moments, also resulting in packets being sent at random rates. Packets are consequently received at irregular intervals. The direct result of jitter is stress on the receiving buffers of the end nodes, where buffers can be overused or underused when there are large swings of jitter.
- Frame loss: Typically expressed as a ratio, this is the number of packets lost over the total number of packets sent. Frame loss can result from a number of issues, such as network congestion or errors during transmission.
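As a worked illustration of the KPIs above, the following sketch derives FTD, jitter, and frame loss ratio from per-frame transmit/receive timestamps. The record layout and function name are assumptions made for this example, not a tester API; jitter is computed here as the simple delay variation between consecutive deliveries.

    # Hedged illustration: compute FTD, jitter, and frame loss ratio from
    # (tx_time, rx_time) pairs. rx_time is None for frames that never arrived.
    # The record layout and names are assumptions, not any tester's API.
    def service_kpis(records):
        delays = [rx - tx for tx, rx in records if rx is not None]
        lost = sum(1 for _, rx in records if rx is None)
        ftd_max = max(delays) if delays else None
        ftd_avg = sum(delays) / len(delays) if delays else None
        # Delay variation between consecutive deliveries (one simple jitter form).
        jitters = [abs(b - a) for a, b in zip(delays, delays[1:])]
        jitter_max = max(jitters) if jitters else 0.0
        loss_ratio = lost / len(records) if records else 0.0
        return {"ftd_max": ftd_max, "ftd_avg": ftd_avg,
                "jitter_max": jitter_max, "frame_loss_ratio": loss_ratio}

    # Example: three frames sent, one lost; times in seconds.
    print(service_kpis([(0.000, 0.0010), (0.001, 0.0022), (0.002, None)]))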

Target Users
NEMs (network equipment manufacturers), service providers

Target Device Under Test (DUT)


Any router or switch using Ethernet services

Reference
ITU-T Y.1564, MEF 10.2

Relevance
This test measures Ethernet SLAs in the network and verifies that a path can sustain the full committed SLA traffic load.

Version
1.0.

Test Category
BNCH

PASS
[x] Performance [x] Availability [ ] Security [ ] Scale


Required Tester Capabilities


To properly measure EtherSAM KPIs, the tester must have the following attributes:

- FPGA-based architecture. The precision and scalability of FPGAs permit deep scale while measuring precise timing.
- Ultra-low latency measurement. The tester must be able to test and measure down to 2.5 ns.
- Support for 10 Mbps to 100 Gbps Ethernet, including virtual ports.
- Ability to select RFC 4814 MAC addressing to ensure longest CAM lookups.
- True jitter measurement. Variable transmit rates simulating services such as video, while isolating the impact of the DUT on EVCs.
- True sequencing. Measure and differentiate loss from late, duplicate, reordered, and out-of-order packets.
- Simultaneous measurement of KPIs. Measure everything in one packet pass.
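To illustrate what "true sequencing" entails, here is a minimal sketch of one policy for separating genuine loss from duplicate and reordered arrivals. The function name, counters, and policy choices are assumptions for illustration, not a description of any tester's internals.

    # Hedged sketch: classify a received sequence-number stream so that loss,
    # duplicates, and reordering are counted separately.
    def classify_sequence(rx_seq, tx_count):
        seen, in_order, duplicates, reordered = set(), 0, 0, 0
        highest = -1
        for seq in rx_seq:
            if seq in seen:
                duplicates += 1          # same frame delivered twice
            elif seq < highest:
                reordered += 1           # arrived after a later frame
                seen.add(seq)
            else:
                in_order += 1
                highest = seq
                seen.add(seq)
        lost = tx_count - len(seen)      # never delivered at all
        return {"in_order": in_order, "duplicates": duplicates,
                "reordered": reordered, "lost": lost}

    # 5 frames sent; frame 2 arrives late, frame 4 never arrives, frame 0 repeats.
    print(classify_sequence([0, 1, 3, 2, 0], tx_count=5))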

Topology Diagram

Test Procedure
1. Select and reserve the left and right side ports, including virtual machine endpoints.
2. Set all ports to the Latency+Jitter mode, clear all counters, and turn off latency compensation on all ports.
3. Define multiple EVC Ethernet services. For each service specify:
   a. Service name and color.
   b. Service bandwidth units (frames/second, % load, Kbps, or Mbps).
   c. Whether the service is VLAN tagged.
      i. If VLAN tagged, specify the VLAN ID and PRI level. (Default PRI should be zero.)
      ii. Indicate whether to use RFC-4814 MAC addressing.
   d. Indicate whether the service is IPv4 only, IPv6 only, or both.
      i. Specify the starting IP address, subnet mask, and default gateway.
      ii. Specify the IPv4 DiffServ codepoint and/or IPv6 traffic class.
   e. Specify the Layer 4 header:
      i. UDP
      ii. Stateless TCP
      iii. Stateful TCP
      iv. Specify source and destination ports.
   f. Indicate how the service is to be tested:
      i. Loopback (generated and terminated on the same port)
      ii. Bi-directional (generated and terminated on different ports)
   g. Indicate the service KPIs to measure:
      i. CIR (required)
      ii. EIR (required)
      iii. Overshoot (required)
      iv. Maximum true packet loss count (required)
      v. Latency and jitter units:
         1. uSec (default)
         2. nSec
      vi. RFC-4689 absolute average jitter
      vii. RFC-4689 max jitter (required)
      viii. Maximum latency (required)
      ix. Average latency
      x. Maximum out-of-order packet count
      xi. Maximum delayed packet count
      xii. Maximum late packet count
      xiii. Define an AND or an OR (default is AND) to combine KPIs.
4. Map EVC services to test ports.
5. Define test case constants:
   a. Specify frame sizes to test across.
   b. Specify the host count iteration.
   c. Specify a per-iteration unit (frames or time) and value.
6. Run the service performance test (see the automation sketch following this procedure):
   a. For each frame size:
      i. For each host count:
         1. Clear counters and dynamic views.
         2. Set up all mapped services and set their rates to their respective CIR.
         3. Run traffic.
         4. Record each EVC Ethernet service's results, noting which services failed.
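The nested iteration of step 6 maps naturally onto a small automation harness. The following sketch shows only the loop structure; every helper on the tester object (clear_counters_and_views, configure_service, run_traffic, read_kpis) is a hypothetical placeholder, not a Spirent TestCenter API call.

    # Structural sketch of the service performance test loop (step 6).
    # All helpers on `tester` are hypothetical placeholders standing in for
    # whatever automation API drives the test equipment.
    def run_service_performance_test(services, frame_sizes, host_counts,
                                     duration_s, tester):
        results = []
        for frame_size in frame_sizes:
            for host_count in host_counts:
                tester.clear_counters_and_views()
                for svc in services:
                    # Each service transmits at its committed rate (CIR).
                    tester.configure_service(svc, frame_size=frame_size,
                                             hosts=host_count,
                                             rate=svc["cir_bps"])
                tester.run_traffic(duration_s)
                for svc in services:
                    kpis = tester.read_kpis(svc)
                    results.append({"service": svc["name"],
                                    "frame_size": frame_size,
                                    "hosts": host_count,
                                    "kpis": kpis,
                                    "passed": svc["kpi_check"](kpis)})
        return results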

Control Variables & Relevance


Variable | Relevance | Default Value
Left & right side ports | Pick the physical or virtual ports to be used | NA
Per-EVC service name and color | Name of up to 64 characters and a color picker to select the service color | Name "EVC Service 1", color blue
Per-EVC bandwidth | Rate unit, CIR, and EIR | Rate unit Mbps; default CIR 1 Mbps, EIR 100 Kbps
Per-EVC service VLAN tag | Whether the service is VLAN tagged and whether the test should use RFC-4814 MAC addressing | Yes, starting at VLAN 100, PRI 0; RFC-4814 MAC addressing (yes)
Per-EVC service IP information | IP type, starting IP address, mask, and gateway | IPv4; starting address 10.x.y.2 /24; default gateway 10.x.y.1
Per-EVC service QoS codepoint | QoS level | BE (0x00)
Per-EVC service Layer 4 header | Layer 4 header type and source/destination ports | UDP (source port 53, destination port 1024)
Per-EVC traffic orientation | Type of generation and termination | Bi-directional
Per-EVC KPIs (Key Performance Indicators) | Sequence of KPIs determines pass/fail for the iteration | Defaults: CIR (1 Mbps); EIR (100 Kbps); overshoot (100 Kbps); maximum packet loss count (0 packets); RFC-4689 absolute average jitter (50 nSec); RFC-4689 max jitter (200 nSec); maximum latency (1 uSec); average latency (700 nSec); out-of-order / delayed / late maximum packet count (0 packets); AND between each operator
Frame size iterations | Frame sizes to test across, with the option of IMIX | If UDP only: 64, 128, 256, 512, 768, 1024, 1280, 1518, 9022. If any TCP: 72, 128, 256, 512, 768, 1024, 1280, 1518, 9022. The user should be able to pick or define an IMIX pattern, but if TCP is used, the 72-byte minimum should be validated.
Hosts per EVC service iteration | Iterates a list of host counts | Default (list, count = 254); validate a minimum of 1 host per service
Total test duration | Time to run each iteration | Default (24 hours)
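The per-service defaults in this table translate directly into a configuration object. A minimal sketch, with field names chosen to mirror the table (they are assumptions, not a tester schema), and x = y = 1 picked arbitrarily for the 10.x.y.2 addressing:

    # Hedged sketch: the control-variable defaults above as a config object.
    from dataclasses import dataclass

    @dataclass
    class EvcServiceConfig:
        name: str = "EVC Service 1"
        color: str = "blue"
        cir_bps: int = 1_000_000          # committed rate: 1 Mbps
        eir_bps: int = 100_000            # excess rate: 100 Kbps
        vlan_id: int = 100
        vlan_pri: int = 0
        rfc4814_macs: bool = True
        ip_version: int = 4
        start_ip: str = "10.1.1.2"        # "10.x.y.2 /24" with x = y = 1
        prefix_len: int = 24
        gateway: str = "10.1.1.1"
        dscp: int = 0x00                  # best effort
        l4: str = "udp"
        src_port: int = 53
        dst_port: int = 1024
        orientation: str = "bidirectional"

    print(EvcServiceConfig())             # one service with all table defaults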

Key Measured Metrics


Metric | Relevance | Metric Unit
Service performance test | KPI adherence at CIR | Pass or fail

Desired Result
The DUT passes each EVC Ethernet service KPI at its respective CIR for the duration of the test.

Analysis
For each frame size and host count, a vertical line chart plots time on the X-axis, with a bar per EVC service indicating when the combined KPI expression failed, alongside a table of the recorded KPI data.
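The pass/fail verdict behind that chart is simply the AND/OR combination of per-KPI checks selected in step 3g of the procedure. One way it could be evaluated, with threshold names and structure assumed for illustration (every KPI is treated as an upper bound for simplicity):

    # Hedged sketch: combine per-KPI checks with the AND/OR operator chosen
    # in step 3g. Threshold structure and names are illustrative assumptions.
    def kpi_pass(measured, thresholds, combine="AND"):
        checks = [measured[name] <= limit for name, limit in thresholds.items()]
        return all(checks) if combine == "AND" else any(checks)

    measured = {"frame_loss": 0, "max_jitter_ns": 180, "max_latency_ns": 950}
    thresholds = {"frame_loss": 0, "max_jitter_ns": 200, "max_latency_ns": 1000}
    print(kpi_pass(measured, thresholds))   # True: every KPI within its limit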


Appendix A Telecommunications Definitions


APPLICATION LOGIC. The computational aspects of an application, including the list of instructions that tells a software application how to operate.

APPLICATION SERVICE PROVIDER (ASP). An ASP deploys, hosts and manages access to a packaged application by multiple parties from a centrally managed facility. The applications are delivered over networks on a subscription basis. This delivery model speeds implementation, minimizes the expenses and risks incurred across the application life cycle, and overcomes the chronic shortage of qualified technical personnel available in-house.

APPLICATION MAINTENANCE OUTSOURCING PROVIDER. Manages a proprietary or packaged application from either the customer's or the provider's site.

ASP INFRASTRUCTURE PROVIDER (AIP). A hosting provider that offers a full set of infrastructure services for hosting online applications.

ATM. Asynchronous Transfer Mode. An information transfer standard for routing high-speed, high-bandwidth traffic such as real-time voice and video, as well as general data.

AVAILABILITY. The portion of time that a system can be used for productive work, expressed as a percentage.

BACKBONE. A centralized high-speed network that interconnects smaller, independent networks.

BANDWIDTH. The number of bits of information that can move through a communications medium in a given amount of time; the capacity of a telecommunications circuit/network to carry voice, data, and video information. Typically measured in Kbps and Mbps. Bandwidth from public networks is typically available to business and residential end-users in increments from 56 Kbps to 45 Mbps.

BIT ERROR RATE. The proportion of transmitted bits expected to be corrupted when two computers have been communicating for a given length of time.

BURST INFORMATION RATE (BIR). The rate of information in bits per second that the customer may need over and above the CIR. A burst is typically a short-duration transmission that can relieve momentary congestion in the LAN or provide additional throughput for interactive data applications.

BUSINESS ASP. Provides prepackaged application services in volume to the general business market, typically targeting small to medium size enterprises.

BUSINESS-CRITICAL APPLICATION. The vital software needed to run a business, whether custom-written or commercially packaged, such as accounting/finance, ERP, manufacturing, human resources and sales databases.


BUSINESS SERVICE PROVIDER. Provides online services aided by brick-and-mortar resources, such as payroll processing and employee benefits administration, printing, distribution or maintenance services. The category includes business process outsourcing (BPO) companies.

COMMERCE NETWORK PROVIDER. Commerce networks were traditionally proprietary value-added networks (VANs) used for electronic data interchange (EDI) between companies. Today the category includes the new generation of electronic purchasing and trading networks.

COMPETITIVE ACCESS PROVIDER (CAP). A telecommunications company that provides an alternative to a LEC for local transport and special access telecommunications services.

CAPACITY. The ability of a network to provide sufficient transmitting capability among its available transmission media and to respond to customer demand for communications transport, especially at peak usage times.

CLIENT/DEVICE. Hardware that retrieves information from a server.

CLUSTERING. A group of independent systems working together as a single system. Clustering technology allows groups of servers to access a single disk array containing applications and data.

COMPUTING UTILITY PROVIDER (CUP). A provider that delivers computing resources, such as storage, database or systems management, on a pay-as-you-go basis.

CSU/DSU. Channel Service Unit/Data Service Unit. A device used to terminate a telephone company connection and prepare data for a router interface.

DATA MART. A subset of a data warehouse, intended for use by a single department or function.

DATA WAREHOUSE. A database containing copious amounts of information, organized to aid decision-making in an organization. Data warehouses receive batch updates and are configured for fast online queries to produce succinct summaries of data.

DEDICATED LINE. A point-to-point, hardwired connection between two service locations.

DEMARCATION POINT. The point at which the local operating company's responsibility for the local loop ends. Beyond the demarcation point (also known as the network interface), the customer is responsible for installing and maintaining all equipment and wiring.

DISCARD ELIGIBILITY (DE) BIT. Relevant in situations of high congestion, it indicates that the frame should be discarded in preference to frames without the DE bit set. The DE bit may be set by the network or by the user; once set, it cannot be reset by the network.

DS-1 OR T-1. A data communication circuit capable of transmitting data at 1.5 Mbps. Currently in widespread use by medium and large businesses for video, voice, and data applications.


DS-3 OR T-3. A data communications circuit capable of transmitting data at 45 Mbps. The equivalent data capacity of 28 T-1s. Currently used only by businesses/institutions and carriers for high-end applications.


ELECTRONIC DATA INTERCHANGE (EDI). The electronic communication of business transactions (orders, confirmations, invoices, etc.) between organizations with differing platforms. Third parties provide EDI services that enable the connection of organizations with incompatible equipment.

ENTERPRISE ASP. An ASP that delivers a select range of high-end business applications, supported by a significant degree of custom configuration and service.

ENTERPRISE RELATIONSHIP MANAGEMENT (ERM). Solutions that enable the enterprise to share comprehensive, up-to-date customer, product, competitor and market information to achieve long-term customer satisfaction, increased revenues, and higher profitability.

ENTERPRISE RESOURCE PLANNING (ERP). An information system or process integrating all manufacturing and related applications for an entire enterprise. ERP systems permit organizations to manage resources across the enterprise and completely integrate manufacturing systems.

ETHERNET. A local area network used to connect computers, printers, workstations, and other devices within the same building. Ethernet operates over twisted-pair and coaxial cable.

EXTENDED SUPERFRAME FORMAT. A T1 format that provides a method for easily retrieving diagnostic information.

FAT CLIENT. A computer that includes an operating system, RAM, ROM, a powerful processor and a wide range of installed applications that can execute either on the desktop or on the server to which it is connected. Fat clients can operate in a server-based computing environment or in a stand-alone fashion.

FAULT TOLERANCE. A design method that incorporates redundant system elements to ensure continued system operation in the event of the failure of any individual element.

FDDI. Fiber Distributed Data Interface. A standard for transmitting data on optical-fiber cables at a rate of about 100 Mbps.

FRAME. The basic logical unit in which bit-oriented data is transmitted. The frame consists of the data bits surrounded by a flag at each end that indicates the beginning and end of the frame. A primary rate can be thought of as an endless sequence of frames.

FRAME RELAY. A high-speed packet switching protocol popular in networks, including WANs, LANs, and LAN-to-LAN connections across long distances.

GBPS. Gigabits per second, a measurement of data transmission speed expressed in billions of bits per second.

HOSTED OUTSOURCING. Complete outsourcing of a company's information technology applications and associated hardware systems to an ASP.

HOSTING PROVIDER. A provider who operates data center facilities for general-purpose server hosting and collocation.

INFRASTRUCTURE ISV. An independent software vendor that develops infrastructure software to support the hosting and online delivery of applications.


INTEGRATED SERVICES DIGITAL NETWORK (ISDN). An information transfer standard for transmitting digital voice and data over telephone lines at speeds up to 128 Kbps.

INTEGRATION. Equipment, system, or subsystem integration: assembling equipment or networks for a specific function or task. Integration combines equipment/systems toward a common objective, with easy monitoring and/or command execution. Executing integration takes three disciplines: 1) hardware, 2) software, and 3) connectivity (transmission media, the data link layer, and interfacing components). All three aspects of integration have to be understood to make two or more pieces of equipment or subsystems support the common objective.

INTER-EXCHANGE CARRIER (IXC). A telecommunications company that provides telecommunication services between local exchanges on an interstate or intrastate basis.

INTERNET SERVICE PROVIDER (ISP). A company that provides access to the Internet for users and businesses.

INDEPENDENT SOFTWARE VENDOR (ISV). A company, not part of a computer systems manufacturer, that develops software applications.

INTERNETWORKING. Sharing data and resources from one network to another.

IT SERVICE PROVIDER. Traditional IT services businesses, including IT outsourcers, systems integrators, IT consultancies and value-added resellers.

KILOBITS PER SECOND (KBPS). A data transmission rate of 1,000 bits per second.

LEASED LINE. A telecommunications line dedicated to a particular customer along predetermined routes.

LOCAL ACCESS TRANSPORT AREA (LATA). One of approximately 164 geographical areas within which local operating companies connect all local calls and route all long-distance calls to the customer's inter-exchange carrier.

LOCAL EXCHANGE CARRIER (LEC). A telecommunications company that provides telecommunication services in a defined geographic area.

LOCAL LOOP. The wires that connect an individual subscriber's telephone or data connection to the telephone company central office or other local terminating point.

LOCAL/REGIONAL ASP. A company that delivers a range of application services, and often the complete computing needs, of smaller businesses in their local geographic area.

MEGABITS PER SECOND (MBPS). A data transmission rate of 1,000 kilobits (1,000,000 bits) per second.

METAFRAME. The world's first server-based computing software for Microsoft Windows NT 4.0 Server, Terminal Server Edition multi-user software (co-developed by Citrix).

MODEM. A device for converting digital signals to analog and vice versa, for data transmission over an analog telephone line.

MULTIPLEXING. The combining of multiple data channels onto a single transmission medium; sharing a circuit, normally dedicated to a single user, between multiple users.


MULTI-USER. The ability for multiple concurrent users to log on and run applications on a single server.

NET-BASED ISV. An ISV whose main business is developing software for Internet-based application services. This includes vendors who deliver their own applications online, either directly to users or via other service providers.

NETWORK ACCESS POINT (NAP). A location where ISPs exchange traffic.

NETWORK COMPUTER (NC). A thin-client hardware device that executes applications locally by downloading them from the network. NCs adhere to a specification jointly developed by Sun, IBM, Oracle, Apple and Netscape. They typically run Java applets within a Java browser, or Java applications within the Java Virtual Machine.

NETWORK COMPUTING ARCHITECTURE. A computing architecture in which components are dynamically downloaded from the network onto the client device for execution by the client. The Java programming language is at the core of network computing.

ONLINE ANALYTICAL PROCESSING (OLAP). Software that enables decision support via rapid queries to large databases that store corporate data in multidimensional hierarchies and views.

OPERATIONAL RESOURCE PROVIDER. Operational resources are external business services that an ASP might use as part of its own infrastructure, such as helpdesk, technical support, financing, or billing and payment collection.

OUTSOURCING. The transfer of components or large segments of an organization's internal IT infrastructure, staff, processes or applications to an external resource such as an ASP.

PACKAGED SOFTWARE APPLICATION. A computer program developed for sale to consumers or businesses, generally designed to appeal to more than a single customer. While some tailoring of the program may be possible, it is not intended to be custom-designed for each user or organization.

PACKET. A bundle of data organized for transmission, containing control information (destination, length, origin, etc.), the data itself, and error detection and correction bits.

PACKET SWITCHING. A network in which messages are transmitted as packets over any available route rather than as sequential messages over circuit-switched or dedicated facilities.

PEERING. The commercial practice under which nationwide ISPs exchange traffic without the payment of settlement charges.

PERFORMANCE. A major factor in determining the overall productivity of a system, performance is primarily tied to availability, throughput and response time.

PERMANENT VIRTUAL CIRCUIT (PVC). A PVC connects the customer's port connections, nodes, locations, and branches. All customer ports can be connected, resembling a mesh, but PVCs usually run between the host and branch locations.

POINT OF PRESENCE (POP). A telecommunications facility through which the company provides local connectivity to its customers.


PORTAL. A company whose primary business is operating a Web destination site, hosting content and applications for access via the Web.

REMOTE ACCESS. Connection of a remote computing device via communications lines such as ordinary phone lines or wide area networks to access distant network applications and information.

REMOTE PRESENTATION SERVICES PROTOCOL. A set of rules and procedures for exchanging data between computers on a network, enabling the user interface, keystrokes, and mouse movements to be transferred between a server and client.

RESELLER/VAR. An intermediary between software and hardware producers and end users. Resellers frequently add value (thus Value-Added Reseller) by performing consulting, system integration and product enhancement.

ROUTER. A communications device between networks that determines the best path for optimal performance. Routers are used in complex networks of networks, such as enterprise-wide networks and the Internet.

SCALABILITY. The ability to expand the number of users or increase the capabilities of a computing solution without making major changes to the systems or application software.

SERVER. The computer on a local area network that often acts as a data and application repository and that controls an application's access to workstations, printers and other parts of the network.

SERVER-BASED COMPUTING. A server-based approach to delivering business-critical applications to end-user devices, whereby an application's logic executes on the server and only the user interface is transmitted across a network to the client. Benefits include single-point management, universal application access, bandwidth-independent performance, and improved security for business applications.

SINGLE-POINT CONTROL. One of the benefits of the ASP model, single-point control helps reduce the total cost of application ownership by enabling widely used applications and data to be deployed, managed and supported at one location. Single-point control enables application installations, updates and additions to be made once, on the server, and then be instantly available to users anywhere.

SPECIALIST ASP. Provides applications that serve a specific professional or business activity, such as customer relationship management, human resources or Web site services.

SYSTEMS MANUFACTURER. Manufacturer of servers, networking equipment and client devices.

TELECOMS PROVIDER. Traditional and new-age telecommunications network providers (telcos).

THIN CLIENT. A low-cost computing device that accesses applications and/or data from a central server over a network. Categories of thin clients include Windows-Based Terminals (WBTs, which comprise the largest segment), X-Terminals, and Network Computers (NCs).


TOTAL COST OF OWNERSHIP (TCO). Model that helps IT professionals understand and manage the budgeted (direct) and unbudgeted (indirect) costs incurred for acquiring, maintaining and using an application or a computing system. TCO normally includes training, upgrades, and administration as well as the purchase price. Lowering TCO through single-point control is a key benefit of server-based computing.


TOTAL SECURITY ARCHITECTURE (TSA). A comprehensive, end-to-end architecture that protects the network.

TRANSMISSION CONTROL PROTOCOL/INTERNET PROTOCOL (TCP/IP). A suite of network protocols that allows computers with different architectures and operating system software to communicate over the Internet.

USER INTERFACE. The part of an application that the end user sees on the screen and works with to operate the application, such as menus, forms and buttons.

VERTICAL MARKET ASP. Provides solutions tailored to the needs of a specific industry, such as the healthcare industry.

VIRTUAL PRIVATE NETWORK (VPN). A secure, encrypted private connection across a shared public network, such as the Internet.

WEB HOSTING. Placing a consumer's or organization's web page or web site on a server that can be accessed via the Internet.

WIDE AREA NETWORK. Local area networks linked together across a large geographic area.

WINDOWS-BASED TERMINAL (WBT). Thin clients with the lowest cost of ownership, as there are no local applications running on the device. Standards are based on Microsoft's WBT specification, developed in conjunction with Wyse Technology, NCD, and other thin client companies.

