August 2013, Report *******
October 2013, Report 130815
Contents
1.0 Executive Summary ............................................................................................................... 3
2.0 About the Dell Networking S6000 10/40 GbE Switch ............................................................. 4
3.0 Methodology .......................................................................................................................... 5
3.1 Test Bed Diagram ............................................................................................................... 6
3.2 Hardware and Software Featured in Testing ....................................................................... 6
4.0 Performance Testing .............................................................................................................. 7
4.1 RFC 2544 Throughput ........................................................................................................ 7
4.2 RFC 2544 Latency .............................................................................................................. 9
4.3 RFC 2889 Fully Meshed Throughput ................................................................................ 13
4.4 RFC 2889 Fully Meshed Latency ...................................................................................... 15
4.5 RFC 3918 Layer 3 Multicast Throughput .......................................................................... 17
4.6 RFC 3918 Layer 3 Multicast Latency ................................................................................ 19
4.7 RFC 3918 Layer 3 Group Join Delay and Group Leave Delay .......................................... 20
5.0 Scalability Test ..................................................................................................................... 22
6.0 Power Consumption and Efficiency Test .............................................................................. 23
7.0 VDI Scalability Testing ......................................................................................................... 24
8.0 Features............................................................................................................................... 27
8.1 Tool-less and Hot-swappable Maintenance ...................................................................... 27
8.2 Default Configuration ........................................................................................................ 27
Page 2
DR130815
8 October 2013
1.0 Executive Summary

• Easily transmits frame sizes of 128 to 12000 bytes at full line rate with low latency and no loss in RFC 2544, 2889 and 3918 testing
• Verified table capacities support high port density and high performance: 16,384 IPv4 routes, 163,836 MAC addresses and 52,251 ARP addresses
• RFC 2544 Layer 2 performance testing validated full line rate throughput of 2.56 Tbps with all ports fully loaded and a forwarding rate of 1,464,007,507 frames per second (fps)
• Energy consumption ranged from 0.25 watts per Gbps for the smallest frame size tested, 64 bytes, to 0.12 watts per Gbps for the largest frame size tested, 12000 bytes
• Redundant power supplies and hot-swappable cooling fans and hard drives simplify ongoing maintenance
With a base configuration of 32 ports of 40 GbE QSFP+, the switch can play an important role as
a spine switch in a leaf-spine architecture that is the foundation of a cloud-based environment. It
also can help to connect physical hardware and virtual machines in a virtualized environment.
The S6000 also can be configured with 96 ports of 10 GbE and eight additional ports of 40 GbE, which provides a migration path as speeds in the network core approach 40 Gbps.
Miercom was impressed with the S6000, which exhibited high performance and low latency in performance testing while operating in store-and-forward mode and running two different versions of the FTOS firmware: pre-release 9.0 (2.28) and production 9.0 (2.0). It also exhibited a high level of scalability in a VDI environment.
The Dell S6000 Top-of-Row/End-of-Rack switch operating in store-and-forward mode has earned the Miercom Performance Verified certification.
Rob Smithers
CEO
Miercom
2.0 About the Dell Networking S6000 10/40 GbE Switch

• Aggregation switch for enterprise LAN serving mid-sized and large customers or handling high-frequency financial trading, Web 2.0, big data and other heavy workload operations
• Traditional Ethernet switch with redundant connections to 10 GbE rack and blade servers
The S6000 delivers high performance, 2.56 Tbps of switching I/O bandwidth in full duplex mode,
from a compact 1U form factor, which conserves rack space. The MTU verified in testing is
12000 bytes, a super jumbo frame size.
The primary configuration is 32 ports of 40 GbE QSFP+. An alternate configuration, 96 ports of
10 GbE and eight additional ports of 40 GbE, can create a pathway for the migration of speed in
the network core to 40 Gbps. Configuration of the FTOS switch firmware is done via the CLI. The
default forwarding mode is store-and-forward.
Large tables support the high port density and high performance of the S6000. Testing verified
the following capacities: IPv4 routing table, 16,384; MAC address table, 163,836; and ARP
address table, 52,251. All are beyond the vendor-stated capacity.
Priority-Based Flow Control (PFC), Data Center Bridge Exchange (DCBX) and Enhanced
Transmission Selection (ETS) make the S6000 a good fit for the Data Center Bridging (DCB)
environment and iSCSI storage networking.
Layer 2 multi-path support via Virtual Link Trunking (VLT) is a key feature. A proprietary Layer 2
link aggregation protocol, VLT offers servers connected to different access switches a redundant,
load-balancing connection to the network core in a loop-free environment that has benefits
beyond that of Spanning Tree Protocol.
The S6000 also supports Multi-domain Virtual Link Trunking (mVLT), a proprietary Dell design that allows multiple VLT domains to be linked with a VLT LAG. VLT and mVLT enable the S6000 to be positioned at the core-aggregation layer and to serve as a Layer 2 top-of-rack, core or aggregation switch. The combination also provides a robust multi-chassis link aggregation (LAG) feature that permits the switch infrastructure to maintain high availability even during chassis upgrades.
Tool-less mounting kits, redundant power supplies and hot-swappable hard drives and cooling
fans reduce the time and labor needed to install and maintain the S6000.
3.0 Methodology
The Dell Networking S6000 switch was evaluated in testing running a pre-release version of the FTOS firmware, 9.0 (2.28), as well as a production version, 9.0 (2.0).

In performance testing, throughput and latency were measured in accordance with the RFC 2544, 2889 and 3918 benchmarking methodologies. Layer 2 and/or Layer 3 traffic was utilized in all performance tests, and each reported result is the average of three test runs.

The RFC 2889 test was used to verify the throughput and latency of fully meshed traffic. The RFC 3918 test was used to verify the throughput and latency of Layer 3 IPv4 multicast traffic.

RFC 2544, 2889 and 3918 latency values were verified utilizing 10 GbE and 40 GbE ports of the Dell S6000, which was configured in store-and-forward mode.
In addition, scalability testing verified the IPv4 route capacity and the capacity of the MAC and
ARP tables.
Power consumption was monitored during booting and idling as well as at full line rate with different frame sizes. The S6000 switch supports fiber optic cable, but does not support the Energy-Efficient Ethernet (EEE) standard, IEEE 802.3az.
The Ixia XM12 chassis running the Ixia IxNetwork application drove network traffic through the
S6000 switch in RFC 2544 and 2889 testing. The combination of the XM12 and the IxAutomate
application was used in the RFC 3918 testing.
Ixia (www.ixiacom.com) is an industry leader in performance testing of networking equipment. Ixia's exclusive approach coordinates energy measurements with network traffic load, allowing energy consumption to be charted against network traffic volume. Real-world traffic is generated by Ixia's test platform and test applications, principally the IxAutomate application for Layer 2 and Layer 3 switching and routing traffic.
The Ixia XM12 also was used to determine the capacity of the MAC address table and the IPv4
routing table.
The BreakingPoint Firestorm chassis was used to determine the capacity of the ARP address table. The BreakingPoint Firestorm can saturate Class B subnets by injecting traffic that simulates thousands of servers and clients with unique IP addresses and MAC addresses. It is possible to configure port numbers for traffic injection as well as total and per-port bandwidth, session counts, and end device counts. The BreakingPoint Firestorm is managed via multiple interfaces, including a Web-based graphical user interface and an RS-232C or SSL CLI interface.
The peer mapping function of the WildPackets OmniPeek network analyzer was used to analyze
the characteristics of the VDI traffic.
3.1 Test Bed Diagram

[Diagram: test bed with the Dell S6000 under test, an Ixia XM12 and a BreakingPoint FireStorm as traffic generators.]

3.2 Hardware and Software Featured in Testing

Hardware
Name                      Function
Dell S6000                Spine switch
Ixia XM12                 Traffic generator
BreakingPoint FireStorm   Traffic generator

Software
Name                      Function                          Version
Ixia IxNetwork                                              7.0.801.25 EA
Ixia IxAutomate                                             7.40.123.5 GA-Patch1
Ixia IxExplorer           Power Consumption Testing Tool    6.40.900 Build 6
WildPackets OmniPeek      Network Analyzer                  7.5
4.0 Performance Testing

4.1 RFC 2544 Throughput

The test results show the maximum throughput the switch is able to achieve without any frame loss. In addition, a latency value is captured for each frame size tested.
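RFC 2544 finds this zero-loss rate by iteratively narrowing the offered load. Below is a minimal sketch of that binary-search procedure, assuming a hypothetical `send_at(rate_percent, duration)` hook into the traffic generator that returns the number of frames lost in a trial; the hook and its parameters are illustrative, not part of the Ixia API:

```python
def rfc2544_throughput(send_at, duration=60, lo=0.0, hi=100.0, resolution=0.1):
    """Binary-search the highest offered load (% of line rate) with zero frame loss.

    send_at(rate_percent, duration) runs one trial and returns the frame-loss count.
    """
    best = 0.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if send_at(mid, duration) == 0:
            best = lo = mid   # zero loss: record this rate and probe higher
        else:
            hi = mid          # loss observed: back off
    return best

# Example with a simulated device that starts dropping frames above 99.5% load:
simulated = lambda rate, duration: 0 if rate <= 99.5 else 1
print(rfc2544_throughput(simulated, duration=1))  # prints 99.4140625
```

In practice the trial duration, number of repetitions and search resolution are all test-plan parameters; the report's results average three runs per frame size.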
[Figure: RFC 2544 throughput, Layer 2 and Layer 3 traffic, frame sizes from 128 to 12000 bytes.]
The Dell S6000 exhibited line rate throughput for Layer 2 and Layer 3 traffic using the RFC 2544
benchmarking methodology. The minimum frame size at which the switch handled 100% line rate
throughput was 90 bytes. The maximum was a super jumbo frame size, 12000 bytes. Testing verified a
forwarding rate for 64-byte packets of 1,464,007,507 frames per second (fps) for the switch. 96 x 10 GbE
and 8 x 40 GbE ports were used in testing.
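The reported forwarding rate can be put in context with the theoretical maximum frame rate of an Ethernet link, which follows from the frame size plus a fixed 20 bytes of per-frame overhead (8-byte preamble/SFD and 12-byte inter-frame gap). This is a back-of-the-envelope sketch, not part of the test methodology:

```python
def max_frame_rate(line_rate_bps: float, frame_bytes: int) -> float:
    """Theoretical maximum frames per second at full line rate.

    Every frame on the wire also occupies an 8-byte preamble/SFD and a
    12-byte inter-frame gap, i.e. 20 bytes of overhead per frame.
    """
    overhead_bytes = 8 + 12
    return line_rate_bps / ((frame_bytes + overhead_bytes) * 8)

# A single 10 GbE port at 64-byte frames:
print(round(max_frame_rate(10e9, 64)))  # prints 14880952
```

This is the familiar 14.88 Mfps figure for 64-byte frames on 10 GbE; scaling it across a port mix gives the aggregate rate a generator must sustain.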
4.2 RFC 2544 Latency

[Figure: RFC 2544 Layer 2 latency (µs), maximum, average and minimum, for frame sizes from 64 to 12000 bytes on 96 x 10 GbE ports.]
The Dell S6000 Switch exhibited consistently low latency values in the RFC 2544 Layer 2 Latency Test that utilized 96 x 10 GbE ports. Average latency ranged from a low of 0.70 µs for 128-byte frames to a high of 0.81 µs for 1024-byte frames. The switch was configured in store-and-forward mode and was tested with an Ixia XM12 using RFC standard benchmark test suites.
[Figure: RFC 2544 Layer 3 latency (µs), maximum, average and minimum, for frame sizes from 74 to 12000 bytes on 96 x 10 GbE ports.]
The Dell S6000 Switch exhibited consistently low latency values in the RFC 2544 Layer 3 Latency Test that utilized 96 x 10 GbE ports. Average latency ranged from a low of 0.69 µs for 128-byte frames to a high of 0.80 µs for 1024-byte frames. The switch was configured in store-and-forward mode and was tested with an Ixia XM12 using RFC standard benchmark test suites.
[Figure: RFC 2544 Layer 2 latency (µs), maximum, average and minimum, for frame sizes from 128 to 11982 bytes on 32 x 40 GbE ports.]
The Dell S6000 Switch exhibited consistently low latency values in the RFC 2544 Layer 2 Latency Test that utilized 32 x 40 GbE ports. Average latency ranged from a low of 0.565 µs for 128-byte frames to a high of 0.593 µs for 9216-byte frames. The switch was configured in store-and-forward mode and was tested with an Ixia XM12 using RFC standard benchmark test suites.
[Figure: RFC 2544 Layer 3 latency (µs), maximum, average and minimum, for frame sizes from 128 to 11982 bytes on 32 x 40 GbE ports.]
The Dell S6000 Switch exhibited consistently low latency values in the RFC 2544 Layer 3 Latency Test that utilized its 32 x 40 GbE ports. Average latency ranged from a low of 0.560 µs for 128-byte frames to a high of 0.594 µs for 9216-byte frames. The switch was configured in store-and-forward mode and was tested with an Ixia XM12 using RFC standard benchmark test suites.
4.3 RFC 2889 Fully Meshed Throughput

The maximum throughput the switch can achieve without any frame loss is verified. In addition, a latency value is captured for each frame size tested.
[Figure: RFC 2889 fully meshed throughput, Layer 2 and Layer 3 traffic, frame sizes from 128 to 12000 bytes.]
The Dell S6000 switch exhibited line rate throughput for Layer 2 and Layer 3 traffic using the RFC 2889 benchmarking methodology. The S6000 achieved 100% line rate throughput for Layer 2 and Layer 3 fully meshed traffic for all frame sizes from 128 bytes to 12000 bytes. An Ixia XM12 using RFC standard benchmark suites conducted the tests, which utilized 96 x 10 GbE ports on the switch.
4.4 RFC 2889 Fully Meshed Latency

[Figure: RFC 2889 fully meshed Layer 2 latency (µs), maximum, average and minimum, for frame sizes from 64 to 12000 bytes on 96 x 10 GbE ports.]
The Dell S6000 Switch exhibited consistently low latency values in the RFC 2889 Fully Meshed Layer 2 Latency Test. Average latency ranged from a low of 0.70 µs for 128-byte frames to a high of 0.81 µs for 1024- and 9216-byte frames. The S6000 was configured in store-and-forward mode. An Ixia XM12 using RFC standard benchmark suites conducted the test, which utilized 96 x 10 GbE ports on the switch.
[Figure: RFC 2889 fully meshed Layer 3 latency (µs), maximum, average and minimum, for frame sizes from 74 to 12000 bytes on 96 x 10 GbE ports.]
The Dell S6000 Switch exhibited consistently low latency values in the RFC 2889 Fully Meshed Layer 3 Latency Test. Average latency ranged from a low of 0.70 µs for 128-byte frames to a high of 0.81 µs for 1024-byte frames. The S6000 was configured in store-and-forward mode. An Ixia XM12 using RFC standard benchmark suites conducted the test, which utilized 96 x 10 GbE ports on the switch.
4.5 RFC 3918 Layer 3 Multicast Throughput

The S6000 successfully transmitted traffic to all multicast member ports at 100% line rate for frame sizes ranging from 68 to 12000 bytes.

Testing verified that the switch was capable of snooping the multicast groups and then properly transmitting multicast traffic at 100% line rate with zero loss to each multicast group member.
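Conceptually, snooping amounts to maintaining a per-group set of member ports and replicating multicast frames only to those ports. The following is a minimal illustrative sketch of that state; the names and structures are hypothetical, not the S6000's implementation:

```python
from collections import defaultdict

# Multicast group address -> set of member port numbers.
members: dict[str, set[int]] = defaultdict(set)

def on_igmp_join(group: str, port: int) -> None:
    """A snooped Join adds the receiving port to the group's member set."""
    members[group].add(port)

def on_igmp_leave(group: str, port: int) -> None:
    """A snooped Leave removes the port, so traffic to it stops."""
    members[group].discard(port)

def egress_ports(group: str) -> set[int]:
    """Multicast traffic for `group` is replicated only to these ports."""
    return members[group]

on_igmp_join("225.1.1.1", 7)
on_igmp_join("225.1.1.1", 9)
print(egress_ports("225.1.1.1"))  # a set containing ports 7 and 9
```

The throughput test then verifies that every port in each group's member set receives the full offered load with zero loss.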
[Figure: RFC 3918 Layer 3 multicast throughput (% line rate) for frame sizes from 68 to 12000 bytes.]
The Dell S6000 switch exhibited line rate throughput for Layer 3 IPv4 traffic using RFC 3918 standard
tests. The S6000 achieved 100% line rate throughput for Layer 3 IPv4 multicast traffic for all frame sizes
from 68 bytes to 12000 bytes. An Ixia XM12 conducted the test, which utilized 96 x 10 GbE ports on
the S6000.
4.6 RFC 3918 Layer 3 Multicast Latency

[Figure: RFC 3918 Layer 3 multicast latency (µs), maximum, average and minimum, for frame sizes from 68 to 12000 bytes.]
The Dell S6000 Switch exhibited consistently low latency values in the RFC 3918 Multicast Latency Test. Average latency ranged from a low of 0.72 µs for 68- and 128-byte frames to a high of 0.83 µs for 1024-byte frames. The S6000 was configured in store-and-forward mode. An Ixia XM12 using RFC standard benchmark suites conducted the test, which utilized 96 x 10 GbE ports of the switch.
4.7 RFC 3918 Layer 3 Group Join Delay and Group Leave Delay
The Group Join Delay Test determines how long it takes a switch to register multicast clients to a
new or existing group in its forwarding table.
The duration between the time a switch receives a group of IGMP/MLD Join requests and the
time the multicast clients begin receiving traffic for the groups they joined is measured. The
impact of different frame sizes on the duration is recorded.
The Group Leave Delay Test determines how long it takes a switch to remove a client from its
multicast table.
The duration between the time a switch receives a group of IGMP/MLD Leave requests and the
time the multicast clients stop receiving traffic for the groups they left is measured. The impact of
different frame sizes on the duration is recorded.
[Diagram: RFC 3918 Multicast Group Join Delay and Group Leave Delay test configuration.]
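The join-delay measurement described above reduces to timestamp arithmetic: the delay is the gap between sending a Join and receiving the first frame for that group. A minimal sketch using hypothetical sample records, not the Ixia test API:

```python
from dataclasses import dataclass

@dataclass
class JoinSample:
    join_sent: float   # time the IGMP/MLD Join was transmitted (seconds)
    first_rx: float    # time the first frame for the joined group arrived (seconds)

def mean_join_delay_us(samples: list[JoinSample]) -> float:
    """Average group join delay across receivers, in microseconds."""
    return sum(s.first_rx - s.join_sent for s in samples) / len(samples) * 1e6
```

Group leave delay is measured symmetrically, as the gap between the Leave request and the last frame received for the group.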
In the Group Join Delay Test, the S6000 exhibited a gradual increase in latency that
corresponded with the increase in frame size.
Frame Size (bytes)    Group Join Delay (ns)
68                    220.00
128                   223.89
256                   237.22
512                   261.11
1024                  306.11
1280                  329.44
1518                  352.67
In the Leave Delay Test, the S6000 exhibited a fractional decrease in latency
as the frame size increased.
Frame Size (bytes)    Group Leave Delay (s)
68                    30.13
128                   30.06
256                   30.01
512                   29.99
1024                  29.97
1280                  29.97
1518                  29.96
With all receivers subscribed to nine multicast groups, the average Group Join Delay
latency of the S6000 was 275.63 nanoseconds compared to an average Group
Leave Delay latency of 30.01 seconds.
Testing verified the maximum multicast group capacity of the S6000 to be the
vendor-stated figure, 8,000.
5.0 Scalability Test

Table           Verified Capacity
IPv4 Routing    16,384
MAC Address     163,836
ARP Address     52,251
6.0 Power Consumption and Efficiency Test

[Table: power consumption (Watts) and efficiency (Watts/Gbps) by frame size.]

Greater energy consumption is required for a switch to transmit smaller frames. In testing, the S6000 consumed the most energy per Gbps transmitting the smallest frame size, 0.25 watts per Gbps for 64 bytes. The least energy per Gbps was required to transmit the largest frame size, 0.12 watts per Gbps for 12,000 bytes.
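The efficiency metric is simply measured power divided by offered traffic. A one-line sketch, with made-up wattage and throughput values chosen only to illustrate the unit; these are not the measured S6000 numbers:

```python
def efficiency_w_per_gbps(power_watts: float, throughput_gbps: float) -> float:
    """Energy efficiency: watts of draw per Gbps of traffic forwarded."""
    return power_watts / throughput_gbps

# Hypothetical example: a 320 W draw while forwarding 1,280 Gbps of traffic.
print(efficiency_w_per_gbps(320.0, 1280.0))  # prints 0.25
```

Because small frames carry less payload per unit of switching work, the same power draw yields fewer Gbps and therefore a worse (higher) watts-per-Gbps figure.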
7.0 VDI Scalability Testing

This VDI traffic distribution is the model on which the traffic distribution used in scalability testing was based. It has a point-to-multipoint appearance.

The VMware designation of a Power User (standard) was selected for the Horizon View clients. It is the third of four user types in ascending order in the VMware Horizon View Architecture Planning Guide. Characteristics include a compute-intensive usage level and a virtual machine configuration of 1 vCPU and 2 GB RAM.
Miercom projected prior to testing that in order to accommodate a virtual desktop environment of
10,000 Power Users (standard), the S6000 would have to maintain at least 20 Gbps of traffic.
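That projection works out to 2 Mbps of sustained traffic per session; the per-user figure below is inferred from the report's numbers, not a value stated by VMware or Miercom:

```python
users = 10_000
per_user_mbps = 2.0           # inferred per-session bandwidth assumption
required_gbps = users * per_user_mbps / 1_000
print(required_gbps)  # prints 20.0
```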
To analyze the characteristics of the VDI traffic, the peer mapping function of the WildPackets
OmniPeek network analyzer was used. The Ixia XM12 injected fully meshed, custom IMIX traffic
that was equal to or more than that needed to support 10,000 VDI sessions.
[Diagram: traffic flow configuration path of fully meshed traffic between the Ixia XM12 traffic generator and a generic Device Under Test (DUT).]
The distribution of frame sizes of VDI traffic generated by the Ixia XM12 and handled by the S6000 is shown below. Because the VMware default frame size is 1300 bytes, 1024-1518-byte frames account for the largest share of the traffic distribution.
[Chart: VDI traffic frame-size distribution across the ranges 64-127, 128-255, 256-511, 512-1023 and 1024-1518 bytes (shares of 15.7%, 25.7%, 5.7%, 4.7% and 48.3%; the 1024-1518 byte range is the largest at 48.3%).]
VDI traffic frame-size distribution used in scalability testing for 10,000 users.
Note that large packets, 1024-1518 bytes, make up nearly half of the distribution.
Using just seven 10 GbE ports, the capacity of the S6000 was verified to be 63.2 Gbps of fully
meshed traffic at 99.5% line rate with low latency. There was no frame loss and no network
anomalies. See the table on the following page.
The near-theoretical maximum of traffic throughput for these seven ports was achieved, far
exceeding the amount needed to support a 10,000-user VDI environment. The slight difference
from 100% line rate is attributable to the inter-frame gap (IFG) for the custom IMIX traffic
distribution used in testing.
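The IFG overhead can be quantified: each frame occupies its own bytes plus a fixed 20 bytes of preamble/SFD and inter-frame gap on the wire, so the achievable fraction of raw line rate rises with frame size. An illustrative sketch, not a figure from the report:

```python
def wire_efficiency(frame_bytes: int) -> float:
    """Fraction of raw line rate carried as frames, given the fixed
    20 bytes (8-byte preamble/SFD + 12-byte inter-frame gap) per frame."""
    return frame_bytes / (frame_bytes + 20)

# Larger frames amortize the fixed per-frame gap better:
print(round(wire_efficiency(64), 3))    # prints 0.762
print(round(wire_efficiency(1518), 3))  # prints 0.987
```

For a weighted IMIX like the VDI distribution above, the achievable rate is the share-weighted combination of these per-size efficiencies.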
Ports        Traffic Type    Line Rate (%)    Throughput (Gbps)
7 x 10 GbE   Fully Meshed    99.5             63.20
8.0 Features
8.1 Tool-less and Hot-swappable Maintenance
Once a spine switch is installed in a data center, shutting it down or performing emergency maintenance on it is not easy. Many switches and other pieces of network equipment have removable, redundant power supplies and hot-swappable hard drives and cooling fans.
The S6000 has six cooling fans that reside on a hot-swappable tray with a quick-remove tab.
This is an advantage that shortens the mean time to repair (MTTR).
About Miercom
Miercom has hundreds of product comparison analyses published in leading network trade periodicals including Network World, Business Communications Review - NoJitter, Communications News, xchange, Internet Telephony and other leading publications. Miercom's reputation as the leading, independent product test center is unquestioned.

Miercom's private test services include competitive product analyses, as well as individual product evaluations. Miercom features comprehensive certification and test programs including: Certified Interoperable, Certified Reliable, Certified Secure and Certified Green. Products may also be evaluated under the NetWORKS As Advertised program, the industry's most thorough and trusted assessment for product usability and performance.