Executive Summary
The computing industry is experiencing growing demand for storage performance and bandwidth, driven by rising virtual machine density, increasing application performance requirements and continual data growth. Fibre Channel storage area networks (SANs) carry the bulk of storage traffic in the enterprise data center and are beginning to feel the stress of these increased demands.
In many cases, enterprises are currently constrained by the available bandwidth between the servers
and storage, or foresee a constraint as they observe their growing data consumption patterns. The
IBM Emulex 16Gb Fibre Channel (16GFC) host bus adapter (HBA) addresses these increasing
demands on storage performance by providing double the bandwidth of previous generation Fibre
Channel HBAs.
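As a rough sanity check on the "double the bandwidth" claim, the sketch below works through the underlying arithmetic in Python. The line rates and encodings are the published Fibre Channel values; the usable figures in the comments are the commonly quoted approximations after protocol overhead.

    # Back-of-the-envelope arithmetic behind the "double the bandwidth" claim.
    # 8GFC signals at 8.5 GBaud with 8b/10b encoding; 16GFC signals at
    # 14.025 GBaud with the more efficient 64b/66b encoding.

    def payload_mb_per_sec(gbaud, encoding_efficiency):
        """One-way payload bandwidth in MB/s, before framing overhead."""
        return gbaud * 1000.0 * encoding_efficiency / 8.0  # bits -> bytes

    print(payload_mb_per_sec(8.5, 8 / 10))      # 8GFC:  850.0 (~800 MB/s usable)
    print(payload_mb_per_sec(14.025, 64 / 66))  # 16GFC: 1700.0 (~1600 MB/s usable)

The ratio of the two results is exactly 2.0, which is where the doubling comes from.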
Demartek deployed an IBM System x3650 M4 server with the IBM Emulex 16GFC HBA
(81Y1662) and connected this server to an all-flash storage array with four 8GFC host ports in the
Demartek lab in Colorado. We ran a read-intensive data warehousing workload to determine if this
type of workload could take advantage of the increased bandwidth and performance that 16GFC
provides. We repeated the database workload test with the previous-generation IBM Emulex 8GFC HBA and compared the results.
Key Findings
We found that for this database workload, the 16GFC HBA exceeded the performance of the 8GFC HBA and provided the additional bandwidth the database workload needed, allowing the job to complete in less time.
VM Density
When Demartek presents to users about next-generation storage networking technologies at various
industry events, we usually ask the audience of primarily technical users and first-line managers a
few questions about their environments. Among the responses are that virtual machine (VM)
density has been increasing over the last few years, with higher numbers of guest operating systems
running on one physical server than in the past. We expect this trend to continue.
8GFC Saturation
During the year 2012, when we asked the end-users in our audiences about saturation of Fibre
Channel links, we consistently heard from a few users who indicated that they had saturated their
8GFC links and needed something faster. The applications consistently identified as needing this
higher bandwidth are database applications, regardless of the brand of database. These include
single database instances running on physical hardware, multiple database instances running on
physical hardware and multiple database instances running in VMs. These users are generally
looking for something compatible with their existing infrastructure but that provides higher
bandwidth to meet their growing demands.
SSD
Solid State Disk (SSD) technology is another driver of bandwidth growth. Although SSD adoption is still relatively early in its deployment cycle, we have found that those who deploy any form of SSD technology in the enterprise experience significant storage performance improvements. Many of these SSD deployments are in SAN environments, which drive up storage networking bandwidth consumption. Based on comments from users and many of the tests we have performed in our own lab, we have concluded that SSD technology and faster storage networking technology such as 16GFC are well suited to each other.
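As a rough illustration of why the two technologies pair well, the sketch below estimates how quickly aggregate SSD read bandwidth outruns a single Fibre Channel port. The per-SSD throughput figure is an assumed round number for illustration, not a value measured in this report.

    # Illustrative only: the per-SSD rate below is an assumption,
    # not a measurement from this report.
    ASSUMED_SSD_READ_MBPS = 400   # hypothetical sequential read rate per SSD
    USABLE_8GFC_MBPS = 800        # approximate one-way usable bandwidth, 8GFC
    USABLE_16GFC_MBPS = 1600      # approximate one-way usable bandwidth, 16GFC

    for ssds in (1, 2, 4, 8):
        aggregate = ssds * ASSUMED_SSD_READ_MBPS
        print(f"{ssds} SSDs -> {aggregate} MB/s | "
              f"exceeds 8GFC: {aggregate > USABLE_8GFC_MBPS} | "
              f"exceeds 16GFC: {aggregate > USABLE_16GFC_MBPS}")

Under these assumptions, a handful of SSDs is enough to saturate an 8GFC port, while 16GFC leaves headroom.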
Windows Server 2012 Hyper-V
Hyper-V in Windows Server 2012 supports host servers with up to 4 TB of physical memory. It supports 64 virtual processors, along with 1 TB of memory, per VM. This enables virtualization environments that were not previously possible. When coupled with today's newer server hardware environments and new technologies such as 16GFC, much heavier workloads can be supported.
A new feature for Windows Server 2012 Hyper-V is the support for virtual Fibre Channel, also
known as Synthetic FC. This allows guest VMs to connect directly to Fibre Channel storage LUNs,
allowing guests to take advantage of existing Fibre Channel infrastructure. This includes the ability
for guest operating systems to be clustered over Fibre Channel. In order to take advantage of this
feature, newer Fibre Channel HBAs that support virtual Fibre Channel are required. The IBM
Emulex 16GFC HBA supports this feature and provides up to four virtual Fibre Channel ports per
VM. Also required for virtual Fibre Channel is NPIV in the switch and HBA, which the IBM
Emulex Fibre Channel HBA supports. Hyper-V in Windows Server 2012 supports the use of multi-
path I/O (MPIO) and virtual SANs, both of which are also supported by the IBM Emulex 16GFC
HBA.
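To make these configuration rules concrete, here is a small illustrative model in Python; it is not the Hyper-V or Emulex API, and all names in it are hypothetical. It encodes the two constraints just described: up to four virtual Fibre Channel ports per VM, and NPIV support required in both the HBA and the switch.

    # Illustrative model of the virtual Fibre Channel rules described above.
    # NOT a real Hyper-V or HBA API; all names here are hypothetical.
    from dataclasses import dataclass, field

    MAX_VFC_PORTS_PER_VM = 4  # per-VM virtual port limit noted above

    @dataclass
    class FabricDevice:
        name: str
        supports_npiv: bool

    @dataclass
    class GuestVM:
        name: str
        vfc_ports: list = field(default_factory=list)

    def add_vfc_port(vm, hba, switch):
        """Attach one more virtual FC port to the VM if the fabric allows it."""
        if not (hba.supports_npiv and switch.supports_npiv):
            return False  # virtual FC requires NPIV in both the HBA and switch
        if len(vm.vfc_ports) >= MAX_VFC_PORTS_PER_VM:
            return False  # per-VM virtual port limit reached
        vm.vfc_ports.append(f"vfc-port-{len(vm.vfc_ports) + 1}")
        return True

    hba = FabricDevice("IBM Emulex 16GFC HBA", supports_npiv=True)
    switch = FabricDevice("FC switch", supports_npiv=True)
    vm = GuestVM("sql-guest")

    while add_vfc_port(vm, hba, switch):
        pass
    print(vm.vfc_ports)  # stops at four ports, the per-VM maximum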
When we discuss storage networks with enterprise users, we find that Fibre Channel is still the
dominant storage interface in large-scale data centers, and is expected to remain dominant as a
SAN interface for the foreseeable future.
The IBM Emulex 16GFC adapters provide several features designed to support enterprise I/O workloads:
Twice the performance of 8GFC adapters
Backward compatible with 4GFC and 8GFC infrastructure
Support for Windows Server 2008 and 2012 with and without Hyper-V, VMware ESX and
ESXi, Red Hat Linux, SUSE Linux Enterprise Server (SLES)
In-box drivers for Windows Server 2012 and VMware vSphere 5.1
N_Port ID Virtualization (NPIV) support standard
An IBM-branded solution which has undergone extensive IBM interoperability testing for
connecting System x servers into storage and networking environments
Storage Array
Nimbus Data S-Class, 4x 8GFC host ports
24x 100GB 6Gb SAS SSD, configured as RAID0
The data warehouse workload generates I/O that varies in intensity as each query executes. This is unlike a synthetic benchmark, which performs the same I/O operations repeatedly, producing relatively steady I/O rates that, although potentially faster, do not resemble real customer environments.
Database
Windows Server 2012
Microsoft SQL Server 2008 R2
RAM allocated to SQL Server: 4GB
Database size: 30GB
Total size of all database files: 54GB
With only 4GB of RAM allocated to SQL Server for a 30GB database, most reads had to be serviced from storage rather than from host memory. The server was also rebooted between runs to clear any host memory caching.
Bandwidth Results
For this set of tests, we used a single host connection to the SAN. This allowed us to make a simple
comparison of the 16GFC adapter with the previous generation 8GFC adapter.
This data warehouse workload achieved more than 8 Gb/sec of bandwidth for some of the queries and, in some cases, ran at nearly line rate with the 16GFC adapter. For the queries that show a flat top in the graph below when using the 8GFC adapter, more performance is available, but the 8GFC link is throttling it. Also note that when running this test with the 16GFC adapter, the run completed in 67% of the time required by the 8GFC adapter, or 33% less elapsed time.
[Figure: Bandwidth over the elapsed time of the workload run, comparing the IBM Emulex 16Gb, IBM Emulex 16Gb @ 8Gb and IBM Emulex 8Gb configurations. The 8Gb series shows flat tops where the link is saturated.]
[Figure: Time to complete the workload run, in seconds: IBM Emulex 16Gb: 385; IBM Emulex 16Gb @ 8Gb: 538; IBM Emulex 8Gb: 575.]
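As a quick cross-check of the elapsed-time claim, the arithmetic below uses the run times shown in the chart above.

    # Cross-checking the elapsed-time claim against the chart values.
    time_8gfc_sec = 575    # 8GFC run time from the chart
    time_16gfc_sec = 385   # 16GFC run time from the chart

    ratio = time_16gfc_sec / time_8gfc_sec
    print(f"16GFC run time is {ratio:.0%} of the 8GFC run time")  # ~67%
    print(f"i.e., about {1 - ratio:.0%} less elapsed time")       # ~33%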
These database workload test results show that the IBM Emulex 16GFC HBA completed the workload in approximately 33% less time than the same server and storage using 8GFC HBAs. Testing revealed that 8GFC HBAs throttle performance and cause the application to run longer than necessary. IBM Emulex 16GFC HBAs enable a doubling of throughput when needed, alleviating bottlenecks under peak workload scenarios.
16GFC provides the performance horsepower for both new environments and existing environments that demand higher performance than is available today with older technologies. For existing environments with 4GFC or 8GFC HBAs, installing IBM Emulex 16GFC HBAs provides a simple plug-and-play performance upgrade.