HPE MSA 2050/2052 storage and
Microsoft Windows Server 2016
Implementation Guide
Technical white paper


Contents

Executive summary ... 3
Solution overview ... 3
Solution components ... 3
  Hardware ... 3
    HPE MSA 2050/2052 storage ... 4
    HPE ProLiant DL380 Gen9 server ... 4
    HPE Storage Networking ... 5
    HPE StoreFabric SN1200E 16Gb Fibre Channel Host Bus Adapter ... 6
  Software ... 6
    Microsoft Windows Server 2016 ... 6
    Microsoft Hyper-V ... 6
    Windows Failover Cluster Manager ... 7
    HPE Storage Integrations ... 7
Best practices and configuration guidance ... 7
  HPE MSA 2050/2052 storage concepts ... 7
    Storage pools ... 8
    Disk groups ... 8
    Virtual volumes ... 9
    Default mappings for volumes ... 9
    RAID level considerations ... 9
  Microsoft Windows Storage integration ... 10
    Windows Server 2016 Hyper-V ... 10
    Recommendations for virtual networks ... 10
    Redundant cabling for HA environments ... 11
    Storage Area Network (SAN) ... 11
    Network ... 12
    Microsoft Multipath I/O ... 12
    PowerShell Hyper-V Module ... 14
    Hyper-V Switch and SET Teaming ... 14
    Install and configure Failover Clustering ... 18
    Failover Clustering/Cluster Shared Volumes ... 19
  HPE MSA 2050/2052 storage provisioning ... 21
    Provisioning HPE MSA SAN storage to Hyper-V ... 21
    Provisioning HPE MSA storage to virtual machines ... 24
    Shared-nothing live migration ... 30
Overview and configuration of HPE OneView for Microsoft System Center ... 35
  HPE OneView Storage System Management Pack ... 35
  HPE Storage Management Pack for System Center ... 36
  HPE Fabric Management Add-in for System Center ... 36
  Licensing ... 36
  Installation ... 36
  Configuring the SCOM server to process SNMP traps ... 44
  Configure the SNMP service within Windows ... 44
  HPE Fabric Management add-in for System Center ... 47
  OneView SCVMM Integration Kit installation ... 48
  Configuring the HPE Fabric Management Add-in within the SCVMM console ... 49
  Configuring the HPE OneView Appliance within the SCVMM ... 50
  Configuring the HPE MSA Storage system within the SCVMM ... 50
  HPE ProLiant SCVMM Integration Kit installation ... 52
HPE Storage integration with SCVMM using SMI-S ... 53
  How does it work? ... 53
  How do I configure it? ... 53
  HPE MSA Storage configuration ... 53
  SCOM configuration ... 54
  SCVMM configuration ... 55
Key findings and recommendations ... 57
  HPE MSA storage ... 57
  HPE MSA configuration ... 58
  HPE MSA 2040 SAN ... 58
  HPE MSA 2050 SAN ... 59
  Performance tuning ... 59
Summary ... 60
Terminology ... 60
Resources and additional links ... 62


Executive summary

When supported with the correct underlying storage platform, server virtualization delivers increased infrastructure consolidation, administrative efficiency, business continuity, and cost savings. As a result, server virtualization is not only transforming the data center, but also the businesses that those data centers fuel. However, these transformative results depend on advanced storage to deliver the performance, availability, and flexibility to keep up with the dynamic and consolidated nature of virtualized server environments.

To maximize the many benefits and features that virtualization provides, a shared-storage system is required. Small-to-medium businesses (SMBs) need integration, performance, and resiliency to power their virtual environments. Each SMB must work within smaller budget constraints to deploy the same features that modern enterprises typically demand. The latest generation of HPE MSA storage is designed and built to exceed the economic and operational requirements of virtual data centers by providing the storage performance, scalability, availability, and simplified management that SMBs with growing storage require.

When deployed together, HPE MSA storage with Microsoft® Hyper-V® Server provides SMBs with the benefits of consolidation savings by increasing virtual machine (VM) density, lowering storage costs, and realizing time savings from simplified storage management and provisioning.

Solution overview

Clustered, fault-tolerant virtualization environments such as Microsoft Hyper-V Server rely heavily upon centrally managed, scalable SAN resources. The HPE MSA storage system provides a versatile entry-level SAN solution for Hyper-V host clusters. Microsoft Hyper-V administrators, as well as IT generalists, find storage-management tasks simple and intuitive with HPE MSA storage.

This solution demonstrates the integration between the HPE MSA storage system, HPE OneView for Microsoft System Center, Microsoft System Center 2016 - Virtual Machine Manager, and Microsoft System Center 2016 - Operations Manager.

Solution components

This section provides a description of each solution component, including the following:

Hardware components

– HPE MSA 2050/2052 storage (Generation 5)

– HPE ProLiant DL380 Generation9 (Gen9) server

– HPE storage networking with the HPE StoreFabric SN6000B Fibre Channel Switch

– HPE StoreFabric SN1200E 16 Gb Fibre Channel Host Bus Adapter

Software components

– Microsoft Windows Server® 2016

– Microsoft Hyper-V

– Windows® Failover Cluster Manager

– HPE storage integrations

Hardware

Clustered, fault-tolerant virtualization environments such as Microsoft Hyper-V rely upon centrally managed, scalable storage area network (SAN) resources. The HPE MSA storage system provides a versatile entry-level SAN solution for Hyper-V host clusters.


HPE MSA 2050/2052 storage


Figure 1. HPE MSA 2050/2052 storage system front view

The HPE MSA 2050/2052 storage system provides the following features:

Enterprise-class storage at an entry price that is easy to install and maintain. No storage expertise necessary.

Accelerate applications with 1.6 TB solid-state drive (SSD) capacity, Advanced Optimization (AO), an all-inclusive software suite, and 512 snapshots out-of-the-box. Scale as needed with SSD, Midline, or SAS drives.

Includes two SSDs totaling 1.6 TB of read cache, or 800 GB of SSD capacity per storage pool.

Dual Converged SAN Controllers with 4x 16 Gb or 8 Gb FC ports, and 4x 10 GbE or 1 GbE Ethernet ports per controller.

Scale up to eight enclosures, with 48 Gb of back-end bandwidth (four 6 Gb SAS lanes per controller).

All-inclusive software bundle.

Thin provisioning, wide striping, SSD-based read cache, snapshots (512), remote snap, and tiering.

Management: Web-based GUI and command line interface (CLI).

Supports Virtualization, SQL, Exchange, and more.

For more information, go to the HPE MSA 2052 Storage QuickSpecs.

HPE ProLiant DL380 Gen9 server


Figure 2. HPE ProLiant DL380 Gen9 server

The HPE ProLiant DL380 Gen9 server delivers the best performance and expandability in the HPE 2-processor (2P) rack portfolio. Reliability, serviceability, and near-continuous availability, backed by a comprehensive warranty, make it ideal for any environment.

Purpose-built for flexibility, efficiency, and manageability, the HPE ProLiant DL380 Gen9 server is designed to adapt to the needs of any environment, from the large enterprise to the remote office or branch office (ROBO) customer.


The HPE ProLiant DL380 Gen9 features a future-proof design with flexible options. Choose the features and functions you need now; add more as necessary as your business needs grow; the modular chassis, networking, and controller designs allow for easy upgrades. Pay for what you need, when you need it.

With industry-leading performance and energy efficiency, the HPE ProLiant DL380 Gen9 delivers faster business results and quicker returns on your investment. The HPE ProLiant DL380 Gen9 supports industry-standard Intel® E5-2600 v3 and E5-2600 v4 processors with up to 22 cores, 12 Gb SAS, and HPE DDR4 Smart Memory, bundled with high-efficiency redundant HPE Flexible Slot Power Supplies. For more information, see the HPE ProLiant DL380 Gen9 Server QuickSpecs.

HPE Storage Networking


Figure 3. HPE SN6000B Fibre Channel Switch

While this paper focuses on the best practices for deploying HPE MSA storage for Microsoft Hyper-V, it is important to make sure that the proper storage networking infrastructure exists to complement the server and storage requirements. HPE offers a full set of network solutions to complete your infrastructure.

A typical addition to an HPE MSA storage and Microsoft Hyper-V deployment is the HPE SN6000B Fibre Channel Switch, shown in Figure 3. The HPE StoreFabric SN6000B Fibre Channel Switch is a high-performance, ultra-dense, highly scalable, easy-to-use, enterprise-class storage networking switch, delivering market-leading Fibre Channel capabilities. It is designed to support data growth, demanding workloads, and data center consolidation in small- to large-scale enterprise infrastructures, delivering 4, 8, 10, or 16 Gbps speeds in an efficiently designed 1U package.

The HPE SN6000B Fibre Channel Switch can scale from 24 to 48 ports (48 SFP+ ports). A simplified deployment process and a point-and-click user interface make the HPE StoreFabric SN6000B Switch easy to use.

The HPE SN6000B Fibre Channel Switch provides the following features:

Delivers 16 Gbps performance with up to 48 ports in an energy-efficient, 1U form factor, providing maximum flexibility for diverse deployment and cooling strategies.

Features Ports on Demand (PoD) capabilities for fast, easy, and cost-effective scaling from 24 to 48 ports in 12-port increments.

Provides a flexible, simple, and easy-to-use SAN solution with industry-leading technology.

Supports highly virtualized, private-cloud storage with multitenancy and non-stop operations.

Offers best-in-class port density and scalability for midrange-enterprise SAN switches, along with redundant, hot-pluggable components and non-disruptive software upgrades.

Metro cloud connectivity features, including integrated DWDM and dark fiber support (optional license).

In-flight compression and encryption included, providing efficient link utilization and security.

Yields exceptional price/performance value, exceeding comparable Ethernet storage-based alternatives.

Unique bundle model: The HPE StoreFabric SN6000B 16 Gb Bundled Fibre Channel Switch model facilitates ease of ordering by offering the HPE SN6000B 48/24 Fibre Channel Switch along with twenty-four 16 Gb optical transceivers in one package. The HPE StoreFabric SN6000B 16 Gb Bundled Fibre Channel Switch model is offered at a lower price than the sum of the list prices of the HPE SN6000B 48/24 Fibre Channel Switch and the 16 Gb optical transceivers.

For details, see the HPE SN6000B Fibre Channel Switch QuickSpecs.


HPE StoreFabric SN1200E 16Gb Fibre Channel Host Bus Adapter


Figure 4. HPE SN1200E 16 Gb Host Bus Adapters (single and dual channel)

The HPE StoreFabric SN1200E 16 Gb Fibre Channel (FC) Host Bus Adapter (HBA) is a critical element to improve storage performance in this environment. The HPE StoreFabric SN1200E delivers the high bandwidth, low latency, and high IOPS needed to meet any application requirement, from online transaction processing to data warehousing. Emulex’s Dynamic Multicore Architecture can dynamically apply all ASIC resources to any port, delivering performance when and where it is needed. By supporting higher virtual machine density on powerful host servers, the HPE StoreFabric SN1200E 16 Gb HBAs deliver increased return on investment, enabling more applications and virtual machines to run on a single server and I/O port without impacting SLAs. The 16 Gb FC HBA is backward compatible with 8 and 4 Gb FC storage networks to protect legacy investments.

The HPE StoreFabric SN1200E FC HBA is available in two configurations:

16 Gb Single-Port Fibre Channel Host Bus Adapter

16 Gb Dual-Port Fibre Channel Host Bus Adapter

The HPE StoreFabric SN1200E 16 Gb Host Bus Adapters provide near-limitless scalability to support increased virtual machine (VM) density, with 2x more on-chip resources and bandwidth than previous offerings. This FC HBA also improves the virtual desktop infrastructure (VDI) end-user experience with low-latency features, providing noticeable improvements during boot storms (the degradation of service that occurs when a significant number of end users boot up within a very narrow time frame and overwhelm the network). Management and installation are simplified with the OneCommand Manager Plugin for VMware® vCenter® server.

The HPE StoreFabric SN1200E 16 Gb Fibre Channel Host Bus Adapters are designed to support emerging NVM Express (NVMe) over Fibre Channel storage networks, providing the latest security features that avert unauthorized access to the HBA firmware. T10 Protection Information (T10-PI) data integrity with high-performance hardware offload provides data protection from the server to the storage array.

For more information about HPE Storage Networking solutions and products, visit https://www.hpe.com/us/en/storage/networking.html

Software

Microsoft Windows Server 2016

Windows Server 2016 is Microsoft’s most recent cloud-ready operating system. It supports current workloads and introduces new technologies to make the transition to cloud computing even easier. Each of the core areas, such as compute, networking, and storage, has seen major enhancements, as have Active Directory, Identity Access and Administration, Hyper-V, Failover Clustering, and Security Assurance.

For a complete list of what’s new in Windows Server 2016 visit https://docs.microsoft.com/en-us/windows-server/get-started/what-s-new-in-

Microsoft Hyper-V

Hyper-V is Microsoft’s hardware virtualization product that allows administrators to virtualize computers, known as virtual machines. Virtual machines provide flexibility and scalability; they are an efficient way of managing physical hardware resources. Hyper-V runs each virtual machine in its own isolated space, allowing for multiple virtual machines to reside on a single piece of hardware. Hyper-V containers provide improved isolation and control, which allows for provisioning within centralized multi-tenant environments.


Microsoft Hyper-V is provided as a Windows Server Role that can be installed when necessary.
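The role installation mentioned above can be scripted; a minimal PowerShell sketch (the feature name is standard on Windows Server 2016, but run it from an elevated session and expect a reboot):

```powershell
# Install the Hyper-V role plus management tools, then restart to finish.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# After the reboot, confirm the role is present.
Get-WindowsFeature -Name Hyper-V
```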

Windows Failover Cluster Manager

Failover Clustering is a feature in Windows Server 2016 that allows you to group servers into a fault-tolerant cluster. Grouping servers in a cluster provides improved scalability and availability over single-server instances. It also allows for centralization of many types of workloads. In addition, clustering in Windows Server 2016 provides the ability to configure roles. Roles within the cluster allow for another server to assume and run the role if the server hosting the role were to fail.
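The clustering workflow described above maps to a few PowerShell cmdlets; the sketch below assumes two hypothetical nodes (HV-Node1, HV-Node2) and a placeholder cluster name and static address:

```powershell
# Install the Failover Clustering feature on each prospective node.
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Validate the configuration before creating the cluster (recommended).
Test-Cluster -Node "HV-Node1","HV-Node2"

# Create the cluster; the name and static address are examples only.
New-Cluster -Name "HV-Cluster1" -Node "HV-Node1","HV-Node2" -StaticAddress "192.168.100.50"
```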

HPE Storage Integrations

HPE OneView for Microsoft System Center, also known as HPE Storage Integrations, includes three management packs. The first two management packs provide integration with Microsoft System Center Operations Manager (SCOM), and the third provides integration with Microsoft System Center Virtual Machine Manager (SCVMM). Each of the integration kits is described in detail later in this paper, as follows:

HPE Storage Management Pack for System Center (part of HPE Storage SCOM Integration Kit)

HPE OneView Storage System Management Pack (part of HPE OneView SCOM Integration Kit)

HPE Fabric Management Add-in for System Center (part of HPE OneView SCVMM Integration Kit)

Best practices and configuration guidance

HPE MSA 2050/2052 storage concepts

Growing storage needs for virtualized servers now require greater levels of storage performance and functionality at a lower cost of ownership. The HPE MSA 2050/2052 storage arrays are positioned to provide an excellent value for small-to-medium business customers who require increased performance to support initiatives, such as consolidation and virtualization.

HPE MSA 2050/2052 storage delivers these advantages through a number of unique architectural features, including the following:

2x the I/O performance of the fourth-generation models.

New dual-core 2.2 GHz Broadwell processors.

8 GB cache per controller (4 GB read/write data cache, plus 4 GB for metadata and system OS memory).

HPE MSA SAS models.

HPE MSA 2050/2052 storage supports Virtual Disk Groups only.

The Small Form Factor (SFF) array now supports 24 slots per enclosure; maximum drive counts are 192 SFF or 96 LFF.

The Hybrid Flash Array (2052) incorporates 2x800 GB mixed use SSD drives that can be used as Read Cache or a Read/Write Performance Tier.

Volume copies can be performed between controllers.

Virtual copies are now thin copies because the system only supports Virtual Disk Groups.

Remote Snap v2:

– Replication frequency can be as often as 30 minutes.

– Up to four 1:1 peer-connection relationships.

– Peer connection authentication. Establishing a peer connection now requires a username and password.

– When replicating with an older array, the peer connection must be established from the 2050/2052.


– Replication can happen over FC and iSCSI connections.

– Replication queueing.

– Each system has a limit of 32 replicated volumes or snapshots.

– Volume group limit is 16 volumes.

Controllers support SFTP.

Storage pools

Storage pools in the HPE MSA storage array provide a way of virtualizing the physical disk array to support oversubscription, uninterrupted volume scaling, and storage tiering. Often referred to as a virtual pool, each storage pool is associated with one of the HPE MSA controllers. Each pool can support multiple disk groups to provide different tiers of storage—performance, standard, and archival types of storage. The HPE MSA storage array also features a unique ability to add a read cache disk group to a storage pool to provide caching for the most actively accessed data in the storage pool.

Disk groups

A virtual disk group is an aggregation of disks used for the purpose of storing data. Disk groups in the HPE MSA storage array are of two types: virtual disk groups and read-cache disk groups.

Virtual disk groups require that you specify a set of disks, a RAID level, a disk-group type, a pool target (A or B), and a disk-group name. If the virtual pool does not exist when the disk group is added, the system automatically creates it. Multiple disk groups, up to 16, can be added to a single virtual pool, and the pool can distribute the performance needs of the system over multiple disk groups. Additional features include the following:

Virtual disk groups that contain solid state drives (SSDs) can only be created with a Performance Tier license. This restriction does not apply to read-cache disk groups.

If you are creating multiple disk groups, each group should be created with the same number of disks that all have the same capacity. All disks in a disk group must be the same disk type (SAS, MDL SAS, or SSD).

Virtual disk groups in the Storage Management Utility (SMU) default to RAID 6, but can be configured to use RAID 1, RAID 5, or RAID 10.

Both controllers should be configured identically when creating disk groups.

The number of data drives within a disk group should be a power of 2. See RAID level considerations below for more information.

Disk group affinity: When creating a disk group, users can also specify the type of tier affinity the group should support.

For specifics on virtual disk group tier affinity types, see “Drive Type and Capacity Considerations when using Tiering” in the HPE MSA 1050/2050/2052 Best Practices White Paper.

A read-cache disk group is a special type of virtual disk group that is used to cache virtual pages to improve read performance. The HPE MSA 2052 is required in order to use this feature, since it incorporates two 800 GB SSDs and the Advanced Data Services License.

Read cache only helps with random reads; sequential reads are handled by the HDDs. Read-cache disk groups do not add to the overall capacity of a virtual-disk pool; therefore, read cache can be added to or removed from the pool without any adverse effect on the volumes, other than the impact on read-access performance. However, if a restart or failover occurs, the read-cache contents are lost.

Data protection for read-cache disk groups is as follows:

When a read-cache group consists of one SSD, it automatically uses NRAID.

When a read-cache group consists of two SSDs, it automatically uses RAID 0.

Some advantages of Read-Cache disk groups are as follows:

The performance cost of moving data to read cache is lower than a full migration of data from a lower tier to a higher tier.

SSDs do not need to be fault tolerant, potentially lowering system cost.


Controller read cache is effectively extended by two orders of magnitude or more.

Virtual volumes

Virtual volumes are a subdivision of a storage pool that can be assigned to a host. Virtual volumes store user data in 4 MB virtual pages that are spread throughout the disks within the storage pool.

Some features of virtual volumes include the following:

Volume tier affinity – Volume affinity allows the user to specify tier affinity when creating a volume. This automatically moves data to a specific tier, if space is available. There are three settings for volume tier affinity:

No Affinity – Default – Volume data swaps into higher tiers of storage, based first upon the frequency of access, if space is available.

Archive – Data is prioritized for the archive tier; however, data can be moved into higher performing tiers, if space is available.

Performance – Data is prioritized for the Performance Tier. Performance affinity swaps into higher performing tiers based upon the frequency of access, if space is available. If no space is available, the next highest performing tier is used.

Snapshots – Redirects on write snapshots are thin and support branching. They do not require dedicated storage for storing the snapshots, and as a result, use much less I/O to improve performance.

Replication – Replication provides a crash-consistent, point-in-time copy of a volume, volume group, or snapshot on a remote system by periodically updating the remote copy.

Replication can happen over iSCSI or Fibre Channel with 1:1 relationships for up to four arrays.

Replication uses Peer Connection Authentication and provides for a more frequent replication (every 30 minutes). When replicating between an HPE MSA 2042 storage array and an HPE MSA 2050/2052 storage array, the peer connection must be instantiated on the HPE MSA 2050/2052 storage array.

For more information pertaining to volume tier affinity, refer to the “Mechanics of Volume Tier Affinity” section of the HPE MSA 1050/2050/2052 Best Practices white paper.

Default mappings for volumes

By default, the HPE MSA maps all connected initiators to all volumes/volume groups. Therefore, all hosts/host groups have the ability to access all volumes using the specified access mode, LUN, and port settings. This is considered the default mapping.

The advantage of the default mapping is that all connected hosts can discover all volumes and volume groups with no additional work by the administrator. The disadvantage is that all connected hosts can discover volumes with no restrictions. Therefore, this process is not recommended for specialized volumes, which require restricted access. In clustered environments, to avoid multiple hosts mounting the volume and causing corruption, the hosts must be cooperatively managed using cluster software.

You can change the default mapping of a volume, or create, modify, and delete explicit mappings. A mapping can specify read-write, read-only, or no access to a volume through one or more controller host ports. When a mapping specifies no access, the volume is masked.

When creating an explicit mapping from a host/host group to a volume/volume group, a LUN identifier is established. A LUN identifies a mapped volume to a host. Both controllers share a set of LUNs, and any unused LUN can be assigned to a mapping. However, each LUN can only be used once as a default LUN. For example, if LUN 5 is the default for Volume 1, no other volume in the storage system can use LUN 5 as its default LUN. For explicit mappings, the rules differ: LUNs used in default mappings can be reused in explicit mappings for other volumes and other hosts.
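Once a volume has been explicitly mapped, the host side can be verified with a short PowerShell sequence. The sketch below is illustrative only: the raw-disk selection and volume label are assumptions, so confirm the disk number with Get-Disk before initializing anything.

```powershell
# Rescan the storage bus so Windows discovers the newly mapped LUN.
Update-HostStorageCache

# Pick the first uninitialized disk; verify this really is the MSA volume first.
$disk = Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Select-Object -First 1

# Initialize, partition, and format the new volume (label is a placeholder).
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "MSA-Vol1"
```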

RAID level considerations

Because many possible disk configurations are available to the HPE MSA storage array, the level of RAID protection is often determined by the installed hardware configuration. The following can be used as a baseline for implementing data protection for Adaptive Optimization (AO) and parity-based disk groups.

When using AO, the best practice is easy to identify. The performance tier should be configured with RAID 1, the standard tier with RAID 5, and the archive tier with RAID 6.


For optimal write performance, parity-based disk groups (RAID 5 and RAID 6) should be created with “The Power of 2” method. This method means that the number of data (non-parity) drives contained in a disk group should be a power of 2, as shown in Table 1.

Table 1. Power of 2 method

RAID type   Total drives per disk group   Data drives   Parity drives
RAID 5      3                             2             1
RAID 5      5                             4             1
RAID 5      9                             8             1
RAID 6      4                             2             2
RAID 6      6                             4             2
RAID 6      10                            8             2

For more information, refer to the HPE MSA 1050/2050/2052 Best Practices white paper.

Microsoft Windows Storage integration

Windows Server 2016 Hyper-V

Microsoft MPIO should be used to communicate with the HPE MSA SAN storage across all available paths.

Utilize PowerShell to install, configure, and manage the Hyper-V Switch and SET Teaming.

PowerShell cmdlet Test-Cluster should be used to validate the cluster configuration prior to implementation.

Cluster Shared Volumes (CSVs) should be used to maximize efficiency and availability within the cluster.

Taking advantage of Dynamic Memory within virtual machines (VMs) can result in higher density within the Hyper-V host.

Do not attempt to provision pass-through disks to VMs because they no longer provide the benefits seen with earlier operating systems.

Develop Naming Standards within the Hyper-V environment to ease administration.

HPE OneView’s comprehensive integration with HPE storage simplifies administration within the Windows environment. Take advantage of its many benefits to allow for greater control of technology environments, reducing the risk of downtime and enabling a faster response to system outages.
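The Dynamic Memory recommendation above can be applied per VM with a single cmdlet; the VM name and memory sizes below are placeholders to be sized for your workload:

```powershell
# Enable Dynamic Memory on an existing VM (the VM must be off to change this).
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 2GB -MaximumBytes 8GB
```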

Recommendations for virtual networks

There are a number of recommendations that can be made for Hyper-V networking; however, it all starts with the physical hardware available to the Hyper-V host. Recommendations for one physical configuration may not necessarily be good for another physical configuration. Below are some key points to consider when implementing virtual networks.

First, let’s look at the technology available. Servers come with a variety of NIC options: 1 GbE, 10 GbE, and even 25 GbE. These days, 1 GbE NICs are typically used for iLO connections, and 10 GbE is becoming mainstream. 25 GbE NICs are available, but they also require the infrastructure to support them, so they are not covered here.

In many situations, dual 10 GbE NICs are more than enough to support Hyper-V nodes within a Hyper-V cluster. That said, the technology within the operating system is also changing. Past practice was to set up one or more network teams with redundant NICs and then segregate specific types of traffic across each network team.

With Windows Server 2016 Hyper-V comes Switch Embedded Teaming (SET). This is an alternative to NIC Teaming and is designed for environments that incorporate Hyper-V or Software Defined Networking (SDN). SET integrates with the Hyper-V Switch to provide additional capabilities. Keep in mind, however, there may be no additional benefit from creating more than one Virtual Switch per Hyper-V host. This practice also results in higher resource consumption.

Some differences between NIC Teaming and SET Teaming include the following:

SET Teaming configures both network adapters as active; no adapters are in standby mode.

NIC Teaming provides the choice of three different teaming modes, while SET Teaming supports Switch Independent mode only.


Note

With Switch Independent mode, the physical switch or switches to which the SET Team members are connected are unaware of the presence of the SET Team and do not determine how to distribute network traffic to SET Team members; instead, the SET Team distributes inbound network traffic across both SET Team members. Essentially, this provides 20 Gb of bandwidth to a Hyper-V host configured with two 10 GbE NICs. HPE recommends configuring a SET Team on each Hyper-V host and converging network traffic types across the SET Team. Not only does this simplify the configuration, it also provides better utilization of physical resources down to the physical switch-port level.
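A SET Team of the kind recommended in the note is created when the Hyper-V virtual switch is built; the adapter and switch names in this sketch are examples:

```powershell
# Create a Hyper-V virtual switch with Switch Embedded Teaming across
# two physical 10 GbE adapters (names are examples for this sketch).
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true
```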

The preceding paragraphs discussed the physical networking available to each host. Without this foundation, virtual networking is not possible. After configuring a highly available and redundant configuration at the physical layer, it is time to deploy the same configuration at the virtual layer. To do this, create virtual network adapters (vNICs). vNICs are used by VMs and the management OS to communicate outside of the Hyper-V host.

When creating vNICs, as with physical networking, high availability and redundancy should be your first thought. HPE recommends creating redundant vNICs for the management OS or other types of traffic that need segregation. Once the vNICs are created, they can be assigned to a particular physical NIC using the Set-VMNetworkAdapterTeamMapping cmdlet within PowerShell.
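The vNIC creation and team mapping described above might look like the following sketch (the vNIC, switch, and adapter names are assumptions for illustration):

```powershell
# Add redundant management-OS vNICs to the SET switch.
Add-VMNetworkAdapter -ManagementOS -Name "Mgmt1" -SwitchName "SETswitch"
Add-VMNetworkAdapter -ManagementOS -Name "Mgmt2" -SwitchName "SETswitch"

# Pin each vNIC to a physical team member for deterministic traffic placement.
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "Mgmt1" `
    -PhysicalNetAdapterName "NIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "Mgmt2" `
    -PhysicalNetAdapterName "NIC2"
```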

The convergence of networks across physical NICs and segregation of traffic types across VLANs depend on how the Hyper-V cluster is used. In less complex settings, there could be one or two VM networks with cluster traffic converged across the SET Team. In other settings, such as multi-tenant environments, there could be multiple VLANs configured for each tenant, plus cluster traffic.

The administrator needs an understanding of the type and amount of traffic that each application, tenant, etc. requires. The original design should be based upon those requirements. Keep in mind that as the environment grows, the networking requirements may also increase. Make sure that physical ports are configured with the same VLAN configuration, and make sure that networks within Failover Cluster Manager (FCM) are renamed and appropriately labeled.

Redundant cabling for HA environments

Building a cost-effective, scalable, highly available architecture requires proper planning and implementation. Figure 5 and Figure 6 show examples of connecting HPE MSA 2050/2052 using Fibre Channel and iSCSI to support architectures with these characteristics.

Storage Area Network (SAN)

SAN-attached configurations consist of multiple redundant connections from each MSA SAN Controller (Controller A and Controller B in the figure below) to redundant switches. Servers also have redundant connections from each host, connecting to redundant switches. In the diagram below, note that the HPE MSA storage array has a connection from Controller A and Controller B to each of the SAN switches (Fabrics). Likewise, Server A and Server B have a connection to each of the SAN switches and are running Microsoft MPIO multi-pathing software.

Figure 5. HPE MSA storage array FC SAN connectivity

Network
Network-attached configurations consist of multiple redundant connections from each MSA SAN Controller (Controller A and Controller B in Figure 6) to redundant switches. Servers also have redundant connections from each host connecting to redundant switches. Figure 6 shows that the HPE MSA storage array has a connection from Controller A and Controller B to each of the network switches. Likewise, Server A and Server B have a connection to each of the network switches.

Figure 6. HPE MSA storage array iSCSI SAN connectivity

Microsoft Multipath I/O
Microsoft Multipath I/O (MPIO) is a Microsoft-provided framework that allows storage providers to develop multipath solutions containing the hardware-specific information needed to optimize connectivity with their storage arrays. These modules are called device-specific modules (DSMs). Microsoft MPIO software within Windows Server 2016 Datacenter should be used to communicate with the HPE MSA storage array across all available paths.

After the device has been cabled, zoned within the SAN Fabric, and has had a volume explicitly mapped to it, the HPE MSA storage array will be discoverable from within the Microsoft MPIO configuration tool. Simply click Add to configure MPIO for the device and accept the prompt to reboot, as shown in Figure 7 and Figure 8 below.
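
The same MPIO claim can be scripted instead of using the MPIO configuration tool. The following is a sketch only: the Multipath-IO feature name is standard, but the VendorId and ProductId strings shown for the MSA are assumptions that should be verified against the array's SCSI inquiry data (via Get-MSDSMSupportedHW after a UI-based claim, for example) before use.

```powershell
# Install the MPIO feature if it is not already present
Install-WindowsFeature -Name Multipath-IO

# Claim the HPE MSA device for MPIO (equivalent to clicking Add in the
# MPIO tool). VendorId/ProductId values here are placeholders -- confirm
# the exact strings reported by your array before running this.
New-MSDSMSupportedHW -VendorId "HPE" -ProductId "MSA 2050 SAN"

# A reboot is required for the claim to take effect
Restart-Computer -Confirm
```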

Figure 7. Microsoft MPIO properties with the HPE MSA 2050 hardware device

Figure 8. Microsoft MPIO properties with reboot message

After the reboot is complete, the HPE MSA storage array is listed in the MPIO Devices tab, which also eliminates multiple entries for the same volume listed in Windows Disk Management, as shown in Figure 9.

Figure 9. Microsoft MPIO properties showing an HPE MSA 2050 hardware device post install

For more information about Microsoft MPIO go to https://technet.microsoft.com/en-us/library/ee619734(v=ws.10).aspx.

PowerShell Hyper-V Module
The installation of Hyper-V does not install the Hyper-V Management Tools, which are necessary to perform the configuration of SET Teaming. Hyper-V Management Tools can be installed using Server Manager Roles and Features. Go to Remote Server Administration Tools > Role Administration Tools > Hyper-V Management Tools, as shown in Figure 10.

Selecting Hyper-V Management Tools installs the Hyper-V Module for PowerShell and the Hyper-V GUI Management Tools.

Note

The Hyper-V Module for PowerShell is the preferred method for creating the Hyper-V switch and for creating the SET Team.

Figure 10. Hyper-V Management Tools

Hyper-V Switch and SET Teaming
A Hyper-V Switch is a software-based Layer 2 Ethernet switch. The Hyper-V Switch enables east-west and north-south communication for virtual machines. The switch comes in three types: external, internal, and private. This paper focuses on the external switch because it provides virtual machines the ability to communicate with VLANs configured on internal data center networks, as well as the Internet.

Creating a Hyper-V Switch that uses SET Teaming can be accomplished in a few short steps:

1. Query PowerShell for Network Interface Card (NIC) devices installed within the system, as shown in Figure 11.

Figure 11. PowerShell Cmdlet identifying the number and type of NIC cards installed within the server

2. Determine which NICs will be used to create the SET Team and create the switch, as shown in Figure 12.

As seen in Figure 11, there are multiple NICs installed. Past practice would be to isolate network traffic across different NICs; however, with advancements in technology, NICs now provide greater bandwidth capabilities, as well as improved features. This allows for multiple networks to be converged across redundant NICs, simplifying the hardware requirements and providing a more cost-effective solution.
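
The switch-creation command shown in Figure 12 takes roughly the following form. This is a sketch with placeholder adapter names; substitute the names returned by Get-NetAdapter on your host.

```powershell
# List the physical NICs available on the host
Get-NetAdapter

# Create an external Hyper-V switch backed by a SET Team of two NICs.
# "NIC1" and "NIC2" are placeholder adapter names for this sketch.
New-VMSwitch -Name "ManagementSwitch" `
    -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true `
    -AllowManagementOS $false
```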

Figure 12. PowerShell command necessary to create a new vSwitch

3. After creating the Switch and SET Teaming, create the Virtual Network Adapters (vNICs) for the Management OS and VMs to use for external access, as shown with the following commands and in Figure 13.

Add-VMNetworkAdapter -SwitchName ManagementSwitch -Name SMB_1 -ManagementOS
Add-VMNetworkAdapter -SwitchName ManagementSwitch -Name SMB_2 -ManagementOS

Figure 13. PowerShell commands necessary to add virtual NICs to the vSwitch

4. In this example, two vNICs were created to allow for redundancy; then each vNIC was restarted.

Restart-NetAdapter "vEthernet (SMB_1)"
Restart-NetAdapter "vEthernet (SMB_2)"
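
The team-mapping commands shown in Figure 14 take roughly the following form; the physical adapter names are placeholders for this sketch.

```powershell
# Map each Management OS vNIC to a specific physical member of the SET Team.
# "NIC1" and "NIC2" are placeholder physical adapter names.
Set-VMNetworkAdapterTeamMapping -ManagementOS `
    -VMNetworkAdapterName "SMB_1" -PhysicalNetAdapterName "NIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS `
    -VMNetworkAdapterName "SMB_2" -PhysicalNetAdapterName "NIC2"
```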

Figure 14. VMNetworkAdapter team mapping commands

After the configuration is complete, the vNICs are visible within the Networks section of Failover Cluster Manager, on the Network Connections tab, as shown in Figure 15.

Figure 15. View of vNICs added as Cluster Networks

The final configuration will look similar to Figure 16.

In this scenario, networks are converged across two 10Gb NICs configured with SET Teaming. The names of the virtual NICs begin with “vEthernet” and are listed at the top of Figure 16.

Figure 16. View of the final configuration

SET Teaming configures both network adapters as active, and no adapters are in standby mode. Another key difference between NIC Teaming and SET Teaming is that NIC Teaming provides the choice of three different teaming modes, while SET Teaming supports Switch Independent mode only.

With Switch Independent mode, the switch or switches to which the SET Team members are connected are unaware of the presence of the SET Team and do not determine how to distribute network traffic to SET Team members; instead, the SET Team distributes inbound network traffic across the SET Team members.

In order for multiple VLANs to traverse the SET Team, the physical NICs need to be connected to physical switch ports configured to pass traffic from multiple VLANs, often referred to as trunk ports or hybrid ports. Physical switch port configuration is beyond the scope of this document; contact your network administrator for details.

After the Hyper-V Switch is created, it can be viewed via Hyper-V Manager, as shown in Figure 17, but management capabilities are limited. HPE recommends managing the switch using PowerShell.

Figure 17. View of the Hyper-V Switch from within Hyper-V Manager indicating limited management capabilities

Note

A single virtual switch (external switch) can be mapped to a specific physical NIC or NIC Team.

For more information pertaining to the Windows Server 2016 Hyper-V Switch, go to docs.microsoft.com/en-us/windows-server/virtualization/hyper-v-virtual-switch/hyper-v-virtual-switch

Install and configure Failover Clustering
Failover Clustering is a feature in Windows Server that allows you to group servers into a fault tolerant cluster. Clustering increases availability and scalability of roles, formerly known as clustered applications and services. Failover Clustering also provides a consistent, distributed namespace called Cluster Shared Volumes (CSV), which clustered roles can use to access shared storage from all nodes.

Prior to creating the cluster, the best practice is to run the cluster validation test. This test confirms that the configuration is compatible with Failover Clustering and helps identify issues that need to be addressed before the cluster is created. There are two options for running the test: it can be run from PowerShell or from Failover Cluster Manager.

Running from either PowerShell or the Failover Cluster Manager UI allows the administrator to specify the types of tests to run. However, the best practice is to run all tests to ensure there are no unexpected surprises.

The example in Figure 18 specifies which nodes are to be tested, runs all tests, and outputs the results to the screen in verbose mode. The default location of the validation report is C:\Windows\Cluster\Reports on the clustered server, and the report can be used to drill down by category or specific test (for example, Validation Report 2017.07.17 At 09.43.22.htm).
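
The validation command shown in Figure 18 takes roughly the following form, using the node names from this paper's example cluster:

```powershell
# Run all validation tests against both prospective nodes and report
# verbosely; the report is written to C:\Windows\Cluster\Reports by default.
Test-Cluster -Node Engwinmsa01, Engwinmsa02 -Verbose
```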

Figure 18. PowerShell Cmdlet used to validate the configuration will support clustering

Running Validate Configuration… from within Failover Cluster Manager, as shown in Figure 19, provides you with similar options. Running all tests is a best practice and the Microsoft recommended option.

Figure 19. View validating the configuration from within the Failover Cluster Manager

For more information on running cluster validation tests, see the Microsoft documentation for the Test-Cluster cmdlet.

Failover Clustering/Cluster Shared Volumes
Failover Clustering provides a number of benefits, such as providing high availability, scalability, and server consolidation. But how can clustering be used to take advantage of HPE MSA storage? What are Cluster Shared Volumes?

Cluster Shared Volumes (CSVs) are volumes that are shared between all nodes in the cluster. They allow for the transfer of ownership from one system to another in the event of a node failure. When combined with Hyper-V, they allow systems utilizing those volumes to remain online and available.

Essentially each node within the cluster has access to the mapped LUN, which is then configured as a CSV within Failover Cluster Manager. VMs within Hyper-V are provisioned within those volumes providing high availability and redundancy to those systems.

What are CSVs and how are they used? Cluster Shared Volumes (CSVs) are highly available storage presented to nodes within a cluster. Utilizing CSVs for Hyper-V file storage reduces the number of LUNs required to host Hyper-V VMs and virtually eliminates downtime, since each node has read/write access to the LUN and does not have to seize the LUN from a failed node.

To create a CSV, first create a volume within the HPE MSA 2050 storage and then map that volume to nodes within the cluster. As shown in Figure 20, a number of Hyper-V volumes can be mapped to two nodes within the cluster: Engwinmsa01 and Engwinmsa02.

Figure 20. View of volumes mapped to nodes within a cluster

As the cluster grows and the number of systems using the storage increases, naming standards throughout the environment become important. Properly labeling volumes can help prevent user error, such as accidental deletion. For example, in the images below, the HyperV_SysFiles volume name is used consistently across the various user interfaces necessary to manage the volume, from the HPE MSA storage system to the Windows Server 2016 OS.

Figure 21 shows the volume name HyperV_SysFiles matches what is listed within the SMU.

Figure 21. View from Windows Disk Management listing volumes

Figure 22 shows the view from Windows Failover Cluster Manager, listing Cluster Shared Volumes. The volume name HyperV_SysFiles matches what is listed within the SMU and Disk Management.

Figure 22. View from Windows Failover Cluster Manager showing HyperV_SysFiles volume name

Figure 23 shows that the volume name HyperV_SysFiles matches what is listed within the SMU, Disk Management, and Failover Cluster Manager.

Figure 23. View from Windows Explorer listing Cluster Shared Volumes

Note

Each cluster node will have the same view within the ClusterStorage folder because each node has access to the same Cluster Shared Volumes.

HPE MSA 2050/2052 storage provisioning

Provisioning HPE MSA SAN storage to Hyper-V
As described in the previous section, Hyper-V, when combined with Failover Clustering, takes advantage of Cluster Shared Volumes to ensure that systems using the underlying storage remain online and available. This can be accomplished by configuring each Hyper-V host's global settings for Virtual Hard Disks and Virtual Machines to use CSVs.

By default, Virtual Hard Disks and Virtual Machines reside in C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks and C:\ProgramData\Microsoft\Windows\Hyper-V on the system drive. This is inherently a poor location, since growth of VHD and VHDX files can cause the system to run out of disk space, and the increased disk activity can negatively impact the system.

To prevent issues such as these, Microsoft recommends relocating these default paths to a non-system drive. However, for these locations to become highly available and accessible by all nodes in the cluster, they need to be created as Cluster Shared Volumes. After they are relocated to the CSV location, virtual machines files within each host become highly available and redundant.

Figure 24 shows two volumes that are configured within Hyper-V global settings as default system and disk file locations. One volume is for VM System files and the other is for VM Disk files.

Figure 24. View of Hyper-V volumes mapped to nodes within the cluster

Note

Properly creating and balancing Storage Pools can help with the performance of an HPE MSA array. Hewlett Packard Enterprise recommends keeping pools balanced from a capacity utilization and performance perspective.

Now that HPE MSA storage is provisioned to each Hyper-V host, add the volume using disk management, making sure the disk is initialized and online with a simple volume created in the operating system. Next, add the disk to Failover Cluster Manager. After the disk is assigned as available storage, add it as a Cluster Shared Volume, as shown in Figure 25.
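
The sequence just described (online, initialize, create a volume, add to the cluster, convert to a CSV) can be sketched in PowerShell. The disk number, volume label, and cluster disk name below are example values, not taken from this environment.

```powershell
# Bring the newly mapped MSA volume online and initialize it
# (the disk number is a placeholder -- confirm it with Get-Disk)
Set-Disk -Number 2 -IsOffline $false
Initialize-Disk -Number 2 -PartitionStyle GPT

# Create and format a simple volume; label it to match the SMU volume name
New-Partition -DiskNumber 2 -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "HyperV_SysFiles"

# Add the disk to the cluster, then promote it to a Cluster Shared Volume
# ("Cluster Disk 1" is the default name assigned by the cluster)
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```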

Figure 25. View of a Cluster Shared Volume (CSV)

Next, configure each Hyper-V host's global settings to point to the Cluster Shared Volume location. To do that, open Hyper-V Manager, right-click the host name, and select Hyper-V Settings. The server settings window allows for configuration of many host settings, including the default location of the virtual hard disk and virtual machine files, as shown in Figure 26. Select the area of focus from within the left pane and then click Browse… to specify a directory within the Cluster Shared Volume C:\ClusterStorage location.
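
The same defaults can be set without the UI via Set-VMHost; the CSV folder paths below are placeholders for this sketch.

```powershell
# Point the host's default VM and VHD locations at folders on a CSV.
# Volume and folder names are example values.
Set-VMHost -VirtualMachinePath "C:\ClusterStorage\Volume1\HyperV_SysFiles" `
           -VirtualHardDiskPath "C:\ClusterStorage\Volume2\HyperV_DiskFiles"
```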

Figure 26. View of how to access and configure Hyper-V global settings

Figure 27 shows the volumes have been renamed to match what is listed in the SMU, Disk Management, Failover Cluster Manager, and the Cluster Shared Volume location.

Figure 27. View of volumes within the Cluster Shared Volume location

Provisioning HPE MSA storage to virtual machines
Provisioning highly available, redundant storage to Hyper-V virtual machines is quick and easy if Hyper-V hosts are configured to use CSVs, as described in the previous section. Simply use Hyper-V Manager to create a new VM and accept the default location previously configured as a global Hyper-V setting, as shown in Figure 28.

Figure 28. View of Virtual Machine Wizard indicating the previously configured CSV location

Next, determine the VM generation, allocate RAM, decide whether or not the VM is connected to a switch, and create a virtual hard disk.

There are two generations to choose from:

Generation 1 supports 32- or 64-bit operating systems and all virtual hardware available in versions of Hyper-V prior to 2016.

Generation 2 supports all current virtualization features and requires a 64-bit guest OS.

Note

Generation 2 VMs are not designed to support legacy devices.

Generation 2 functionality includes the following:

Boot from a SCSI virtual hard disk.

Boot from a SCSI virtual DVD.

Support for UEFI firmware on the VM.

Support for VM Secure Boot.

Support for PXE boot using a standard network adapter.

When assigning RAM to a VM, you have the option to use Static Memory or Dynamic Memory. Dynamic Memory allows the VM to be configured with a number of different parameters: startup RAM, minimum RAM, maximum RAM, memory buffer, and memory weight.

Configuring a VM with Dynamic Memory allows the RAM used by the VM to fluctuate between the minimum and maximum levels, allowing Hyper-V to reclaim unused memory and reallocate it as necessary, which can result in increased density for the Hyper-V Host. Figure 29 shows the Hyper-V Dynamic RAM settings.
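
As an illustration of the Dynamic Memory parameters just listed, settings like those in Figure 29 can also be applied with Set-VMMemory; the VM name and memory sizes below are example values.

```powershell
# Enable Dynamic Memory on an existing, powered-off VM.
# "VM01" and the sizes shown are placeholders for this sketch.
Set-VMMemory -VMName "VM01" `
    -DynamicMemoryEnabled $true `
    -StartupBytes 2GB `
    -MinimumBytes 1GB `
    -MaximumBytes 8GB
```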

Figure 29. View of Hyper-V Dynamic RAM settings

Each VM is configured with a NIC. The Virtual Machine Wizard allows you to connect the VM to the Hyper-V host switch or leave it disconnected for connection at a later time. Even if the VM is connected to the switch, it still has to be configured with the proper subnet and DNS settings. Any post-deployment VM configuration can be completed by accessing the VM settings within Hyper-V Manager.

Now it is time to configure the VM virtual hard disk. The wizard provides a few options, as shown in Figure 30: create a virtual hard disk, which defaults to the virtual hard disk location configured within Hyper-V's global settings; use an existing virtual hard disk; or attach a virtual hard disk later.

Figure 30. View of virtual hard disk options within the Virtual Machine Wizard

The Create a virtual hard disk option is the best option because a CSV has been previously configured to house the virtual machine system files for the Hyper-V VMs.
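
The wizard choices described above map onto a single New-VM call; all names, sizes, and paths in this sketch are placeholders.

```powershell
# Create a Generation 2 VM with a new VHDX on the CSV configured earlier.
# The VM name, switch name, path, and sizes are example values.
New-VM -Name "VM01" `
    -Generation 2 `
    -MemoryStartupBytes 2GB `
    -SwitchName "ManagementSwitch" `
    -NewVHDPath "C:\ClusterStorage\Volume2\HyperV_DiskFiles\VM01.vhdx" `
    -NewVHDSizeBytes 60GB
```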

After the disk has been added, it appears within the hardware section of the VM Settings. At this point, the VM can be powered on and the disk can be added to the VM OS using Disk Management.

Note

Pass-through disks allow direct access to underlying storage rather than a .vhdx file residing within storage. Generation 1 VMs, which use the .vhd format, have a limit of 2040 GB. With earlier versions of Windows, if you wanted to provision a disk larger than 2040 GB, it was necessary to provision a pass-through disk (directly attached disk). Current releases of Windows Server 2016 use the .vhdx format, allowing for disks up to 64 TB and virtually eliminating the need for directly attached disks to achieve larger disk sizes.

The Use an existing virtual hard disk option (Figure 30) is another way of provisioning and allocating a .vhdx to a VM. Ideally, this would be another CSV, possibly one that is used for data-migration purposes or reallocation.

There are a number of steps involved to complete this task, which are outlined below, using the example of creating a new volume from HPE MSA SAN storage and eventually provisioning that volume to a VM as an existing virtual disk:

1. Create the volume from within the HPE MSA SMU.

2. Map the volume to the hosts within the cluster.

3. Bring the disk online from one of the Hyper-V hosts.

4. Initialize the disk.

5. Bring the disk online once again.

6. Create a simple volume on the disk.

7. Add the disk to the cluster.

8. Rename the volume from within FCM and Disk Management to match the volume name within the HPE MSA SMU.

9. Add the disk as a Cluster Shared Volume.

10. Add the disk to the VM.
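
A sketch of steps 3 through 10 in PowerShell; the disk number, cluster disk name, and VHDX path are example values.

```powershell
# Bring the mapped MSA volume online and initialize it
# (the disk number is a placeholder -- confirm it with Get-Disk)
Set-Disk -Number 4 -IsOffline $false
Initialize-Disk -Number 4 -PartitionStyle GPT

# Add the disk to the cluster and convert it to a Cluster Shared Volume
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Attach an existing VHDX on that CSV to the VM's SCSI controller
Add-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume3\Data01.vhdx"
```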

Figure 31. View of an additional controller and hard disk that has been added to the VM

When adding an existing virtual hard disk to a VM, it is important to select the proper disk. One way to make sure that happens is to properly label volumes provisioned from HPE MSA storage, then carry that volume name throughout the Host OS and applications, as described in the Failover Clustering/Cluster Shared Volumes section and as shown in Figure 32.

When selecting the physical disk to add to a VM, the drop-down lists all disks visible to the Hyper-V host, even if they are in use. If volume labels were used when creating and allocating HPE MSA storage, the process can be less confusing. The volume label is not listed within the settings of the VM, but it is possible to cross-reference the volume label with the disk number listed in Disk Management, thus identifying which physical hard disk to add. HPE recommends using a labeling system to assist in managing volumes provisioned from HPE MSA storage; however, the labeling system is specific to the individual customer.

Figure 32. Snapshot of Disk Management on the left and the Hyper-V VM Settings on the right

Figure 33 shows the SCSI Controller and a physical hard disk (in this case a CSV) that has been added to the VM.

Figure 33. View of an additional SCSI Controller and a physical hard disk

Shared-nothing live migration
Reliable and highly available infrastructure not only provides protection from failure and downtime, it provides a means of efficiently managing the infrastructure. To meet demands from the business, the architecture needs to be agile and provide scalability, allowing the infrastructure to grow and shrink on demand. The infrastructure also needs to be resilient and provide a means of recovering from a disaster.

The HPE MSA storage system is capable of performing shared-nothing live migrations while maintaining uptime. Shared-nothing live migration provides these capabilities:

Migrating data within the system

Migrating data to another system

Migrating data to support new growth

Migrating data to conserve space

Migrating data to recover from an outage

The following sections highlight steps for performing a live migration of a virtual volume between two HPE MSA platforms (2042 and 2052). There are many options for replicating volumes or volume groups within and between systems; however, this document focuses on a specific use case. For additional information pertaining to volume replication, see http://h20195.www2.hpe.com/v2/GetPDF.aspx/4AA1-0977ENW.pdf

Creating a peer connection
To perform asynchronous virtual replication between systems, a peer connection needs to be established. A peer connection defines the ports used between the two systems involved in the volume replication and can utilize iSCSI or FC ports. HPE MSA 2050 storage can have up to four 1:1 peer relationships with other systems.

Note

When replicating volumes between an HPE MSA 2040 system and an HPE MSA 2050 system, the peer connection needs to be established from the HPE MSA 2050 system. The replication set and schedule can be completed from the HPE MSA 2040 system after the peer connection is created.

To create a peer connection using the command line interface (CLI), connect to the HPE MSA 2052 controller via SSH. Log in as a user with the Manage role, and then use the create peer-connection command to establish the connection, as shown in Figure 34.

Figure 34. Example of create peer connection command

A peer connection can also be created via the SMU. Select Replications from the left pane of the SMU dashboard. Then from the Action menu, select Create Peer Connection. A pop-up window appears, as shown in Figure 35, allowing you to enter the connection name, destination port IP address, username, and password.

Figure 35. Example of pop-up window for creating a peer connection from within the SMU

After the peer connection has been established, you can query the connection information from the Action menu. Within the pop-up window (image not shown), enter the remote port address used when creating the peer connection and click OK. The remote host port is queried and controller information is returned for each controller within the system, as shown in Figure 36.

Figure 36. View of information provided by querying the peer connection

Creating a replication set
A replication set is a combination of a primary volume and a secondary volume created from the primary volume. Creating a replication set involves specifying the replication set name, the volume, the secondary volume name, the peer system storage pool, and whether or not recurring replications will be configured. See Figure 37 for an example of creating a replication set using the CLI.

The create replication-set command in the CLI is used to create a replication set. At a minimum, the peer-connection ID, the volume or volume-group ID, and the replication set name are required. There are additional optional arguments for each HPE MSA 2042/2052 platform.

Figure 37. Command for creating a replication set using the CLI

Figure 38. Pop-up view for creating a replication set from within the SMU

Scheduling replications
If the option to configure scheduled replication was selected when the replication set was created, an additional pop-up window allows for configuration of scheduled replications, as shown in Figure 39.

Notes

Fifth-generation arrays support increased replication frequency; replications can occur as often as every 30 minutes.

On HPE MSA 2052 storage, the ADS Software Suite includes the HPE MSA Remote Snap Software LTU; however, a license key must be obtained from Hewlett Packard Enterprise and installed on the HPE MSA 2052 to enable Remote Snap.

To obtain a Remote Snap license, go to http://myenterpriselicense.hpe.com

Figure 39. Pop-up view for creating a replication set schedule from within the SMU

If the option to schedule replications was not selected, as soon as the replication set is created, an additional pop-up window gives you the option to immediately start replication, as shown in Figure 40.

Figure 40. Pop-up view asking for confirmation to immediately start volume replication

Alternatively, a manual replication can be started as follows:

From the Action menu, by selecting Initiate Replication.

Using the CLI.

Deleting a replication set
Deleting a replication set stops replications from the primary volume to the secondary volume. When replicating from an HPE MSA 2040 system to an HPE MSA 2050 system, as in this example, the replication set can be deleted from either system.

Prior to deleting the replication set, take a snapshot of the secondary volume, mount that volume, and verify the data is current. This can save time in the event something was missed between replications and must be replicated to the secondary volume.

If replicating a volume for testing purposes, such as applying patches, taking a snapshot of the secondary volume, mounting the snapshot, and applying the patches to the snapshot could save time if subsequent tests must be performed. This is because changes made to the snapshot are not reflected on the secondary volume.

You can use the SMU to delete a replication, as shown in Figure 41.

Figure 41. Example of deleting a replication set from within the SMU

The replication set can be deleted using the CLI. Use the delete replication-set command and specify the replication set name.

Accessing a replicated volume
As mentioned in the previous section, taking a snapshot of a replicated volume and then mounting that snapshot is a method for accessing the data contained within that volume. This method can save time compared to re-replicating the primary volume if changes were missed between replication schedules.

Another option is to delete the replication set and to mount the secondary volume from another system.

Overview and configuration of HPE OneView for Microsoft System Center

HPE OneView for Microsoft System Center (HPE Storage Integrations) provides a comprehensive integration of HPE storage, HPE servers, HPE Blade systems, and HPE Virtual Connect with Microsoft System Center. HPE OneView for Microsoft System Center provides administrators with the ability to manage and monitor their HPE infrastructure, running in Microsoft environments with a "single-pane-of-glass" view for system health and alerts, driver and firmware updates, OS deployment, detailed inventory, storage and VM management, and HPE fabric visualization. This provides greater control of technology environments, reducing the risk of downtime and enabling a faster response to system outages. Think of it as your Infrastructure Automation Engine.

The following sections provide an overview of each Management Pack and highlight important installation and configuration settings.

HPE OneView Storage System Management Pack

The HPE OneView Storage System Management Pack (MP) is part of the HPE OneView System Center Operations Manager (SCOM) Integration Kit. As part of HPE OneView, HPE OneView Storage System Management Pack integrates the existing HPE Storage MP and HPE Blade Systems/Virtual Connect MP to proactively monitor and manage the health of HPE storage and servers. This Management Pack provides a unified view of alerts/events and a topological view of the HPE hardware being managed under HPE OneView, enabling intelligent and quicker response to hardware events on HPE storage and servers running Windows and Linux®, as well as Blade Systems Enclosures and HPE Virtual Connect.

HPE Storage Management Pack for System Center

The HPE Storage Management Pack is part of the HPE Storage Microsoft System Center Operations Manager (HPE SCOM) Integration Kit. The HPE Storage Management Pack provides seamless integration with HPE SCOM, enabling predefined discovery and monitoring policies, event processing rules, and topology views for HPE storage with the following benefits:

Simplified monitoring using a single-pane health monitoring of physical, logical, virtual, and backup infrastructure.

Monitor events and alerts for HPE storage systems with pre-defined rules.

Supports comprehensive health monitoring of all the storage nodes in Hyper-Converged systems.

Effortless installation, configuration, and upgrades using PowerShell.

For advanced hardware lifecycle management and remote administration of Hewlett Packard Enterprise systems, the HPE OneView Management Pack for Operations Manager includes tasks that launch the HPE OneView web console for group systems administration.

Supported devices include servers, enclosures, HPE Virtual Connect, server profiles, HPE MSA storage, HPE StoreOnce Systems backup, HPE StoreVirtual storage, and HPE 3PAR StoreServ storage (HPE 3PAR storage). Alert views indicate when and where problems have occurred.

HPE Fabric Management Add-in for System Center

The HPE Fabric Management Add-in for System Center is part of the HPE OneView SCVMM Integration Kit. The HPE Fabric Management Add-in enables enhanced integration of SCVMM and HPE storage, providing “single-pane-of-glass” monitoring and provisioning of physical, logical, virtual, and cloud infrastructure. It automates HPE storage management and provides an integrated view of VMs and associated storage resources. This provides enhanced monitoring, provisioning, and control of HPE storage with the following benefits:

Simplified relationships on dashboard information for virtual machines, Hyper-V hosts, volumes, and pools.

Simplified deployment of multiple virtual machines on Hyper-V hosts and clusters.

Unified monitoring of HPE OneView storage resources.

Provides an end-to-end relationship between virtual machines and HPE storage with context-sensitive in-depth information about volumes and their business copies.

Flexibility in managing capacity requirements by expanding LUNs.

Enables monitoring complex System Center infrastructure by integrating with SCOM MP and listing storage alerts.

Manages HPE StoreVirtual VSA appliances.

Licensing

The storage integrations are free of charge, provided the device is configured within HPE OneView in a monitor-only state. No license is required. Core server integrations are licensed as part of HPE OneView or HPE Insight Control. Enhanced integrations that show relationship information for HPE OneView require an HPE OneView Advanced License.

Discovery

Several SCOM servers can be aggregated to monitor multiple networks. There is no port requirement for discovery and monitoring; however, HPE OneView must be accessible from all the SCOM management servers in the resource pool.

If issues are encountered, ensure port 5671 is not being blocked by a firewall. Port 5671 is used by the state change message bus (SCMB) subscription for alerts. If a separate network firewall device is in place, port 5671 must be open on it as well.

HPE OneView registration

Registration of an HPE OneView instance allows for discovery of all instances managed by that HPE OneView management server.

The default polling interval for storage discovery is every four hours.

Installation

The following is a simplified list of the steps necessary to manage and monitor HPE storage devices within SCOM. A more detailed description of each step follows the list.

Installation of the Integration Kits

Importing the Management Packs

Using the HPE Storage Management Pack User Configuration Tool to add a storage system to be discovered.

Configuring HPE Storage Management Pack Overrides to allow for discovery

Configuring SNMP within your environment

For additional information, see Resources and Additional Links.

Installing the Integration Kits

HPE OneView for Microsoft System Center can be downloaded from the HPE Software Depot.

The Integration Kits are contained within the HPE OneView for System Center zip file. However, the HPE OneView for System Center 8.3 zip file no longer contains legacy management packs; it contains HPE ProLiant Integration Kits for SCCM and SCVMM.

Figure 42 shows the HPE OneView for Microsoft System Center 8.3 installation page.

Figure 42. View of HPE OneView for Microsoft System Center 8.3 Installation page

Note

Prior to installing HPE OneView Management Packs, make sure SCVMM and SCOM have all patches and update rollups applied.

To begin the installation, launch the command prompt with Administrative privileges. Change directories to the location of the HPE OneView for System Center 8.3 zip file and execute the Autorun application. This displays the installation page as seen in Figure 42. Select Install HPE OneView SCOM Integration Kit 3.3.5 to begin stepping through the wizard.

Note

When installing the HPE OneView SCOM Integration Kit on a server that does not contain the SCOM Management Server, the HPE OneView Event Manager feature will be disabled.

Importing the Management Packs

After the HPE OneView SCOM Integration Kit has been installed, the HPE OneView Management Packs can be imported into the Operations Manager Console. This is performed by navigating to the Administrative section of the console. From within the Management Packs section, right-click Installed Management Packs and select Import Management Packs.

The HPE Management Packs are located within the following location: C:\Program Files\HPE OneView Management Packs\Management Packs. Import the management packs and select Install. The installation progress will be updated as the installation completes. An example of the installation is shown in Figure 43.
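The import can also be scripted with the Operations Manager PowerShell module. This is a sketch, not the documented procedure: the path is the default install location mentioned above, and the *.mp file extension is an assumption about how the pack files are packaged.

```powershell
# Run on the SCOM management server
Import-Module OperationsManager
Get-ChildItem 'C:\Program Files\HPE OneView Management Packs\Management Packs' -Filter *.mp |
    ForEach-Object { Import-SCOMManagementPack -Fullname $_.FullName }

# Verify the packs were imported
Get-SCOMManagementPack | Where-Object { $_.DisplayName -like 'HPE*' } |
    Select-Object DisplayName, Version
```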

Figure 43. Installation of HPE Management Packs for Operations Manager

HPE OneView for Microsoft System Center is fully integrated into the System Center consoles, which means one less tool to learn and manage.

Figure 44 shows the Monitoring dashboard for SCOM. There are a number of predefined diagram views to select from after the integration kit has been installed, the management packs have been imported, storage systems have been added, and overrides have been configured.

Figure 44. A high-level view of SCOM Monitoring dashboard and HPE OneView Management Pack integration

Figure 45 shows that the integration provides the ability to launch consoles such as HPE OneView, HPE iLO, and Onboard Administrator Web Consoles.

Figure 45. Available consoles to launch from HPE Integrations

HPE OneView for Microsoft System Center prevents problems from occurring by proactively monitoring and managing hardware health and intelligently responding to hardware events on HPE Synergy infrastructure, HPE ProLiant servers, enclosures, Virtual Connect, and storage.

Notes

* The HPE OneView Systems section within SCOM contains systems that are managed by the HPE OneView Appliance. The HPE Storage section within SCOM contains storage devices that are added using the HPE Storage Management Pack User Configuration Tool.

* When entering the SCVMM server, be sure to enter the system IP where the SCVMM Server service is running, and not just the console where the Fabric Management Add-In is running.

Using the HPE Storage Management Pack User Configuration Tool

To add a storage system to SCOM, open the HPE Storage Management Pack User Configuration Tool and select Add HPE Storage System. Select the type of device you want to add. (The tool allows for the addition of HPE 3PAR storage, HPE 3PAR File Controllers, HPE StoreVirtual storage, HPE MSA storage, and HPE StoreOnce systems.) Then enter the credentials used to access the system. Optionally, to monitor the health of VMs deployed through SCVMM, configure an SCVMM server. Figure 46 and Figure 47 show the HPE User Configuration Tool.

Figure 46. HPE User Configuration Tool

Figure 47. Adding a storage system using the HPE User Configuration Tool

Configuring HPE Storage Management Pack Overrides

Next, the Management Packs for HPE MSA storage need to be configured to receive traps from the HPE MSA storage device. This allows for processing of events and alerts by creating an override for the Management Pack and saving it into a custom Management Pack.

From within the SCOM Authoring pane, click Object Discoveries and search for “HPE MSA.” This locates two discovery rules: HPE MSA Storage Discovery Rule and HPE MSA SNMP Trap Catcher Rule. Each of these rules must be configured with overrides to allow for discovery.

HPE MSA Storage Discovery Rule

1. Right-click the HPE MSA Storage Discovery Rule and select Override > Override the Object Discovery > For specific objects of the class: Windows Computer.

2. Select the object where the HPE MSA Storage is discovered using the HPE Storage Management Pack User Configuration Tool and click OK. This will be the name of your SCOM Server.

The Override Properties window appears, as shown in Figure 48. Select the Override checkbox and set the Override Value to True. Optionally, you can also override the number of seconds after which the rule is applied.

– By default, this value is 600 seconds, meaning the rule takes 600 seconds to apply. You can specify a minimum of 150 seconds; for example, changing the value to 150 causes the rule to apply after 150 seconds.

– Finally, select the destination Management Pack on the page and click New. Type a name for the Management Pack and click Save.

Figure 48. Image of an object discovery rule having override values changed and saved to a new management pack

HPE MSA SNMP Trap Catcher Rule:

1. Right-click the HPE MSA SNMP Trap Catcher Rule and select Overrides > Override the Object Discovery > For specific objects of the class: Windows Computer. An example is shown in Figure 49.

2. Select the object where the HPE MSA Storage is discovered using the HPE Storage Management Pack User Configuration Tool and click OK. This will be the name of your SCOM Server.

Figure 49. Image of an SNMP Trap Catcher Rule having override values changed and saved to a new Management Pack

Configuring SNMP

To receive SNMP traps from HPE MSA storage, the array must be configured by setting SNMP parameters. To perform this configuration, access the array via SSH and log in. SNMP parameters can be configured using the options in the table below.

Table 2. Options for configuring HPE MSA SNMP parameters.

enable crit | error | warn | info | none
Optional. Sets the level of trap notification. crit: sends notifications for Critical events only. error: sends notifications for Error and Critical events. warn: sends notifications for Warning, Error, and Critical events. info: sends notifications for all events. none: all events are excluded from trap notification and traps are disabled.

add-trap-host <address>
Optional. Specifies the IP address of a destination host that will receive traps. Three trap hosts can be set.

del-trap-host <address>
Optional. Deletes a trap destination host.

read-community <string>
Sets an alphanumeric community string for read-only access.

write-community <string>
Sets an alphanumeric community string for write access.

trap-host-list <trap-host-list>
Optional. Replaces the current list of trap destination hosts.

Configuring the SCOM server to process SNMP traps

SNMP is a Windows feature that can be installed from within Server Manager. It has an SNMP Windows Management Instrumentation (WMI) Provider subcomponent that can also be installed. Install both, and after installation is complete, verify that the services are started.
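Assuming Windows Server 2016 feature names, the same installation can be performed from an elevated PowerShell prompt instead of Server Manager:

```powershell
# Install the SNMP service and its WMI provider subcomponent
Install-WindowsFeature -Name SNMP-Service, SNMP-WMI-Provider

# Verify the related services are present and running
Get-Service -Name SNMP, SNMPTRAP | Select-Object Name, Status
```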

Configure the SNMP service within Windows

1. From within the properties of the SNMP service, on the Traps tab, enter the community name of the HPE storage device to be monitored, as shown in Figure 50, and click the Add to list button. For Trap destinations, click the Add button and enter “localhost.” Finally, click the Apply button.

Figure 50. SCOM SNMP service configuration (Trap tab)

2. Select the Security tab, as shown in Figure 51. Within the Accepted community names section, click the Add button, enter the community name of the HPE storage device, and then click Add. Select Accept SNMP packets from any host and click OK.
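Both GUI steps store their settings under the SNMP service's registry keys, so they can also be scripted. The sketch below assumes a hypothetical community name of public; leaving PermittedManagers empty corresponds to accepting SNMP packets from any host.

```powershell
$base = 'HKLM:\SYSTEM\CurrentControlSet\Services\SNMP\Parameters'

# Traps tab: community "public" with trap destination "localhost"
New-Item -Path "$base\TrapConfiguration\public" -Force | Out-Null
New-ItemProperty -Path "$base\TrapConfiguration\public" -Name '1' `
    -Value 'localhost' -PropertyType String -Force | Out-Null

# Security tab: accepted community "public" with READ ONLY rights (4)
New-ItemProperty -Path "$base\ValidCommunities" -Name 'public' `
    -Value 4 -PropertyType DWord -Force | Out-Null

Restart-Service -Name SNMP   # apply the changes
```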

Figure 51. SCOM SNMP service configuration (Security tab)

This allows HPE MSA traps to be captured and displayed within the Monitoring > HPE Storage > HPE MSA Storage Events and Alerts section of the Operations Manager console, as shown in Figure 52.

Figure 52. SCOM event view for HPE MSA Storage events

The Diagram View section of HPE MSA Storage contains views captured via the Storage Management Initiative Specification (SMI-S), as shown in Figure 53.

Figure 53. SCOM Diagram View for HPE MSA SAN Storage

HPE Fabric Management add-in for System Center

The HPE Fabric Management Add-In for System Center provides server fabric support and storage fabric support for SCVMM.

Server fabric support consists of the following:

End-to-end fabric visualization for virtualized environments, using HPE Synergy or HPE Virtual Connect to view information from VMs to the edge of the network configuration.

Enhanced provisioning using HPE OneView server profiles to deploy Hyper-V hosts consistently and reliably, including configuration of Windows networking, HPE Virtual Connect, and shared SAN storage.

Facilitates consistency and improved uptime with simplified driver and firmware updates using a rotating, automated workflow for Hyper-V clusters with the HPE ProLiant Updates Catalog.

Provisioning an HPE OneView-managed server by applying an HPE OneView server profile, deploying an operating system with an SCVMM physical computer profile, and configuring the SCVMM logical networking and HPE Virtual Connect networking by correlating the HPE OneView server profile-defined connections with the SCVMM physical computer profile-defined network adapters.

Ability to easily expand an existing cluster using HPE OneView server profiles and SCVMM.

Cluster consistency view in SCVMM to identify mismatched cluster node configurations.

Note

Deployment requires the installation of the HPE ProLiant SCVMM Integration Kit. This kit assists with SCVMM bare metal operating system deployment. It provides HPE drivers for deploying ProLiant servers into the SCVMM library share, so they can be used by the SCVMM deployment process. The same set of drivers can also be used for injection into WinPE images.

Storage fabric support consists of the following:

A summary of the storage information for the configured HPE 3PAR storage, HPE StoreVirtual systems and appliances, and HPE Hyper-Converged 250 (HC-250) for Microsoft CPS Standard.

Facilitates the expansion of an existing storage volume for the configured HPE 3PAR storage, HPE StoreVirtual systems and appliances, and HPE HC-250 for Microsoft CPS Standard.

Facilitates provisioning operations such as storage volume expansion by utilizing HPE OneView resources.

Facilitates VM provisioning by using storage and Fibre Channel (FC) connections.

Facilitates deploying the HPE StoreVirtual Virtual Storage Appliance (VSA).

Provides a detailed view of the configured VMs and the HPE OneView storage system with storage and host information.

Displays SCOM events and alerts for HPE storage systems.

Provides the HPE 3PAR StoreServ Management Console (SSMC) link-launch feature for the HPE 3PAR storage systems.

Provides the HPE StoreVirtual Management Console (SVMC) link-launch feature for the HPE StoreVirtual storage system 13.x.

OneView SCVMM Integration Kit installation

The HPE Fabric Management Add-In for System Center is contained within the HPE OneView SCVMM Integration Kit and can be installed in either High Availability or Standalone mode. High Availability mode consists of an SCVMM implementation hosted within Failover Cluster Manager. Hewlett Packard Enterprise recommends installing the Fabric Management Add-In in High Availability mode within a highly available SCVMM environment. To begin, install the HPE OneView SCVMM Integration Kit, as shown in Figure 54.

Notes

Prior to installing HPE OneView management packs, make sure SCVMM and SCOM have all patches and update rollups applied.

If installing the HPE OneView SCVMM Integration Kit on a system running both the SCVMM Server and Console, the installation of the Fabric Management Add-In will not complete. Install the HPE OneView SCVMM Integration Kit on a system running the SCVMM Console only.

By default, “Console” will be selected for systems containing the SCVMM Console only.

Figure 54. Module installation for HPE OneView SCVMM Integration Kit

Continue by following the installation prompts, and then review the summary. Click Install and Finish to complete the installation.

Configuring the HPE Fabric Management Add-in within the SCVMM console

To use the HPE Fabric Management Add-in to view virtualization information within SCVMM, the SCVMM server needs to be authorized within the Fabric Management Add-in settings. This can be accomplished by navigating to either the VMs and Services or Fabric sections of the SCVMM console. From within Fabric, select the section labeled Servers and then All Hosts. This enables the HPE Fabric Management button and opens the settings section, as shown in Figure 55.

From within settings, the SCVMM server is listed with an Authorize action visible in the Action column, as shown in Figure 55. Click Authorize; when prompted, enter credentials to authorize the SCVMM server and click OK.

Figure 55. HPE Fabric Management settings/authorization of the SCVMM server

Configuring the HPE OneView Appliance within SCVMM

Steps for adding the HPE OneView Appliance are similar. From within the settings OneView Appliances section, click Add OneView and enter the appliance address or hostname, along with credentials for accessing the system. After the credentials are validated, the appliance is added to the system.

Note

When adding the HPE OneView Appliance, make sure the credentials provided contain the Infrastructure Administrator user role.

Configuring the HPE MSA Storage system within SCVMM

HPE MSA Storage systems can be added as storage devices within SCVMM by selecting Add Resources (the green plus sign) on the Fabric Storage page, as shown in Figure 56. Display the menu by clicking the green plus sign icon and select Storage Devices. Finally, select the Provider Type.

Figure 56. Menu for adding resources to HPE Fabric Management Add-in

Technical white paper

Page 51

For HPE MSA or HPE 3PAR storage devices, select the SAN and NAS devices discovered and managed by the SMI-S provider option. This option allows you to configure specifics such as IP address, protocol, and the account used to add the system. After clicking Next, the available storage controllers, pools and capacity will be listed, as shown in Figure 57. At this point, you can add a classification, as well as the host group to be used when provisioning storage. Click Next and then click Finish to save your changes.

Note

HPE MSA storage is supported natively by SCVMM; however, it is not supported by the HPE Fabric Management Add-in.

Figure 57. SMI-S configuration necessary to add HPE MSA storage to HPE Fabric Management Add-in for SCVMM

Figure 58 shows a view of the HPE MSA resources. This provides you with the ability to create classifications and host groups when adding a storage resource to the HPE Fabric Management Add-in.

Figure 58. HPE MSA resources

For more information about the features of the HPE Fabric Management Add-in for SCVMM, see the HPE OneView SCVMM Integration Kit User Guide.

HPE ProLiant SCVMM Integration Kit installation

Note

Deployment requires the installation of the HPE ProLiant SCVMM Integration Kit. This kit assists with SCVMM bare metal OS deployment. It provides the HPE drivers for deploying HPE ProLiant servers into the SCVMM library share, so they can be used by the SCVMM deployment process. The same set of drivers can also be used for injection into WinPE images.

Executing the integration kit prompts you for a location to extract the PowerShell scripts used to complete the installation. After the scripts are extracted, be sure to execute scripts from a shell with Administrative privileges. Change directories to the extraction location and execute the install script. Be sure to include the Library Share location, as shown in Figure 59 and Figure 60.

Figure 59. Command to install HPE ProLiant SCVMM Integration Kit: View One

Figure 60. Command to install HPE ProLiant SCVMM Integration Kit: View Two

For additional information about the HPE ProLiant SCVMM Integration Kit, see the HPE ProLiant SCVMM Integration Kit User Guide.

HPE Storage integration with SCVMM using SMI-S

SMI-S (Storage Management Initiative Specification) is a standard developed by the Storage Networking Industry Association (SNIA). The standard is designed to provide a method of management for heterogeneous Storage Area Network environments, describing the information available to clients from an SMI-S-compliant CIM server through object-oriented, XML-based, messaging-based interfaces designed to manage storage devices.

By leveraging vendor and technology-independent standards, SMI-S allows management application vendors to create applications that work across products from multiple vendors.

HPE Storage Integration with SCVMM automates HPE storage management using SMI-S. It provides an integrated view of VMs and associated storage resources, including end-to-end relationships between VMs and storage with context-sensitive, in-depth information for volumes and their business copies.

For more specific information about SMI-S, see https://www.snia.org/forums/smi/tech_programs/smis_home.

How does it work?

The SMI-S model is divided into services, profiles, and sub-profiles, each of which describes a particular class of SAN entities, such as disk arrays or Fibre Channel switches. These profiles allow for differences in implementation but provide a consistent approach for clients to discover and manage SAN resources, facilitating interoperability within the SAN across vendor products, such as SCVMM and SCOM.

SMI-S also defines an automated resource discovery process using Service Location Protocol (SLP). This allows management applications to automatically find SAN resources and then probe them to determine the SMI-S profiles and features they support.

How do I configure it?

For solutions like SCOM or SCVMM to discover HPE storage devices, such as HPE MSA storage or HPE 3PAR storage, SMI-S must be enabled and the CIM Service must be running. In addition, solutions such as SCOM or SCVMM must be configured to access the HPE storage device.

HPE MSA Storage configuration

For the HPE MSA storage array to be discoverable from SCOM, the array needs to have SMI-S enabled. This can be determined by connecting to the array via SSH and running the Show Protocols command, as shown in Figure 61.

Figure 61. Example showing the result of running the Show Protocols command on an HPE MSA storage array

The output of this command indicates whether the array is supporting Secure SMI-S or Unsecure SMI-S. The port for secure communication is 5989 and the port for unsecure communication is 5988. To enable or disable either secure or unsecure communication, run the following appropriate command:

Enable/disable secure protocol:

enable secure

disable secure

Enable/disable unsecure protocol:

enable unsecure

disable unsecure

Note

The Service Location Protocol is displayed under the SMI-S protocols. This protocol must be enabled to retrieve configuration information about the CIM server running on the storage device.

SCOM configuration

See Using the HPE Storage Management Pack User Configuration Tool above for SCOM configuration. After the storage device has been added within the User Configuration Tool, the device is discoverable via SMI-S and becomes visible within the Diagram View of SCOM.

This information is captured and made possible via SMI-S integration, as shown in Figure 62.

Figure 62. HPE storage integrations/HPE MSA storage diagram view

SCVMM configuration

Next, from within the SCVMM console, navigate to the Fabric section. The pane across the top refreshes and provides an Add Resources drop-down button. Select Storage Devices from the drop-down list, as shown in Figure 63.

Figure 63. View from within SCVMM allowing for configuration of a new storage device

The Add Storage Devices Wizard is displayed, as shown in Figure 64. Select SAN and NAS devices discovered and managed by a SMI-S provider. This allows you to specify the information necessary to discover a storage device using SMI-S.

To add an HPE MSA device, specify the SMI-S CIMXML protocol and enter a valid value in the Provider IP Address or FQDN field. The default connection is established as a secure connection using port 5989; therefore, no additional changes are necessary. Finally, create or select a Run As account entry to complete the connection, as shown in Figure 64. Click Next to continue.
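The same registration can be scripted with the SCVMM PowerShell module; the provider name, host name, and Run As account below are hypothetical, and port 5989 is the secure (SSL) CIM-XML port:

```powershell
# Run on the SCVMM server (or a console with the VMM module available)
Import-Module VirtualMachineManager
$runAs = Get-SCRunAsAccount -Name 'msa-smis-account'
Add-SCStorageProvider -Name 'HPE MSA 2050' `
    -NetworkDeviceName 'https://msa2050.example.local' `
    -TCPPort 5989 -RunAsAccount $runAs
```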

Figure 64. Required information for configuring an HPE MSA storage device and its SMI-S provider

On the following screens the storage device will be discovered and Storage Pools will be populated, along with capacity information. At this point, a classification and host group can be added. These can be used later when provisioning storage.

Figure 65 shows that communication has been established with the storage device. Discovered pool-capacity information, the configured classification, and the host group are also displayed. Click Next to continue.

Figure 65. Select Storage Devices view

The next screen allows you to confirm your settings. Click Finish to add the storage device.

HPE OneView for System Center provides a number of options for monitoring and managing HPE Infrastructure from a “single pane of glass”, so that administrators using Microsoft SCOM can manage their infrastructure. In addition, HPE storage integration with SCVMM automates HPE storage management. HPE storage integration with either SCOM or SCVMM is made possible using the Storage Management Initiative Specification (SMI-S).

Key findings and recommendations

HPE MSA storage

HPE MSA 2050/2052 SAN provides 2x performance over the previous generation at the same price, making it a cost-effective solution for virtualized infrastructure.

Redundant cabling should always be used when implementing a scalable, highly available architecture—whether with iSCSI or Fibre Channel.

HPE MSA 2050/2052 SAN provides support for oversubscription, uninterrupted volume scaling, and storage tiering. Take advantage of the HPE MSA SAN’s many features to improve upon virtual infrastructure within your environment.

HPE MSA 2050/2052 SAN supports two types of disk groups: virtual disk groups and read-cache disk groups. When possible, take advantage of read-cache disk groups to increase random read performance.

Virtual volumes support Volume Tier affinity (No Affinity, Archive, Performance).

HPE MSA 2050/2052 SAN provides two types of replication: Version 1 and Version 2, as follows:

– v1 replicates application-consistent volumes to another array and runs over iSCSI.

– v2 provides crash-consistent volumes to another array and runs over iSCSI or Fibre Channel.

– v2 utilizes Peer Connection Authentication and supports more frequent replications (as often as every 30 minutes).

RAID-level considerations are easier to identify when using Adaptive Optimization. The Performance tier should be configured with RAID 1, the Standard tier with RAID 5, and the Archive tier with RAID 6.

Hardware configuration is likely to be the determining factor in RAID-level implementation. Use the information provided below to assist in the decision-making process.
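As a decision aid for the RAID guidance above, the sketch below computes approximate usable capacity per RAID level for a disk group. It is not an HPE tool; the 12-disk, 1.2 TB figures are taken from the 2050 configuration tested later in this paper.

```python
def usable_capacity_gb(raid, disks, disk_size_gb):
    """Approximate usable capacity of a disk group for common RAID levels."""
    if raid == "RAID1":
        return disks // 2 * disk_size_gb          # mirrored pairs: half the raw capacity
    if raid == "RAID5":
        if disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disks - 1) * disk_size_gb         # one disk's worth of parity
    if raid == "RAID6":
        if disks < 4:
            raise ValueError("RAID 6 needs at least 4 disks")
        return (disks - 2) * disk_size_gb         # two disks' worth of parity
    raise ValueError(f"unsupported RAID level: {raid}")

# The 12-disk groups of 1.2 TB (1200 GB) drives from the tested configuration:
print(usable_capacity_gb("RAID6", 12, 1200))      # 12000
print(usable_capacity_gb("RAID5", 12, 1200))      # 13200
```

The trade is visible in the numbers: RAID 6 gives up one more disk of capacity than RAID 5 in exchange for surviving a second disk failure.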

HPE MSA configuration

HPE MSA 2040 SAN

Table 3. HPE MSA 2040 SAN configuration as tested

Item                   Description                                                Part Number
Model                  HPE MSA 2040 SAN Dual Controller SFF Storage               C8R09A
FC host connectivity   16 Gb, dual connections per controller
iSCSI connectivity     10 Gb, dual connections per controller
Enclosure              SFF array enclosure, 24 drives maximum
Firmware               GL220P009
RAID levels used       NRAID on SSD read-cache disk groups rcA1 and rcB1;
                       RAID 5 on dgA01 and dgB01
Disk type/size         24 drives total:
                       2 SSDs (SAS, 400 GB, SFF)
                       22 HDDs (SAS, 900 GB, 10K rpm, SFF)
Disk groups            Pool A: rcA1 (single SSD, NRAID); dgA01 (12 HDDs, RAID 5)
                       Pool B: rcB1 (single SSD, NRAID); dgB01 (10 HDDs, RAID 5)


HPE MSA 2050 SAN

Table 4. HPE MSA 2050 SAN configuration as tested

Item                   Description                                                Part Number
Model                  HPE MSA 2050 SAN Dual Controller SFF Storage               Q1J01A
FC host connectivity   16 Gb, dual connections per controller
iSCSI connectivity     10 Gb, dual connections per controller
Enclosure              SFF array enclosure, 24 drives maximum
Firmware               VL100R004
RAID levels used       NRAID on SSD read-cache disk groups rcA1 and rcB1;
                       RAID 6 on dgA01 and dgB01
Disk qty/type/size     24 drives total:
                       2 SSDs (SAS, 800 GB, SFF)
                       22 HDDs (SAS, 1.2 TB, 10K rpm, SFF)
Disk groups            Pool A: rcA1 (single SSD, NRAID); dgA01 (12 HDDs, RAID 6)
                       Pool B: rcB1 (single SSD, NRAID); dgB01 (10 HDDs, RAID 6)

Performance tuning

Some options to improve the performance of HPE MSA 2050 storage include the following:

Controller cache settings for individual volumes

– Write-back or write-through caching – write-back caching improves the performance of write operations.

– Optimizing read-ahead caching for volumes with sequential reads or streaming data.

– Volume optimization mode (Standard or Mirror); Standard is the default and recommended setting.

Host port tuning – using more ports per controller is recommended to achieve the best performance results.

Best practices for SSDs – when using SSDs, place them within an array enclosure instead of a disk enclosure, because array enclosures can use SSDs more efficiently.
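The read-ahead tuning recommendation above works because sequential streams are easy to detect: after a run of contiguous reads, the controller can prefetch blocks ahead of the stream. The sketch below illustrates that idea only; it is not the MSA firmware algorithm, and the threshold and prefetch depth are illustrative.

```python
class ReadAheadDetector:
    """Toy sequential-read detector: after N contiguous reads, prefetch ahead."""
    def __init__(self, threshold=3, prefetch_blocks=8):
        self.threshold = threshold
        self.prefetch_blocks = prefetch_blocks
        self.last_block = None
        self.run = 0

    def on_read(self, block):
        """Return the list of blocks to prefetch, if any."""
        if self.last_block is not None and block == self.last_block + 1:
            self.run += 1                          # extend the sequential run
        else:
            self.run = 0                           # random access resets the run
        self.last_block = block
        if self.run >= self.threshold:
            return list(range(block + 1, block + 1 + self.prefetch_blocks))
        return []

d = ReadAheadDetector()
for b in (10, 11, 12, 13):                         # four contiguous reads
    prefetch = d.on_read(b)
print(prefetch)   # [14, 15, 16, 17, 18, 19, 20, 21]
```

Random workloads never build a run, so they trigger no prefetch; this is why read-ahead tuning only helps volumes with sequential or streaming access patterns.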


For more information related to performance tuning, see the HPE MSA 2050-2052 Best Practices.

Summary

Businesses are constantly looking for ways to implement low-cost, easy-to-deploy storage solutions for their virtualized environments. Utilizing HPE infrastructure, such as servers, storage, and networking, enables customers to implement solutions to meet performance and availability needs at a cost that delivers a quicker return on investment (ROI).

HPE ProLiant DL380 server, HPE networking with speeds and dependability to support virtually any networking requirement, and HPE MSA storage—the industry’s leading entry-level SAN storage system eight years running—together provide the ideal low-cost solution to support virtually any multitenant mixed workload.

Terminology

Table 5. Terminology

Adaptive Optimization (AO): HPE 3PAR's disk-tiering technology, which automatically moves the most frequently accessed blocks of data to the fastest disks while moving infrequently accessed cold blocks to slower disks.

Array enclosure: The array head or chassis of the HPE MSA storage system that includes the HPE MSA controllers.

Disk enclosure: The expansion shelf that is connected to the array enclosure.

Disk group: An aggregation of disks of the same type, using a specific RAID type, that is incorporated as a component of a pool for the purpose of storing volume data.

East-to-west: VM traffic over networks within the hypervisor host. VMs can communicate with other VMs or with the hypervisor host, but traffic does not leave the host.

Fabric: Commonly used to describe data or storage area networks (SANs).

Logical Unit Number (LUN): A unique identifier designating an individual or collection of physical or virtual storage devices that execute input/output (I/O) commands with a host computer, as defined by the Small Computer System Interface (SCSI) standard. HPE MSA 2050/2052 arrays support 512 volumes and up to 512 snapshots in a system, and all of these volumes can be mapped to LUNs. Maximum LUN size is 128 TiB.

Network Interface Card (NIC): A component that connects a physical or virtual server to a network.

North-to-south: VM traffic that leaves the hypervisor host through a physical network interface. VMs can communicate with VMs located on the same network, on internal or VLAN networks, and possibly on the Internet.

NRAID (non-RAID): No RAID protection or striping; data is written to a single disk.

Page: An individual block of data residing on a physical disk. For virtual storage, the page size is 4 MB. A page is the smallest unit of data that can be allocated, deallocated, or moved between virtual disk groups in a tier or between tiers.

Read cache: An extension of the controller cache. A read-cache disk group is a special type of virtual disk group used to cache virtual pages to improve read performance.

Switch Embedded Teaming (SET): An alternative NIC teaming solution for environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016. SET integrates some NIC teaming functionality into the Hyper-V virtual switch.

Storage Management Utility (SMU): The UI for the HPE MSA storage device.

Simple Network Management Protocol (SNMP): A widely used network monitoring and control protocol.

Storage pool: One or more virtual disk groups. A volume's data on a LUN can span all disk drives in a pool. When capacity is added to a system, users benefit from the performance of all spindles in that pool.

Storage system: The entire HPE MSA system, including the array enclosure and disk enclosures.

Thin Provisioning: Allocates physical storage resources only when they are consumed by an application. Thin Provisioning also allows overprovisioning of physical storage-pool resources, enabling flexible growth for volumes without requiring an upfront prediction of storage capacity.

Tier: A storage method where data resides on various types of media based on performance, availability, and recovery requirements.

User Configuration Tool (UCT): A tool used to add storage devices to the HPE Storage Management Pack for System Center. After a storage device is added, its information is displayed within the Diagram View of the Management Pack.

Virtual Hard Disk (VHD): A legacy virtual hard disk format that has a limit of 2040 GB and cannot be used with Generation 2 VMs.

Hyper-V Virtual Hard Disk (VHDX): A newer virtual hard disk format that cannot be used with Windows Server 2008. VHDX-formatted disks can be up to 64 TB, support trim and reclaim of unused space, have a 4K logical sector size, and use an internal log that reduces the chance of corruption.
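The Page and Thin Provisioning entries above fit together: physical 4 MB pages are allocated only on first write, so the sum of provisioned volume sizes can exceed the pool's physical capacity. The sketch below is illustrative only; the volume names and sizes are hypothetical, not MSA behavior verbatim.

```python
PAGE_MB = 4   # virtual-storage page size noted in the terminology table

class ThinPool:
    """Toy thin-provisioned pool: physical pages are allocated on first write."""
    def __init__(self, physical_mb):
        self.physical_pages = physical_mb // PAGE_MB
        self.allocated = set()                    # (volume, page_index) pairs

    def write(self, volume, offset_mb):
        page = (volume, offset_mb // PAGE_MB)
        if page not in self.allocated:
            if len(self.allocated) >= self.physical_pages:
                raise RuntimeError("pool out of physical capacity")
            self.allocated.add(page)              # allocate only on first write

    def used_mb(self):
        return len(self.allocated) * PAGE_MB

pool = ThinPool(physical_mb=1024)                 # 1 GiB of physical capacity
# Volumes can be provisioned far larger than that; only written pages use space.
for off in range(0, 40, 4):                       # write the first 40 MB of vol1
    pool.write("vol1", off)
pool.write("vol2", 100)                           # touch one 4 MB page of vol2
print(pool.used_mb())                             # 44
```

Rewriting an already-allocated page consumes no additional capacity, which is why thinly provisioned pools only grow as new regions of a volume are touched.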


Resources and additional links

HPE Servers

HPE Storage

HPE Networking

MSA Entry Level Storage Systems hpe.com/storage/msa

HPE MSA 2050/2052 Best Practices

HPE MSA Remote Snap Software

HPE OneView 3.1 Documentation and Videos

HPE Storage Management Pack for System Center v4.5 User Guide

HPE OneView SCOM Integration Kit v3.3 User Guide

HPE OneView SCVMM Integration Kit v3.3 User Guide

HPE 3PAR OS Command Line Interface Reference (HPE 3PAR OS 3.3.1)

Additional HPE OneView for Microsoft System Center documentation

What’s New in Windows Server 2016

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.

© Copyright 2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

a00038738en_us, January 2018, Rev. 1