
Виртуализация ЦОД с помощью

System Center 2012 R2


Александр Шаповал
Microsoft
Александр Шаповал
Эксперт по стратегическим технологиям
Email: ashapo@microsoft.com
Blog: http://blogs.technet.com/b/ashapo
Twitter: @ashapoval
http://itcamps.ru
Семинар 1. Windows 8.1 Enterprise для ИТ-специалистов
Семинар 2. Управление ЦОД с помощью Windows Server 2012 R2
Семинар 3. Виртуализация ЦОД с помощью System Center 2012 R2
Семинар 4. Управление ЦОД с помощью System Center 2012 R2
Семинар 5. Переход к гибридному облаку с помощью Windows Azure и System Center 2012 R2
Программа семинара
Время Описание
09:30 – 10:00 Регистрация
10:00 – 12:00 Платформа виртуализации Microsoft
Настройка хостов виртуализации
12:00 – 12:15 Перерыв
12:15 – 14:00 Настройка хостов виртуализации
14:00 – 15:00 Обед
15:00 – 16:30 Кластеризация и надежность
16:30 – 16:45 Перерыв
16:45 – 18:00 Конфигурация виртуальных машин
18:00 – 18:15 Сессия вопросов и ответов
Введение
Модуль 1
Конфигурация лабораторных работ
DC01.contoso.com - Windows Server 2012 R2
3 NICs: 192.168.1.1 | 10.10.10.1 | 11.11.11.1
Running Active Directory Domain Services, DNS, iSCSI Target

SCVMM01.contoso.com - Windows Server 2012 R2
4 NICs: 192.168.1.6 | 10.10.10.6 | 11.11.11.6 | External DHCP
Running System Center 2012 R2 Virtual Machine Manager & SQL Server 2012 SP1
Running Server Manager & MMC interfaces: Hyper-V Manager, Failover Cluster Manager

HYPER-V01.contoso.com - Hyper-V Server 2012 R2
8 NICs: 192.168.1.4 | 10.10.10.4 | 11.11.11.4 | 15.15.15.4 | 16.16.16.4 | 3 reserved
Running Hyper-V, Failover Clustering
Note: Hyper-V is running inside a VM, thus some functionality may not be available.

HYPER-V02.contoso.com - Hyper-V Server 2012 R2
8 NICs: 192.168.1.5 | 10.10.10.5 | 11.11.11.5 | 15.15.15.5 | 16.16.16.5 | 3 reserved
Running Hyper-V, Failover Clustering
Note: Hyper-V is running inside a VM, thus some functionality may not be available.

FS01.contoso.com - Windows Server 2012 R2
3 NICs: 192.168.1.2 | 10.10.10.2 | 11.11.11.2
Running File Server Role & Services
Конфигурация лабораторных работ

[Схема лабораторного стенда] You will RDP into SCVMM01 (192.168.1.6). The Corp Network (192.168.1.x) connects DC01, SCVMM01, FS01, HYPER-V01 and HYPER-V02; the 10.10.10.x and 11.11.11.x networks carry SMB/iSCSI traffic; 15.15.15.x is the Live Migration network and 16.16.16.x is the Cluster Communication network between HYPER-V01 and HYPER-V02.
Платформа
виртуализации
Microsoft
Модуль 2
Ключевые технологии

Уровень | System Center 2012 R2 | vCloud Suite
Automation | Orchestrator | vCenter Orchestrator
Service Mgmt. | Service Manager | vCloud Automation Center
Protection | Data Protection Manager | vSphere Data Protection
Monitoring | Operations Manager | vCenter Ops Mgmt. Suite
Self-Service | App Controller | vCloud Director
VM Management | Virtual Machine Manager | vCenter Server
Hypervisor | Hyper-V | vSphere Hypervisor
Ключевые технологии: лицензирование

System Center 2012 R2 Licensing (Standard | Datacenter) vs vCloud Suite Licensing (Std. | Adv. | Ent.)
# of Physical CPUs per License: 2 | 2 vs 1 | 1 | 1
# of Managed OSEs per License: 2 + Host | Unlimited vs Unlimited VMs on Hosts
Includes all SC Mgmt. Components: Yes | Yes; vCloud Suite includes vSphere 5.1 Enterprise Plus: Yes | Yes | Yes
Includes SQL Server for Mgmt. Server Use: Yes | Yes; Includes vCenter 5.5: No | No | No; Includes all required database licenses: No | No | No
Open No Level (NL) & Software Assurance (L&SA) 2-year pricing: $1,323 | $3,607; Retail pricing per CPU (no S&S): $4,995 | $7,495 | $11,495
Hypervisor: Windows Server 2012 R2 inc. Hyper-V; Hyper-V Server 2012 R2 = Free Download
vSphere 5.5 standalone per-CPU pricing (excl. S&S): Standard = $995, Enterprise = $2,875, Enterprise Plus = $3,495
Варианты развертывания Hyper-V

Windows Server
• Server with a GUI
• Server Core Installation
• Many roles available incl. Hyper-V

Hyper-V Server
• Free standalone download
• Contains hypervisor, driver model & key virtualization components
• Server Core minus other roles

From a Hyper-V perspective, all 3 deployment options have identical capabilities.
Эволюция Hyper-V

[Временная шкала выпусков Hyper-V и основных возможностей]
• Июнь 2008 г. - Windows Server 2008 Hyper-V: первый выпуск гипервизора Hyper-V.
• Октябрь 2008 г. - Hyper-V Server 2008: бесплатный автономный выпуск.
• Октябрь 2009 г. - Windows Server 2008 R2 Hyper-V и Hyper-V Server 2008 R2: динамическая миграция, общие тома кластера, совместимость процессоров, «горячее» добавление хранилищ.
• Февраль 2011 г. - Windows Server 2008 R2 SP1 и Hyper-V Server 2008 R2 SP1: динамическая память, RemoteFX.
• Сентябрь 2012 г. - Windows Server 2012 и Hyper-V Server 2012: максимальная масштабируемость, пространства хранения, учет ресурсов и управление качеством обслуживания, усовершенствованная миграция, повышение производительности и масштабируемости, расширяемость, аппаратная разгрузка, виртуализация сети, репликация.
Масштабируемость физических и виртуальных компонентов

Виртуализация самых ресурсоемких рабочих нагрузок. Масштабируемость корпоративного уровня для основных рабочих нагрузок.

Узлы
• Поддержка до 320 логических процессоров и 4 ТБ физической оперативной памяти на каждом узле
• Поддержка до 1024 виртуальных машин на каждом узле
Кластеры
• Поддержка до 64 физических узлов и 8000 виртуальных машин в кластере
Виртуальные машины
• Поддержка до 64 виртуальных процессоров и 1 ТБ памяти на каждой ВМ

[На схеме: 320 логических процессоров, 4 ТБ физической памяти, 64 физических узла; виртуальный ЦП - 64, виртуальная память - 1 ТБ]
Сравнение с VMware

System Resource | Windows Server 2012 R2 Hyper-V | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Host: Logical Processors | 320 | 320 | 320
Host: Physical Memory | 4TB | 4TB | 4TB
Host: Virtual CPUs per Host | 2,048 | 4,096 | 4,096
VM: Virtual CPUs per VM | 64 | 8 | 64 (1)
VM: Memory per VM | 1TB | 1TB | 1TB
VM: Active VMs per Host | 1,024 | 512 | 512
VM: Guest NUMA | Yes | Yes | Yes
Cluster: Maximum Nodes | 64 | N/A (2) | 32
Cluster: Maximum VMs | 8,000 | N/A (2) | 4,000

1. vSphere 5.5 Enterprise Plus is the only vSphere edition that supports 64 vCPUs. Enterprise edition supports 32 vCPUs per VM, with all other editions supporting 8 vCPUs per VM.
2. For clustering/high availability, customers must purchase vSphere.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/products/vsphere-hypervisor/faq.html,
http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf
Управление вирт. средой (Compares with vCenter Server)

Centralized, Scalable Management of Hyper-V
System Center Virtual Machine Manager 2012 R2

VM and Cloud management
• Supports up to 1,000 Hyper-V hosts & 25,000 virtual machines per VMM Server
• Supports Hyper-V hosts in trusted & untrusted domains, disjointed namespace & perimeter networks
• Supports Hyper-V from 2008 R2 SP1 through to 2012 R2
• Comprehensive fabric management capabilities across Compute, Network & Storage
• End to end VM management across heterogeneous hosts & clouds

Hyper-V Hosts
Архитектура VMM

Компоненты: Management Server (сервер управления), Management Console (консоль управления), App Controller, SQL, Library (библиотека). Один сервер управления поддерживает до 1000 хостов.

Высокодоступная архитектура VMM

Компоненты: кластер управления VMM, SQL-кластер, App Controller за NLB, консоли управления, библиотека на файловом кластере. Один сервер управления поддерживает до 1000 хостов.
System Center для ЦОД

App Controller - портал управления ВМ и приложениями (портал самообслуживания).
Operations Manager - проактивный мониторинг инфраструктуры и приложений.
Service Manager - управление ИТ-сервисами, портал самообслуживания IaaS.
Virtual Machine Manager - управление ВМ и облаками.
Orchestrator - интеграция и автоматизация ключевых технологий и процессов.
Data Protection Manager - непрерывная защита ключевых приложений и сервисов.
Хосты Hyper-V
Настройка
хостов

Модуль 3
Развертывание Hyper-V (сравнение с PXE и автоматическим развертыванием)

Развертывание при помощи DVD-диска или файла ISO - создание загрузочного DVD-диска с помощью исходного файла ISO Windows Server/Hyper-V Server.
Развертывание при помощи накопителя USB - создание загрузочного накопителя USB с помощью исходного носителя Windows Server/Hyper-V Server.
Развертывание по сети - PXE выполняет загрузку физических узлов и развертывание образа Windows Server/Hyper-V Server по сети.
Развертывание виртуализованной среды с помощью VMM

Сбор информации до развертывания Hyper-V
Благодаря интеграции с BMC, VMM может вывести из спящего режима физический сервер без стандартного программного обеспечения и собрать данные для определения соответствующего развертывания.
1. Перезагрузка OOB.
2. Загрузка с PXE с сервера WDS.
3. Авторизация загрузки с PXE.
4. Скачивание настроенной в VMM среды предустановки WinPE.
5. Выполнение набора вызовов в среде WinPE для инвентаризации оборудования (сетевые адаптеры и диски).
6. Передача данных об оборудовании обратно в VMM.

Развертывание виртуализованной среды с помощью VMM

Централизованное автоматизированное развертывание Hyper-V «с нуля»
После сбора информации VMM развернет образ Hyper-V на физическом сервере без стандартного программного обеспечения.
1. Перезагрузка OOB.
2. Загрузка с PXE с сервера WDS.
3. Авторизация загрузки с PXE.
4. Скачивание настроенной в VMM среды предустановки WinPE.
5. Запуск универсальных сценариев и настройка разделов.
6. Скачивание виртуального жесткого диска из сервера библиотек и вставка физических аппаратных драйверов в соответствии с профилем физического компьютера.
Затем узел присоединяется к домену и добавляется под управление VMM. После установки выполняются соответствующие сценарии.
Virtualization Host Configuration
Granular, Centralized
Configuration of Hosts
Virtual Machine Manager 2012 R2 provides
complete, centralized hardware configuration
for Hyper-V hosts
Hardware – Allows the admin to configure
local storage, networking, BMC settings etc.
Storage – Allows the admin control granular
storage settings, such as adding an iSCSI or
FC array LUN to the host, or an SMB share.
Virtual Switches – A detailed view of the
virtual switches associated with physical
network adaptors.
Migration Settings – Configuration of Live
Migration settings, such as LM network,
simultaneous migrations
Конфигурация узла виртуализации
Детальная централизованная
конфигурация узлов

Virtual Machine Manager 2012 R2 обеспечивает


полную централизованную конфигурацию
оборудования для узлов Hyper-V.
Оборудование — позволяет администраторам
конфигурировать локальные хранилища, сети,
параметры BMC и т. д.
Хранилище — дает администраторам
возможность управлять параметрами хранилищ
iSCSI, Fibre Channel или ресурсов SMB.
Виртуальные коммутаторы — обеспечивают
детализированное представление виртуальных
коммутаторов, связанных с адаптерами
физической сети.
Параметры миграции — позволяют настраивать
конфигурацию динамической миграции
(сеть LM, одновременная миграция и т. д.).
Поддержка хранилища Hyper-V (сравнение с MPIO, VAAI и VAMP)

Интерфейсы iSCSI и Fibre Channel - простая и быстрая интеграция с инвестициями в существующие хранилища.
Поддержка многопутевого ввода-вывода (Multi-Path I/O, MPIO) - встроенный модуль MPIO для повышения надежности и производительности, поддержки партнерских решений.
Технология Offloaded Data Transfer - разгрузка задач, предъявляющих высокие требования к хранилищам, и передача их в SAN.
Встроенная поддержка дисков с секторами размером 4 КБ - используйте преимущества повышенной плотности и надежности.
Управление хранилищем VMM
Централизованное управление System Center
и подготовка хранилища Virtual Machine Manager 2012 R2
Управление хранением
VMM позволяет обнаруживать локальные
и удаленные хранилища (SAN, пулы, LUN, диски,
тома, виртуальные диски и т. д.), а также
управлять ими.
VMM поддерживает хранилища с блочной
записью, использующие интерфейсы iSCSI и Fibre
Channel, а также файловые хранилища.
Интеграция VMM с WS SMAPI обеспечивает:
• Обнаружение SMI-S, SMP и устройств Storage
Spaces.
• Управление дисками и томами.
• Управление инициатором iSCSI/FC/SAS HBA.
R2: перечисление хранилищ выполняется
в 10 раз быстрее. Хранилища с Файловое
блочной хранилище
записью
Интегрированный интерфейс iSCSI Target
Преобразование Windows
Server 2012 R2 в iSCSI SAN
Интеграция ролей в Windows Server и управление
с помощью графического пользовательского интерфейса,
PowerShell.
Оптимально подходит для загрузки по сети и загрузки
без использования дисков, использования серверных
хранилищ приложений, разработки разнородных
хранилищ, а также для тестирования и развертывания
в лабораторных условиях.
Поддерживаются виртуальные жесткие диски VHDX
объемом до 64 ТБ, функции тонкой подготовки,
динамические и разностные виртуальные диски.
Также поддерживается безопасное обнуление для
развертываний дисков с фиксированным размером.
Масштабирование до 544 сеансов и 256 LUN для каждого
целевого сервера iSCSI с поддержкой кластеризации
для обеспечения устойчивости.
Полнофункциональная поддержка VMM Management
с помощью SMI-S.
Интеграция VMM iSCSI и Fibre Channel
Расширенная поддержка System Center
структур Fibre Channel Virtual Machine Manager 2012 R2
Управление хранением
После обнаружения VMM может централизованно
управлять ключевыми функциями iSCSI и Fibre
Channel.
iSCSI — подключение узлов Hyper-V к порталу iSCSI
и вход с использованием целевых портов iSCSI FC Fabric
и поддержкой нескольких сеансов для MPIO. (Структура FC)
Fibre Channel — добавление целевых портов к зоне
• Управление зонами, участниками и наборами зон
iSCSI SAN FC SAN
После подключения VMM может создавать
и назначать LUN, инициализировать диски, создавать
разделы, тома и т. д.
VMM также может уменьшать объем дискового
пространства, отключать тома, присваивать маски
LUN и т. д.
Декомпозиция SAN
Адаптеры подключения
Надежное подключение к внешним источникам
с помощью iSCSI, FC, FCoE, NFS, SMB

Контроллеры
Как правило, хранилище SAN, включающее в себя ЦП x86
и оперативную память, обладает функциональными
возможностями корпоративного уровня и позволяет выполнять
тонкую настройку, дедупликацию, разбиение хранилищ
на уровни и т. д. Использование нескольких контроллеров
обеспечивает отказоустойчивость.

Физические диски
Носители на основе технологии флеш (SSD) или шпиндельные
носители (HDD) предоставляют дисковое пространство для
хранения данных. Поддерживается объединение в пулы при
помощи контроллеров и разделение на LUN (простые,
с зеркалированием, с поддержкой четности и т. д.).
Управление хранилищами Майкрософт
Адаптеры подключения
Файловые серверы Windows Server обеспечивают надежное
подключение к внешним источникам при помощи стандартных
сетевых адаптеров 1GbE, 10GbE. Поддерживаются различные
адаптеры, включая RDMA до 56 Гбит/с. Кроме того, поддерживается
подключение при помощи iSCSI, SMB 3.0 и NFS.

Контроллеры
Кластеризованные файловые серверы Windows Server 2012 R2
(SOFS) создают пулы дисков, затем разделяют их на пространства
хранения. Пространства хранения поддерживают тонкую
подготовку и дедупликацию. Пространства могут быть простыми,
с поддержкой зеркалирования или с поддержкой четности.

Физические диски
Низкая стоимость, низкий уровень сложности массива JBOD
при совместном использовании SSD/HDD
и нескольких портов подключения SAS.
Поддержка хранилища узла Hyper-V

Пространства хранения (Storage Spaces) - преобразование недорогих дисков большой емкости в гибкие и надежные виртуализованные хранилища.
Разбиение хранилищ на уровни* - создание пулов HDD и SSD, автоматическое перемещение часто используемых данных в SSD для повышения производительности.
Дедупликация данных - экономное потребление файловых хранилищ, поддержка используемых виртуальных жестких дисков для сценариев VDI*.
Hyper-V через SMB 3.0 - простота подготовки, повышенный уровень гибкости, легкость интеграции, высокая производительность.

*Новое в Windows Server 2012 R2
Пространства хранения Storage Spaces

Встроенное в Windows решение для управления хранилищами.
Виртуализация хранилищ за счет группировки дисков, соответствующих отраслевым стандартам, в пулы носителей.
Пулы разделяются на виртуальные диски, или пространства.
Пространства поддерживают Thin Provisioning, структуры Simple, Mirroring и Parity.
Windows создает том в пространстве хранения и разрешает размещение данных в томе.
Пространства хранения могут использовать только хранилища прямого подключения (Direct Attach Storage, DAS) - локально в шасси или с помощью SAS.

[Слои на схеме: тома (F:\) → пространства → пулы → диски DAS]
Разбиение хранилищ на уровни для создания пространств

Оптимизация производительности хранения данных в пространствах хранения (Storage Spaces).
Пул дисков состоит из высокопроизводительных носителей SSD и HDD большой емкости.
Часто используемые данные перемещаются автоматически в SSD, а редко используемые - в HDD с помощью функции Sub-File-Level data movement (перемещение данных на более низком уровне, чем уровень файлов).
SSD поддерживает функции обратной записи и кэширования. Это позволяет ускорить обработку неупорядоченных операций записи, которые часто встречаются при виртуализованных развертываниях.
Администраторы могут вручную переместить часто используемые файлы в SSD для повышения производительности.
Для управления уровнями хранилищ доступны новые командлеты PowerShell.

[На схеме: уровень твердотельных накопителей - 400 ГБ EMLC SAS SSD (часто используемые данные); уровень жестких дисков - 4 ТБ 7200RPM SAS (редко используемые данные)]
Hyper-V через SMB 3.0
Хранение виртуальных машин
Hyper-V в файловых хранилищах
SMB 3.0
\\SOFSFileServerName\VMs
Упрощение подготовки и управления.
Низкие операционные и капитальные затраты.
Файловый сервер
Добавление нескольких сетевых плат в файловых с поддержкой
серверах позволяет использовать протокол горизонтального
SMB Multichannel. Это способствует масштабирования
повышению пропускной способности и
надежности. Сетевые платы должны обладать Пространства
одинаковыми характеристиками (тип и скорость). хранения
Применение сетевых плат с поддержкой RDMA
дает возможность использовать протокол SMB Пулы
Direct, чтобы разгрузить обработку сетевых носителей
операций ввода-вывода на сетевой плате.
SMB Direct обеспечивает высокую пропускную
способность и низкий уровень задержек;
скорость может достигать 40 Гбит/с (RoCE) Физичес-
и 56 Гбит/с (Infiniband). кие диски
Интеграция файловых хранилищ
Широкие возможности
управления интегрированными
файловыми хранилищами
VMM обеспечивает доступ к общим сетевым
ресурсам с помощью SMB 3.0 для устройств NAS
(EMC, NetApp, а также продуктов других
поставщиков).
VMM поддерживает интеграцию с автономными
и кластеризованными файловыми серверами,
а также управление ими.
VMM позволяет быстро обнаруживать выбранные
файловые хранилища и проводить
их инвентаризацию.
VMM поддерживает функции выбора и
классификации существующих файловых
ресурсов. Это дает возможность упростить
размещение виртуальных машин.
VMM обрабатывает ACL автоматически. Благодаря
этому ИТ-администраторы могут назначать общие
ресурсы узлам Hyper-V для размещения
виртуальных машин.
Масштабируемый файловый сервер

Низкая стоимость, высокая производительность, отказоустойчивые общие хранилища.
[На схеме: масштабируемый файловый сервер из 4 узлов - FS1, FS2, FS3, FS4]
Кластеризованный файловый сервер для хранения файлов виртуальных машин Hyper-V на общих файловых ресурсах.
Высокая надежность, управляемость Кластеризо-
и производительность — это отличительные признаки ванные
систем SAN. пространства

Доступ к общим файловым ресурсам в режиме Кластеризо-


«активный — активный» — несколько файловых ванные пулы
ресурсов подключены к сети одновременно.
Расширенная полоса пропускания — при
добавлении дополнительных узлов SOFS.
Возможность использования CHKDSK с нулевым
временем простоя, поддержка кэша CSV. JBOD
Хранение
VMM поддерживает функции создания и управления с использова-
при помощи существующих серверов Windows, нием общих
а также при помощи серверов без ресурсов SAS
специализированного программного обеспечения.
Развертывание масштабируемых
файловых серверов
Централизованное и управляемое
развертывание файловых хранилищ

VMM поддерживает не только управление


автономными файловыми серверами, но и
развертывание масштабируемых файловых серверов
(даже на ресурсах без специализированного ПО).
При развертывании на узлах без
специализированного ПО характеристики файлового
сервера определяются физическим профилем.
Существующие серверы Windows могут быть
преобразованы в масштабируемые файловые
серверы (Scale-Out File Servers, SOFS)
непосредственно в VMM.
После импорта VMM может преобразовать
отдельные диски в диски высокой доступности
и динамические пулы (с поддержкой
классификации).
VMM может также создать отказоустойчивые
пространства и общие файловые ресурсы в пуле
носителей.
Классификация хранилищ и структур
Детальная классификация
хранилищ и структур FC
VMM позволяет классифицировать хранилища,
используя высокий уровень детализации для
извлечения сведений о хранилище:
• Тома (включая локальные диски узлов
и хранилища с прямым подключением).
• Общие файловые ресурсы (автономные
и на основе SOFS).
• Пулы носителей и LUN SAN.
• Структуры Fibre Channel — позволяют
идентифицировать структуру с помощью
понятных имен.
Поддержка эффективного и упрощенного
развертывания виртуальных машин на основе
классификаций.
Возможность интеграции с облачной средой.
Шифрование диска BitLocker
VHD на традиционном LUN
Встроенные средства шифрования E:\VM2
диска помогают защитить важные
данные
Встроенные средства защиты информации
VHD на DAS
• Поддержка режима шифрования только
занятого пространства на диске.
F:\VM1
• Интеграция с модулем TPM.
• Сетевая разблокировка и интеграция с AD.
Поддержка дисков различного типа
• Хранилища с прямым подключением
(Direct Attach Storage, DAS).
• Традиционный SAN LUN.
• Общие тома кластера.
• Общий файловый сервер
Windows Server 2012.

VHD на общих томах кластера VHD на файловом сервере


C:\ClusterStorage\Volume1\VM4 \\FileServer\VM3
Сравнение с VMware

Возможности | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Поддержка iSCSI/FC | Да | Да | Да
Поддержка систем многопутевого ввода-вывода (MPIO) сторонних поставщиков | Да | Нет | Да (VAMP) (1)
Возможность разгрузки SAN | Да (ODX) | Нет | Да (VAAI) (2)
Виртуализация хранилищ данных | Да (пространства) | Нет | Да (vSAN) (3)
Разбиение хранилищ на уровни | Да | Нет | Да (4)
Поддержка сетевой файловой системы | Да (SMB 3.0) | Да (NFS) | Да (NFS)
Дедупликация данных | Да | Нет | Нет
Шифрование хранилищ | Да | Нет | Нет

Hyper-V использует инвестиции в оборудование. При этом отсутствуют ограничения, связанные с SKU, а также необходимость обновлений.

1. Функция vSphere API для поддержки нескольких каналов ввода-вывода (VAMP) доступна только в версиях Enterprise и Enterprise Plus vSphere 5.5.
2. Функция vSphere API для интеграции массивов (VAAI) доступна только в версиях Enterprise и Enterprise Plus vSphere 5.5.
3. Функция vSphere vSAN доступна только в виде бета-версии.
4. vSphere Flash Read Cache поддерживает только механизм сквозного кэширования, ускоряющий операции чтения. В vSAN также предусмотрены механизмы кэширования SSD, выполняющие функции буфера для операций чтения и записи.

Сведения о vSphere Hypervisor / vSphere 5.x Ent+: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf,
http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf, http://www.vmware.com/products/vsphere/compare.html.
Lab 1
Virtual Machine Storage
Module 3

Подключение к лаб. работам
https://cloud.holsystems.com/itcamp
Access event code: ITCV9177
Hyper-V Storage Support (compares with MPIO, VAAI & VAMP)

iSCSI & Fibre Channel - integrate with existing storage investments quickly and easily.
Multi-Path I/O Support - inbox MPIO for resiliency, increased performance & partner extensibility.
Offloaded Data Transfer - offloads storage-intensive tasks to the SAN.
Native 4K Disk Support - take advantage of enhanced density and reliability.
VMM Storage Management
Centralized Management & System Center
Provisioning of Storage Virtual Machine Manager 2012 R2
Storage Management
VMM can discover & manage local and
remote storage, including SANs, Pools, LUNs,
disks, volumes, and virtual disks.
VMM supports iSCSI & Fibre Channel Block
Storage & File-based Storage
VMM integrates with WS SMAPI for discovery
of:
• SMI-S, SMP, and Spaces Devices
• Disk & Volume management
• iSCSI/FC/SAS HBA initiator management
R2: 10x faster enumeration of storage
Block Storage File Storage
Integrated iSCSI Target
Transform Windows Server
2012 R2 into an iSCSI SAN
Integrated Role within Windows Server &
manageable via GUI, PowerShell
Ideal for Network & Diskless Boot, Server
Application Storage, Heterogeneous Storage
& Development, Test & Lab Deployments
Supports up to 64TB VHDX, Thin Provisioning,
Dynamic & Differencing. Also supports
secure zeroing of disk for Fixed size disk
deployments.
Scalable up to 544 sessions & 256 LUNs per
iSCSI Target Server & can be clustered for
resilience
Complete VMM Management via SMI-S
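
A minimal PowerShell sketch of carving a LUN out of the integrated iSCSI Target role (server path, target and initiator names are hypothetical):

# On the iSCSI Target Server (Windows Server 2012 R2, iSCSI Target Server role installed)
Import-Module IscsiTarget
New-IscsiServerTarget -TargetName "HyperVCluster" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyper-v01.contoso.com"
# Create a dynamically expanding VHDX-backed virtual disk and map it to the target
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\CSV1.vhdx" -SizeBytes 500GB
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "D:\iSCSIVirtualDisks\CSV1.vhdx"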
VMM iSCSI & Fibre Channel Integration
Improved Support for Fibre System Center
Channel Fabrics Virtual Machine Manager 2012 R2
Storage Management
Once discovered, VMM can centrally manage
key iSCSI & Fibre Channel capabilities.
iSCSI - Connects Hyper-V hosts to iSCSI
portal and logs on to iSCSI target ports FC Fabric
including multiple sessions for MPIO.
Fibre Channel - Add target ports to Zone
• Zone Management, Member
Management, Zoneset Management iSCSI SAN FC SAN
Once connected, VMM can create and assign
LUNs, initialize disks, create partitions,
volumes etc.
VMM can also remove capacity, unmounts
volumes, mask LUNs etc.
Deconstructing a SAN

Connectivity Adaptors
Resilient connectivity to external sources via
iSCSI, FC, FCoE, NFS, SMB

Controllers
The brains of the SAN – typically now with x86 CPU, Memory, and
provides enterprise features like Thin Provisioning, Deduplication,
Storage Tiering etc. Multiple controllers provide resiliency.

Physical Disks
Flash-based (SSD) or spinning media (HDD) to provide the raw
storage capacity for your data. Pooled by the controllers,
and sliced into LUNs (Simple, Mirrored, Parity etc.)
Microsoft Storage Management
Connectivity Adaptors
Windows Server File Servers have resilient connectivity to
external sources using regular 1GbE, 10GbE Network Adaptors.
Support for up to 56Gb RDMA Adaptors. Support via iSCSI,
SMB 3.0 & NFS Connectivity

Controllers
Clustered Windows Server 2012 R2 File Servers (SOFS)
creates disk pools, then slices them into Storage Spaces.
Spaces can be Thin Provisioned & support Deduplication.
Spaces can be Simple, Mirrored or Parity.

Physical Disks
Low cost, low complexity JBOD shelf with SSD/HDD mix
and multiple SAS connectivity ports
Hyper-V Host Storage Support

Storage Spaces - transform high-volume, low-cost disks into flexible, resilient virtualized storage.
Storage Tiering* - pool HDD & SSD and automatically move hot data to SSD for increased performance.
Data Deduplication - reduce file storage consumption, now supported for live VDI virtual hard disks*.
Hyper-V over SMB 3.0 - ease of provisioning, increased flexibility & seamless integration with high performance.

*New in Windows Server 2012 R2
Storage Spaces

Inbox solution for Windows to manage storage.
Virtualize storage by grouping industry-standard disks into storage pools.
Pools are sliced into virtual disks, or Spaces.
Spaces can be Thin Provisioned, and can be striped across all physical disks in a pool. Mirroring or Parity are also supported.
Windows then creates a volume on the Space, and allows data to be placed on the volume.
Spaces can use DAS only (local to the chassis, or via SAS).

[Diagram layers: Volumes (F:\) → Spaces → Pools → DAS Disks]
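
As a rough PowerShell sketch (pool and Space names are hypothetical), a pool, a mirrored thin Space and a volume could be created like this:

# Group all poolable physical disks into a pool
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
# Carve a thinly provisioned, mirrored Space out of the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Space1" -ResiliencySettingName Mirror -ProvisioningType Thin -Size 2TB
# Bring the Space online as an NTFS volume
Get-VirtualDisk -FriendlyName "Space1" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
  New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS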
Storage Tiering for Spaces

Optimizing storage performance on Spaces.
Disk pool consists of both high performance SSDs and higher capacity HDDs.
Hot data is moved automatically to SSD and cold data to HDD using sub-file-level data movement.
With write-back caching, SSDs absorb the random writes that are typical in virtualized deployments.
Admins can pin hot files to SSDs manually to drive high performance.
New PowerShell cmdlets are available for the management of storage tiers.

[Diagram: SSD tier - 400GB eMLC SAS SSD (hot data); HDD tier - 4TB 7200RPM SAS (cold data)]
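
A minimal sketch of those cmdlets, assuming a pool named Pool1 already exists (tier names and sizes are illustrative):

# Define an SSD tier and an HDD tier within the existing pool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
# Create a tiered Space: 100 GB on SSD, 900 GB on HDD
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredSpace1" -ResiliencySettingName Simple -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB
# Individual hot files (e.g. a VHDX) can additionally be pinned to the SSD tier with Set-FileStorageTier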
Hyper-V over SMB 3.0

Store Hyper-V VMs on SMB 3.0 file shares, e.g. \\SOFSFileServerName\VMs
Simplified provisioning & management; low OPEX and CAPEX.
Adding multiple NICs in file servers unlocks SMB Multichannel, which enables higher throughput and reliability. Requires NICs of the same type and speed.
Using RDMA-capable NICs unlocks SMB Direct, offloading network I/O processing to the NIC.
SMB Direct provides high throughput and low latency and can reach 40Gbps (RoCE) and 56Gbps (InfiniBand) speeds.

[Diagram: Scale-out file server built on Storage Spaces, storage pools and physical disks]
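
A small PowerShell sketch of placing a new VM directly on an SMB 3.0 share (the share path is the one used above; the VM name and sizes are illustrative):

# Create a VM whose configuration and VHDX both live on the SOFS share
New-VM -Name "VM01" -MemoryStartupBytes 1GB -Generation 2 `
  -Path "\\SOFSFileServerName\VMs" `
  -NewVHDPath "\\SOFSFileServerName\VMs\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB
# After starting the VM, verify on the host that SMB Multichannel is in use
Get-SmbMultichannelConnection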
File Storage Integration
Comprehensive, Integrated
File Storage Management
VMM supports network shares via SMB 3.0 on
NAS device from storage vendors such as
EMC and NetApp
VMM supports integration and management
with standalone and clustered file servers
VMM will quickly discover and inventory
selected File Storage
VMM allows the selection, and now, the
classification of existing File Shares to
streamline VM placement
VMM allows IT Admin to assign Shares to
Hyper-V hosts for VM placement, handling
ACL’ing automatically.
Scale-Out File Server

Low cost, high performance, resilient shared storage.
Clustered file server for storing Hyper-V virtual machine files on file shares.
High reliability, availability, manageability, and performance that you would expect from a SAN.
Active-Active file shares - file shares are online on all nodes simultaneously.
Increased bandwidth - as more SOFS nodes are added.
CHKDSK with zero downtime & CSV Cache.
Created & managed by VMM, both from existing Windows Servers & bare metal.

[Diagram: 4-node Scale-Out File Server (FS1-FS4) with clustered Spaces and Pools on JBOD storage via shared SAS]
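
Outside of VMM, the same result can be sketched with the failover clustering and SMB cmdlets (cluster, role, path and account names are hypothetical):

# On an existing file server cluster, add the Scale-Out File Server role
Add-ClusterScaleOutFileServerRole -Name "SOFS" -Cluster "FSCLUSTER"
# Create a share on a CSV and grant the Hyper-V host computer accounts access
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
  -FullAccess "CONTOSO\HYPER-V01$","CONTOSO\HYPER-V02$","CONTOSO\Hyper-V Admins"
# Mirror the share permissions onto the underlying NTFS folder
Set-SmbPathAcl -ShareName "VMs"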
Scale-Out File Server Deployment
Centralized, Managed
Deployment of File Storage
VMM can not only manage standalone File
Servers, but can deploy Scale-Out File
Servers, even to Bare Metal
For Bare Metal deployment, a physical profile
determines the characteristics of the File
Server
Existing Windows Servers can be transformed
into a SOFS, right within VMM
Once imported, VMM can transform
individual disks into highly available,
dynamic pools, complete with classification.
VMM can then create the resilient Spaces &
File Shares within the Storage Pool
Storage & Fabric Classification
Granular Classification of
Storage & FC Fabrics
VMM can classify storage at a granular level
to abstract storage detail:
• Volumes (including local host disks &
Direct Attached Storage)
• File Shares (Standalone & SOFS-based)
• Storage Pools & SAN LUNs
• Fibre Channel Fabrics - Helps to identify
fabric using friendly names.
Support for efficient & simplified
deployment of VMs to classifications
Now integrated with Clouds
BitLocker Drive Encryption
VHD on Traditional LUN
In-box Disk Encryption to E:\VM2

Protect Sensitive Data


Data Protection, built in
VHD on DAS
• Supports Used Disk Space Only Encryption F:\VM1
• Integrates with TPM chip
• Network Unlock & AD Integration
Multiple Disk Type Support
• Direct Attached Storage (DAS)
• Traditional SAN LUN
• Cluster Shared Volumes
• Windows Server 2012 File Server Share

VHD on Cluster Shared Volumes VHD on File Server


C:\ClusterStorage\Volume1\VM4 \\FileServer\VM3
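
A minimal sketch of enabling Used-Space-Only BitLocker on a data volume holding VHDs (the mount point matches the example above; the protector choice is illustrative):

# Encrypt only the space currently in use, protecting the volume with a recovery password
Enable-BitLocker -MountPoint "E:" -EncryptionMethod Aes256 -UsedSpaceOnly -RecoveryPasswordProtector
# Check encryption progress
Get-BitLockerVolume -MountPoint "E:"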
VMware Comparison

Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
iSCSI/FC Support | Yes | Yes | Yes
3rd Party Multipathing (MPIO) | Yes | No | Yes (VAMP) (1)
SAN Offload Capability | Yes (ODX) | No | Yes (VAAI) (2)
Storage Virtualization | Yes (Spaces) | No | Yes (vSAN) (3)
Storage Tiering | Yes | No | Yes (4)
Network File System Support | Yes (SMB 3.0) | Yes (NFS) | Yes (NFS)
Data Deduplication | Yes | No | No
Storage Encryption | Yes | No | No

Hyper-V integrates with key hardware investments with no SKU-specific restrictions or upgrades required.

1. vSphere API for Multipathing (VAMP) is only available in Enterprise & Enterprise Plus editions of vSphere 5.5.
2. vSphere API for Array Integration (VAAI) is only available in Enterprise & Enterprise Plus editions of vSphere 5.5.
3. vSphere vSAN is still in beta.
4. vSphere Flash Read Cache has a write-through caching mechanism only, so only reads are accelerated. vSAN also has SSD caching capabilities built in, acting as a read cache & write buffer.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf, http://www.vmware.com/products/vsphere/compare.html
Lab 1
Virtual Machine Storage
Module 3

Подключение к лаб. работам
https://cloud.holsystems.com/itcamp
Access event code: ITCV9177
NIC Teaming

Integrated Solution for Network Card Resiliency
• Vendor agnostic and shipped inbox
• Provides local or remote management through Windows PowerShell or UI
• Enables teams of up to 32 network adapters
• Aggregates bandwidth from multiple network adapters whilst providing traffic failover in the event of NIC outage
• Includes multiple modes: switch dependent and independent
• Multiple traffic distribution algorithms: Hyper-V Switch Port, Hashing and Dynamic Load Balancing

[Diagram: operating system with virtual team network adapters bound to a NIC team of physical network adapters]
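
The same team can be sketched in PowerShell with the in-box NetLbfo module (adapter and team names are hypothetical):

# Create a switch-independent team using the 2012 R2 Dynamic load-balancing algorithm
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
  -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Inspect the team and its members
Get-NetLbfoTeam
Get-NetLbfoTeamMember -Team "Team1"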
Hyper-V Networking Basics

Connecting VMs to each other, and the outside world.
3 Types of Hyper-V Network:
• Private = VM to VM communication
• Internal = VM to VM to Host (loopback)
• External = VM to outside world & Host
Each vNIC can have multiple VLANs attached to it; however, if using the GUI, only a single VLAN ID can be specified:
Set-VMNetworkAdapterVlan -VMName VM01 -Trunk -AllowedVlanIdList 14,22,40
Creating an external network transforms the chosen physical NIC into a switch and removes the TCP/IP stack and other protocols.
An optional host vNIC is created to allow communication of the host out of the physical NIC.
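
The three switch types can be sketched with the Hyper-V module (switch and adapter names are hypothetical):

# External: binds to a physical NIC (or team) and keeps a host vNIC for management
New-VMSwitch -Name "External" -NetAdapterName "Team1" -AllowManagementOS $true
# Internal: VM-to-VM plus loopback to the host
New-VMSwitch -Name "Internal" -SwitchType Internal
# Private: VM-to-VM only
New-VMSwitch -Name "Private" -SwitchType Private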
Hyper-V Extensible Switch (compares with VMware vSwitch, not VDS)

Layer-2 Network Switch for Virtual Machine Connectivity
• Virtual Ethernet switch that runs in the management OS of the host
• Exists on Windows Server Hyper-V and Windows Client Hyper-V
• Managed programmatically
• Extensible by partners and customers
• Virtual machines connect to the extensible switch with their virtual network adaptor
• Can bind to a physical NIC or team
• Bypassed by SR-IOV

[Diagram: VMs with virtual network adapters connected to the Hyper-V Extensible Switch, which binds to the physical network adapter and the physical switch]
Hyper-V Extensible Switch

Layer-2 Network Switch for Virtual Machine Connectivity
Granular In-box Capabilities:
• Isolated (Private) VLANs (PVLANs)
• ARP/ND Poisoning (spoofing) protection
• DHCP Guard protection
• Virtual Port ACLs
• Trunk Mode to VMs
• Network Traffic Monitoring
• PowerShell & WMI interfaces for extensibility
Extending the Extensible Switch

Build Extensions for Capturing, Filtering & Forwarding
2 Platforms for Extensions:
• Network Device Interface Specification (NDIS) filter drivers
• Windows Filtering Platform (WFP) callout drivers
Extensions:
• Capture extensions (NDIS filter drivers)
• Filtering extensions (WFP callout drivers)
• Forwarding extensions: ingress filtering, destination lookup and forwarding, egress filtering

[Diagram: Hyper-V Extensible Switch architecture - VM NICs and the host NIC bound to the virtual switch, with capture, filtering and forwarding extensions layered between the protocol and miniport edges above the physical NIC]
Extending the Extensible Switch

Build Extensions for Capturing, Filtering & Forwarding
Many Key Features:
• Extension monitoring & uniqueness
• Extensions that learn VM life cycle
• Extensions that can veto state changes
• Multiple extensions on the same switch
Several Partner Solutions Available:
• Cisco - Nexus 1000V & UCS-VMFEX
• NEC - ProgrammableFlow PF1000
• 5nine - Security Manager
• InMon - sFlow
VMware Comparison

Advanced Networking Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Integrated NIC Teaming | Yes | Yes | Yes
Extensible Network Switch | Yes | No | Replaceable
Confirmed Partner Solutions | 5 | N/A | 2
Private Virtual LAN (PVLAN) | Yes | No | Yes (1)
ARP Spoofing Protection | Yes | No | vCloud/Partner (2)
DHCP Snooping Protection | Yes | No | vCloud/Partner (2)
Virtual Port ACLs | Yes | No | vCloud/Partner (2)
Trunk Mode to Virtual Machines | Yes | No | Yes (3)
Port Monitoring | Yes | Per Port Group | Yes (3)
Port Mirroring | Yes | Per Port Group | Yes (3)

The Hyper-V Extensible Switch is open and extensible, unlike VMware's vSwitch, which is closed, and replaceable (by partners such as Cisco/IBM) rather than extensible.

1. The vSphere Distributed Switch (required for PVLAN capability) is available only in the Enterprise Plus edition of vSphere 5.5.
2. ARP Spoofing, DHCP Snooping Protection & Virtual Port ACLs require the vCloud Networking & Security package, which is part of the vCloud Suite or a Partner solution, all of which are additional purchases.
3. Trunking VLANs to individual vNICs, Port Monitoring and Mirroring at a granular level requires the vSphere Distributed Switch, which is available in the Enterprise Plus edition of vSphere 5.5.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/cisco-nexus-1000V/overview.html, http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/, http://www.vmware.com/technical-resources/virtualization-topics/virtual-networking/distributed-virtual-switches.html, http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf, http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html, http://www.vmware.com/products/vcloud-network-security
Lab 2
Virtual Machine
Networking

Module 3
Comprehensive Network Management

Integrated management of the software defined network
• Top of rack switch management and integration for configuration and compliance
• Logical network management: named networks that serve particular functions in your environment, i.e. backend
• IP address pool management and integration with IP address management
• Host and VM network switch management
• Load balancer integration and automated deployment
• Network virtualization deployment and management

[Diagram: Blue and Red VM networks (with overlapping 10.10.10.x addresses) running over host NICs, logical networks (CorpNet) and top-of-rack switches]
Top of Rack Switch Integration
Synchronize & Integrate ToR
Settings with VMM PowerShell CIM Cmdlets

Physical switch management and integration


built into VMM using in-box or partner-
supplied provider
Switches running Open Management
Infrastructure (OMI) Communicating using
WS-MAN
Switch Management PowerShell Cmdlets
Common management interface across
multiple network vendors
Automate common network management
tasks OMI OMI OMI
Manage compliancy between VMM, Hyper-V
Hosts & physical switches.
Logical Networks

Abstraction of Infrastructure Networks with VMM
Logical networks are named networks that serve particular functions, i.e. "Backend," "Frontend," or "Backup".
Used to organize and simplify network assignments.
A logical network is a container for network sites, IP subnet & VLAN information.
Supports VLANs & PVLAN isolation.
Hosts & host groups can be associated with logical networks.
IP addresses can be assigned to host & VM NICs from static IP pools.

[Diagram: Logical Network 'Contoso' containing Network Site 'Contoso Building 1' with IP subnets 10.10.10.0/24 (VLAN 1, IP pool 10.10.10.2-10.10.10.50) and 10.10.4.0/24 (VLAN 4, IP pool 10.10.4.50-10.10.4.100), associated with Hyper-V hosts]
Static IP Pool Management in VMM
IP Address Management for
Hosts & Virtual Machines
VMM can maintain centralized control of host
& VM IP address assignment
IP Pools defined and associated with a Logical
Network & Site
VMM supports specifying IP range, along
with VIPs & IP address reservations
Each IP Pool can have Gateway, DNS & WINS
Configured.
IP address pools support both IPv4 and IPv6
addresses, but not in the same pool.
IP addresses assigned on VM creation, and
retrieved on VM deletion
The Logical Switch

Centralized Configuration of Network Adaptors across Hosts
Combines key VMM networking constructs to standardize deployment across multiple hosts within the infrastructure:
• Uplink Port Profiles
• Virtual Port Profiles for vNICs
• Port Classifications for vNICs
• Switch Extensions
Logical Switches support compliance & remediation.
Logical Switches support host NIC Teaming & converged networking.

[Diagram: Logical Switch "Switch for Central Buildings" combining the native uplink port profile "Uplinks with Network Virtualization" (network sites 'Contoso Building 1' and 'Contoso Building 2') with port classifications "High Bandwidth DB" and "Medium Bandwidth DB" and their native vNIC port profiles (offload, security and bandwidth settings)]
Uplink Port Profiles
Host Physical Network Adaptor
Configuration with VMM
Uplink Port Profile – centralized
configuration of physical NIC settings that
VMM will apply upon assigning a Logical
Switch to a Hyper-V host.
Teaming – Automatically created when
assigned to multiple physical NICs, but
admin can select LB algorithm &
teaming mode
Sites – Assign the relevant network sites &
logical networks that will be supported by
this uplink port profile
Virtual Port Profiles
Host Physical Network Adaptor
Configuration with VMM
Virtual Port Profile – Used to pre-configure
VM or Host vNICs with specific settings.
Offloading – Admins can enable offload
capabilities for a specific vNIC Port Profile.
Dynamic VMq, IPsec Task Offload & SR-IOV
are available choices.
Security – Admins can enable key Hyper-V
security settings for the vNIC Profile, such as
DHCP Guard, or enable Guest Teaming.
QoS – Admins can configure QoS bandwidth
settings for the vNIC Profile so when assigned
to VMs, their traffic may be
limited/guaranteed.
Dynamic Virtual Machine Queue

Increased efficiency of network processing on Hyper-V hosts
Without VMQ:
• The Hyper-V Virtual Switch is responsible for routing & sorting packets for VMs
• This leads to increased CPU processing, all focused on CPU0
With VMQ:
• The physical NIC creates virtual network queues for each VM to reduce host CPU load
With Dynamic VMQ:
• Processor cores are dynamically allocated for a better spread of network traffic processing

[Diagram: three Hyper-V hosts showing CPU0-CPU3 utilization without VMQ, with VMQ, and with DVMQ]
Single Root I/O Virtualization

Integrated with NIC hardware for increased performance
• A standard that allows PCI Express devices to be shared by multiple VMs
• More direct hardware path for I/O
• Reduces network latency and CPU utilization for processing traffic, and increases throughput
• SR-IOV capable physical NICs contain virtual functions that are securely mapped to VMs
• This bypasses the Hyper-V Extensible Switch
• Full support for Live Migration

[Diagram: the VM network stack using either a synthetic NIC through the Hyper-V Extensible Switch or a virtual function (VF) exposed directly by the SR-IOV NIC]
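
In PowerShell this is a two-step sketch: IOV must be enabled when the switch is created, then weighted onto a VM's adapter (switch, adapter and VM names are hypothetical):

# SR-IOV can only be enabled at switch creation time
New-VMSwitch -Name "IOV-Switch" -NetAdapterName "NIC3" -EnableIov $true
# Request a virtual function for the VM's adapter (0 disables, 1-100 enables)
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 50
# Confirm whether a virtual function was actually assigned
Get-VMNetworkAdapter -VMName "VM01" | Select-Object Name, IovWeight, Status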
Network Quality of Service

Achieve desired levels of networking performance
Bandwidth Management:
• Establishes a bandwidth floor
• Assigns specified bandwidth for each type of traffic
• Helps to ensure fair sharing during congestion
• Can exceed quota when there is no congestion
2 Mechanisms:
• Enhanced packet scheduler (software)
• Network adapter with DCB support (hardware)

[Diagram: relative minimum bandwidth (weights W=1/W=2/W=5 for normal, high-priority and critical traffic) vs strict minimum bandwidth (100 MB/200 MB/500 MB for Bronze/Silver/Gold tenants) on a 1 Gbps Hyper-V Extensible Switch, plus bandwidth oversubscription of three Gold tenants at 500 MB each over teamed 1 Gbps NICs]
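
A minimal software (packet scheduler) sketch of relative minimum bandwidth, assuming hypothetical switch, adapter and VM names:

# The switch must be created in Weight mode to use relative minimum bandwidth
New-VMSwitch -Name "Tenant-Switch" -NetAdapterName "Team1" -MinimumBandwidthMode Weight
Set-VMSwitch -Name "Tenant-Switch" -DefaultFlowMinimumBandwidthWeight 1
# Give the Gold tenant the largest share of the floor and cap its absolute bandwidth (value in bits per second)
Set-VMNetworkAdapter -VMName "Gold-VM01" -MinimumBandwidthWeight 5 -MaximumBandwidth 500000000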
Port Classifications
Abstract Technical Depth from
Virtual Network Adaptors
Port Classifications – provides a global
name for identifying different types of virtual
network adapter port profiles
Cross-Switch - Classification can be used
across multiple logical switches while the
settings for the classification remain specific
to each logical switch
Simplification – Similar to Storage
Classification, Port Classification used to
abstract technical detail when deploying VMs
with certain vNICs. Useful in Self-Service
scenarios.
Constructing the Logical Switch
Combining Building Blocks to
Standardize NIC Configuration
Simple Setup – Define the name and
whether SR-IOV will be used by VMs.
SR-IOV can only be enabled at switch
creation time.
Switch Extensions – Pre-installed/
Configured extensions available for use with
this Logical Switch are chosen at this stage
Teaming – Decide whether this logical switch
will bind to individual NICs, or to NICs that
VMM should team automatically.
Virtual Ports – Define which port
classifications and virtual port profiles can be
used with this Logical Switch
Deploying the Logical Switch
Applying Standardized
Configuration Across Hosts
Assignment – VMM can assign logical
switches directly to the Hyper-V hosts.
Teaming or No Teaming – Your logical
switch properties will determine if multiple
NICs are required or not
Converged Networking – VMM can create
Host Virtual Network Adaptors for isolating
host traffic types i.e. Live Migration, CSV, SMB
3.0 Storage, Management etc. It will also
issue IP addresses from it’s IP Pool. This is
useful with hosts that have just 2 x 10GbE
adaptors but require multiple separate,
resilient networks.
Lab 3
Advanced
Virtual Machine
Networking
Module 3
Virtual Machine
Clustering &
Resiliency
Module 4
Failover Clustering Overview (compares with VMware HA)

High-Availability Platform for Applications with Shared Data
Failover Clustering is built into Windows Server.
Massive scalability with support for 64 physical nodes & 8,000 VMs.
Cluster physical servers (host clusters), virtual machines (guest clusters), and SMB Scale-Out File Servers (storage clusters).
Built-in hardware and software validation tests to ensure compliance and to offer guidance to fix misconfigurations.
Redundant networks and teamed NICs supporting IPv4 and/or IPv6.
Shared storage using SMB, iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE) or Serial-Attached SCSI (SAS).
Cluster Shared Volumes (CSV) is a distributed-access file system allowing multiple VMs to write to the same disk.

[Diagram: nodes 1 through 64 attached to Cluster Shared Volumes (CSV) on shared storage]
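
The validate-then-create flow maps to two cmdlets; a minimal sketch using the lab's host names (the cluster name and IP are illustrative):

# Run the full validation report first, then build the cluster
Test-Cluster -Node "HYPER-V01","HYPER-V02"
New-Cluster -Name "HVCLUSTER" -Node "HYPER-V01","HYPER-V02" -StaticAddress 192.168.1.100
# Add an available disk to Cluster Shared Volumes
Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume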
How Failover Clustering Works
A cluster is a coordinated,
distributed system Cluster Communication
Critical for Efficient Operation of Cluster
All cluster nodes can access the same
shared storage.
VMs run on the host, but store their data Node 1 Node 2
(.vhdx) on shared storage.
Nodes monitor the health of each other
through cluster networks.
If a node fails or is partitioned, the
health check fails, and failover actions
take place.
The VMs or roles will restart on another
node, reading the application’s data
from the shared disk.
Shared Storage
Failover Clustering Quorum

Integrated Solution for Resilient Virtual Machines
Uses quorum, a state, to determine how many elements must be online for the cluster to continue running.
Nodes, disks or file shares can have a vote. There must always be an odd number of votes across the cluster.
After a network partition, this ensures that one group of voters (nodes or disks) has the quorum (majority) of votes.
2012 introduced Dynamic Quorum to toggle disk voting to ensure an odd number of votes.
Reduced AD dependencies, so contact with a DC is not required for the cluster to start.
Drain Roles to evacuate a host for maintenance.

[Diagram: dynamic quorum configuration (node majority & disk majority) - nodes 1 through 64 and a witness disk on shared storage, each holding a vote (V)]
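
A short sketch of the related cmdlets (the witness path and node names are hypothetical):

# Inspect the current quorum model and votes
Get-ClusterQuorum
Get-ClusterNode | Format-Table Name, DynamicWeight, NodeWeight
# Add a file share witness to keep an odd number of votes
Set-ClusterQuorum -NodeAndFileShareMajority "\\FS01\Witness"
# Drain roles off a node before maintenance, then resume it afterwards
Suspend-ClusterNode -Name "HYPER-V01" -Drain
Resume-ClusterNode -Name "HYPER-V01" -Failback Immediate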
Failover Clustering Networking
Optimal cluster configuration
requires multiple networks Cluster Networking
Minimum of 4 Networks Recommended
Host Management - Used for managing the
Hyper-V hosts through RDP, Hyper-V Manager, Virtual
Machine Manager etc.
Node 1 Node 2 Node 3… …Node 64
VM Access - Dedicated NIC(s) on the nodes for
VMs to use to communicate out onto the network
Live Migration - Network dedicated to the
transmission of live migration traffic
Cluster Communications- Preferred network used by
the cluster for communications to maintain cluster
health. Also, used by Cluster Shared Volumes to send
data between owner and non-owner nodes. If storage
access is interrupted, this network is used to access the
Cluster Shared Volumes or to maintain and back up the
Cluster Shared Volumes
Storage (Optional)
Used by the hosts to communicate with their iSCSI or
SMB storage iSCSI, Fibre Channel
or SMB 3.0 Storage
Hyper-V Cluster Deployment
Construction of Hyper-V
Clusters, Integrated into VMM
Hyper-V Clusters provide VM resiliency, so
that in the event of host failure, VMs
automatically restart on other physical hosts.
Creation – Replaces the use of Failover
Cluster Manager to create a Hyper-V Cluster.
Add Hosts – VMM will utilize hosts that are
already under management and not clustered
Validation – VMM will trigger the validation
of the cluster configuration to ensure solid
foundation. Skipping optional.
Storage & Networks – Select and configure
currently exposed storage and logical
networks
Failover Priority, Affinity & Anti-Affinity

Ensure Optimal VM Placement and Restart Operations
Upon failover, the cluster restarts VMs in priority order; anti-affinity keeps related VMs apart on separate nodes.
Failover Priority ensures certain VMs start before others on the cluster.
Affinity rules allow VMs to reside on certain hosts in the cluster.
Preferred and Possible Ownership help to control where VMs run.
AntiAffinityClassNames helps to keep virtual machines apart on separate physical cluster nodes.
AntiAffinityClassNames is exposed through VMM as an Availability Set.

[Diagram: two Hyper-V hosts on iSCSI, FC or SMB storage, with anti-affinity keeping paired VMs on separate nodes]
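
These settings are plain cluster group properties; a minimal sketch (VM, class and node names are hypothetical):

# Failover priority: 3000 = High, 2000 = Medium, 1000 = Low, 0 = No Auto Start
(Get-ClusterGroup "SQL-VM1").Priority = 3000
# Preferred ownership: try to keep the VM on these hosts
Set-ClusterOwnerNode -Group "SQL-VM1" -Owners "HYPER-V01","HYPER-V02"
# Anti-affinity: groups sharing a class name are kept on separate nodes where possible
$aa = New-Object System.Collections.Specialized.StringCollection
$aa.Add("SQL-Guest-Cluster") | Out-Null
(Get-ClusterGroup "SQL-VM1").AntiAffinityClassNames = $aa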
VM Monitoring (compares with VMware App HA)

Monitor Health of Applications Inside Clustered VMs
• Upon service failure, the Service Control Manager inside the guest will attempt to restart the service
• After 3 failures, the Cluster Service will trigger event log entry 1250
• VM state = "Application in VM Critical"
• The VM can be automatically restarted on the same node
• Upon subsequent failure, the VM can be failed over and restarted on an alternative node
• Extensible by partners

[Diagram: a clustered VM with monitoring enabled running on a Hyper-V cluster node]
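
Enabling monitoring for a specific service is one cmdlet; a minimal sketch (the VM and service names are hypothetical):

# Run from a cluster node; the guest needs the "Virtual Machine Monitoring" firewall rule group enabled
Add-ClusterVMMonitoredItem -VirtualMachine "PRINT-VM1" -Service "Spooler"
# List what is being monitored inside the VM
Get-ClusterVMMonitoredItem -VirtualMachine "PRINT-VM1"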
Dynamic Optimization (compares with vSphere DRS)

[Chart: Virtual Machine Manager resource utilization over time of day against the optimization threshold]
Dynamic Optimization (compares with vSphere DRS)

Optimizing cluster resource usage by virtual machines
Load Balancing - VMM keeps the cluster balanced across the different nodes, moving VMs around without downtime.
Heterogeneous - supports load balancing on Hyper-V, vSphere & XenServer clusters.
Resources - looks at CPU, Memory, Disk IO and Network IO; when resource usage goes above the DO threshold, VMM orchestrates live migrations of VMs.
User Controlled - configurable frequency and aggression level. Can be manually triggered, or enabled for automatic optimization.
Power Optimization (compares with vSphere DPM)

[Chart: Virtual Machine Manager resource utilization over time of day against the optimization threshold]
Power Optimization (compares with vSphere DPM)

Reduces power consumption by Hyper-V hosts
Reduced Power Consumption - VMM assesses the current cluster utilization and, if the VMs can be run on fewer hosts, it will migrate VMs onto fewer hosts and power spares down.
Resources - looks at CPU, Memory, Disk IO and Network IO; VMM orchestrates live migrations of VMs to consolidate them within the optimization threshold.
Configurable - the admin specifies times for PO to operate, i.e. weekends or overnight, and if VMM deems it possible, it will power hosts down during this time. Hosts will be reactivated if demand increases.
Centralized Virtualization Patching

Central patching of key hosts & management servers
Cluster-Aware Compliance - ensures all hosts are patched to a baseline without VM downtime.
WSUS - integrates with WSUS and Configuration Manager.
Baselines - admins define patches that are to be deployed for compliance. These baselines are assigned to hosts/servers.
Scan for Compliance - scan the hosts/management servers against baselines to determine compliance.
Remediation - VMM orchestrates the patching of the servers, moving VMs as necessary with Live Migration.
VMware Comparison

Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Integrated High Availability | Yes | No (1) | Yes (2)
Maximum Cluster Size | 64 Nodes | N/A | 32 Nodes
Maximum VMs per Cluster | 8,000 | N/A | 4,000
Failover Prioritization | Yes | N/A | Yes (4)
Affinity Rules | Yes | N/A | Yes (4)
Guest OS Application Monitoring | Yes | N/A | Yes (3)
Cluster-Aware Updating | Yes | N/A | Yes (4)

Only Hyper-V provides Guest OS Application Monitoring in the box, with no additional, SKU-specific restrictions.

1. vSphere Hypervisor has no high availability features built in - vSphere 5.1 is required.
2. VMware HA is built in to Essentials Plus and higher vSphere 5.1 editions.
3. VMware App HA is only available in 5.5 Enterprise Plus and requires deployment of 2 appliances per vCenter.
4. Features available in all editions that have High Availability enabled.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/vsphere/compare.html, http://www.yellow-bricks.com/2011/08/11/vsphere-5-0-ha-application-monitoring-intro/, http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/products/vsphere/features/application-HA.html
Guest Clustering

Complete Flexibility for Deploying App-Level HA
Guest cluster nodes run on a physical Hyper-V cluster; on host failure, a guest cluster node restarts on another host, and Live Migration of guest cluster nodes is supported.
• Full support for running clustered workloads on a Hyper-V host cluster
• Guest clusters that require shared storage can utilize software iSCSI, Virtual FC or SMB
• Full support for Live Migration of guest cluster nodes
• Full support for Dynamic Memory of guest cluster nodes
• Restart Priority, Possible & Preferred Ownership, & AntiAffinityClassNames help ensure optimal operation

[Diagram: a guest cluster spanning VMs on a Hyper-V host cluster backed by iSCSI, Fibre Channel or SMB storage]
Guest Clustering with Shared VHDX

Guest Clustering No Longer Bound to Storage Topology
• VHDX files can be presented to multiple VMs simultaneously, as shared storage
• The VM sees a shared virtual SAS disk
• An unrestricted number of VMs can connect to a shared VHDX file
• Utilizes SCSI persistent reservations
• The VHDX can reside on a Cluster Shared Volume on block storage, or on file-based storage
• Supports both Dynamic and Fixed VHDX

[Diagram: flexible choices for placement of the shared VHDX file - on a CSV on block storage, or on an SMB share on file-based storage, beneath guest clusters running on Hyper-V host clusters]
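
A minimal sketch of wiring a shared data disk into two guest-cluster nodes (paths and VM names are hypothetical; the -SupportPersistentReservations switch is what marks the VHDX as shared in 2012 R2):

# Create the shared data disk on a CSV (Dynamic or Fixed both work)
New-VHD -Path "C:\ClusterStorage\Volume1\Shared\SQLData.vhdx" -SizeBytes 200GB -Dynamic
# Attach it to both guest-cluster nodes as a shared virtual SAS disk
Add-VMHardDiskDrive -VMName "SQL-VM1" -Path "C:\ClusterStorage\Volume1\Shared\SQLData.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "SQL-VM2" -Path "C:\ClusterStorage\Volume1\Shared\SQLData.vhdx" -SupportPersistentReservations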
VMware Comparison

Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Max Size Guest Cluster (iSCSI) | 64 Nodes | 5 Nodes (1) | 5 Nodes (1)
Max Size Guest Cluster (Fiber) | 64 Nodes | 5 Nodes (2) | 5 Nodes (2)
Max Size Guest Cluster (File Based) | 64 Nodes | 5 Nodes (1) | 5 Nodes (1)
Guest Clustering with Shared Virtual Disk | Yes | Yes (6) | Yes (6)
Guest Clustering with Live Migration Support | Yes | N/A (3) | No (4)
Guest Clustering with Dynamic Memory Support | Yes | No (5) | No (5)

Hyper-V provides the most flexible options for guest clustering, without sacrificing agility & density.

1. Guest clusters can be created on vSphere 5.5 with a maximum of 5 nodes.
2. Shared storage for quorum and/or data must be on Fibre Channel (FC) based RDMs (physical mode for clusters across physical hosts, virtual mode for clusters on a single host).
3. vMotion is unavailable in the vSphere Hypervisor.
4. VMware does not support vMotion and Storage vMotion of a VM that is part of a guest cluster.
5. VMware does not support the use of memory overcommit with a VM that is part of a guest cluster.
6. Guest clustering with shared virtual disks is only supported as a 'Cluster in a Box', i.e. multiple VMs on a single host.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.mscs.doc%2FGUID-6BD834AE-69BB-4D0E-B0B6-7E176907E0C7.html, http://kb.vmware.com/kb/1037959
Lab 4
Virtual Machine
Clustering &
Resiliency
Module 4
Virtual Machine
Configuration

Module 5
Creating Virtual Machines with VMM
Granular, centralized process
for VM Deployment
VM Hardware – VMM provides all the
configuration for VM hardware upfront as
part of the Create VM Wizard
Intelligent Placement – VMM provides
placement guidance for deployment of the
virtual machine across hosts or clusters
Granular Network Control – VMM provides
granular networking configuration up front,
connecting with Logical/Standard Switches,
VLANs etc.
PowerShell – Wizards in VMM let the
administrator generate the exact PowerShell
script that VMM will run behind the scenes
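One common pattern for scripting the same deployment with the VMM cmdlets (template, VM and host names are illustrative; parameter sets vary by scenario):

    $template = Get-SCVMTemplate | Where-Object {$_.Name -eq "WS2012R2-Standard"}
    $config   = New-SCVMConfiguration -VMTemplate $template -Name "WEB01"
    $vmHost   = Get-SCVMHost -ComputerName "HYPER-V01.contoso.com"
    Set-SCVMConfiguration -VMConfiguration $config -VMHost $vmHost
    New-SCVirtualMachine -Name "WEB01" -VMConfiguration $config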
Dynamic Memory

Achieve higher levels of density for your Hyper-V hosts

Windows Server 2008 R2 SP1
• Introduced Dynamic Memory to enable reallocation of memory automatically between running virtual machines

Enhanced in Windows Server 2012 & R2
• Minimum & Startup Memory
• Smart Paging
• Memory Ballooning
• Runtime Configuration

Diagram: each VM draws on the host's physical memory pool between its minimum and maximum memory as its in-use memory changes; the administrator can increase the maximum memory without a VM restart.
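For example, Dynamic Memory can be enabled or retuned with the Hyper-V module (values are illustrative; the maximum can be raised while the VM is running):

    Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true `
        -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 8GB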
Dynamic Memory | Smart Paging

Utilize disk as additional, temporary memory

Hyper-V Smart Paging
• Reliable way to keep a VM running when no physical memory is available
• Performance will be degraded as disk is much slower than memory

Used in the following situations:
• VM restart
• No physical memory is available
• No memory can be reclaimed from other virtual machines on that host

Diagram: when a virtual machine is restarting and its startup memory exceeds what the physical memory pool can provide, the Smart Paging file temporarily supplies the additional memory; the paged memory is removed and reclaimed after the virtual machine has restarted.
New Virtual Hard Disk Format

VHDX Provides Increased Scale, Protection & Alignment

Features
• Storage capacity up to 64 TB
• Corruption protection during power failures
• Optimal structure alignment for large-sector disks

Benefits
• Increases storage capacity
• Protects data
• Helps to ensure quality performance on large-sector disks

On-disk layout: a header region (header, metadata table), a metadata region for small, unaligned allocations (user metadata, file metadata), and a data region with large, 1 MB-aligned allocations (Block Allocation Table, user data blocks, sector bitmap blocks, intent log).
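A small sketch of creating a large, 4K-aligned dynamic VHDX (path and sizes are illustrative):

    New-VHD -Path "D:\VHDs\Data01.vhdx" -Dynamic -SizeBytes 10TB -LogicalSectorSizeBytes 4096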
Online VHDX Resize

Online VHDX Resize provides VM storage flexibility

Expand Virtual SCSI Disks
1. Grow VHD & VHDX files whilst attached to a running virtual machine
2. Then expand the volume within the guest

Shrink Virtual SCSI Disks
3. Reduce the volume size inside the guest
4. Shrink the size of the VHDX file whilst the VM is running

Diagram: a disk expanded from a 30 GB primary partition to 40 GB (initially with 10 GB unallocated) – the virtual disk & volume are expanded without downtime.
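A hedged example of the expand flow (drive letter and sizes are assumptions; the disk must be attached to the VM's virtual SCSI controller):

    # On the Hyper-V host: grow the VHDX while the VM is running
    Resize-VHD -Path "D:\VHDs\Data01.vhdx" -SizeBytes 40GB
    # Inside the guest: extend the volume into the new space
    $max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
    Resize-Partition -DriveLetter E -Size $max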
Virtual Fibre Channel in Hyper‑V

Access Fibre Channel SAN data from a virtual machine

• Unmediated access to a storage area network (SAN)
• Hardware-based I/O path to the virtual hard disk stack
• N_Port ID Virtualization (NPIV) support
• Single Hyper‑V host connected to different SANs
• Up to four Virtual Fibre Channel adapters on a virtual machine
• Multipath I/O (MPIO) functionality
• Supports Live Migration

Diagram: each virtual machine carries two Worldwide Name sets (A and B); during live migration between Hyper-V hosts the sets alternate, maintaining Fibre Channel connectivity throughout.
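A sketch of wiring a VM to a virtual SAN (SAN and VM names are illustrative, and the -HostBusAdapter binding shown is an assumption about which physical FC ports to use):

    # On the host: define a virtual SAN backed by the physical FC HBA ports
    $fcPorts = Get-InitiatorPort | Where-Object {$_.ConnectionType -eq "Fibre Channel"}
    New-VMSan -Name "ProductionSAN" -HostBusAdapter $fcPorts
    # Add a virtual Fibre Channel adapter to the VM and connect it to that SAN
    Add-VMFibreChannelHba -VMName "SQL01" -SanName "ProductionSAN"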
Storage Quality of Service

Control allocation of Storage IOPS between VM Disks

• Allows an administrator to specify a maximum IOPS cap
• Takes into account incoming & outgoing IOPS
• Configurable on a VHDX-by-VHDX basis for granular control whilst the VM is running
• Prevents VMs from consuming all of the available I/O bandwidth to the underlying physical resource
• Supports Dynamic, Fixed & Differencing VHDX

Diagram: a VM with an OS VHDX and a Data VHDX on a Hyper-V host, each disk capped independently (e.g. 500 and 1,000 IOPS) on a 0–1,500 IOPS scale.
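For instance, the cap can be applied to a specific virtual disk while the VM runs (names and controller location are illustrative):

    # Cap the data disk (SCSI controller 0, location 1) at 500 IOPS
    Get-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 |
        Set-VMHardDiskDrive -MaximumIOPS 500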
Virtual Receive Side Scaling

Provides Near-Line Rate to a VM on Existing Hardware

• vRSS makes it possible to virtualize traditionally network-intensive physical workloads
• Extends the RSS functionality built into Windows Server 2012
• Maximizes resource utilization by spreading VM traffic across multiple virtual processors
• Helps virtualized systems reach higher speeds with 40 Gbps and 100 Gbps NICs
• Requires no hardware upgrade and works with any NICs that support RSS

Diagram: without vRSS, incoming packets are processed by a single virtual processor behind the vNIC; with vRSS, traffic is spread across virtual processors on NUMA nodes 0–3.
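vRSS is switched on inside the guest OS on the virtual NIC (the adapter name is an assumption; the host NIC must support VMQ/RSS):

    # Run inside the virtual machine
    Enable-NetAdapterRss -Name "Ethernet"
    Get-NetAdapterRss -Name "Ethernet"   # verify the RSS queues/processors in use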
Virtual Machine Live Cloning

Duplication of a Virtual Machine whilst Running

Export a clone of a running VM
• Point-in-time image of a running VM exported to an alternate location
• Useful for troubleshooting a VM without downtime for the primary VM

Export from an existing checkpoint
• Export a full cloned virtual machine from a point-in-time, existing checkpoint of a virtual machine
• Checkpoints automatically merged into a single virtual disk

Workflow:
1. User initiates an export of a running VM
2. Hyper-V performs a live, point-in-time export of the VM, which remains running, creating the new files in the target location
3. Admin imports the new, powered-off VM on the target host, finalizes configuration and starts the VM
4. With Virtual Machine Manager, the admin can select the host as part of the clone wizard
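The live export itself is a single cmdlet on the host (VM name and target path are illustrative; the VM keeps running while the copy is taken):

    Export-VM -Name "VM01" -Path "D:\Exports"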
VMware Comparison

Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Virtual CPUs per VM | 64 | 8 | 641
Memory per VM | 1TB | 1TB | 1TB
Dynamic Memory | Yes | Yes | Yes
Maximum Virtual Disk Size | 64TB | 62TB | 62TB
Online Virtual Disk Resize | Yes | Grow Only | Grow Only
Storage QoS | Yes | No | Yes
Virtual Fibre Channel | Yes | Yes | Yes
Dynamic Virtual Machine Queue | Yes | NetQueue2 | NetQueue2
IPsec Task Offload | Yes | No | No
SR-IOV with Live Migration | Yes | No3 | No3
Virtual Receive Side Scaling | Yes | Yes (VMXNet3) | Yes (VMXNet3)
Network QoS | Yes | No | Yes
VM Live Cloning | Yes | No | Yes4

Hyper-V allows the growth and shrinking of online virtual disks with no VM downtime.

1. vSphere 5.5 Enterprise Plus is the only vSphere edition that supports 64 vCPUs. Enterprise edition supports 32 vCPUs per VM, with all other editions supporting 8 vCPUs per VM
2. VMware vSphere and the vSphere Hypervisor support VMq only (NetQueue)
3. VMware’s SR-IOV implementation does not support vMotion, HA or Fault Tolerance and is only available as part of the vSphere Distributed Switch
4. Live Cloning requires vCenter

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/vsphere/buy/editions_comparison.html, http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.1.pdf, http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf
Linux Support on Hyper-V

Comprehensive feature support for virtualized Linux

Significant Improvements in Interoperability
• Multiple supported Linux distributions and versions on Hyper-V
• Includes Red Hat, SUSE, OpenSUSE, CentOS, and Ubuntu

Comprehensive Feature Support
• 64 vCPU SMP
• Virtual SCSI, Hot-Add & Online Resize
• Full Dynamic Memory Support
• Live Backup
• Deeper Integration Services Support

Architecture: Linux guests run in enlightened mode with optimized performance and optimized synthetic devices; Virtualization Service Clients in the guests communicate with the Virtualization Service Provider in the Windows kernel (alongside Independent Hardware Vendor drivers), while the management service, WMI provider, configuration store and worker processes run on the Hyper-V host above the server hardware.
Linux Support in VMM
Deeper Integration for
Streamlined Linux Deployment
VMM Templates can be used to deploy both
Windows and Linux Guest Operating Systems
Enables Linux to be deployed to Hyper-V
hosts
Enables Linux to be part of Service Templates
Supports a number of customization options:
Root password, Computername,
DNSDomainName, IP address, Timezone,
Root ssh public key, Run once commands
Linux VMs are required to have the latest Linux
Integration Services and the VMM agent for Linux
Generation 2 Virtual Machines

VMs built on Optimized, Software-Based Devices

Ease of Management & Operations
• PXE boot from an optimized, synthetic vNIC
• Hot-add CD/DVD drive
• VMs have UEFI firmware with support for GPT-partitioned OS boot disks >2TB
• Faster boot from virtual SCSI with online resize & increased performance

Security
• Removal of emulated devices reduces the attack surface
• VM UEFI firmware supports Secure Boot

Diagram: a Generation 2 virtual machine showing PXE boot from the synthetic NIC, hot-add CD/DVD drive and dynamic storage, boot from virtual SCSI, and UEFI firmware with Secure Boot.
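A minimal sketch of creating a Generation 2 VM with Secure Boot (names, sizes and switch are assumptions):

    New-VM -Name "GEN2-01" -Generation 2 -MemoryStartupBytes 1GB `
        -NewVHDPath "D:\VHDs\GEN2-01.vhdx" -NewVHDSizeBytes 60GB -SwitchName "External"
    Set-VMFirmware -VMName "GEN2-01" -EnableSecureBoot On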
Generation 2 VM Support in VMM
Support for Generation 2 VMs
on Hyper-V 2012 R2
VMM provides comprehensive Generation 2
VM lifecycle support:
• Creation, Import/Export/Clone, Migration,
Store, Correct UI/CLI Hardware Profile
Support, Sysprep, Placement
VMM UI reflects key Generation 2 VM
hardware configuration options
VMM provides support for Generation 2
VM Templates
VMM does not support Generation 2 VMs
for Service Templates
VMM prevents deployment onto older hosts
Enhanced Session Mode
Enhancing VMConnect for
the Richest Experience
Improved VMBus Capabilities enable:
• Audio over VMConnect
• Copy & Paste between Host & Guest
• Smart Card Redirection
• Remote Desktop Over VMBus
Enabled for Hyper-V on both Server
& Client
Fully supports Live Migration of VMs
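Enhanced Session Mode is toggled per host, for example:

    # Allow enhanced sessions for VMConnect on this Hyper-V host
    Set-VMHost -EnableEnhancedSessionMode $true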
Automatic Virtual Machine Activation

Simplifying Activation of Windows Server 2012 R2 VMs

• Activate VMs without managing product keys on a VM-by-VM basis
• VMs activated on start-up
• Reporting & tracking built in
• Activate VMs in remote locations, with or without internet connectivity
• Works with VM migration
• Generic AVMA key for VMs activates against a valid, activated Windows Server 2012 R2 Hyper-V host

How it works:
1. A Windows Server 2012 R2 Datacenter Hyper-V host is activated with a regular license key
2. A Windows Server 2012 R2 VM is created, with an AVMA key injected in the build
3. On start-up, the VM checks for an activated Windows Server 2012 R2 Datacenter Hyper-V host
4. The guest OS activates and won’t recheck against the host until the next guest reboot, or after 7 days
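Inside the guest, the AVMA key is applied with slmgr (the generic AVMA keys are published by Microsoft and are not reproduced here):

    slmgr /ipk <generic-AVMA-key-for-the-guest-edition>
    slmgr /dlv    # confirm the activation status against the host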
VMware Comparison

Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Linux Guest OS Support | Yes | Yes | Yes
VMs with Secure Boot & UEFI Firmware | Yes | No | No
Enhanced VM Administration Experience | Yes | No | No
Automatic VM Activation | Yes | No | No
Lab 5
Virtual Machine
Configuration

Module 5
Virtual Machine
Mobility

Module 6
Live Migration Compares with
vMotion

Faster, Simultaneous Migration of VMs Without Downtime

• Faster live migrations, taking full advantage of the available network
• Simultaneous live migrations
• Uses SMB Direct if the available network bandwidth is over 10 gigabits
• Supports flexible storage choices
• No clustering required if the virtual machine resides on an SMB 3.0 file share

Process: live migration setup over an IP connection to the target host; VM configuration data and memory content are transferred; modified memory pages are transferred; the storage handle is moved; the VM's storage (iSCSI, FC or SMB) stays in place.
Live Migration with Compression

Intelligently Accelerates Live Migration Transfer Speed

• Utilizes available CPU resources on the host to perform compression
• Compressed memory is sent across the network faster
• Operates on networks with less than 10 gigabit bandwidth available
• Enables a 2X improvement in Live Migration performance

Process: as with standard live migration, but memory content and modified pages are compressed before being transferred over the IP connection to the target host; the storage handle is moved and the storage (iSCSI, FC or SMB) stays in place.
Live Migration over SMB

Harness RDMA to Accelerate Live Migration Performance

• SMB Multichannel uses multiple NICs for increased throughput and resiliency
• Remote Direct Memory Access delivers a low-latency network, low CPU utilization & higher bandwidth
• Supports speeds up to 56 Gb/s
• Windows Server 2012 R2 supports RoCE, iWARP & InfiniBand RDMA solutions
• Delivers the highest performance for live migrations
• Cannot be used with compression

Process: memory content and modified pages are transferred at high speed to the target host over the IP connection using RDMA; the storage handle is moved and the storage (iSCSI, FC or SMB) stays in place.
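As a hedged example, the migration transport and limits described above are host settings (subnet and counts are illustrative):

    Enable-VMMigration
    Add-VMMigrationNetwork -Subnet "15.15.15.0/24"        # dedicated live migration network
    Set-VMHost -MaximumVirtualMachineMigrations 4
    # Choose TCPIP, Compression or SMB as the performance option
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB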
Storage Live Migration Compares with Storage vMotion

Increased Flexibility through Live Migration of VM Storage

• Move virtual hard disks attached to a running virtual machine
• Manage storage in a cloud environment with greater flexibility and control
• Move storage with no downtime
• Update the physical storage available to a virtual machine (such as SMB-based storage)
• Windows PowerShell cmdlets

Process, on a host running Hyper‑V: reads and writes go to the source VHD; the disk contents are copied to the new destination VHD; disk writes are mirrored and outstanding changes are replicated; reads and writes then go to the new VHD on the target device.
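For example, all of a running VM's storage can be moved with one cmdlet (VM name and destination are assumptions):

    Move-VMStorage -VMName "VM01" -DestinationStoragePath "\\FS01\VMStore\VM01"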
Shared-Nothing LM Compares with vMotion

Complete Flexibility for Virtual Machine Migrations

• Increases flexibility of virtual machine placement & administrator efficiency
• Simultaneously live migrate the VM & its virtual disks between hosts
• Nothing shared but an Ethernet cable
• No clustering or shared storage requirements
• Reduces downtime for migrations across cluster boundaries

Process: live migration begins over an IP connection between the source and destination Hyper‑V hosts; the disk contents are copied to the destination VHD while reads and writes continue against the source VHD; disk writes are mirrored and outstanding changes are replicated; VM configuration data, memory content and modified pages are transferred; live migration completes on the target device.
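A sketch of a shared-nothing move, transferring both the running state and the disks (host and path are illustrative):

    Move-VM -Name "VM01" -DestinationHost "HYPER-V02" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"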
Live Migration Upgrades

Simplified upgrade process from 2012 to 2012 R2

• Customers can upgrade from Windows Server 2012 Hyper-V to Windows Server 2012 R2 Hyper-V with no VM downtime
• Supports Shared-Nothing Live Migration for migration when changing storage locations
• If using an SMB share, migration transfers only the VM running state for faster completion
• Automated with PowerShell
• One-way migration only

Diagram: Hyper-V cluster upgrade without downtime – VMs are live migrated from Windows Server 2012 cluster nodes to Windows Server 2012 R2 cluster nodes, with VM data remaining on SMB storage.
VMware Comparison

Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
VM Live Migration | Yes | No1 | Yes2
VM Live Migration with Compression | Yes | No | No
VM Live Migration over RDMA | Yes | No | No
1GB Simultaneous Live Migrations | Unlimited3 | N/A | 4
10GB Simultaneous Live Migrations | Unlimited3 | N/A | 8
Live Storage Migration | Yes | No4 | Yes5
Shared Nothing Live Migration | Yes | No | Yes5
Live Migration Upgrades | Yes | N/A | Yes

Only Hyper-V provides key VM migration features in the box, with no additional licensing costs.

1. Live Migration (vMotion) is unavailable in the vSphere Hypervisor – vSphere 5.5 required
2. Live Migration (vMotion) and Shared Nothing Live Migration (Enhanced vMotion) are available in Essentials Plus & higher editions of vSphere 5.5
3. Within the technical capabilities of the networking hardware
4. Live Storage Migration (Storage vMotion) is unavailable in the vSphere Hypervisor
5. Live Storage Migration (Storage vMotion) is available in Standard, Enterprise & Enterprise Plus editions of vSphere 5.5
6. Live Cloning requires vCenter

vSphere Hypervisor / vSphere 5.x Ent+: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/products/vsphere/compare.html
Lab 6
Virtual Machine
Mobility

Module 6
Virtual Machine
Replication &
Protection
Module 7
Streamlined Incremental Backup

Integrated Virtual Machine Backup Capabilities

• Allows incremental backup of virtual hard disks
• Is Volume Shadow Copy Service (VSS)-aware
• Backs up the Hyper‑V environment
• Requires no backup agent inside virtual machines
• Saves network bandwidth
• Reduces backup sizes
• Saves disk space
• Lowers backup cost

Diagram: a week-long example – the first full backup on Sunday after enabling incremental backup, the first and second incremental backups on Monday and Tuesday (only the current differencing files are backed up and then merged), and an incremental restore on Friday back to Tuesday's backup. Files in blue are backed up.
Windows Azure Backup Integration

Windows Server Backup Integrated with Cloud Services

• Simple installation and configuration
• Ability to leverage Windows Azure Backup cloud services to back up data
• Use either the Windows Azure Backup Service Agent or the Windows Azure Backup PowerShell cmdlets
• Reduced cost for backup storage and management
• Options for third-party cloud services
• Ideal for small businesses, branch offices, and departmental business needs

Architecture: the inbox backup engine and UI in Windows Server 2012 R2 register with, and back up/restore to, either the Windows Azure Backup service (sign-up and billing through the Windows Azure Backup portal) or an extensible third-party online backup service (through that provider's portal and agent).
Hyper‑V Replica

Replicate Hyper‑V VMs from a Primary to a Replica site

• Affordable in-box business continuity and disaster recovery
• Configurable replication frequencies of 30 seconds, 5 minutes and 15 minutes
• Secure replication across the network
• Agnostic of hardware on either site
• No need for other virtual machine replication technologies
• Automatic handling of live migration
• Simple configuration and management

Process: once Hyper-V Replica is enabled, chosen VMs begin replication from the primary site to the secondary site (an initial replica, then replicated changes at the chosen frequency); upon site failure, VMs can be started on the secondary site. Either site can use block storage with CSV or file-based storage on an SMB share.
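A hedged end-to-end sketch (server names, port, authentication and paths are assumptions; the corresponding firewall rules must also be enabled):

    # On the replica server: accept inbound replication
    Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
        -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"
    # On the primary server: enable replication for the VM every 5 minutes and start it
    Enable-VMReplication -VMName "VM01" -ReplicaServerName "HYPER-V02.contoso.com" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300
    Start-VMInitialReplication -VMName "VM01"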
Hyper-V Replica | Extended Replication

Replicate to a 3rd Location for an Extra Level of Resiliency

• Once a VM has been successfully replicated to the replica site, the replica can be replicated to a 3rd location
• Chained replication
• Extended replica contents match the original replication contents
• Extended replica replication frequencies can differ from the original replica
• Useful for scenarios such as SMB -> Service Provider -> Service Provider DR Site

Diagram: replication is configured from the primary to the secondary site, and can then be enabled from the 1st replica to a 3rd (DR) site, for example onto DAS storage.
Hyper-V Recovery Manager

Orchestrate protection and recovery of private clouds

• Protect important services by coordinating replication and recovery of VMM-managed private clouds
• Automates replication of VMs within clouds between sites
• Hyper-V Replica provides the replication, orchestrated by Hyper-V Recovery Manager
• Can be used for planned, unplanned and test failover between sites
• Integrates with scripts for customization of recovery plans

Architecture: Hyper-V Recovery Manager runs in Windows Azure and talks over a communication channel to System Center 2012 R2 (VMM) in each datacenter; the Hyper-V hosts running the LOB and dev/test clouds in Datacenter 1 and Datacenter 2 replicate directly to each other over the replication channel.
VMware Comparison

Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Incremental Backup | Yes | No1 | Yes1
Inbox VM Replication | Yes | No1 | Yes1

1. vSphere Data Protection and vSphere Replication are available in the Essentials Plus and higher editions of vSphere 5.5

Replication Capability | Hyper-V Replica | vSphere Replication
Architecture | Inbox with Hypervisor | Virtual Appliance
Replication Type | Asynchronous | Asynchronous
RTO | 30s, 5, 15m | 15 Minutes-24 Hours
Replication | Tertiary | Secondary
Planned Failover | Yes | No
Unplanned Failover | Yes | Yes
Test Failover | Yes | No
Simple Failback Process | Yes | No
Automatic Re-IP Address | Yes | No
Point in Time Recovery | Yes, 15 points | No
Orchestration | Yes, PowerShell, HVRM | No, SRM

Only Hyper-V provides key replication capabilities without additional products, such as Site Recovery Manager, being required.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/vsphere/compare.html, http://www.vmware.com/products/vsphere/features/replication.html, http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Replication-Overview.pdf
Lab 7
Virtual Machine
Replication &
Protection
Module 7
Network
Virtualization

Module 8
Network Virtualization

Network Isolation & Flexibility without VLAN Complexity

• Secure isolation for traffic segregation, without VLANs
• VM migration flexibility
• Seamless integration

Key Concepts
• Provider Address – unique IP addresses routable on the physical network
• VM Networks – boundary of isolation between different sets of VMs
• Customer Address – VM guest OS IP addresses within the VM Networks
• Policy Table – maintains the relationship between the different addresses & networks

Example: the Blue and Red VM networks both use customer addresses 10.10.10.10–10.10.10.12, mapped onto provider addresses 192.168.2.10–192.168.2.14 on the physical network:

Network/VSID | Provider Address | Customer Address
Blue (5001) | 192.168.2.10 | 10.10.10.10
Blue (5001) | 192.168.2.10 | 10.10.10.11
Blue (5001) | 192.168.2.12 | 10.10.10.12
Red (6001) | 192.168.2.13 | 10.10.10.10
Red (6001) | 192.168.2.14 | 10.10.10.11
Red (6001) | 192.168.2.12 | 10.10.10.12
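In practice VMM maintains the policy table, but a single entry can be sketched with the NetWNV cmdlets to illustrate the mapping (the MAC address is an assumption):

    # Map Blue1's customer address to its host's provider address in virtual subnet 5001
    New-NetVirtualizationLookupRecord -CustomerAddress "10.10.10.10" -ProviderAddress "192.168.2.10" `
        -VirtualSubnetID 5001 -MACAddress "00155D010A01" -Rule "TranslationMethodEncap"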
Network Virtualization through NVGRE

Network Isolation & Flexibility without VLAN Complexity

• Network Virtualization using Generic Route Encapsulation uses encapsulation & tunneling
• Standard proposed by Microsoft, Intel, Arista Networks, HP, Dell & Emulex
• VM traffic within the same VSID is routable over different physical subnets
• The VM's packet is encapsulated for transmission over the physical network
• Network Virtualization is part of the Hyper-V Switch

Example: two VMs on the same customer network and VSID (10.10.10.10 and 10.10.10.11) sit on hosts in different physical subnets (192.168.2.10 and 192.168.5.12); their traffic travels in outer packets addressed 192.168.2.10 -> 192.168.5.12 with GRE key 5001, wrapping the inner 10.10.10.10 -> 10.10.10.11 MAC frame.
Network Virtualization Packet Flow

Blue1 (10.10.10.10, VSID 5001, host PA 192.168.2.10) sending to Blue2 (10.10.10.11, VSID 5001, host PA 192.168.5.12):

1. Where is 10.10.10.11?
2. Blue1 sends an ARP packet to locate 10.10.10.11
3. The Hyper-V Switch broadcasts the ARP on VSID 5001
4. The Hyper-V Switch then broadcasts the ARP to the rest of the network, but it is intercepted by the NV filter (note: the ARP is not broadcast on the physical network)
5. The NV filter checks its policy table and responds with Blue2's MAC
6. The NV filter sends the ARP response back into the Hyper-V Switch and on to Blue1, which records it in its ARP table (10.10.10.11 -> 34:29:af:c7:d9:12)

On each host, the switch stack applies VSID ACL enforcement, network virtualization (IP virtualization), policy enforcement and routing.
Network Virtualization Packet Flow (continued)

Blue1 sending to Blue2:

7. Blue1 constructs its packet for Blue2 (MACB1 -> MACB2, 10.10.10.10 -> 10.10.10.11) and sends it to the Hyper-V Switch
8. The Hyper-V Switch attaches the VSID (5001) to the packet
9. The NV filter checks whether Blue1 is allowed to contact Blue2, then constructs the GRE packet (outer MACP1 -> MACP2, 192.168.2.10 -> 192.168.5.12, GRE key 5001, inner MACB1 -> MACB2, 10.10.10.10 -> 10.10.10.11) and sends it across the physical network
10. On the receiving host the opposite process takes place – the NV filter strips the GRE header, pulls out the VSID information and passes the packet to the Hyper-V Switch, where the VSID is removed and the packet is sent to the Blue2 VM
Network Virtualization Gateway

Bridge Between VM Networks & Physical Networks

• Multi-tenant VPN gateway in Windows Server 2012 R2
• Integral multitenant edge gateway for seamless connectivity
• Guest clustering for high availability
• BGP for dynamic route updates
• Encapsulates & de-encapsulates NVGRE packets
• Multitenant-aware NAT for Internet access

Diagram: resilient HNV gateways at the service provider connect the Contoso and Fabrikam tenant networks, hosted on Hyper-V hosts, to the Internet and to each tenant's own premises.
Lab 8
Network
Virtualization

Module 8
Virtual Machine &
Service Templates

Module 9
Accelerating Deployment with
Templates
Accelerated deployment of
VMs with VMM Templates
Hardware – VMM uses hardware profiles,
along with a sysprepped VHD/X file to
streamline deployment. VMM will create the
sysprepped VHD/X for you.
OS Config – Configuration of domain join,
admin password, product key, and even the
Windows Server Roles & Features
App Config – Add application-level
configurations, such as MS Deploy Web
Packages, Server App-V, or SQL DAC
SQL Config – VMM allows admins to add
SQL configuration/deployment files to a VM
deployment, to accelerate DB deployment in
the environment
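A hedged sketch of assembling such a template from existing library objects with the VMM cmdlets (all object names are assumptions):

    $vhd = Get-SCVirtualHardDisk | Where-Object {$_.Name -eq "WS2012R2-Sysprepped.vhdx"}
    $hw  = Get-SCHardwareProfile | Where-Object {$_.Name -eq "Std-2vCPU-4GB"}
    $os  = Get-SCGuestOSProfile  | Where-Object {$_.Name -eq "WS2012R2-DomainJoin"}
    New-SCVMTemplate -Name "WS2012R2-Standard" -VirtualHardDisk $vhd `
        -HardwareProfile $hw -GuestOSProfile $os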
Application Configuration
Application-Level Config
within the VM Template
Application profiles provide instructions for
Application Virtualization (Server App-V),
Microsoft Web Deploy, Microsoft SQL Server
DACs, Scripts when deploying a virtual
machine as part of a service
Scripts can be executed pre or post install,
and support specific parameters for execution
Application profiles enable automatic
configuration within the VM, i.e. a web site, or
configuration of a database
Application profiles accelerate deployment of
services within the virtualized infrastructure
SQL Server Configuration
Granular Configuration
Control for SQL with VMs
Allows for the standardized deployment of
new VMs containing SQL Server
IT Admin specifies key SQL data such as
Instance Name, Run As Account
SQL configuration requires a sysprepped
VHD/X with SQL pre-installed:
• SQL installed inside guest using advanced
installation and selecting ‘Image
preparation of a stand-alone instance of
SQL Server’
• SQL Media needs to be accessible to the
Guest OS at deployment time
Application Configuration | SQL DAC
Granular Configuration
Control for SQL with VMs
SQL Profile in VMM tells VMM which SQL
settings to apply to complete SQL installation
at deployment time
Combine with SQL DAC Application Profiles
to customize database configuration on top
of the deployed SQL instance
SQL DAC can combine with pre/post
installation scripts
This combination provides an automated,
controlled way to deploy SQL inside VMs.
Lab 9
Virtual Machine
Templates

Module 9
Service Templates

Service template (multi-tier .NET applications)

Web tier – Web (IIS): scale-out and health policy; Internet Information Services (IIS); HW, OS and App profiles
Application tier – App (App-V): scale-out and health policy; application server; HW, OS and App profiles
Data tier – Data (SQL): scale-out and health policy; SQL Server; HW, OS and App profiles

All tiers are deployed onto the Compute, Storage and Network fabric.
Designing Services via Service Templates
Model Business Services within
the Virtualized Infrastructure
Utilizes existing templates as building blocks
to form interconnected, multi-tier, multi-VM
services
Tiers can be configured for scale and
designed for high availability through
availability sets
Intelligent placement ensures optimal
placement of all VMs within each tier of the
Service Template at deployment time
Service templates can specify logical
networks, load balancers
Set service-related properties, such as cost
center, description, release version
Service Template Updates

Template-driven updates
• Provide a single source of truth for service deployments
• Use Upgrade Domains to limit disruption of service during updates

In-place updates
• Change application or template settings without replacing the OS image
• Change memory, update the application package

Image-based updates
• Replace the old OS image with a new OS image
• Reinstall the application and restore the state
Service Templates | In-Place Updates

1. Choose the service template from the library
2. Deploy an instance of the service
3. Copy the service template, update the version number, and update the application or configuration
4. Publish the template and set the deployed service to the new template (pending service update)
5. Apply the update while maintaining availability of the service through the use of Upgrade Domains

Diagram: service template V1.0 is copied to V1.5 in the template library; the deployed Web/App/Data service on the Compute, Storage and Network fabric moves from V1.0 to V1.5.
Service Templates | Image-Based Updates

1. Choose the service template from the library
2. Deploy an instance of the service
3. Copy the service template, update the version number, and update the virtual disk or application
4. Publish the template and set the service to the new template (pending service update)
5. Apply the update while maintaining availability of the service by replacing the virtual hard disk and redeploying the application using Upgrade Domains

Diagram: service template V1.0 is copied to V1.5 in the template library; the deployed Web/App/Data service on the Compute, Storage and Network fabric moves from V1.0 to V1.5.
Lab 10
Service Templates

Module 9
Private Clouds &
User Roles

Module 10
Private Clouds

Standardized services, delegated capacity

Diagram: Development and Production clouds provide a cloud abstraction – dedicated and shared resources are assigned from a logical and standardized view of the diverse infrastructure across Datacenter one and Datacenter two.
Creating Clouds
Integrated Management
Experience for Cloud Creation
Resources – define the physical infrastructure
capacity that will form the basis of the cloud.
Supports VMware Resource pools.
Logical Networks & Load Balancers –
Admins can define Logical Networks and
managed Load Balancers, that can be used by
VMs & Services in the cloud.
Storage – the Cloud abstracts the underlying
storage in favor of classifications for
simplified placement experience.
Capacity – Define the scale boundaries for
the cloud
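For example, a cloud scoped to a host group can be sketched as follows (names are illustrative; capacity, logical networks and storage classifications are then assigned to the cloud):

    $hostGroup = Get-SCVMHostGroup -Name "Production Hosts"
    New-SCCloud -Name "Development Cloud" -VMHostGroup $hostGroup -Description "Dev/test capacity"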
Role-Based Administration

Self-service user (Application Owner)
• Scope: Clouds only
• Author templates
• Deploy/manage VMs and Services
• Share resources
• Revocable actions
• Quota as a shared and per-user limit

Tenant administrator
• Scope: Clouds only
• Author VM Networks
• Assign cloud
• All other SSU settings

Delegated administrator (Fabric Administrator)
• Scope: Host groups and clouds
• Configure fabric (hosts, networking and storage)
• Create Tenant Roles
• Create cloud
• Assign cloud

VMM Administrator
• Scope: Entire system
• Can take any action on fabric

Read-only administrator (Help Desk)
• Scope: Host groups and clouds; no actions
Role-Based Administration
Granular Control and Delegated
Access to Cloud Resources

VMM allows IT Admins to define granular


administrative and self-service roles for
consumers of the fabric and cloud
Application Administrator has least privilege,
and can consume in self-service manner only
Seamless integration with Active Directory
Users can be scoped to multiple clouds
Quotas can be defined at the role and
member levels for granular capacity
management
Control the VM Networks users can deploy
virtual machines onto
Global and Cloud-specific permissions
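A hedged sketch of creating and scoping a tenant administrator role with the VMM cmdlets (role, group and cloud names are assumptions):

    $cloud = Get-SCCloud -Name "Development Cloud"
    $role  = New-SCUserRole -Name "Contoso Tenant Admins" -UserRoleProfile TenantAdmin
    Set-SCUserRole -UserRole $role -AddMember "CONTOSO\TenantAdmins" -AddScope $cloud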
Consuming Clouds with App Controller
Rich, self-service experience
for VM and app management
Self-Service – Silverlight based web experience
for users to consume VMs, applications and
services, managed by VMM
Service Providers – Through the Service
Provider Foundation, users can consume clouds
from on-premises and service provider capacity
Delegation – VMM roles are reflected in App
Controller presenting users with their content
and their capacity boundaries
Deployment – Users can deploy from
Templates or Service Templates and can
upgrade services if allowed by role settings
Access – Console and RDP access to VMs is
provided, if allowed by role settings
Lab 11
Private Clouds &
User Roles

Module 10
Resources

Windows Server 2012 R2
http://technet.microsoft.com/ru-RU/evalcenter/dn205286

System Center 2012 R2
http://technet.microsoft.com/ru-RU/evalcenter/dn205295

Windows Azure
http://msdn.microsoft.com/ru-ru/ff380142

Microsoft Virtual Academy portal
http://www.microsoftvirtualacademy.ru

TechNet portal
http://technet.microsoft.com/ru-ru/
Question and Answer Session
Александр Шаповал
Strategic Technology Expert
Email: ashapo@microsoft.com
Blog: http://blogs.technet.com/b/ashapo
Twitter: @ashapoval