Integration, optimization
and automation
Anders Bengtsson
Microsoft Senior PFE
Pete Zerger
Microsoft MVP
John McCabe
Microsoft Senior PFE
Contents
3.1 Basic network components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.1.1 IP Addressing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.2 VNET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.2.1 VNET Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.1.3 Network Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.3.1 Network Interface settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.4 Connecting to on-premises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.4.1 Point-to-site VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.4.2 Site-to-Site VPN and ExpressRoute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.5 Publish a service to the Internet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1.6 Network Security Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.7 Traffic Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.1.8 Forced Tunneling and User Defined Routes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1.9 User Defined Routes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2 Networking Planning and Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2.1 Network Design Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2.2 Deploying a VNET (in the Azure Management Portal) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.3 Deploying a VNET (with Azure PowerShell) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Step 1: Connect and Authenticate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Step 2: Create Resource Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Step 3: Define Subnets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Step 4: Deploy VNET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.4 Deploying a VNET (with JSON template) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Step 1: Select Template Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Step 2: Paste json template into Edit template window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Step 3: Create Resource Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
7.3.2.5 VM Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Chapter 3:
Azure Virtual Networking
As cloud consumption grows, more and more of the cloud resources begin to depend on each other.
One of the key ways that Microsoft Azure (Azure) resources are tied together is through the use of
virtual networks (VNET). VNETs provide a mechanism for Azure resources to communicate, not just with
each other, but optionally with your on-premises resources via a site-to-site VPN or ExpressRoute.
Of course, with great capability comes great responsibility. How does one govern and secure outside
access to Azure resources? This chapter details some of the capabilities that will enable you to manage
and maintain your Azure VNETs efficiently and securely. Before going into network design, this chapter
will cover the core concepts of Azure virtual networking in Azure Resource Manager, also called ARM or
Azure v2, as well as common scenarios for networking in Azure.
A couple of important notes:
Before we get started, there are a couple of important points we need to touch on briefly:
When working with Azure Resource Manager in the Azure Management Portal, you must always use the Azure Preview Portal at https://portal.azure.com. These resources are not visible in the original Azure Management Portal at https://manage.windowsazure.com.
Connectivity options between corporate networks and Azure, including site-to-site VPN and ExpressRoute, will be covered in depth in Chapter 5 Connecting Azure to Your Data Center. They are mentioned briefly in this chapter in discussions related to hybrid connectivity.
In addition to the components seen in figure 3.1.1, the VM must be connected to a VNET. The VNET can be
created in the same resource group or in another resource group. One of the reasons to create the VNET in
another resource group is role-based security. For example, multiple application servers in different resource
groups may use a particular VNET, but only the network engineers have permissions to reconfigure the VNET.
Before we walk through an example of a VNET, we will discuss IP addressing in Azure networking.
3.1.1 IP Addressing
Azure networking uses a number of different types of IP addresses.
Internal IP Address (DIP). An internal IP address, or DIP, is assigned to an Azure resource, such as a network interface that is connected to a VM. A network interface has the same function as a physical network card and, in Azure Resource Manager, can be attached to a VM or a load balancer. By default, this IP address is assigned dynamically, and if the VM is de-provisioned, it will lose its DIP. It is possible to configure a static DIP. The IP is static in the sense that the VM will always receive the same IP address from the Azure fabric. However, it is not supported to configure a static IP address inside the VM in the properties of the network interface. The guest OS should always be configured to receive dynamic IP address assignment via DHCP.
On each subnet, Azure reserves the first three IP addresses for internal use. The first IP address that a
VM or other resource can use is .4.
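This reservation scheme is easy to verify with Python's standard `ipaddress` module. The subnet below is just an example value; the helper is a minimal sketch, not an Azure API:

```python
import ipaddress

def first_assignable(subnet: str) -> str:
    """Return the first address Azure can hand out in a subnet.

    Azure reserves the network address plus the first three host
    addresses (.1 through .3) of every subnet for internal use, so the
    first address a VM or other resource can receive ends in .4.
    """
    net = ipaddress.ip_network(subnet)
    hosts = list(net.hosts())   # usable host addresses, .1 upward
    return str(hosts[3])        # skip .1, .2, .3

print(first_assignable("10.1.4.0/24"))  # -> 10.1.4.4
```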
Instance Level Public IP Address (PIP). A PIP is a public IP address you assign directly to your network interface. You can use the PIP to connect directly to your VM on the Internet.
Virtual IP address (VIP). A VIP is a public IP address automatically assigned to a network load balancer. This IP address can be used for load balancing or network address translation (NAT) to network interfaces and VMs behind the network load balancer.
Azure Data Center IP Ranges. In some scenarios, you need to know all the public IP addresses that different Azure data centers use. This list is frequently updated and can be downloaded in XML format from the Microsoft Azure website. Search for Microsoft Azure Datacenter IP Ranges, currently at http://www.microsoft.com/en-us/download/details.aspx?id=41653.
3.1.2 VNET
VNETs are used in the same way as physical local area networks in your local data center. VNETs in Azure provide the following capabilities:
Isolation. By default all machines within a VNET can communicate with each other. To totally isolate
VMs from each other, you can place them on different VNETs. A VNET can span a region (multiple
physical data centers) but not multiple regions.
Access. Access between VMs on a VNET is open, even if the VMs are on different subnets within the VNET.
Connectivity. VNETs can be connected to each other, as well as to on-premises data centers. Connectivity to an on-premises data center is achieved with either ExpressRoute or VPN. A single VNET can be connected to multiple VNETs and on-premises data centers at the same time.
Previously, when creating a VNET, you were required to specify an affinity group. An affinity group is a way to group resources together inside the data center to minimize latency. With the improved performance in Azure data center networks today, affinity groups are no longer required. VNETs are now associated with regions, which include one or more physical data centers. You can still specify an affinity group if your application requires it, but it is not otherwise required.
The number of VNETs you need depends on what you are planning to do in Azure. Changing VNET
settings on a resource after deployment can be complicated, so it is a good idea to plan the network
design before deploying any resources. You can redeploy your VMs to change VNET settings, but that
will result in downtime. In the end, it is better to spend time in planning before deploying your systems
or applications to Azure. Figure 3.1.2 shows a sample configuration with three resource groups:
CONTOSO-HR. This resource group contains two VMs with related network interfaces and a storage account that both servers share.
CONTOSO-WEB. This resource group contains one VM with a related network interface and storage account.
CONTOSO-INFRA. This resource group contains the VNET that all VMs are connected to. The example in figure 3.1.2 uses one VNET for all VMs.
Figure 3.1.3 shows another example with two VNETs. By default, all VMs within a VNET can communicate. However, VNETs are totally isolated from each other, which allows administrators to easily create isolated environments within the Azure subscription, such as separating testing and production VMs.
3.1.2.1.5 Users
With the support for role-based access control (RBAC) on VNETs, you can grant appropriate access to
the VNET to support your administrative model. There are a number of default roles that can be used.
For example, you can give different teams (HR and Web) access to use the VNET, but not modify it.
Administrators of these resource groups will have permissions to connect network interfaces to the
VNET, but not modify the VNET itself.
3.1.2.1.6 Tags
In the Azure Management Portal, you can organize resources in resource groups. You can use role-based security to control access to these resource groups. To organize resources across resource groups, you can use tags. Tagging resources with names or values to categorize them enables you to then list and report on resources across resource groups by tag. Figure 3.1.4 shows tags for a VM. Three tags have been added to the VM in this example: Budget, Function and Team.
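Conceptually, tag-based reporting is just a filter applied across resource groups. The sketch below uses a hypothetical inventory; the resource names and tag values are invented for illustration:

```python
# Hypothetical inventory; resource names and tag values are illustrative only.
resources = [
    {"name": "HR-VM01",  "group": "CONTOSO-HR",    "tags": {"Team": "HR",  "Budget": "B100"}},
    {"name": "WEB-VM01", "group": "CONTOSO-WEB",   "tags": {"Team": "Web", "Budget": "B200"}},
    {"name": "VNET01",   "group": "CONTOSO-INFRA", "tags": {"Team": "Web"}},
]

def by_tag(items, key, value):
    """List resources carrying a given tag, regardless of resource group."""
    return [r["name"] for r in items if r["tags"].get(key) == value]

# Returns resources from two different resource groups in one query.
print(by_tag(resources, "Team", "Web"))
```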
Figure 3.1.6 illustrates another example, with a load balancer configured to balance incoming traffic on
port 80 to three VMs.
It is possible to nest up to 10 levels of Traffic Manager profiles, and each profile can be configured with a different method. Traffic Manager offers three load balancing methods:
Failover. Use this profile when you have a primary endpoint you want to use for all traffic, but want to fail over to a backup endpoint if the primary endpoint is not available.
Round Robin. Use this profile if you want to distribute clients over a set of endpoints in the same
data center or in different data centers.
Performance. This profile is recommended when you have endpoints in different geographic
locations and you want the client to use the closest one to minimize latency.
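The three methods can be thought of as selection policies over a list of endpoints. This is a conceptual model only, not Traffic Manager's implementation; the endpoint names and latency figures are invented:

```python
import itertools

def failover(endpoints):
    """Failover: always pick the first healthy endpoint in priority order."""
    return next(e["name"] for e in endpoints if e["healthy"])

def round_robin(endpoints):
    """Round Robin: cycle through healthy endpoints for successive clients."""
    healthy = [e["name"] for e in endpoints if e["healthy"]]
    return itertools.cycle(healthy)

def performance(endpoints, client_latency):
    """Performance: pick the endpoint with the lowest latency for this client."""
    healthy = [e for e in endpoints if e["healthy"]]
    return min(healthy, key=lambda e: client_latency[e["name"]])["name"]

eps = [{"name": "primary", "healthy": False},
       {"name": "backup",  "healthy": True},
       {"name": "west",    "healthy": True}]

print(failover(eps))                                  # backup: primary is down
rr = round_robin(eps)
print([next(rr) for _ in range(3)])                   # alternates backup, west, backup
print(performance(eps, {"backup": 90, "west": 25}))   # west: lowest latency
```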
You can learn more about Azure Traffic Manager in Chapter 12 Backup and Disaster Recovery.
Figure 3.2.2 shows an example of subnet structure. Even if some subnets may not be in use in the beginning, they are included for future use. The purpose of each subnet in the sample network design is described here. In some scenarios, the different subnets are named after server roles, for example database, web and system management. While naming strategies may vary, the key point is that the names should be descriptive of the subnet's intended use.
IMPORTANT: Even if you create a subnet with 256 available IP addresses, the Azure fabric will use the first three IP addresses in each subnet. The first IP address you can use is number 4; for example, 10.1.4.4 in the Backend subnet.
There are three ways to build a VNET in ARM: the Azure Management Portal, PowerShell, and an ARM JSON template. In the sections that follow, we will deploy the sample 3-subnet VNET described above using each of these options. You can download the sample code for the PowerShell and JSON options at the URLs provided for each example.
4. In the Create VNET blade, input the following settings and click Create.
Name: Contoso-VNET01
Address space: 10.0.0.0/8
Subnet name: Frontend
Subnet address range: 10.1.2.0/24
Subscription: Choose suitable subscription
Resource Group: Create a new resource group, for example CONTOSO-Infrastructure
Location: Choose your closest location or based on design requirements
5. Once the new VNET is completed (shown in figure 3.2.4), browse to the new VNET in the Azure Management Portal.
FIGURE 3.2.5 NEW VNET ON THE NEW VNET BLADE, ALL SETTINGS
The VNET is now configured and subnets are added. The next step is to add
a security group for the backend subnet.
-Force
# Confirm registered ARM Providers
Get-AzureProvider |
Select-Object `
-Property ProviderNamespace `
-ExpandProperty ResourceTypes
# Select an Azure subscription
$subscriptionId =
(Get-AzureSubscription |
Out-GridView `
-Title "Select a Subscription ..." `
-PassThru).SubscriptionId
Select-AzureSubscription `
-SubscriptionId $subscriptionId
Step 2: Create Resource Group
If the resource group does not already exist, you will need to create one.
# Create Resource Group
New-AzureResourceGroup `
-Name 'Contoso-Infrastructure' `
-Location "West US"
"type" : "string",
"defaultValue" : "10.1.2.0/24",
"metadata" : {
"Description" : "Frontend Subnet Prefix"
}
},
"SCsubnetPrefix" : {
"type" : "string",
"defaultValue" : "10.1.3.0/24",
"metadata" : {
"Description" : "Midtier Subnet Prefix"
}
},
"SQLsubnetPrefix" : {
"type" : "string",
"defaultValue" : "10.1.4.0/24",
"metadata" : {
"Description" : "Backend Subnet Prefix"
}
}
},
"resources": [
{
"apiVersion": "2015-05-01-preview",
"type": "Microsoft.Network/virtualNetworks",
"name": "Contoso-VNET01",
"location": "[parameters('location')]",
"properties": {
"addressSpace": {
"addressPrefixes": [
"[parameters('addressPrefix')]"
]
},
"subnets": [
{
"name": "FrontEnd",
"properties" : {
"addressPrefix": "[parameters('FrontendsubnetPrefix')]"
}
},
{
"name": "Midtier",
"properties": {
"addressPrefix": "[parameters('SCsubnetPrefix')]"
}
},
{
"name": "Backend",
"properties" : {
"addressPrefix": "[parameters('SQLsubnetPrefix')]"
}
}
]
}
}
]
}
To deploy the VNET using the provided json template, perform the following steps. You will first need to
download the json template from the GitHub repository described below.
Download the Code
You can download the json template from GitHub at https://github.com/insidemscloud/
AzureIaasBook, in the \Chapter 3 directory. The file name is VNET-3subnets-azuredeploy.json.
Step 1: Select Template Deployment
You will begin by selecting the Template Deployment option, as detailed in the steps below.
1. Browse to the Azure Management Portal, https://portal.azure.com.
2. On the Azure Management Portal homepage, click Marketplace.
3. In the search box provided, type template deployment (without quotes). The search will return the template deployment option, shown in figure 3.2.7.
4. Select the template deployment and click Create.
The network security group contains a number of default rules, shown in table 3.2.1. These rules cannot be deleted; they can only be superseded. All default rules are created with the lowest priority (highest number) and can be superseded by rules with higher priority (lower number).
Name | Protocol | Source Port | Dest Port | Source Address | Dest Address | Access | Priority | Direction
AllowVnetInBound | * | * | * | VNET | VNET | Allow | 65000 | Inbound
AllowAzureLoadBalancerInBound | * | * | * | Azure Load Balancer | * | Allow | 65001 | Inbound
DenyAllInBound | * | * | * | * | * | Deny | 65500 | Inbound
AllowVnetOutBound | * | * | * | VNET | VNET | Allow | 65000 | Outbound
AllowInternetOutBound | * | * | * | * | Internet | Allow | 65001 | Outbound
DenyAllOutBound | * | * | * | * | * | Deny | 65500 | Outbound
An asterisk (*) indicates any value.
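The priority semantics can be modeled as first-match evaluation over rules sorted by priority number. This is a conceptual sketch, not the Azure fabric's implementation: it only checks the source-address field, and the custom rule at priority 100 is invented for illustration:

```python
def evaluate(rules, packet):
    """First-match evaluation: the lowest priority number is checked first."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["source"] in (packet["source"], "*"):
            return rule["access"]
    return "Deny"  # nothing matched

# Simplified default inbound rules plus a custom rule that supersedes them.
rules = [
    {"priority": 65000, "source": "VNET",     "access": "Allow"},  # AllowVnetInBound
    {"priority": 65500, "source": "*",        "access": "Deny"},   # DenyAllInBound
    {"priority": 100,   "source": "Internet", "access": "Allow"},  # custom rule
]

print(evaluate(rules, {"source": "Internet"}))   # Allow: custom rule wins
print(evaluate(rules, {"source": "VNET"}))       # Allow: default VNET rule
print(evaluate(rules, {"source": "10.0.0.5"}))   # Deny: falls through to DenyAll
```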
The new VNET is now configured, including subnets and a network security group.
3.3 Summary
Networking in Azure is a key component in every hybrid and public cloud computing infrastructure scenario. It is important to design and plan the network before starting to deploy resources, and to maintain strong security when extending the local network to Azure data centers.
In this chapter, we have discussed the components of Azure networking in the context of common
networking scenarios, as well as reviewed best practices and recommendations for design, deployment
and security. We hope you have used the hands-on examples provided in this chapter as practice, as the knowledge and experience you gain here will be useful in later chapters of this book, as well as in your career as an Azure administrator.
Chapter 7:
Azure Virtual Machines
Azure Virtual Machines (VMs) are the primary touch point many new customers will use from their very first day working with Microsoft Azure (Azure). In principle, this is where people feel naturally safe, as the concept of an Azure VM is in many ways like a traditional on-premises VM. However, in Azure there are further concepts we need to understand in order to design and build enterprise-grade workloads in Azure Virtual Machines. Additionally, we need to explore topics that show us how we can use the power and flexibility of the cloud to ease management of complex environments.
In this chapter, you will learn about Azure VM concepts, as well as basic and advanced deployment options through hands-on examples, including the Azure Portal, Azure PowerShell, and ARM JSON templates. You will learn how you can create ARM templates in Visual Studio 2013 / 2015, which offers a bit of GUI authoring help for developers and IT Pros alike.
Once you have deployed VMs and other resources, you will learn how to perform VM configuration
using Azure VM Extensions, as well as PowerShell DSC integrated with Azure.
A couple of important notes:
As in previous chapters, we will focus on Azure Resource Manager (ARM) functionality, or Azure v2.
Before we get started, there are a couple of important points to clarify:
Where you see the tag Classic in the Azure Preview Portal (Azure Portal, ARM Portal, or simply portal), this refers to the original Azure Portal or Service Management API located at https://manage.windowsazure.com.
Examine the components for their availability in the new Azure Portal located at https://portal.azure.com. Everything in this chapter is written specifically to leverage the new ARM capabilities and the Azure Portal at https://portal.azure.com unless specified in the text.
You might ask what was wrong with the old platform at https://manage.windowsazure.com (also
referred to as the Service Management Portal, or Azure v1). The answer really lies in some key limitations
related to deployment and management. Azure v2 was designed to make deployment not only of VMs,
but also of entire environments (VMs, VNETs and applications) easier, faster and more reliable.
The Azure Portal is a completely new framework and API model. This essentially means that at some
point, you will likely be migrating services from Azure v1 into ARM, allowing you to manage them with
the rich management capabilities of ARM.
The new API and framework allows you to enable many new features within Azure for tenants, including:
Template Deployment
Role Based Access Control
Tagging
Azure Resource Groups
We will discuss all of these components in the coming pages. The key takeaway is that ARM is the next step in the Microsoft strategy to provide a consistent API and framework spanning public and private clouds.
The choice of how resource groups are constructed and what resources they contain is an important part of the design process. It requires some thought when you are implementing services in Azure, because you need to design your resource group structure based on what types of services your organization plans to deploy and your need to delegate administration.
IMPORTANT: Resources (VMs, VHDs, VNETs, etc.) can only be part of one resource group at a time.
In the Azure Portal, there are multiple default resource groups to help get you started. Figure 7.1.3 shows the Default-Networking resource group.
In Figure 7.1.4, you can see that the resource group provides not only a boundary for management and security, but also a boundary for billing data about that service.
Idempotent. If a json-based deployment in ARM fails, you can restart and it will pick up where it left
off! This is a huge leap over Azure v1.
DISCLAIMER: It is important that we not overstate idempotence here. The fact of the matter is, if your
deployment fails in a custom script stage of your deployment, you will likely have to do some cleanup
before you attempt to restart. At this point, it is likely just as fast to delete the deployment and re-deploy.
Reusable. Once you have created and tested a template, you can share the template and related
resources (PowerShell scripts, DSC modules, etc.) with anyone who can then use the template to
deploy their own environment!
Cleanup. Not only does this allow you to deploy several resources very quickly in Azure, you are also able to delete all the resources you deployed by simply deleting the resource group they were deployed to. The dependency concerns that came with the Service Management model in Azure are no longer an issue.
This is not to say that deploying with ARM via PowerShell does not have its place. There are many deployment and configuration operations where it is going to be faster, easier and a better fit than ARM JSON templates. In scenarios where you just want to deploy a VM to an existing environment, or just do some bulk configuration or administration, PowerShell is still a great fit. Simply match the right methodology to your situation, using what we have shared here as guidelines.
In Figure 7.1.5, we show the basic outline of a JSON template. For more information about the schema and language, refer to Authoring Azure Resource Manager Templates on the Microsoft website at https://azure.microsoft.com/en-us/documentation/articles/resource-group-authoring-templates/
A simple example of how you can use a template is a 3-tier web application (app). This 3-tier web app requires a database server, a work tier, and a web server. These components require resources, such as a storage account, virtual networks and public IP addresses. Under the resources section in the ARM template, you can specify each resource as well as the dependencies of that resource. For example, a VM will not be deployed before a storage account and a virtual network have been created. ARM templates can also leverage PowerShell Desired State Configuration (DSC), enabling additional system configuration and application deployment capabilities.
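For example, such a dependency might be expressed in a template's resources section roughly as follows. The resource names here are illustrative, not taken from the downloadable sample:

```json
{
  "apiVersion": "2015-05-01-preview",
  "type": "Microsoft.Compute/virtualMachines",
  "name": "Contoso-Web01",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Storage/storageAccounts/', 'contosostore01')]",
    "[concat('Microsoft.Network/networkInterfaces/', 'contoso-web01-nic')]"
  ]
}
```

ARM will not begin deploying this VM until the storage account and network interface listed in dependsOn have been created.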
This allows IT operations and application development teams to update their ARM templates as their applications evolve. Then, as the application moves from development to test to production, the dependencies and order-of-operations are already written into the application deployment, helping to ensure consistent application deployment during the lifecycle process.
7.1.4 Tags
Earlier we described resource groups, which let us group resources together and manage them as a single item. However, this might not give us the views we want of our application estate. For example, you may have a finance department with multiple applications, as well as a sales department that also has a number of applications, spread across multiple resource groups.
These applications are very different, and we choose to represent them in multiple resource groups to suit the applications' administrative and delegation needs.
In this case, we have good visualization of the application, but no single view of what applications the
finance department or the sales department interact with on a frequent basis. To organize resources
across resource groups, you can use tags. Tags allow us to provide an additional piece of information so
we can achieve this view of our estate. In this example, we can use Dept:Sales to filter out the resources
and resource groups that belong to the sales department, as shown in Figure 7.1.6.
As you can see from Figure 7.1.7, if you click the i icon, you will get a description of what tasks the role allows a user to perform. You can assign users to multiple roles as you require.
Another interesting thing in ARM is that you can assign permissions right down to the resource level. As you can see in Figure 7.1.8, the resource group, as well as the reserved IP resource, have RBAC icons so that you can assign permissions to them.
7.2.1 VM Architecture
The first area we need to understand is the architecture of the VM. In Figure 7.2.1, we show you the basic layout of VM architecture.
The VM, much like in Hyper-V, is a configuration container that references the resources it requires in
order to operate. In this section, we will discuss each component in detail.
7.2.1.1 VM Size
Azure VMs offer a variety of choices when it comes to the size of the VM you will select and deploy. First, you need to choose your tier for the VM; currently (September 2015) there are two choices:
Basic
Standard
You choose a tier based on your needs. Basic tier VMs do not allow high availability and limit the choice of sizes you can select. A basic tier VM also restricts the number of virtual disks you can attach to the VM and limits disk performance to approximately 300 IOPS per disk.
A standard tier VM will allow high availability and does not restrict the sizes of the VMs you can choose
to deploy. Virtual disk performance is also better, supporting up to 500 IOPS per disk. When choosing a
standard tier VM, you have currently (September 2015) a choice of machines rated by Series.
All VMs include a temporary disk (D: by default), designed to be used as a working area to store non-persistent (temporary) data.
Note: When designing services in Azure, it is very important to understand that these limits are "up to" limits, not a guarantee or SLA.
The VM series available in Azure currently include the following:
A Series. A Series VMs range from A0 to A11, with CPU cores that range from 1 to 16. Memory ranges from 768MB to 112GB. In this series, there are no solid-state drive (SSD) options available.
D Series. D Series VMs range from D1 to D14, with CPU cores that range from 1 to 16. Memory ranges from 3.5GB to 112GB. The temporary disk is based on SSD and ranges from 50GB to 800GB. The D Series also has a DS range of machines; they have similar ranges as the D Series but allow you to achieve higher performance and disk IOPS.
G Series. G Series VMs range from G1 to G5, with CPU cores that range from 2 to 32. Memory ranges from 28GB to 448GB. The temporary disk is based on SSD, ranging in size from 384GB to 6144GB. The G Series also has a GS range of machines; they have similar ranges as the G Series but allow you to achieve higher performance and disk IOPS.
For a complete list and exact details of all the VM options currently available, refer to Sizes for virtual machines on the Microsoft website at https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-size-specs/
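As a rough decision aid, the ranges quoted above can be captured in a simple lookup table. The figures mirror the text (September 2015), and the helper function is purely illustrative, not an Azure API:

```python
# Published ranges per series, as described in the text (September 2015).
SERIES = {
    "A": {"cores": (1, 16), "memory_gb": (0.75, 112), "ssd_temp": False},
    "D": {"cores": (1, 16), "memory_gb": (3.5, 112),  "ssd_temp": True},
    "G": {"cores": (2, 32), "memory_gb": (28, 448),   "ssd_temp": True},
}

def candidate_series(cores, memory_gb, need_ssd=False):
    """Series whose published maximums can cover the requested resources."""
    return [name for name, s in SERIES.items()
            if cores <= s["cores"][1]
            and memory_gb <= s["memory_gb"][1]
            and (s["ssd_temp"] or not need_ssd)]

print(candidate_series(cores=16, memory_gb=112))                 # all three fit
print(candidate_series(cores=32, memory_gb=256, need_ssd=True))  # only G fits
```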
Generally speaking, the higher the series of VM you choose, the better performance you will get. For example, if you need a VM for a CPU-intensive workload, like a large SQL Server instance, you may select a G Series VM, where the underlying physical hardware is specially designed for CPU-intensive workloads.
7.2.1.2 Storage
A VM will always have at least two disks by default. These are:
Operating System Disk
Temporary Disk
As the name implies, the operating system disk is where the operating system lives; it is the VM boot disk. This disk has caching turned on by default and you cannot turn it off; the options you have are read-only or read/write.
The temporary disk is simply a scratch disk; this scratch disk comes from the underlying physical host. The temporary disk should not be used as a persistent data store. The particulars of the temporary disk are linked to the series of VM you select. The size of the temporary disk varies by VM size; larger sizes have larger temporary disks.
VMs can also give you the option of high-performance storage underpinning the workload. The DS and GS series of VMs deliver high performance, allowing up to 50,000 IOPS in the correct configuration. The temporary disk in these series also resides on SSD hosted by the underlying physical hardware to which the VM is deployed.
Finally, all storage that a VM accesses is reached via a network connection (through a RESTful API); this introduces an imposed limit that needs to be taken into account. It is possible to achieve a throughput of up to around 3Gbps when you select a VM size that enables remote direct memory access (RDMA).
You can read more about Azure storage, including storage architecture and disk cache options in
Chapter 2 Microsoft Azure Storage.
7.2.1.3 Network connectivity
In ARM, the network card of the VM is abstracted as a manageable resource. This is very important, as in ARM you associate services directly with the network card, as opposed to the VM, as was the case in the Azure v1 Service Management Portal. An Azure VM can have up to 16 network cards assigned to it. In Figure 7.2.2, you can see network cards listed as individual resources within resource groups.
Figure 7.2.3 illustrates some of the network cards in this sample environment, which can be assigned to VMs or load balancers. As you can see, we can assign a public IP address directly to the network card of the VM, as well as a network security group (NSG). For more details on NSGs, refer to Chapter 3 Azure Virtual Networking.
The public IP is represented as an assignable resource within the Azure resource group, as shown in figure 7.2.4. This allows you to associate a public IP with a resource as you need it, and move it as requirements change.
Network cards can be associated with individual NSGs if you need to control the traffic coming into a VM. This is essential if you have a public IP assigned to the VM.
If you require endpoints opened to a VM without assigning it a public IP directly, then you will need
to configure a network address translation (NAT) rule in an Azure Load balancer and associate it to a
network card. Endpoints still exist in ARM, but are controlled by the Azure load balancer.
7.2.1.4 VM security
Figure 7.2.5 illustrates the various layers of security you can implement to protect your VM.
Administrators can implement multiple layers of NSGs, utilize the Windows firewall (enabled by default
in Azure VMs), as well as install Microsoft Antimalware as part of the VM build process.
This ensures that, from the moment you provision the VM into Azure, you can enforce protection for
any workload on that system.
NSGs can be used to isolate a VM's traffic from other VMs, the internet, or an IP source. NSGs can also
control outbound traffic flow if required. This allows you to enforce protection before traffic
leaves or arrives at a VM.
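As a rough illustration of how such a rule could be defined with the ARM-mode cmdlets used elsewhere in this chapter, the following sketch allows only inbound RDP to a VM's subnet (the group name, priority, and resource group are hypothetical, and exact cmdlet parameters may vary by Azure PowerShell module version):

```powershell
# Illustrative sketch: an NSG that permits inbound RDP (TCP 3389) only.
# Names and values here are examples, not part of the walkthrough above.
$rdpRule = New-AzureNetworkSecurityRuleConfig -Name "Allow-RDP" `
  -Protocol Tcp -Direction Inbound -Priority 100 `
  -SourceAddressPrefix Internet -SourcePortRange * `
  -DestinationAddressPrefix * -DestinationPortRange 3389 `
  -Access Allow

# Create the NSG containing the rule; it can then be associated with a
# network card or subnet as described in the text.
$nsg = New-AzureNetworkSecurityGroup -Name "VM001-NSG" `
  -ResourceGroupName VMResourceGroup -Location "North Europe" `
  -SecurityRules $rdpRule
```

All traffic not matched by an explicit allow rule (beyond the platform default rules) is then denied, which is the behavior the paragraph above relies on.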
7.2.1.5 Marketplace
The Azure Marketplace is a single repository of prebuilt VMs ready for deployment into Azure. These VMs are
designed to be click-and-deploy systems for rapid deployment of services. Some of the VMs have software
licenses built into the runtime cost of the VM (such as SQL Server VMs), while other VMs require you to
purchase a license after deployment. The Azure Marketplace homepage is shown in figure 7.2.6.
In the Azure Marketplace, available VMs are presented by category. A search function is also available,
enabling you to locate the VM you require through text-based search.
Microsoft updates the Azure Marketplace regularly (and the VM images made available through the
Marketplace) with the latest patches and releases available from the vendors.
To configure PowerShell for working with ARM, complete the following steps:
1. Open an elevated PowerShell prompt (right-click the prompt and select Run as administrator).
2. Type Add-AzureAccount and press Enter.
3. A sign-in prompt will appear, as shown in Fig 7.3.5. Enter your email address and click Continue.
IMPORTANT: When working with Azure PowerShell and ARM, a Windows Live account will not do. You must
log in with an organizational account: an account that exists in the Azure Active Directory associated with
your Azure subscription and that also has privileges in the subscription. Otherwise, authentication to your
Azure subscription will fail.
For testing purposes, granting an account administrator rights in an Azure trial or other non-production
subscription is easy.
4. Select the Account Type as shown in Figure 7.3.6. For Azure PowerShell, you must choose the Work
or school account option and enter an account present in the Azure Active Directory associated
with this subscription.
6. Once sign in is complete, the account and the authorized subscriptions that it has access to will be
displayed as shown in figure 7.3.8.
7. To load the ARM-aware cmdlets, type Switch-AzureMode AzureResourceManager and press Enter.
Note: In September 2015, Switch-AzureMode was deprecated; soon, ARM cmdlets will exist natively under
their own names. For example, Get-AzureVM in Azure Resource Manager will become Get-AzureRmVM.
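To illustrate the renaming, the following sketch shows a few of the renamed cmdlets as they appear in the later AzureRM module (cmdlet names per that module release; this is informational only and is not required for the Switch-AzureMode walkthrough in this chapter):

```powershell
# After the deprecation of Switch-AzureMode, ARM cmdlets carry an "Rm"
# infix in their names and live in the AzureRM module. Examples:
Login-AzureRmAccount          # sign in to Azure (ARM), replacing Add-AzureAccount
Get-AzureRmVM                 # replaces Get-AzureVM in AzureResourceManager mode
Get-AzureRmResourceGroup      # replaces Get-AzureResourceGroup
```

The pattern is consistent: the noun gains an Rm prefix (AzureVM becomes AzureRmVM), so existing ARM-mode scripts can generally be migrated with a mechanical rename.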
7.3.1.1 Deploying a Virtual Machine from the Portal (Windows and Linux)
For your first VM deployment, you will use the Azure Portal. This will help you gain some basic familiarity
with ARM features and the Azure Marketplace.
To create a VM in the Azure Portal, complete the following steps:
1. From the Azure Resource Manager Portal, located at https://portal.azure.com, click + NEW as
shown in Figure 7.3.9.
2. From the New blade, all the options for deployment are listed. For a VM, we need to select
Compute, as shown in figure 7.3.10.
3. The Compute blade presents the most recently used and most commonly requested images.
Marketplace provides an area where all the images available (Microsoft and 3rd Party published) in Azure
can be selected for Installation. Click Windows Server 2012 R2 Datacenter as shown in Figure 7.3.11.
4. After clicking on the Windows Server 2012 R2 Datacenter image, it will open up a summary blade
describing the image and ask you to confirm which deployment model you wish to use. As a reminder,
Classic is the Azure v1 Service Management Portal and Resource Manager is the ARM (Azure v2)
described in this chapter. Select Resource Manager and click Create, as shown in Figure 7.3.12.
5. In the Create Virtual Machine blade, the basics blade will automatically open, as shown in Figure 7.3.13.
By clicking View All, you can display all the size options available. Within each option, you should also
see the estimated monthly cost of running the VM.
7. Next, we will configure VM settings in the Settings blade, as shown in figure 7.3.15.
8. In the Summary blade, as shown in Figure 7.3.16, confirm the settings and click OK.
The VM will now deploy. Deployment time varies from 15 minutes to 45 minutes depending on the
workload and deployment options selected. Watch the Notifications area of the Azure Portal for a
message indicating whether your deployment was successful.
To create a Linux VM from the Portal, perform the following steps:
1. From the ARM Portal, located at https://portal.azure.com, click + NEW as shown in Figure 7.3.17.
2. From the New blade, all the options for deployment are listed. For a VM, we need to select Compute,
as shown in figure 7.3.18.
3. In the Compute blade, the most recently used and most common images are presented for use.
The Marketplace gives an area where all the images available in Azure (Microsoft and 3rd-party
published) can be selected for installation. Click Ubuntu Server 14.04 LTS as shown in Figure 7.3.19.
4. After clicking the Ubuntu Server 14.04 LTS image, a summary blade will open describing the
image and asking you to confirm which deployment model you wish to use: Classic being the Service
Management Portal and Resource Manager being the ARM Portal. Select Resource Manager and
click Create as shown in Figure 7.3.20.
5. In the Create Virtual Machine blade, the Basics blade will automatically open, as shown in Figure 7.3.21.
The Resource Group field is by default set up to create a new resource group; if you enter a new
name there, a new resource group with that name will be created and all the VM components will be
associated with it. Alternatively, you can click Select existing and select a resource group that has
already been created. For this example, enter VMLINUXRG in the resource group field.
Note: if you are following directly on from the Windows example listed in this chapter, you can select the
original VMRG resource group.
Select the correct Location for your deployment; in this example's case, North Europe.
Click OK.
6. The Choose a Size blade will open; by default it will show recommended options for the type of
image you have chosen. In Figure 7.3.23 you can see we have two listed, and for this example's case
we will select A1. Click A1 Standard and click Select.
By clicking View All, you can display all the size options available. Within each option, you should also
see the estimated monthly cost of running the VM.
7. Next we will configure VM settings in the Settings blade as shown in figure 7.3.24.
-Force
# Confirm registered ARM Providers
Get-AzureProvider |
Select-Object `
-Property ProviderNamespace `
-ExpandProperty ResourceTypes
# Select an Azure subscription
$subscriptionId =
(Get-AzureSubscription |
Out-GridView `
-Title "Select a Subscription ..." `
-PassThru).SubscriptionId
Select-AzureSubscription `
-SubscriptionId $subscriptionId
1. The next step is to gather the details of the VM image we wish to deploy, using the
Get-AzureVMImage cmdlet. In this example's instance, we want to retrieve all Windows Server 2012 R2
Datacenter images.
Get-AzureVMImage -Location "North Europe" `
-PublisherName MicrosoftWindowsServer `
-Offer WindowsServer -Skus 2012-R2-Datacenter
Figure 7.3.25 shows partial output. Locate the latest version and record the number.
2. Now we select the appropriate image and store it into a variable for later use, as shown here.
$vmimage = Get-AzureVMImage -Location "North Europe" `
-PublisherName MicrosoftWindowsServer `
-Offer WindowsServer -Skus 2012-R2-Datacenter `
-Version 4.0.20150825
3. Next, we create a resource group using the New-AzureResourceGroup cmdlet. An example is as follows:
New-AzureResourceGroup -Name VMResourceGroup `
-Location "North Europe"
Partial output is shown in Figure 7.3.26.
4. Next, we create a storage account for the VM using the New-AzureStorageAccount cmdlet,
as in the following example:
New-AzureStorageAccount -ResourceGroupName VMResourceGroup `
-Name mystoracct001 `
-Location "North Europe" -Type Standard_LRS
Note: The storage account name has to be lowercase and unique across all of Azure. When repeating this
example, change the name from what you see here to a unique value. For more info on storage accounts, see
Chapter 4 Microsoft Azure Storage.
Partial output is shown in figure 7.3.27
5. Next, we create a virtual network where the VM will reside, using the following PowerShell:
$subnet = New-AzureVirtualNetworkSubnetConfig `
-Name production -AddressPrefix 10.0.50.0/24
$vnet = New-AzureVirtualNetwork -Name CloudVNet `
-ResourceGroupName VMResourceGroup -Location "North Europe" `
-AddressPrefix 10.0.0.0/16 -Subnet $subnet
$subnet = Get-AzureVirtualNetworkSubnetConfig `
-Name production -VirtualNetwork $vnet
Figure 7.3.28 shows partial contents of the $vnet variable we created in this example.
7. Next, we need to create a network interface for the VM, and a public IP address to bind to it,
using the following syntax:
$netint = New-AzureNetworkInterface `
-ResourceGroupName VMResourceGroup `
-Name WinVMNic -Subnet $subnet -Location "North Europe" `
-PublicIPAddress $pip -PrivateIPAddress 10.0.50.4
This will assign the static IP of 10.0.50.4 in the previously created subnet for this VM, and assign it the
public IP address we created earlier in the example. Figure 7.3.30 shows the output.
8. Next, we need to capture credentials for the VM; remember not to use the Administrator username.
Use the following syntax:
$cred = Get-Credential
This will prompt you to enter credentials as shown in figure 7.3.31.
Enter a username (not administrator) and a complex password.
IMPORTANT: Get-Credential is used twice in this script, so you will be prompted twice to enter credentials into
a Windows-style logon prompt. The first time, you should enter the credentials for your Azure subscription
(your organizational account). The second prompt is for the name / password you would like to specify as the
local administrator in the VM you are deploying.
9. Next, we create a virtual machine configuration, which will then be used to deploy the VM. The
following example outlines the required syntax:
$vmConfig = New-AzureVMConfig -VMName "VM001" `
-VMSize "Standard_A1" | `
Set-AzureVMOperatingSystem -Windows -ComputerName "VM001" `
-Credential $cred -ProvisionVMAgent -EnableAutoUpdate | `
Set-AzureVMSourceImage -PublisherName $vmimage.PublisherName `
-Offer $vmimage.Offer `
-Skus $vmimage.Skus -Version $vmimage.Version | `
Set-AzureVMOSDisk -Name "VM001" `
-VhdUri "https://mystoracct001.blob.core.windows.net/vhds/VM001-os.vhd" `
-Caching ReadWrite -CreateOption fromImage | `
Add-AzureVMNetworkInterface -Id $netint.Id
To re-use this example, ensure you copy it into the PowerShell ISE to allow the piping to re-align. To verify
the configuration, partial output of $vmConfig is displayed in Figure 7.3.32.
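The configuration object built above does not by itself create the VM; a final deployment call is still required. A minimal sketch of that step, assuming the VMResourceGroup and $vmConfig created in the preceding examples (exact parameters may vary by module version):

```powershell
# Deploy the VM from the configuration object built in the previous step.
New-AzureVM -ResourceGroupName "VMResourceGroup" `
  -Location "North Europe" -VM $vmConfig
```

Deployment runs asynchronously on the Azure side; as with the portal walkthrough earlier, allow some minutes for provisioning to complete before connecting.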
3. Now let us obtain the latest version and store it in a variable for later use, using the following syntax:
$vmimage = Get-AzureVMImage -Location "North Europe" `
-PublisherName Canonical -Offer UbuntuServer `
-Skus 14.04.2-LTS -Version 14.04.201507060
4. Next, we create a resource group using the New-AzureResourceGroup cmdlet. An example is as follows:
New-AzureResourceGroup -Name VMLinuxResourceGroup `
-Location "North Europe"
5. Next, we create a storage account for the VM using the New-AzureStorageAccount cmdlet,
as in the following example:
New-AzureStorageAccount -ResourceGroupName VMLinuxResourceGroup `
-Name mystoracct002 `
-Location "North Europe" -Type Standard_LRS
6. Next, we create a virtual network to which we will connect the VM, as shown here.
$subnet = New-AzureVirtualNetworkSubnetConfig `
-Name LinuxProd -AddressPrefix 172.0.60.0/24
$vnet = New-AzureVirtualNetwork -Name CloudLinuxVNet `
-ResourceGroupName VMLinuxResourceGroup `
-Location "North Europe" -AddressPrefix 172.0.0.0/16 `
-Subnet $subnet
$subnet = Get-AzureVirtualNetworkSubnetConfig `
-Name LinuxProd -VirtualNetwork $vnet
7. The next step is to create a public IP address, using the following syntax:
$pip = New-AzurePublicIPAddress -ResourceGroupName VMLinuxResourceGroup `
-Name LinuxVMPublicIP `
-Location "North Europe" -AllocationMethod Dynamic
8. Next, we need to create a network interface for the VM, and a public IP address to bind to it,
using the following syntax:
$netint = New-AzureNetworkInterface -ResourceGroupName VMLinuxResourceGroup `
-Name LinuxVMNic `
-Subnet $subnet -Location "North Europe" `
-PublicIPAddress $pip -PrivateIPAddress 172.0.60.4
This will assign the static IP of 172.0.60.4 in the previously created subnet for this VM and assign it the
Public IP address we created earlier in the example.
9. Using the syntax shown here, we need to capture credentials for the VM; remember not to use the
root username.
$cred = Get-Credential
This will prompt you to enter credentials as shown in figure 7.1.46. Enter a username (not root) and a
complex password.
IMPORTANT: As with the Windows VM deployment, Get-Credential is used twice in this script, so you will be
prompted twice to enter credentials into a Windows-style logon prompt. The first time, you should enter the
credentials for your Azure subscription (your organizational account). The second prompt is for the name /
password you would like to specify as the local administrator in the VM you are deploying.
10. Next, we create a VM configuration, which will then be used to deploy the VM. The following
example outlines the required syntax.
$vmConfig = New-AzureVMConfig -VMName "LNX001" `
-VMSize "Standard_A1" | `
Set-AzureVMOperatingSystem -Linux -ComputerName "LNX001" `
-Credential $cred | `
Set-AzureVMSourceImage -PublisherName $vmimage.PublisherName `
-Offer $vmimage.Offer -Skus $vmimage.Skus `
-Version $vmimage.Version | `
Set-AzureVMOSDisk -Name "LNX001" `
-VhdUri "https://mystoracct002.blob.core.windows.net/vhds/LNX001-os.vhd" `
-Caching ReadWrite -CreateOption fromImage |
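The listing ends mid-pipeline here. As with the Windows example, the configuration would typically be completed by attaching the network interface and then issuing the deployment call. A hedged sketch of those remaining steps, assuming the $netint variable created earlier in this walkthrough:

```powershell
# Sketch of the remaining steps (assumes the pipeline above continues
# from its trailing pipe, and that $netint was created earlier):
# ... | Add-AzureVMNetworkInterface -Id $netint.Id

# Deploy the Linux VM from the completed configuration object.
New-AzureVM -ResourceGroupName "VMLinuxResourceGroup" `
  -Location "North Europe" -VM $vmConfig
```

Note that Set-AzureVMOperatingSystem uses -Linux here rather than -Windows, and the VM agent / auto-update switches from the Windows example do not apply.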
3. In the Browse blade, type Virtual and select Virtual Machines as shown in Figure 7.3.35.
5. This will open the VM001 virtual machine blade. Click All Settings as shown in Figure 7.3.37.
8. In the Attach new disk blade, the settings listed in Figure 7.3.40 will need to be configured to
your needs. In this example, accept the defaults by clicking OK.
9. After a couple of minutes, your new disk will be created and attached as shown in Figure 7.3.41.
To add an additional disk to the Linux VM you deployed in the previous section using Azure
PowerShell, perform the following steps:
1. The first step in adding additional data disks to a VM is to get the VM we are interested in, using the
following syntax:
$VM = Get-AzureVM -ResourceGroupName `
"VMLinuxResourceGroup" -Name "LNX001"
2. Next, we need to get the storage account in which the VM will be stored. This is contained within
the value stored in the $VM variable, but you can retrieve the value using the following syntax:
$storacct = $VM.StorageProfile.OsDisk.VirtualHardDisk.URI.split("/")[2]
3. Now we must construct the URI for the new data disk we want to attach to the VM, using the
following syntax:
$datadiskURI = "https://$storacct/vhds/" + $vm.name + "-data-disk1.vhd"
4. Next, we create the disk using the following syntax:
Add-AzureVMDataDisk -VM $vm -Name "Data-Disk1" `
-DiskSizeInGB 100 -VhdUri $datadiskURI `
-CreateOption empty
5. Finally, we update the VM for the changes to take effect, using the following syntax:
Update-AzureVM -VM $vm `
-ResourceGroupName "VMLinuxResourceGroup"
To check the results after running the script, go to the Azure Management Portal and select Browse All
-> Resource groups -> VMLinuxResourceGroup.
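You can also confirm the result directly from PowerShell by re-reading the VM and listing its data disks; a quick check along these lines (property names follow the ARM-mode object model used in this chapter and may differ slightly between module versions):

```powershell
# Re-read the VM and inspect its attached data disks to confirm the change.
$vm = Get-AzureVM -ResourceGroupName "VMLinuxResourceGroup" -Name "LNX001"
$vm.StorageProfile.DataDisks | Select-Object Name, DiskSizeGB, Lun
```

The newly attached Data-Disk1 should appear in the output with the 100 GB size specified above.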
Download the Code
You can download the full script from GitHub at https://github.com/insidemscloud/AzureIaasBook, in
the \Chapter 7 directory. The file name is AddVMDisk.ps1.
7.3.1.4 Creating a NAT Rule to an existing Virtual Machine
There are many deployments where you will not assign a public IP to your VM directly. Previously, in the
Service Management Portal, you would achieve this using endpoints and cloud services, which effectively
created a NAT rule to allow you to connect to your VM. In ARM, you create all endpoints as part of the Azure
load balancer. We will walk through the steps to create a simple rule for port 80 (HTTP) into a VM.
As with previous examples, a sample script is available for download, as detailed in the Download the
Code section at the end of this walkthrough.
To create an Azure load balancer, a NAT rule and associate to an existing VM, utilize the following steps
and PowerShell sample:
1. First, we need to create a public IP address using the following syntax:
$vip = New-AzurePublicIPAddress -ResourceGroupName VMResourceGroup `
-Name VMNATPublicIP -Location "North Europe" `
-AllocationMethod Dynamic
2. Next, we need to create a front-end IP configuration for the load balancer and bind the public IP
address we created to it, using the following syntax:
$feIPConf = New-AzureLoadBalancerFrontEndIPConfig -Name ALBFEIP `
-PublicIpAddress $vip
3. Next, we can create our inbound NAT rule using the following syntax:
$httpnatrule = New-AzureLoadBalancerInboundNatRuleConfig -Name Http `
-FrontEndIPConfiguration $feIPConf -Protocol TCP `
-FrontEndPort 80 -BackendPort 80
4. Next, we must give the load balancer the backend address pool in which your VM will reside,
using the following syntax:
$lbbepool = New-AzureLoadBalancerBackEndAddressPoolConfig -Name BEPool01
5. Next, we need to create our load balancer rule using the following syntax:
$lbrule = New-AzureLoadBalancerRuleConfig -Name Http `
-FrontEndIPConfiguration $feIPConf -BackEndAddressPool $lbbepool `
-Protocol TCP -FrontEndPort 80 -BackEndPort 80
6. Next, we create the load balancer itself, using the rules and items we configured above:
$azurelb = New-AzureLoadBalancer -ResourceGroupName
"VMResourceGroup" -Name "VM_LB" -Location "North Europe"
-FrontendIpConfiguration $feIpConf -InboundNatRule $httpnatrule
-LoadBalancingRule $lbrule -BackendAddressPool $lbbePool
7. Finally, you associate a network card with the load balancer rule using the following syntax:
$vnet = Get-AzureVirtualNetwork -Name CloudVNet `
-ResourceGroupName VMResourceGroup
$subnet = Get-AzureVirtualNetworkSubnetConfig -VirtualNetwork $vnet `
-Name production
$netint = Get-AzureNetworkInterface -Name WinVMNic `
-ResourceGroupName VMResourceGroup
$netint.IpConfigurations[0].LoadBalancerBackendAddressPools.Add($azurelb.BackendAddressPools[0])
$netint | Set-AzureNetworkInterface
Download the Code
You can download the full script from GitHub at https://github.com/insidemscloud/AzureIaasBook, in
the \Chapter 7 directory. The file name is AddNATRule2VM.ps1.
To give you a better understanding of the schema, let us describe each section of the JSON file:
$schema: A required section of the JSON file; it points to the location of the JSON schema file, which
describes the version of the template language.
contentVersion: A required section of the JSON file. The version number is in the format X.X.X.X,
for example 1.0.0.0. The version number allows you to ensure that you select the right template.
parameters: Not a required section of the JSON file. Parameters are useful elements, as they allow
you to customize a template on deployment. For example, if you want a template that deploys a VM,
you can request the VM name as a parameter.
"parameters": {
  "<parameterName>": {
    "type": "<type-of-parameter-value>",
    "defaultValue": <optional-default-value-of-parameter>,
    "allowedValues": [ <optional-array-of-allowed-values> ]
  }
}
The parameterName and type are required.
The type can be one of the following items:
string or secureString
int
bool
object or secureObject
array
A sample parameters section is as follows:
"parameters": {
  "Location": {
    "type": "string",
    "allowedValues": [
      "North Europe",
      "West Europe",
      "North US",
      "West US"
    ]
  }
}
variables: Not a required section of the JSON file. However, JSON templates can get quite complex,
and variables become very useful if you want to simplify references in the template. For example,
if you consistently want to reference a virtual network name, creating it as a variable allows you to
reference it easily throughout the JSON template.
Variables are formatted as key-value pairs, as shown here:
"variables": {
  "virtualnetwork": "ProductionVnet01",
  "storageaccount": "productionstorage"
}
resources: A required section of the JSON file. Resources allow you to specify what you want
deployed in the template, for example a VM, a network card, a storage account, etc.
The format of the resources section is as follows:
"resources": [
  {
    "apiVersion": "<api-version-of-resource>",
    "type": "<resource-provider-namespace/resource-type-name>",
    "name": "<name-of-the-resource>",
    "tags": <name-value-pairs-for-resource-tagging>,
    "dependsOn": [
      <array-of-related-resource-names>
    ],
    "properties": <settings-for-the-resources>,
    "resources": [
      <array-of-dependent-resources>
    ]
  }
]
The apiVersion, type, and name are required elements of the resources section. The location, tags,
dependsOn, properties, and resources elements are optional. The apiVersions for the schemas available
can be located at the following URL: https://github.com/Azure/azure-resource-manager-schemas
TIP: The DependsOn section of the JSON template is the key to controlling deployment order. For a good
example of effective use of the DependsOn section, see the New Active Directory Domain deployment
template at https://github.com/Azure/azure-quickstart-templates.
DependsOn is used in at least 5 places in the azuredeploy.json template, so be sure to search on
DependsOn and review all instances to get a feel for how this option is used.
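Quickstart templates like these can also be deployed directly from PowerShell rather than via the portal's Deploy to Azure button. A sketch, using the ARM-mode deployment cmdlet from this era of the module (the template URI points at the simple Windows VM quickstart referenced later in this chapter; parameter names may vary by module version):

```powershell
# Deploy a quickstart template straight from its raw GitHub URI (sketch).
# You will be prompted for any template parameters without default values.
New-AzureResourceGroupDeployment `
  -ResourceGroupName "VMResourceGroup" `
  -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-simple-windows-vm-data-disk/azuredeploy.json"
```

Because the template itself declares the resources and their dependsOn ordering, the cmdlet needs only the resource group and the template location.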
The type varies depending on the resource you are addressing. The following is a sample of the types of
resources and how you reference them:
Microsoft.Web/serverfarms
Microsoft.Web/sites
Extensions
Microsoft.Network/virtualNetworks
Microsoft.Network/networkInterfaces
etc.
A sample resources section covering a storage account and a virtual network is as follows:
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[parameters('newStorageAccountName')]",
"apiVersion": "2015-05-01-preview",
"location": "[variables('location')]",
"tags": {
"displayName": "StorageAccount"
},
98
"properties": {
"accountType": "[variables('storageAccountType')]"
}
},
{
"apiVersion": "2015-05-01-preview",
"type": "Microsoft.Network/virtualNetworks",
"name": "[variables('virtualNetworkName')]",
"location": "[variables('location')]",
"tags": {
"displayName": "VirtualNetwork"
},
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('addressPrefix')]"
]
},
"subnets": [
{
"name": "[variables('subnetName')]",
"properties": {
"addressPrefix":
"[variables('subnetPrefix')]"
}
}
]
}
},
outputs: Not a required section of the JSON file. If you need the deployment to return data, you can
specify the output in this section. This might include a simple success / failure value, or perhaps
something more dynamic, like a value constructed (concatenated) from multiple parameters
submitted to this template at deployment time.
"outputs": {
  "<outputName>": {
    "type": "<type-of-output-value>",
    "value": "<output-value-expression>"
  }
}
The outputName, type, and value elements are required. The type element allows the same types as
parameter inputs. A sample output is as follows:
"outputs": {
  "operationResult": {
    "type": "string",
    "value": "[parameters('location')]"
  }
}
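Once a deployment completes, the values declared in outputs can be read back from PowerShell; a sketch using the ARM-mode deployment cmdlets from this chapter (cmdlet and property names per that module era):

```powershell
# Retrieve the deployment record for a resource group and read its outputs.
$deployment = Get-AzureResourceGroupDeployment `
  -ResourceGroupName "VMResourceGroup" | Select-Object -First 1
$deployment.Outputs        # contains e.g. the operationResult value above
```

This is the mechanism by which a template can hand values (connection strings, constructed names, and so on) back to the script or pipeline that triggered the deployment.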
3. In the Search field, type SharePoint and click the magnifying glass. This will find two templates,
as shown in figure 7.3.44.
6. After you click Deploy to Azure, you will be redirected to the Azure Portal and a Custom
Deployment blade will open, as shown in Figure 7.3.46.
8. In the Custom deployment blade, click Template (Edit Template) as shown in Figure 7.3.49.
9. This drops you into a template editor, which presents you with the outline of the JSON file schema
and allows you to add your resources, parameters, and variables as required. See Figure 7.3.50.
At this point, you can simply paste in an existing JSON template and deploy via the Azure Portal. This
is great when you want quick deployment, but do not have the one-click Deploy to Azure button
available with templates in the Azure Quickstart Template repository on Github.
10. To test this deployment capability, copy-and-paste a simple ARM template into the template
window, such as the simple Windows VM deployment at https://github.com/Azure/azure-quickstart-templates/blob/master/101-simple-windows-vm-data-disk/azuredeploy.json. Simply click the
Raw button, then copy-and-paste the JSON into the window.
11. Once you are finished, click Save. You can then deploy the template based on your customizations.
Authoring and Deployment in Visual Studio
In this section, we will author a more detailed template deployment of a VM and a virtual network in
Visual Studio 2013 or 2015. Regardless of which of these Visual Studio versions you choose, template
authoring in Visual Studio requires that you install the Azure SDK 2.5 or above. The Web Platform
Installer will allow you to install the latest version of the Azure SDK for the Visual Studio version you
have installed. You can download the Web Platform Installer at the following link:
http://www.microsoft.com/web/downloads/platform.aspx.
The following example is based on Visual Studio 2013 Update 4 with Azure SDK 2.7. The authoring
experience in Visual Studio 2015 is virtually identical.
Step 1: Create a New Project
1. Start Visual Studio 2013.
2. Click File, then New, then Project.
3. Expand Templates, click Cloud, and select Azure Resource Group as shown in Figure 7.3.51.
6. The project is laid out as follows: on the left you have the JSON Outline, in the center the JSON
file, and on the right the Solution Explorer for the project you are authoring, as shown in
Figure 7.3.53.
3. Review the DeploymentTemplate.json and the JSON Outline as shown in Figure 7.3.55.
4. In the JSON Outline menu, right-click resources and click Add Resources.
5. In the Add Resource window, select Storage Account, type a name, and click Add.
6. Review the DeploymentTemplate.json and the JSON Outline for the addition
of the storage account.
7. In the JSON Outline menu, right-click resources and click Add Resources.
8. In the Add Resource window, select Windows Virtual Machine.
a. Type a Name in the space provided.
b. Enter the name for a new Storage Account.
c. Select a Virtual network/subnet, Click Add as shown in Figure 7.3.56.
9. In the DeploymentTemplate.json, in figure 7.3.57, you will see the updated JSON added for the
Windows VM we have just added with the visual authoring aids in Visual Studio.
5. In the Edit Parameters window, populate the fields, select the appropriate items for the
Storage Account and OS Version as shown in Figure 7.3.60, and click Save.
6. Click Deploy.
7. When prompted, enter a password for the admin account and click OK.
8. Observe the Output window, as shown in Figure 7.3.61, for completion.
"type": "Microsoft.Compute/virtualMachines/extensions",
"apiVersion": "2015-05-01-preview",
"location": "[parameters('location')]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/',
parameters('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Security",
"type": "IaaSAntimalware",
"typeHandlerVersion": "1.1",
"settings": {
"AntimalwareEnabled": "true",
113
"Exclusions": {
"Paths": "C:\Users",
"Extensions": ".txt",
"Processes": "taskmgr.exe"
},
"RealtimeProtectionEnabled": "true",
"ScheduledScanSettings": {
"isEnabled": "true",
"scanType": "Quick",
"day": "7",
"time": "120"
}
},
"protectedSettings": null
}
TIP: For further samples of Azure VM extensions, check the Azure Quick Start Templates on Github and search
for Extensions. The Azure Quick Start Templates library is available on GitHub at
https://github.com/Azure/azure-quickstart-templates.
To do this from PowerShell, you can leverage the following sample code, shown here for enabling DSC:
$settings = @{
  "SasToken" = ""
  "ModulesUrl" = "https://mystoracct001.blob.core.windows.net/windows-powershell-dsc/InstallIIS.ps1.zip"
  "ConfigurationFunction" = "IISInstall.ps1\InstallIIS"
}
$protectedsettings = @{ "PlaceHolder" = "" }
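These hashtables would then be passed to an extension cmdlet to apply the configuration to the VM. A hedged sketch using Set-AzureVMExtension (the resource group, VM name, and handler version here are illustrative, and exact parameter names vary by Azure PowerShell module version):

```powershell
# Illustrative sketch: apply the DSC extension using the settings built above.
# ResourceGroupName, VMName, and TypeHandlerVersion are example values.
Set-AzureVMExtension -ResourceGroupName "VMResourceGroup" `
  -VMName "VM001" -Name "DSC" -Location "North Europe" `
  -Publisher "Microsoft.Powershell" -ExtensionType "DSC" `
  -TypeHandlerVersion "1.7" `
  -Settings $settings -ProtectedSettings $protectedsettings
```

The split between Settings and ProtectedSettings matters: values in ProtectedSettings (such as a SAS token) are encrypted and are not readable back from the VM's extension status.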
7.4 Summary
In this chapter, we discussed Azure Resource Manager (ARM) and the new deployment model for VM and
application deployment. We discussed VM sizing and configuration options of VM images in the Azure
Marketplace. We explored ARM deployment options for Windows and Linux VMs in depth, including via the
Azure Portal, Azure PowerShell, and the new ARM JSON templates. With a few deployments completed, we
explored additional configuration capabilities of ARM, including VM Extensions and PowerShell DSC.
In chapter 8, we will explore options for migration from on-premises data centers to Microsoft Azure from
planning to implementation, including tips and tricks to ensure your migration to Azure is a success.
Chapter 10:
Automation
and Self-Service
Explained simply, an IT service is a group of IT systems, people and processes required to deliver
value to a customer. A service provides value to the customer, but a service cannot deliver value to
the customer if it is not available. To make services available more quickly and service delivery more
reliable, you can use automation and self-service. While automation is a key driver to reliable and
scalable execution of repeatable processes, it is only a small component of delivering an IT service.
As an example, building a runbook to reset a user's password can be a great solution: quick and easy
for support engineers to use, instead of multiple administration tools. However, what is the real benefit
of automation if the customer still needs to call the service desk, and the support engineer still needs to
fill in the incident, start the runbook manually, and deliver the new password to the customer? In
this example, the most time-consuming part of the process is calling the service desk and creating the
incident. Resetting the password is a quick job with or without an automated runbook.
If you instead focus on delivering a service to the customer that substantially reduces manual effort,
eliminates human error, and reduces phone calls to the service desk, you can provide real business
value. As an example, a solution implementing a self-service portal where your customers can reset
passwords without the phone call to the service desk, and where the new password is delivered
automatically by a secured channel, provides closed-loop automation with measurable cost reduction.
A self-service portal is a good way to provide customers an easy contact channel to the service provider,
which in this example is the internal IT department. A self-service portal can be used to request services,
as well as to update existing work items and configuration items.
Self-service can just as easily backfire if the user experience is not designed for the audience who will use
the solution. A user in the same company can spend days thinking about whether they should
be running Windows x86 or x64, and whether they should have a 120 GB or a 240 GB hard disk. Perhaps
they do not install Windows often, and perhaps have no clue what a GB is. In this example, self-service can
actually become more expensive for the company, as the end-user may now waste time deciding on an
order instead of simply calling the service desk and saying "I need a standard desktop with MS Office."
Focus must always be on delivering an easy-to-use service to our customer, not just to automate for the
sake of automation. Self-service must be delivered in a way that the customer understands, for example
"Small PC or Large PC", abstracting technical terms they may not understand, like GB and GHz. When
delivering automation and self-service in a good balance, with focus on service delivery, a great value
can be provided to the customer and the business.
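As an illustration of this kind of abstraction, a portal back end could map customer-friendly choices to technical specifications. The option names and values below are invented for the example, not taken from any real solution:

```powershell
# Hypothetical catalog mapping friendly portal choices to technical specs
$offerings = @{
    "Small PC" = @{ OS = "Windows x64"; DiskGB = 120; Office = $true }
    "Large PC" = @{ OS = "Windows x64"; DiskGB = 240; Office = $true }
}

$choice = "Small PC"            # what the customer selects in the portal
$spec   = $offerings[$choice]   # what the automation actually provisions
"Provisioning $($spec.OS) with a $($spec.DiskGB) GB disk"
```

The customer only ever sees "Small PC" or "Large PC"; the GB and architecture details stay inside the automation.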
This chapter will present several self-service and automation solution options, with examples of how to
utilize these together to deliver value to users and the business.
In this chapter, you will learn about:
- Different self-service portals you might use when automating processes in Azure, as well as the pros
and cons of the different solutions in System Center 2012 R2 (System Center).
- System Center offers multiple runbook automation engines (three at the time of this writing). This
chapter will talk about the options available from Microsoft and possible use cases for each.
- Then, to help you better understand how to use the two together, we will discuss integration
between self-service portals and automation engines.
- Finally, we will cover how to address a couple of common scenarios and you will create two Azure
Automation runbooks, including one in the new graphical authoring interface.
As with previous chapters, code samples presented within the chapter will be downloadable from the GitHub
repository associated with this book. Look for the Download the Code sections throughout the chapter.
10.1.1 SharePoint
SharePoint is a server product from Microsoft that can be used to create web sites, often non-Internet
facing web sites, such as project websites and Intranets. SharePoint can be used to store, sort, share and
give access to information from almost any device. SharePoint includes a free tool named SharePoint
Designer that can be used to design, create and adapt web sites, without any deep development skills.
SharePoint comes in two flavors: a free version, named SharePoint Foundation, and a licensed version,
named SharePoint Server. SharePoint Server is built for large enterprises with advanced features,
including business intelligence and advanced search.
Often companies already have one or even multiple SharePoint installations. Customers are used to
using the SharePoint interface and the company may have internal resources to adapt and customize
SharePoint. SharePoint is also a reliable technology that has been on the market for a long time, and
there are a number of add-ons and well-tested solutions for any given scenario. This gives SharePoint an
edge when implementing a self-service portal where none exists. However, a disadvantage of running
SharePoint is that there is no real out-of-box integration or connector to System Center 2012 R2 Service
Manager (Service Manager or SCSM) or the Microsoft automation platforms, which include System Center
Orchestrator (Orchestrator), Service Management Automation (SMA) and Azure Automation. However,
it is possible for IT Pros to develop this integration with System Center. You will learn how to leverage
SharePoint for self-service later in this chapter, with focus on self-service for Azure services.
Second, App Controller only supports Azure Service Management (Azure v1), so it cannot leverage the
new features of Azure Resource Manager (Azure v2). In fact, App Controller is not listed on the System
Center 2016 roadmap, so it will no longer be an option for managing on-premises resources either.
If your organization is not already using App Controller, the authors recommend against introducing it
into your environment.
TIP: Gridpro, a Microsoft partner, sells a product called Request Management for WAP, which adds
integration between Service Manager and the WAP portal. With the Gridpro add-on, you can publish the
service catalog into the portal and also work with work items, like incidents and service requests, from the
WAP portal. More info can be found on the Gridpro website at www.gridpro.se.
Figure 10.5.2 shows a service request that is in progress. The Gridpro Request Management solution
uses the WAP portal, providing ITSM integration for organizations using WAP. The native Service
Manager portal can still be used side-by-side with the WAP portal to provide self-service to users outside
of IT, who would not typically use the WAP portal.
Cireson is a Microsoft partner founded in San Diego with services partners around the world. The
Cireson Portal, shown in Figure 10.5.3 and 10.5.4, totally replaces the native self-service portal in
Service Manager. The portal provides features like service catalog, request management, work item
management and knowledge base. The Cireson portal is HTML 5 and does not require WAP, SharePoint
or Silverlight, making it browser independent and mobile device friendly.
Additionally, the Cireson Portal offers functionality enabling service desk analysts and change managers
to perform their job duties entirely in a web browser, dramatically reducing the need for the SCSM
Console. To augment the native reporting feature of Service Manager, the Cireson Portal also includes a
few built-in dashboards.
10.6 Self-Service Portal Summary
The following table shows a summary of the different portal alternatives in System Center 2012 R2 for
enabling self-service. It is important to evaluate each alternative for your specific use cases and the
functionality requirements of each.
Table: advantages and disadvantages of the self-service portal alternatives: SharePoint, App Controller,
Service Manager Portal, 3rd party (commercial) portal, in-house developed portal, and Windows Azure Pack.
10.7 Automation
With Orchestrator, Service Management Automation (SMA) and Azure Automation, you can automate
almost any manual process. However, before you try to do that, there are a number of questions that
should be asked and answered to ensure you are focusing on automating the processes that will
provide the most benefit to the organization.
The process of identifying the best candidates for runbook automation requires examining both the
financial benefits (return-on-investment) to the business, as well as the technical aspects of process
automation. As runbook automation attempts to replace human effort, you will find some processes
much easier to automate than others. By identifying the best candidates from both financial and
technical perspectives, you will increase the likelihood of successfully automating processes that offer
clear value to the business.
10.8 Identifying Candidates for Automation (business perspective)
Regardless of which automation engine you choose, you should review the following questions to help
identify which processes, if automated, would offer the greatest return on investment.
Which processes are the most time-consuming?
With Orchestrator, it is easy to automate even complex scenarios, integrating with components
throughout the data center and into the cloud. However, as a start, it is generally a much better idea
to look at the processes and tasks that are the most time-consuming today. This kind of information
can often be found in the reporting tool for the service desk.
Which service levels are suffering the most?
Look into which service level agreements (SLAs) the organization most often breaches or deliveries
Look into which service level agreements (SLAs) the organization most often breaches or deliveries
that seem to always push very close to deadlines. Remember, you do not necessarily need to
automate 100% of a multi-step process to realize the benefits of process automation. Can some of
the steps leading to SLA breach be automated? Can some of the steps in the process be automated
to speed up the delivery?
Which incidents recur most frequently?
Common incidents, for example Windows services that frequently stop unexpectedly, are good
candidates for automation.
Which incidents are most expensive for the company?
When an incident occurs that affects many users, such as a file server cluster going offline, there
is an inherent urgency to resolve the issue. Automating these incident resolutions can have an
exponentially greater payback based on reduction in work time lost across a large group of users.
Which processes result in significant delays for your customers?
For example, a project manager requests a new project site in SharePoint to kick off a new project. If
that work item sits in a support queue for a couple of days, it potentially delays the work of an entire
project team. In this case, even if it takes an engineer only 5-10 minutes to complete a manual task
to create the SharePoint site, this may be a good candidate for automation to eliminate the lag time
between request submission and fulfillment.
Identifying the best candidates from a financial perspective is only the first step. You must then identify
feasibility and level-of-effort from a technical perspective.
10.9 Identifying Candidates for Automation (technical perspective)
There are several questions that should be answered before authoring begins to identify the technical
feasibility of automating candidate processes. In evaluating the technical aspects of feasibility and
level-of-effort, you will notice the financial element involved. We are attempting to identify technically
suitable candidates for which the effort involved makes financial sense.
Is this task well suited for automation?
Many tasks can be executed with Microsoft automation engines, but you always need to ask if the task
is well suited for automation. At the end of the day, Orchestrator can install a software package on all your
Windows client PCs, but this task is most likely better done with Configuration Manager. Try to focus your
use of automation on augmenting the capabilities of your existing tools, not replacing them.
Development cost and effort?
Take an example where you have a task that you perform every quarter, and it takes about an hour
to complete the task manually. You want to build a solution to automate the task, but it would take
around 40 hours to develop and test the solution. Return on the investment of the development
costs would be around 10 years! Automating this process would not be a good investment of your
time. Always estimate development hours and return on investment time before starting. Also,
always plan for unexpected problems and challenges, as these happen in the real world. As a rule,
always add 25% on the development time for unexpected problems.
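The arithmetic behind this example can be sketched as follows, using the numbers from the paragraph above:

```powershell
# Break-even estimate for a quarterly task that takes one hour manually
$devHours          = 40      # estimated development and testing effort
$hoursSavedPerYear = 4       # one hour saved per quarter
$breakEvenYears    = $devHours / $hoursSavedPerYear            # 10 years
$withBufferYears   = ($devHours * 1.25) / $hoursSavedPerYear   # 12.5 years with the 25% buffer
"Break-even after about $breakEvenYears years ($withBufferYears with buffer)"
```

With a ten-year (or longer) break-even, this task clearly fails the financial test.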
Ports and permissions required?
Depending on integration and product, the solution will require access to different ports with
different accounts. Often these accounts require a high level of permissions, such as scenarios around
provisioning and de-provisioning in VMM, Active Directory and Azure. It is important to address needs
for both network firewall ports and service account requirements early in your planning process.
Closing the loop with ITSM integration.
At the beginning of this chapter, we discussed how important it is to look at automation from a
service delivery perspective. In an early stage of an automation project, you should plan for integration
with the organization's ITSM tool, such as Service Manager.
Once you have found good automation candidates, you can, as a final filter, look into the number of
exceptions and variances. More exceptions and variances will make the process more difficult to
automate. For example, maybe you are building automation for new virtual servers and you use only
one version of Windows Server and one version of SQL Server. The number of combinations would be
one. In this case, it is a very easy scenario to automate.
Think about the same scenario, but with three different Windows Server versions and two different
SQL Server versions. That will result in six different combinations. This would give you much more to
consider when building the automation around the process.
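The combination count in this example is simply the product of the version counts:

```powershell
# Each Windows Server version can be paired with each SQL Server version
$windowsVersions = 3
$sqlVersions     = 2
$combinations    = $windowsVersions * $sqlVersions   # 6 paths to build and test
```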
Automation does not transform a bad process into a good process. Validated, well-documented
processes are key to effective automation.
Table: pros and cons of Orchestrator, SMA and Azure Automation, including whether each engine runs
locally or in Azure.
Note: For Orchestrator, there are many workarounds in the community for all the cons, such as state tracking,
checkpointing and parallel processing. If you choose Orchestrator as your automation solution and would
like more information, see Best Practices for Authoring and Managing Orchestrator at
https://channel9.msdn.com/events/MMS/2013/SD-B317 on the Microsoft Channel 9 website.
When choosing an automation platform for automating in Azure, you should start by investigating
whether a cloud-based tool can be used, as there are often legal requirements or privacy concerns that
must be addressed in the decision process. Azure Automation has many features of the on-premises solutions,
as well as some not available in the on-premises options. It is also the platform on which Microsoft
will focus the most development effort in the future. If Azure Automation is not an option for your
organization, evaluate the PowerShell scripting skills of your team. If you have limited PowerShell skills
and need to get started with automation quickly, the graphical authoring feature of Orchestrator makes
it a better option. If you have PowerShell skills on your IT Operations team, consider SMA, as it is the
most current on-premises automation platform from Microsoft. If you do select Orchestrator, know that
there will come a time when you need to migrate from Orchestrator to SMA or Azure Automation for
continued support from Microsoft.
Figure 10.10.1 shows the Orchestrator Runbook Designer. The Runbook Designer is used to author
runbooks in Orchestrator.
Figure 10.10.2 shows the activity pane with all the different activities and product-specific groups of
activities, called integration packs.
A great benefit of the Runbook Designer is that no development skills are required to author
integration and automation. The runbook author can drag and drop activities into the workspace and
connect the activities with configurable links (called smart links). The runbook author then opens the
properties of each activity to configure the properties. Figure 10.10.3 shows filter properties of a Get
User activity that lists users from Active Directory.
All Orchestrator activities publish data output to a shared data area, called the data bus. Activities
executing later within the same runbook can read data from the data bus, and use it as input. In Figure
10.10.3, you can see that the value is contained within curly brackets {}. This means that the activity is
using a dynamic value from the data bus. The data bus is a key component of Orchestrator, used to
build runbooks that respond dynamically based on runtime conditions. For example, if Activity A in
Figure 10.10.4 is a Read File activity, it will publish information like File Path, File Size and Filename to the
data bus. Activity B and Activity C in Figure 10.10.4 can then read this data and use it as input.
Figure 10.10.5 shows how you can select published data from earlier activities within the runbook.
Table 10.10.1 shows the key components of an Orchestrator environment, with a brief explanation of
the function of each.
Component
Description
Management Server
The Management Server is a layer between the database and the Runbook Designer. The Management
Server is only needed when authoring new runbooks.
Runbook Server
The Orchestrator Runbook Server is the server that executes runbooks. For example, if you have
built a runbook that integrates with Service Manager, it is the runbook server that connects to
the Service Manager management server. You can install multiple runbook servers to support running
runbooks at large scale, to provide fault tolerance, or for scenarios where you need runbook servers
in different network zones or at different customers.
Orchestrator Database
The Orchestrator database is a Microsoft SQL Server database that contains all settings, runbooks, logs
and the status of runbooks. The database is critical for the environment and can be clustered for fault
tolerance.
Runbook Designer
The Runbook Designer is the console, shown earlier in Figure 10.10.1, that is used to author runbooks.
Runbook Tester
Runbook Tester is a console that can be used to test runbooks. A test is not a dry run, as the
runbook will actually run; however, the Runbook Tester can be used to step through a runbook and verify
each activity in a controlled manner. Two other important things to know about Runbook Tester
are that it runs the runbook on the computer where Runbook Tester is running, not on the
Runbook Server, and that it runs the runbook with the account running Runbook Tester, not the
service account on the Runbook Server.
Orchestration Console
The web-based Orchestration Console can be used to start, stop and check the status of runbooks. The
Orchestration Console supports security roles and can be used to grant a group or user the right to
start one or more runbooks.
Orchestrator Web Service
The Web Service is a Representational State Transfer (REST)-based service that lets applications
connect to Orchestrator to start, stop or get information about runbooks. The Orchestration Console
uses the web service to connect to the Orchestrator database.
Deployment Manager
Deployment Manager is a tool that can be used to deploy new Orchestrator components, for
example integration packs, runbook servers and the Runbook Designer. Often these deployment
operations are blocked by firewall ports, in which case you can install each component manually instead.
TABLE 10.10.1 ORCHESTRATOR COMPONENTS AND DESCRIPTIONS
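The Orchestrator web service in Table 10.10.1 is an OData endpoint, so you can also call it directly. A minimal sketch, assuming a server named SCO01 and the default web service port of 81:

```powershell
# List runbooks through the Orchestrator web service (returns an ATOM feed)
$uri  = "http://sco01:81/Orchestrator2012/Orchestrator.svc/Runbooks"
$feed = Invoke-RestMethod -Uri $uri -UseDefaultCredentials
# $feed contains one entry per runbook; starting a job is a POST to the
# /Jobs collection with the runbook's GUID in the request body.
```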
For more information about Orchestrator architecture, see the following article on the Microsoft
TechNet website: http://technet.microsoft.com/en-us/library/hh420377.aspx.
Figure 10.10.8 shows the SMA runbook authoring interface. Because this interface is not a full code
editor, most administrators use the PowerShell ISE for authoring SMA runbooks.
Table 10.10.2 shows all of the components of an SMA environment. All components can be deployed on
a single server or distributed on multiple servers.
As you can see in table 10.10.2, WAP is not a required component for SMA. However, without WAP there
is no graphical interface for SMA. Instead, all authoring, configuration and administration have to be
done through PowerShell (the PowerShell prompt, the PowerShell ISE, and so on).
Component
Description
Web service
The web service is the primary channel into SMA. WAP uses the web service,
and you can use the web service to communicate with SMA from PowerShell
and Orchestrator.
Runbook worker
The runbook worker is the component that executes runbook jobs.
PowerShell module
The PowerShell module for SMA is an important component, as you can perform any SMA task from
PowerShell, for example importing and exporting runbooks.
Database
The SQL database stores all runbooks, settings, activities, runbook jobs and
integration modules.
TABLE 10.10.2 SMA COMPONENTS AND DESCRIPTIONS
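As a sketch of working with SMA purely from PowerShell through the web service in Table 10.10.2, assuming a server named WAP01 (the endpoint, port and runbook names here are illustrative):

```powershell
# The SMA web service endpoint; 9090 is the default port
$endpoint = "https://wap01"

# List the runbooks in the SMA environment
Get-SmaRunbook -WebServiceEndpoint $endpoint -Port 9090

# Start a runbook, passing its parameters as a hashtable
Start-SmaRunbook -WebServiceEndpoint $endpoint -Port 9090 `
    -Name "Deploy_New_VM" `
    -Parameters @{ VMName = "VM01"; InstanceSize = "Small" }
```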
The high level application architecture and component communication in SMA is illustrated in figure 10.10.9.
For more information about Service Management Automation architecture, see
http://technet.microsoft.com/en-us/library/dn469259.aspx on the Microsoft TechNet website.
Source Control, shown in Figure 10.10.10, is a feature that integrates with GitHub. GitHub is a
collaboration platform for code management, and is commonly used in open source projects. With
the integration to GitHub, you can centrally store all your code (runbooks) and track changes. You can
import and export versions between Azure Automation and GitHub quickly and easily.
By default, Runbooks in Azure Automation are executed on runbook worker servers provided by
Microsoft in Azure. These servers cannot access resources inside of a VM or your local datacenter unless
you provide access over the Internet (Azure Automation is not Azure Site-to-Site VPN or ExpressRoute
aware). However, in many scenarios you need to execute runbooks on local servers, such as when
creating an account in Active Directory Domain Services on-premises. The hybrid worker, shown
in Figure 10.10.10, is a feature in Azure Automation to execute runbooks on a server in your Azure
subscription or even your local data center. Hybrid workers use the Microsoft Management Agent
(installed with Operations Management Suite) and do not require any open firewall ports from the Internet
to the local network. Instead, all communication is outgoing traffic from the agent to Azure Automation
over port 443. It is possible to target a runbook to a group of hybrid workers, and then any member of the
hybrid worker group will execute the runbook.
Additional Reading: Though we will provide a couple of examples in this chapter, extensive coverage of
Azure Automation is outside the scope of this book. You can read more about Azure Automation and the new
hybrid worker role on the Microsoft website in Azure Automation Hybrid Runbook Workers at
https://azure.microsoft.com/en-us/documentation/articles/automation-hybrid-runbook-worker/.
The process described in these steps minimizes development effort while, at the same time, leveraging
each Microsoft automation engine for tasks to which it is well suited. Updating a SharePoint list is easy
to do with the Orchestrator integration pack for SharePoint as compared with writing an equivalent
runbook in PowerShell workflow for SMA. For automating complex deployments in Azure, the features of
PowerShell workflow leveraged in SMA provide greater control through parallel and serial processing, as
well as the ability to write checkpoints when major steps are completed within the runbook.
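A minimal PowerShell workflow sketch of the two features mentioned above, parallel processing and checkpoints (the activities are placeholders, not a real deployment):

```powershell
workflow Deploy-Environment {
    parallel {
        # These two steps run at the same time
        InlineScript { "Creating storage account" }
        InlineScript { "Creating virtual network" }
    }
    Checkpoint-Workflow   # persist workflow state after the major step
    # A suspended or interrupted job resumes from the last checkpoint
    InlineScript { "Creating virtual machine" }
}
```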
Invoking an SMA runbook is not complicated, as there is a Windows PowerShell module for SMA. Figure
10.11.4 shows the Windows PowerShell code needed to invoke a SMA runbook from Orchestrator. The
script shown performs the following tasks:
1. Creates a remote session to WAP01
2. Runs the Start-SmaRunbook cmdlet. The SMA runbook is named Deploy_New_VM and it has
two parameters, VMName and InstanceSize.
3. Both parameters are picked up from the Orchestrator data bus and forwarded to the SMA runbook.
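The exact code is shown in Figure 10.11.4, so treat the following as an approximate sketch of the steps above; server names are assumptions, and the curly-bracket values stand in for Orchestrator data bus subscriptions:

```powershell
# 1. Create a remote session to the SMA server
$session = New-PSSession -ComputerName WAP01

# 2./3. Start the SMA runbook, forwarding the data bus values as parameters
Invoke-Command -Session $session -ScriptBlock {
    param($vmName, $instanceSize)
    Start-SmaRunbook -WebServiceEndpoint "https://wap01" `
        -Name "Deploy_New_VM" `
        -Parameters @{ VMName = $vmName; InstanceSize = $instanceSize }
} -ArgumentList "{VMName}", "{InstanceSize}"

Remove-PSSession $session
```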
In Figure 10.11.5, you can see that one parameter, VMName, is passed to the Update_Sharepoint
runbook. The PowerShell code for Update_Sharepoint is shown in Figure 10.11.6. The runbook uses
a script first written by Tiander Turpijn at Microsoft.
TIP: There is a blog post that describes triggering an Orchestrator runbook from a SMA runbook on the
Microsoft TechNet website at https://blogs.technet.com/b/privatecloud/archive/2013/12/11/calling-an-orchestrator-runbook-from-sma-part-2.aspx.
To follow the example, you need the GUID of each of your runbook parameters. To get them, you can
run the following SQL query against your Orchestrator database:
SELECT CUSTOM_START_PARAMETERS.UniqueID,
       CUSTOM_START_PARAMETERS.Value AS [Parameter Name],
       OBJECTS.Name AS [Activity Name],
       POLICIES.Name AS [Runbook Name]
FROM CUSTOM_START_PARAMETERS
INNER JOIN OBJECTS ON CUSTOM_START_PARAMETERS.ParentID = OBJECTS.UniqueID
INNER JOIN POLICIES ON OBJECTS.ParentID = POLICIES.UniqueID
In Figure 10.11.6, you can see that the runbook uses a user account named andersbe to invoke a
runbook named Update SharePoint in the \3. Azure\18\ folder. It connects to the Orchestrator server
named SCO01 and passes one parameter named Server.
As you can see in this example, you can use the best of both automation engines that System Center
2012 R2 delivers.
Step-by-Step
You can review the step-by-step process for interacting with the SharePoint web service from Orchestrator in
SharePoint list and choice columns and the SharePoint IP at http://contoso.se/blog/?p=3845.
TIP: You can also trigger an SMA runbook from an Orchestrator runbook through the SMA web service.
Tiander Turpijn, a Microsoft senior Program Manager, has shared an example of this on his blog
at http://blogs.technet.com/b/privatecloud/archive/2013/11/01/calling-an-orchestrator-runbook-from-sma-part-1.aspx.
Next, you will complete the following steps in the Service Manager Authoring Tool to create a custom class.
1. Start the Service Manager Authoring Tool.
2. In the Service Manager Authoring Tool, click File and select New.
3. In the New Management Pack dialog box, input the name for the new management pack, for
example Contoso.SMA. Click Save.
4. In the Management Pack Explorer, right-click Classes and select Create Other Class.
5. In the Base Class dialog box, select Activity and click OK.
6. In the Create Class dialog box, input Contoso.SMA.DeployVM, and click Create.
7. In the class properties list, delete the default property named Property_XX (Property_33 in the
sample environment), shown in Figure 10.12.1.
You will create two new properties on the new class, one for the VM name (VMName) and one for the
VM size (VMSize). You will configure the VMSize property with data type List. In Service Manager, you
will configure a list of values used to select the VM size.
8. Click Create property, input VMName as the internal name, and click Create.
9. Click Create property, input VMSize as the internal name, and click Create.
10. In the Details pane for the VMSize property, change Data Type to List.
11. In the Select a list dialog box, click Create List.
12. In the Create List dialog box, input VMSizeList as internal name and VM Size as Display name. Click
Create.
13. In the Select a list dialog box, select the new VM Size list and click OK. Your two new properties
should now look like Figure 10.12.2.
14. In the Management Pack Explorer, right-click Workflows and select Create.
15. In the General page of the Create Workflow Wizard, input ContosoSMAInvokeRunbook
as name. Click Next.
16. On the Trigger Condition page, select Run only when, and then click Next.
17. On the Trigger Criteria page, click Browse and select the Contoso.SMA.DeployVM class. Replace
Change event with When an object of the selected class is updated, then click Additional Criteria.
18. In the Pick additional criteria dialog box, click the Changed To tab, and add criteria as shown in
Figure 10.12.3 then click OK.
[Activity] Status equals In Progress
22. Once you click Close, the workflow designer will be displayed. Add a Windows PowerShell Script
activity to the workflow, as shown in Figure 10.12.4.
23. You will configure the Windows PowerShell activity to run a PowerShell script that starts the SMA
runbook. Select the Windows PowerShell activity, and in the Details pane click the ellipsis for Script
Body, shown in Figure 10.12.5.
24. To configure the Script Activity, paste the following script into the Script Body text field.
Import-Module SMLets
$size1 = $InstanceSize
$size2 = $size1 -replace "{", ""
$size3 = $size2 -replace "}", ""
$size4 = Get-SCSMEnumeration -Id $size3
$sizeName = $size4.DisplayName
26. The activity and management pack are now ready to import. Great work! Save the management pack
in the Service Manager Authoring Tool.
27. Copy workflow.dll and ContosoSMAInvokeRunbook.dll from the management pack folder to the
Service Manager installation folder (C:\Program Files\Microsoft System Center 2012 R2\Service
Manager) and then restart the Microsoft Monitoring Agent on the Service Manager management
server running the workflow.
28. You can now open the SCSM Console and import the management pack.
29. In the SCSM Console, browse to Library and Lists, and open the VM Size list.
30. In the List Properties dialog box, add the following items, which are all Azure VM sizes, and click OK.
ExtraSmall
Small
Medium
Large
ExtraLarge
TIP: For more information about Microsoft Azure VM sizes look at http://www.windowsazure.com/en-us/
pricing/details/virtual-machines/
31. In the SCSM Console, browse to Library and Templates, click Create Template.
32. In the Create Template dialog box, input a name, for example Contoso SMA Deploy Azure VM.
Select Contoso.SMA.DeployVM as the Class and click OK.
33. In the Contoso.SMA.DeployVM Properties dialog box, input Contoso SMA Deploy Azure VM as
Display Name and click OK.
34. You can now use the new activity anywhere you like, for example in a service request template, as
shown in Figure 10.12.7.
Download the Code
You can download the full script from GitHub at https://github.com/insidemscloud/AzureIaasBook, in
the \Chapter 10 directory. The file name is SCSM__Deploy_New_VM.ps1.
In the WAP portal, shown in Figure 10.12.8, you can see the SMA runbook start and you can also see the
input parameters coming from Service Manager:
This example is written in a simplified fashion, with no complicated configuration or requirements. For
example, you might want to add error handling in the script, to handle situations when the SMLets
cannot be loaded or Service Manager cannot be contacted. In both those cases, you would want to
set the activity status to failure in Service Manager. It is also a good practice to seal management packs
that include a class structure, because a management pack cannot reference a class in an unsealed
management pack; if the class structure is in an unsealed management pack, other management packs
cannot use its classes.
4. Once the requested server or servers are deployed, the runbook can update the service request and
attach a remote desktop connection to the ticket or send the connection file to the requester in an
automated e-mail.
There are several advantages to this solution over stand-alone PowerShell scripting. First, developers
can now connect to the new machines in a secure way without having to think about which cloud
the server is running in. Additionally, this solution brings control and auditing with the work item in
Service Manager. Finally, it is also possible to generate chargeback and showback reports based on the
information in Service Manager.
No solution is perfect, and this one is no exception. One disadvantage of this solution is that developers
cannot shut down VMs. They can shut down a server within the Windows OS, but when an Azure VM is
shut down in this manner, resources are not returned to the pool and Azure will continue to charge for
the VM until it is shut down outside the OS and the resources are released. Shutting down a VM in the
Azure portal or via PowerShell shuts down the VM and deallocates all resources, including IP address.
The VM is placed into the Shutdown deallocated state.
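To release the resources from PowerShell, you can use the Azure v1 module; a minimal sketch (the service and VM names are placeholders):

```powershell
# Stop-AzureVM deallocates the VM by default (Stopped deallocated state),
# releasing compute resources and the IP address so billing stops;
# -StayProvisioned would keep the resources allocated (and billed).
Stop-AzureVM -ServiceName "ContosoSvc" -Name "VM01" -Force
```

The -Force switch is needed when stopping the last VM in a cloud service, as the deployment's virtual IP address will be released.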
You could publish a new service in the self-service portal that lists all Microsoft Azure servers that the
portal user owns, and let the user order a shutdown of one of them. Service Manager can then invoke
a runbook that shuts down the server and releases the VM resources. During step 3, you can configure
the runbook to create an object in the Service Manager CMDB that can later be used for chargeback and
showback at the server level if desired.
# The workflow name and the credential asset name below are illustrative;
# the full script is in the downloadable SvcRestart_HybrdWorker.ps1 file.
workflow SvcRestart {
    param (
        [Parameter(Mandatory=$false)]
        [string] $servername,
        [Parameter(Mandatory=$false)]
        [string] $servicename
    )
    $login = Get-AutomationPSCredential -Name "ContosoCred"
    $restart = InlineScript {
        $s = New-PSSession -ComputerName $using:servername `
            -Credential $using:login
        Invoke-Command -Session $s -ScriptBlock {
            param($service) Restart-Service -Name $service
        } -ArgumentList $using:servicename
        Remove-PSSession $s
    }
}
7. Click the Test pane and test the runbook. In the Test blade, change Run on from Azure to Hybrid
Worker to execute the runbook on the Hybrid Worker. Once the runbook is tested, click Publish to
publish the runbook.
You have now installed a Hybrid Worker and authored a runbook to restart a Windows service on a
remote physical or virtual machine. Both server and service name are input as parameters to the runbook.
Download the Code
You can download the full script from GitHub at https://github.com/insidemscloud/AzureIaasBook, in
the \Chapter 10 directory. The file name is SvcRestart_HybrdWorker.ps1.
6. In the graphical authoring space, add cmdlets and smart links according to Figure 10.14.1.
7. C
lick on the Add-AzureAccount activity and configure it according to the following settings. When
you click on an activity a configuration pane on the right side will appear.
8. Click Parameters, then under Parameter sets, click, User.
9. I n the Activity Parameter Configuration blade, under Parameters, click CREDENTIAL, as shown in
Figure 10.14.2.
10. In the Data source dropdown, select Credential asset, as shown in Figure 10.14.3. Then, choose the
credential you created in the previous steps. Click OK, and then OK again to save your selection.
11. Next, click on the Get-AzureVM activity and click on Parameters, Parameter sets, and then
ListAllVMs, as shown in figure 10.14.4. Click OK to save your changes.
13. In the Activity Parameter Configuration blade, click Choose a parameter set, as shown in Figure 10.14.5.
14. On the Parameter Set blade, choose ByName. This will cause the NAME and SERVICENAME areas to
display a visual indicator that these parameters are mandatory, as shown in figure 10.14.6.
15. Next, click on NAME. In the Data source dropdown, choose Activity output.
16. From the Activity list, select Get-AzureVM.
17. In the box provided, enter Name, as shown in figure 10.14.7. Click OK to save your changes.
18. Now select SERVICENAME. In the Data source dropdown, choose Activity output.
19. From the Activity list, select Get-AzureVM.
20. In the box provided, enter ServiceName, as shown in figure 10.14.8. Click OK to save your changes.
21. Once all activities are configured, click Save and then Publish.
If you want to test the runbook before publishing it, you can click Test pane and test the runbook.
Remember that a test is not a dry run; the runbook will run normally.
WARNING: If the runbook is configured to change (add, update, or delete) anything, it will
implement the change during a test run as well.
22. You have now built the runbook. The next step is to schedule the runbook to run every evening at
22:00. On the ShutdownVM runbook blade, shown in Figure 10.14.9, click Schedule.
23. On the Schedule Runbook blade, click Link a schedule to your runbook, click Create a new
Schedule.
24. On the New Schedule blade, input the following settings:
Name: Every Day 2200
Starts: Enter your desired start date here
Recurrence: Daily
Runs every (number of days): 1
Click Create.
25. On the Schedule Runbook blade, verify that the new Schedule is selected, click OK.
You have now authored a runbook in the graphical authoring mode that lists all the Azure v1 VMs in
your Azure subscription and then shuts them all down. You also configured a schedule to trigger
the runbook every day at 22:00.
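The graphical runbook corresponds roughly to this textual sketch using the classic (v1) Azure cmdlets; the credential asset name is a hypothetical example:

```powershell
workflow ShutdownVM
{
    # Authenticate using a credential asset stored in the Automation account
    $cred = Get-AutomationPSCredential -Name 'AzureAdmin'
    Add-AzureAccount -Credential $cred
    # Stop every classic (v1) VM in the subscription and deallocate its resources
    Get-AzureVM | Stop-AzureVM -Force
}
```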
12.1.1 Backup
Backup processes are designed to serve one purpose: prevent data loss in the event of a catastrophic failure.
In the event of a failure, data can be retrieved and restored to a separate location from the original source.
Two metrics typically govern backup processes: Recovery Point Objective (RPO) and Recovery Time
Objective (RTO). The RPO defines how much data loss is acceptable, and therefore how frequently
backups are taken and how long they are kept; the RTO defines how long it takes to restore the data
to a fully functional state. For example, an RPO might be
defined to require a SQL server be backed up once a day, and those backups kept for the last 14 days
before they expire and the space is reclaimed. It might also require that once a month a full backup is
sent offsite. The RTO however, might be defined to require that any data from the last 14 days be able
to be restored within 30 minutes to avoid downtime, while data that needs to be restored from two
months ago needs to be restored within 24 hours.
As an example from the support matrix, DPM 2012 R2 supports protection of SQL Server 2014,
2012 SP2, 2012, 2008 R2, and 2008.
In the next section, we will look at the native data protection capabilities of Azure in the Azure Backup feature.
How it works
We should start with a brief description of how the service works under the hood. To back up an Azure VM,
you first need a point-in-time snapshot of the data. The Azure Backup service initiates the backup job at the
scheduled time, and triggers the backup extension to take a snapshot. The backup extension coordinates
with the Microsoft VSS service in the Azure VM to achieve consistency (Windows VMs only). Once
consistency is reached, the backup extension invokes the blob snapshot API of the Azure Storage service to
get a consistent snapshot of the disks of the virtual machine (VM), without having to shut it down.
After the snapshot has been taken, the data is transferred by the Azure Backup service into the backup
vault. The service identifies and transfers only the blocks that have changed since the last backup,
making backup storage very efficient. When the data transfer is complete, the snapshot is removed
and a recovery point is created. You can view this recovery point in the Azure management portal.
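The blob snapshot primitive that Azure Backup relies on can be illustrated with the classic Azure storage module. This is only an illustration; Azure Backup performs this step for you, and the storage account, container, and blob names below are hypothetical:

```powershell
# Hypothetical names; shows the blob snapshot step Azure Backup automates
$ctx  = New-AzureStorageContext -StorageAccountName 'mystorageacct' `
    -StorageAccountKey '<key>'
$blob = Get-AzureStorageBlob -Container 'vhds' -Blob 'vm-osdisk.vhd' -Context $ctx
# CreateSnapshot is a method on the underlying .NET CloudBlob object
$snapshot = $blob.ICloudBlob.CreateSnapshot()
$snapshot.SnapshotTime
```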
Prerequisites
The primary prerequisite to configuring backups is creating a backup vault. The backup vault is a
logical container that stores all the backups and recovery points that have been created over time. The
backup vault also contains the backup policies that will be applied to the VMs being backed up.
You can use the Quick Create option to create an Azure backup vault in only a few clicks, as you no
longer have to create and upload an x.509 v3 certificate. In the Azure Management Portal, click New >
Recovery Services > Backup Vault > Quick Create, as pictured in figure 12.3.1.
In case you need it, the step-by-step process for configuring a backup vault in the Azure management
portal is available on the Microsoft website in Azure virtual machine backup - Introduction at
https://azure.microsoft.com/en-us/documentation/articles/backup-azure-backup-create-vault/
Calculating data and cost for protected instances
Azure VMs that are backed up using Azure Backup will be subject to Azure Backup pricing. The
Protected Instances calculation is based on the actual size of the VM, which is the sum of all the data in
the VM, excluding the resource disk. You are not billed based on the maximum size supported for each
data disk attached to the VM, but on the actual data stored in the data disk. Similarly, charges for the
backup storage are also based on the amount of data stored with Azure Backup, which is the sum of
the actual data in each recovery point.
The billing does not start until the first successful backup is completed. At this point, the billing for both
storage and protected instances will begin.
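As a purely hypothetical illustration of the protected instance calculation (all numbers are invented):

```powershell
# Hypothetical VM: 30 GB of actual data on the OS disk,
# 250 GB of actual data on a data disk provisioned at 1023 GB
$osDiskUsedGB   = 30
$dataDiskUsedGB = 250
# Billing is based on actual data stored (280 GB here), not on the
# provisioned disk sizes; the temporary resource disk is excluded entirely.
$billableGB = $osDiskUsedGB + $dataDiskUsedGB
```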
Next, we will review the steps for configuring backup of Azure VMs using Azure Backup.
Step 1: Discover Azure Virtual Machines
The discovery process queries Azure for the list of VMs in the subscription, along with additional
information like the cloud service name and the region.
To trigger the discovery process, do the following steps:
26. Click on All Items, and then click on your backup vault. Then, click on the Registered Items tab.
27. In the workload type dropdown menu, choose Azure Virtual Machine, and click the checkbox to
the right to select it.
29. The discovery process can run for a few minutes while the VMs not already protected by Azure
Backup are being identified. Once the discovery process is complete, a toast notification appears at
the bottom of the portal window.
Step 2: Register Azure virtual machine
Before a VM can be protected, it needs to be registered with the Azure Backup service. The registration
achieves two primary goals:
To have the backup extension connected to the VM agent in the Azure VM.
To associate the VM with the Azure Backup service so backup policies can be applied.
Note: The backup extension is not installed during the registration step. The installation and update of the
backup agent is now part of the scheduled backup job.
Registration is typically a one-time activity. Upgrade and patching of the Azure Backup extension is
handled in the background by Azure without any user intervention or downtime. This relieves your system
administrators of the agent management overhead that is typically associated with backup solutions.
To register virtual machines, complete the following steps:
1. Navigate to the backup vault, which can be found under Recovery Services in the Azure portal,
and click on the Registered Items tab
2. In the workload type dropdown menu, choose Azure Virtual Machine, and click the checkbox at
the lower right of the window to select it.
3. Click on the Register button at the bottom of the page.
4. In the Register Items pop-up, choose the VMs that you would like to register.
NOTE: If there are two or more VMs with the same name, use the cloud service to distinguish between the VMs.
The register operation allows you to select and register multiple VMs at once. This substantially reduces
the one-time effort spent in preparing the VM for backup. For each VM you register, Azure Backup
completes the following tasks:
A job is created for each VM that should be registered. The toast notification shows the status of this
activity. Click on View Job to go to the Jobs page.
The VM also appears in the list of registered items and the status of the registration operation is shown.
Once the operation is completed, the status in the portal will change to reflect the registered state.
NOTE: Only VMs that are not yet registered and are in the same region as the backup vault will show up.
3. This will bring up a Protect Items wizard where the VMs to be protected can be selected. If there
are two or more VMs with the same name, use the cloud service to distinguish between the VMs.
As with the register operation, the protect operation allows you to select and protect multiple VMs at once.
NOTE: Only the VMs that have been registered correctly with the Azure Backup service and are in the same
region as the backup vault will show up here.
4. In the second screen of the Protect Items wizard, choose a backup and retention policy
to back up the selected VMs. Pick from an existing set of policies or define a new one (Create new),
as shown in figure 12.3.7.
5. Click on the checkbox at the lower right of the window to save your changes.
NOTE: For preview, up to 30 days of retention and a maximum of once-a-day backup is supported.
In each backup vault, you can have multiple backup policies. The policies contain the details about
backup schedule and retention. For example, one backup policy could be for daily backup at 11:00PM,
while another backup policy could be for weekly backup at 2:00AM. While each backup policy can have
multiple VMs that are associated with the policy, a VM can be associated with only one policy at any given
point in time. Retention options include backup retention for daily, weekly, monthly and yearly backups.
6. A job is created for each VM to configure the protection policy and to associate the VM to the policy.
Click on the Jobs tab and choose the Configure Protection option in the Operation dropdown to
view the list of Configure Protection jobs.
Once completed, the VMs are protected with a policy and must wait until the scheduled backup time
for the initial backup to be completed. The VM will now appear under the Protected Items tab and will
have a Protected Status of Protected (pending initial backup).
NOTE: Starting the initial backup immediately after configuring protection is not available as an option today.
At the scheduled time, the Azure Backup service creates a backup job for each VM that needs to be
backed up. Click on the Jobs tab to view the list of backup jobs. As a part of the backup operation, the
Azure Backup service issues a command to the backup extension in each VM to flush all writes and take
a consistent snapshot.
Once completed, the Protection Status of the VM in the Protected Items tab will show as Protected.
Viewing Backup Status and Details
Once protected, the VM count also increases in the Dashboard page summary. In addition, the
Dashboard page shows the number of jobs from the last 24 hours that were successful, failed, and still
in progress. Clicking on any one category will drill down into that category in the Jobs page.
For guidance on troubleshooting common errors with Azure Backup, see "Troubleshooting errors" of
the "Back up Azure virtual machines" page at https://azure.microsoft.com/en-us/documentation/
articles/backup-azure-vms/#troubleshooting-errors.
3. The portal will generate a vault credential using a combination of the vault name and the current
date. Click Save to download the vault credentials to the local account's downloads folder, or select
Save As from the Save menu to specify a location for the vault credentials.
4. After creating the Azure Backup vault, you will install the Microsoft Azure Recovery Services agent
on each of your on-premises servers (Windows Server or Windows client) to enable backup of
data and applications to Azure.
5. In the Azure Portal, click Recovery Services, then select the backup vault that you want to register
with a server. The Quick Start page for that backup vault appears.
6. On the Quick Start page, click the For Windows Server or System Center Data Protection Manager
or Windows client option under Download Agent, as shown in figure 12.3.9. Click Save to copy it to
the local machine.
7. Once the agent download completes, double-click MARSAgentInstaller.exe to launch the installation
of the Azure Backup agent. Choose the installation folder and scratch folder required for the agent.
The cache location specified must have free space equal to at least 5% of the backup data.
The Azure Backup agent installs .NET Framework 4.5 and Windows PowerShell (if it is not available
already) to complete the installation.
8. If you use a proxy server to connect to the internet, in the Proxy configuration screen, enter the proxy
server details. If you use an authenticated proxy, enter the user name and password details in this screen.
9. On the Installation screen, click Install. Once the agent is installed, click the Proceed to
Registration button to continue with agent registration in Azure.
Step 2: Register the Agent
1. On the Vault Identification screen, browse to and select the vault credentials file you downloaded
previously, as shown in figure 12.3.12.
Note: The vault credentials file is valid only for 48 hours after it is downloaded from the portal.
2. On the Encryption setting screen, you can either generate a passphrase or provide a passphrase
(minimum of 16 characters), as shown in figure 12.3.13.
Note: Remember to save the passphrase in a secure location, because a backup copy is not stored in Azure.
3. Click Finish. When agent registration is complete, click the Close button to launch the Microsoft
Azure Recovery Services Agent.
Step 4: Configure Backup and Retention
1. In the Actions pane, click Schedule Backup, as shown in figure 12.3.14.
6. On the Select Retention Policy screen, set desired daily, weekly, monthly and yearly retention
policies, then click Next.
7. On the Choose Initial Backup Type screen, select Automatically over the network or Offline
Backup, then click Next.
8. On the Confirmation screen, review your selections and click Finish.
Note: The initial backup will happen at the first scheduled time. If you want to take a backup immediately,
you can click the Back Up Now menu item.
12.4.1 Overview
In Windows Server 2012, Microsoft introduced Hyper-V Replica. This technology is built into the
Hyper-V hypervisor role, enabling encrypted replication of VM data and configuration from one
Hyper-V host to another. Replication occurs at intervals of 30 seconds, 5 minutes, or 15 minutes. This
helps to ensure that standby servers are always recent copies of mission critical servers, shortening the
recovery time in the event of a disaster. Replica VMs can be configured to utilize the same IP addresses
as the source, or alternatively, use a different IP address space. Additionally, recovery points can be
created, enabling recovery to an earlier point in the day (up to the last 24 hours). This functionality is all
built-in free of charge in Windows Server 2012 and later.
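Replication can be enabled per VM with the Hyper-V PowerShell module. A minimal sketch, run on the primary host; the host, VM name, and certificate thumbprint placeholder are hypothetical:

```powershell
# Certificate authentication gives encrypted (HTTPS) replication;
# frequency may be 30, 300, or 900 seconds on Windows Server 2012 R2
Enable-VMReplication -VMName 'SQL01' `
    -ReplicaServerName 'hv-replica01.contoso.local' `
    -ReplicaServerPort 443 `
    -AuthenticationType Certificate `
    -CertificateThumbprint '<thumbprint>' `
    -ReplicationFrequencySec 300 `
    -RecoveryHistory 24
# Kick off the initial copy of the virtual hard disks to the replica host
Start-VMInitialReplication -VMName 'SQL01'
```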
Of course, in the event of a disaster, someone needs to make the decision to switch over to the replicas
and initiate the process. Often, applications must be failed over in a specific order to account for
dependencies (e.g. SQL services require Active Directory to authenticate service and user accounts),
opening up the possibility of human error in the failover process. In the event of a disaster, the human
factor can become a bottleneck to speedy recovery.
Fortunately, Azure Site Recovery (ASR) drastically simplifies the failover process, enabling administrators
to create groups of VMs that failover together, enabling single-click orchestrated failover in the event of
a data center going offline.
ASR leverages System Center Virtual Machine Manager (VMM), by monitoring the VMM servers, and
replicating configurations, snapshots, and data from one location to the destination. ASR can replicate
the on-premises data center VMs from Hyper-V to Azure IaaS, enabling cost savings by eliminating
the need for a second physical data center. This makes ASR a comprehensive, automated, and highly
capable disaster recovery tool.
ASR has a few unsupported scenarios you should be aware of:
Unified Extensible Firmware Interface (UEFI)/Extensible Firmware Interface (EFI) boot is not supported.
BitLocker encrypted volumes are not supported.
Clustered servers are also not supported.
Volumes larger than 1023 GB cannot be protected.
Step-by-step guidance on configuring ASR, as well as a list of FAQs, is available on the Microsoft site at
http://azure.microsoft.com/blog/2014/08/05/azure-site-recovery-enables-one-click-orchestrated-failover-of-virtual-machines-to-azure/.
To back up databases from versions of SQL Server older than SQL Server 2012, you must download and
install the "Microsoft SQL Server Backup to Microsoft Windows Azure Tool". This tool enables backup
to Azure from SQL Server 2005, 2008 and 2008 R2 databases with encryption capabilities. You can
download this tool at http://www.microsoft.com/en-us/download/details.aspx?id=40740.
Create Credential
Backup Using the Backup Task (in SSMS)
Backup Using T-SQL BACKUP DATABASE Command
Once you have created a backup, you may be interested in how to recover a database using a backup
hosted in Azure storage. The restore process is covered in the Restoring Data from an Azure Blob
section later in this chapter.
12.5.2.1 Create Azure Storage Account
If you are not already logged into the Azure Management Portal, open a supported web browser and
browse to https://manage.windowsazure.com/, then sign in using your Azure account.
1. Click on STORAGE in the blue navigation pane on the left, as shown in figure 12.5.1.
4. In URL, enter a friendly name to provide a unique path, as shown in figure 12.5.3.
2. At the bottom of the screen, click Add to open the New container dialogue.
3. Enter a unique value in the Name field, and set the value of the Access dropdown to Private, as
shown in Figure 12.5.5 below. Click the checkmark to create the container.
4. In the New Credential window (shown in figure 12.5.7), specify the following values for ease of use:
Credential Name: Use the name of the storage container
Identity: Use the name of the storage account
Password: The access key for the storage account. You can find this value by selecting the
storage account in the Azure portal and, at the bottom of the screen, selecting Manage Access
Keys. Copy the primary access key, as shown in figure 12.5.8.
2. On the general page, select the URL option to create a backup to Azure storage, as shown in figure
12.5.10. When you select this option, you see other options enabled on this page:
a. File Name: Name of the backup file.
b. SQL Credential: You can either specify an existing SQL Server Credential, or create a new one by
clicking Create next to the SQL Credential box.
NOTE: The dialog that opens when you click Create requires a management certificate or the publishing
profile for the subscription. SQL Server currently supports publishing profile version 2.0. It is easier to simply
create the credential as documented in the previous step.
c. Azure storage container: The name of the Windows Azure storage container to store the backup files.
d. URL prefix: This is built automatically using the information specified in the fields described
in the previous steps. If you edit this value manually, make sure it matches the other
information you provided previously. For example, if you modify the storage URL, make sure the
SQL Credential is set to authenticate to the same storage account.
TIP: Backing up with the COMPRESSION option means lower storage usage and thus, lower costs to store
your SQL database in Azure. Notice the size difference with and without compression shown in figure 12.5.11.
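As a sketch of the T-SQL approach mentioned earlier, a compressed backup to a URL can be driven from PowerShell. The database, credential, storage account, and container names below are hypothetical:

```powershell
# Hypothetical names: database AdventureWorks, credential AzureBackupCred,
# storage account mystorageacct, container backups
$backupSql = @"
BACKUP DATABASE AdventureWorks
TO URL = 'https://mystorageacct.blob.core.windows.net/backups/AdventureWorks.bak'
WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;
"@
# Run the backup against the local SQL Server instance
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $backupSql
```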
Now that you have created a database backup, you can attempt to restore a backup from Azure storage.
2. When you select Devices in the General page of the Restore task in SQL Server Management Studio, this
takes you to the Select backup devices dialog box, which includes URL as a backup media type.
3. When you select URL and click Add, the Connect to Azure storage dialog opens. Specify the
SQL Credential information to authenticate to Azure storage, as shown in figure 12.5.13.
4. SQL Server then connects to Azure storage using the SQL Credential information you provided and opens
the Locate Backup File in Windows Azure dialog. The backup files residing in the storage are displayed
on this page. Select the file you want to use to restore and click OK.
This takes you back to the Select Backup Devices dialog. Clicking OK on this dialog takes you back to
the main Restore dialog (shown in figure 12.5.14), where you will be able to complete the restore.
5. You can use the Script menu to create a T-SQL restore script and save it to a file or the clipboard,
or open it in a new query editor window, as shown in figure 12.5.15. You can also select the Agent
Job option, which will bring your selections in the Restore Database wizard into the Schedule Job interface.
As you can see, backing up and recovering SQL databases from backups stored in Azure is a
straightforward process.
To enable this feature on a VM that is already deployed, you will have to use Azure PowerShell to
complete the configuration. A sample script is included below. Be sure to update the values in brackets
<> with values applicable to your environment.
$storageaccount = "<storageaccountname>"
$storageaccountkey = (Get-AzureStorageKey `
    -StorageAccountName $storageaccount).Primary
$storagecontext = New-AzureStorageContext `
    -StorageAccountName $storageaccount `
    -StorageAccountKey $storageaccountkey
$autobackupconfig = New-AzureVMSqlServerAutoBackupConfig `
    -StorageContext $storagecontext -Enable -RetentionPeriod 10
# Apply the configuration to the VM (names in brackets are placeholders)
Get-AzureVM -ServiceName "<vmservicename>" -Name "<vmname>" |
    Set-AzureVMSqlServerExtension -AutoBackupSettings $autobackupconfig |
    Update-AzureVM
It could take several minutes to install and configure the SQL Server IaaS Agent. View the status of the
VM in the Azure Management Portal. It should indicate that it is installing extensions. The extensions
area should also report that the Microsoft.SqlServer.Management.SqlIaaSAgent is being enabled. In
PowerShell, you can verify that the extension has been completely installed and configured by using the
following command:
(Get-AzureVM -ServiceName <vmservicename> | `
Get-AzureVMSqlServerExtension).AutoBackupSettings
To disable automatic backup, run the same script without the -Enable parameter to
New-AzureVMSqlServerAutoBackupConfig. As with installation, it can take several minutes to disable
Automated Backup.
Download the Code
You can download the full script from GitHub at https://github.com/insidemscloud/AzureIaasBook, in
the \Chapter 12 directory. The file name is SQLAutoBackup.ps1.
Veeam and StorSimple make data protection and archiving to Azure easy and efficient. To Veeam, StorSimple
looks like any other connected, on-premises data repository. However, StorSimple is more than just
storage; it automatically manages the movement of data to and from Azure for efficient availability.
Veeam Backup & Replication provides agentless image-level backup to help meet stringent RPOs and
RTOs while allowing more recovery options than you ever thought possible:
Recovery of a failed VM in as little as two minutes
Near-continuous data protection with built-in replication
Fast, agentless item recovery and e-discovery for Microsoft Exchange, SharePoint and Active
Directory, along with transaction-level recovery of SQL databases
Automatic recoverability testing of every backup and every replica, every time
Veeam and StorSimple work in unison to mitigate the cost and management of data growth while
providing secure backup and recovery. Veeam ensures that backups are initially stored on a traditional
primary storage for short-term recovery, and depending on your availability needs, Veeam archives
older versions of backups to StorSimple for long-term compliance. StorSimple will, in turn, ensure that
backups are moved into Azure via cloud snapshot.
WAN Accelerator (optional): WAN accelerators are optional components in the Veeam Cloud
Connect infrastructure. Tenants may use WAN accelerators for Backup Copy jobs targeted at the cloud
repository. WAN accelerators deployed in the cloud run the same services and perform the same role
as WAN accelerators in an on-premises backup infrastructure. When configuring Veeam Backup Copy
jobs, tenants can choose to exchange data over a direct channel or communicate with the cloud
repository via a pair of WAN accelerators. To pass VM data via WAN accelerators, the service provider
and tenants must each configure a WAN accelerator: the source WAN accelerator is located on the
tenant side (in the tenant data center), and the target WAN accelerator is configured on the SP side.
Tenant Veeam Backup Server: To connect to the cloud and use the cloud repository service
provided by the SP, tenants utilize Veeam backup servers deployed on their side.
Expanding the capacity of the single VM setup is easy to do leveraging the distributed model of Veeam
Backup & Replication, along with the rapid resource provisioning capabilities in Azure. The diagram
below illustrates a distributed model.
For systems integrators trying to build a successful business in the Microsoft cloud, Veeam Cloud
Connect offers another service that partners can provide to create a recurring revenue stream.
As a Veeam Cloud Connect Service Provider, you deploy from the Azure Marketplace, where you will find
the Veeam Cloud Connect VM offering. If your company is not yet enrolled in the Veeam Cloud Provider
Program, you can click the link provided in the Azure Marketplace offering and receive a 30-day trial.
You will find multiple resources on the Veeam website to familiarize you with the solution, including
the following whitepapers, which cover everything from reference architecture to comprehensive
hands-on deployment guidance.
Veeam Backup & Replication v8: Cloud Connect Reference Architecture
http://www.veeam.com/wp-cloud-connect-reference-architecture-veeam-backup-replication-v8.html
Veeam Cloud Connect: Manual configuration guide for Microsoft Azure
http://www.veeam.com/wp-build-services-business-veeam-microsoft-azure.html
Veeam Cloud Connect: Pre-configured VM deployment from the Microsoft Azure Marketplace
http://www.veeam.com/wp-build-services-business-veeam-microsoft-azure-marketplace.html
These resources and a 30-day trial license make getting up to speed a manageable task.
12.6.1.3 Veeam Cloud Connect for Enterprise
Veeam Cloud Connect™ for Enterprise, a new offering from Veeam, is a Cloud Connect option for
customers who would prefer to manage their own hybrid disaster recovery strategy. You will find the
Veeam Cloud Connect for Enterprise offering in the Azure Marketplace as well. The VM is basically the
same as the Service Provider edition; it is the license that enables the Enterprise edition.
12.7 Summary
Microsoft offers a number of workload protection options in Azure, enabling organizations to ensure their
data is protected in the event of a disaster, even if they don't have a second data center. With the Azure Backup
service, Microsoft has enabled a comprehensive, cloud-based offsite backup solution. Azure backup vaults
provide secure, encrypted backup targets for customers of all sizes. Whether an organization has more
complex backup and recovery configurations utilizing Data Protection Manager, or whether they have a
simpler, more basic need for offsite backup, Azure Backup enables offsite backup for everyone.
With Azure Site Recovery, Microsoft has enabled cloud-based disaster recovery orchestration. In what is
perhaps one of the simplest, most easy-to-use implementations of disaster recovery ever, Azure Site Recovery
enables effective disaster recovery for organizations of all sizes. With Azure Site Recovery, disaster recovery
plans can easily be tested, ensuring business continuity in the event of an actual disaster.
With multiple protection options for Microsoft SQL Server, organizations have greater flexibility in
planning and implementing high availability and disaster recovery strategies for one of the most
common (and often, most critical) workloads in a variety of scenarios.
For partners and customers looking to leverage the capabilities of Azure as part of a hybrid disaster
recovery strategy for heterogeneous environments, Veeam offers options to suit a variety of needs.
In the next chapter, we will examine monitoring and reporting options for systems and applications
running in Microsoft Azure.
Chapter 13:
Monitoring
and Reporting
This chapter will focus on two key components of cloud computing: monitoring and reporting. Monitoring
will help us make sure all services are online and have the configuration and compute resources needed for
optimal performance. In the cloud era, with quick scale-in and scale-out, monitoring can also deliver
information about capacity versus demand, which we can then use to automatically scale instances in and
out to reduce unnecessary spending on storage and compute resources. Because cost savings is often one
of the compelling benefits that drives cloud adoption, this deserves some attention as well.
Monitoring in this chapter will focus on monitoring of virtual machines (VMs) in Microsoft Azure and
connected storage accounts. In the reporting section of the chapter, we will focus on reporting of usage and
capacity of VMs running in Microsoft Azure. Data from monitoring tools are often the source of reporting
information. Therefore the chapter will begin with monitoring and conclude with reporting.
13.1 Monitoring
Monitoring of VMs running in Microsoft Azure can be accomplished in a couple of different ways.
Which method is best will be determined by your need for breadth and depth of monitoring data, as
well as your budget. Your options include:
Azure Management Portal: The Azure management portal can provide us with light monitoring.
System Center 2012 R2 Operations Manager: Provides deep monitoring inside of the VM.
Microsoft Operations Management Suite: Provides deep configuration monitoring, performance
and event analysis, making sure all settings inside of our VMs are according to best practices.
When discussing monitoring of VMs in Microsoft Azure, it is important to remember that everything
above the hardware layer is still your responsibility. Microsoft Azure is responsible for maintenance of
the fabric (patching Hyper-V hosts, etc.) and running the VM. Everything above the hypervisor (VMs and
applications installed on them) is up to you to manage and monitor.
Another important topic regarding management of VMs in Microsoft Azure is that all outgoing traffic
(data egress) from a Microsoft Azure datacenter is billed (if not connected with ExpressRoute). In some
scenarios, a more practical option is to deploy the monitoring solution, such as System Center 2012 R2
Operations Manager (Operations Manager), in Microsoft Azure and connect to the monitoring solution
with a console, instead of sending all monitoring data to the on-premises data center. This option
nearly eliminates charges related to data egress from your Azure subscription.
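The egress trade-off described above can be illustrated with a small back-of-the-envelope calculation. The sketch below compares streaming agent data on-premises against console-only traffic to a monitoring server hosted in Azure; all rates, free allowances and traffic volumes are hypothetical illustration values, not current Azure pricing.

```python
# Sketch: estimate monthly data-egress charges for agent traffic sent
# on-premises vs. console-only traffic to a monitoring server in Azure.
# The rate, free allowance and volumes are hypothetical illustration
# values, not actual Azure pricing.

def egress_cost_usd(gb_per_month, rate_per_gb=0.087, free_gb=5):
    """Billable egress = volume beyond the free allowance times the rate."""
    billable = max(0.0, gb_per_month - free_gb)
    return billable * rate_per_gb

# 200 monitored VMs each streaming ~250 MB of monitoring data per day
# back on-premises, versus ~2 GB/month of console traffic in total.
agent_traffic_gb = 200 * 0.25 * 30      # 1500 GB/month
console_traffic_gb = 2.0

print(f"agents on-prem : ${egress_cost_usd(agent_traffic_gb):,.2f}/month")
print(f"console only   : ${egress_cost_usd(console_traffic_gb):,.2f}/month")
```

Even with invented numbers, the shape of the result explains why keeping the monitoring server close to the monitored VMs nearly eliminates egress charges.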
Today, Operations Management Suite (OMS) cannot replace Operations Manager regarding real time
monitoring, but it might be a good alternative in the future. At present, OMS enhances the native
capabilities of Operations Manager.
The Microsoft support statement for System Center 2012 R2 is available in Microsoft server software
support for Microsoft Azure VMs at https://support.microsoft.com/en-us/kb/2721672.
Figure 13.1.2 shows all performance counters available for a VM by default, as well as the configuration
options of the time range and chart type.
In the Microsoft Azure management portal, you can configure a rule to send a notification when a
performance counter hits a threshold. Follow these steps to configure a notification rule based on a
performance counter:
1. Open the Microsoft Azure management portal.
2. Click Virtual Machine.
3. Click one of your VMs.
4. On the virtual machine dashboard, click the ALL SETTINGS link shown in figure 13.1.1.
5. On the Settings blade, click Alert rules.
Storage accounts are used by VMs to store virtual hard disks. If there is a performance issue with the
storage account, it may negatively impact VM performance. The storage account monitor dashboard,
shown in figure 13.1.4, provides a set of performance counters related to the storage account. To
monitor VM-related performance counters, you only need to enable monitoring of blobs. VMs read and
write to their virtual hard disks by using the GetBlob and PutPage REST API operations. To add alert
rules based on these metrics, just as you would with VMs, perform the following steps:
1. Browse to the Microsoft Azure management portal.
2. Find one of your storage accounts.
3. Click All settings.
4. On the Settings blade, click Alert rules.
5. On the Alert rules blade, click Add alert.
6. On the Add an alert rule blade, fill in the name and the condition for the alert rule. Click OK to save the new alert rule. Webhooks and e-mail notifications can be used in the same way as when monitoring VMs.
Verbose or Minimal monitoring, which one to use? Before monitoring blob performance, you need
to enable monitoring. Monitoring can be configured at two different levels: minimal (aggregate)
and verbose (per-API metrics). When using minimal, only the aggregated value is available for each
performance counter. This is the recommended setting for daily monitoring.
Verbose monitoring is recommended for troubleshooting and detailed analysis only. The log file is
stored on the storage account in a hidden folder named $logs. You can also use this data to trace
requests, analyze usage trends, and diagnose issues within the storage account. You can use storage
tools such as CloudBerry (http://www.cloudberrylab.com/) or CloudXplorer (http://clumsyleaf.com/
products/cloudxplorer) to access the log files. Figure 13.1.5 shows an example of a storage log file.
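The $logs entries are line-based and machine-readable, so simple scripts can also extract the fields you care about. The sketch below assumes the semicolon-delimited layout of Storage Analytics version 1.0 logs, where the first seven fields are version, request start time, operation type, request status, HTTP status code, end-to-end latency (ms) and server latency (ms); verify this against the official log format documentation before relying on it.

```python
# Sketch: pull a few fields out of a $logs entry. Assumes the
# semicolon-delimited Storage Analytics v1.0 layout described above;
# the sample line is invented for illustration.

def parse_log_line(line):
    fields = line.rstrip("\n").split(";")
    return {
        "version": fields[0],
        "start_time": fields[1],
        "operation": fields[2],
        "status": fields[3],
        "http_status": int(fields[4]),
        "e2e_latency_ms": int(fields[5]),
        "server_latency_ms": int(fields[6]),
    }

sample = "1.0;2015-08-19T22:00:01.1234567Z;GetBlob;Success;200;21;18;authenticated"
entry = parse_log_line(sample)
print(entry["operation"], entry["e2e_latency_ms"])   # GetBlob 21
```

A loop over every file under $logs with this parser is enough to answer questions such as which operations dominate a storage account or which requests show high latency.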
It is possible to create notification rules for storage accounts the same way we did with VMs. An
interesting performance counter to monitor is the E2E Latency counter. E2E Latency is the average
end-to-end latency of successful requests made to the storage account. This value includes the required
processing time within Windows Azure Storage to read the request, send the response, and receive
acknowledgement of the response. To add a rule to monitor E2E latency, follow these steps:
1. Browse to the Microsoft Azure management portal.
2. Find the storage account.
3. On the storage account, click All settings.
4. On the Settings blade, click Alert rules.
5. On the Alert rules blade, click Add alert.
6. On the Add an alert rule blade, fill in a name and select the AverageE2ELatency metric.
7. On the Add an alert rule blade, fill in the condition, for example greater than 25 ms over 15 minutes.
8. Save your changes. The alert rule is now configured and will generate an e-mail if the threshold
configured in the rule is breached.
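Conceptually, an alert condition like the one in step 7 averages the metric samples inside a trailing time window and compares the result against the threshold. A minimal local illustration (the sample data and function names are invented, not the Azure implementation):

```python
# Sketch: evaluate a condition like "AverageE2ELatency greater than
# 25 ms over 15 minutes". Samples are (minute, latency_ms) pairs;
# the data below is invented for illustration.

def breaches_threshold(samples, threshold_ms=25, window_min=15, now=60):
    """True when the mean latency inside the trailing window exceeds the threshold."""
    window = [v for t, v in samples if now - window_min <= t <= now]
    if not window:
        return False
    return sum(window) / len(window) > threshold_ms

samples = [(46, 20), (50, 28), (55, 31), (60, 27)]   # mean = 26.5 ms
print(breaches_threshold(samples))                    # True
```

Averaging over a window rather than alerting on single samples is what keeps a brief latency spike from paging anyone at 3 a.m.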
You can configure webhooks and e-mail notifications in the same manner as with VMs.
5. Select Yes in the Online Catalog Connection dialog box, as we want Operations Manager to
download any required management packs that are missing.
6. In the Select Management Packs to import dialog box, navigate to the temporary folder where you
extracted the management pack files. By default, the folder for the extracted management pack
files is C:\Program Files (x86)\System Center Management Packs\System Center Management Pack for
Windows Azure.
7. Select both management pack files, Microsoft.SystemCenter.WindowsAzure.mpb and Microsoft.
SystemCenter.WindowsAzure.SLA.mpb, and click Open.
8. In the Import Management Packs dialog box, click Install. Once both management packs are
imported, click Close.
9. To configure the management pack to work with your subscription, navigate to the Windows Azure
node in the Administration workspace.
10. In the Windows Azure Overview view, click Add subscription.
11. In the Add Windows Azure subscription wizard, input your subscription ID. This can be found in the
Microsoft Azure management portal, on the Settings page. Also specify the certificate file (in PFX
format) to use, as well as the password for the certificate file. In the Microsoft Azure management
portal you need to upload the certificate in CER format. Click Next.
12. In the Add Windows Azure subscription wizard, select a resource pool to use for the monitoring of
Azure resources. Click Add Subscription.
Note: Since you are monitoring public cloud resources, the resource pool must have Internet access.
13. In the Add Windows Azure subscription wizard, verify that the subscription has been successfully added
and then click Finish. Figure 13.1.6 shows the wizard with a successfully added Microsoft Azure subscription.
Before we configure monitoring, we need to verify that Operations Manager has discovered our
resources in Microsoft Azure. In the Operations Manager console, navigate to the Monitoring
workspace and expand the Windows Azure folder. In the Windows Azure folder, there is a sub-folder
named Azure Resource Inventory. This folder contains a number of views that will list all discovered
resources, such as VMs. These objects will have no health state, as shown in figure 13.1.7, as we are not
monitoring them yet. Note that the discovery can take up to an hour.
13. In the Add Monitoring Wizard, Summary, verify all settings and click Create.
You have now configured Operations Manager to monitor VMs and storage in Azure. If you navigate to
the Virtual Machine State, under the Windows Azure folder in the Monitoring workspace, you will soon
see a health state on each VM, as shown in figure 13.1.10.
The Microsoft Azure management pack contains a large number of views out of the box. These views can be
used when monitoring Microsoft Azure resources. The management pack includes a number of performance
views, one of which is shown in figure 13.1.11, that can be used for both proactive and reactive work.
Operations Manager is now monitoring your VMs and storage from the outside, from the Microsoft
Azure fabric perspective. The current monitoring we have configured for Azure resources is more or less
at the same level as monitoring a VM from the Hyper-V host perspective.
To monitor deeper performance of the VM and applications, such as disk monitoring and application
specifics, you need to install an agent on each VM, in the same way you install agents on VMs and
physical servers in your datacenter. For example, for an Azure VM running SQL Server, you will want to
install the Operations Manager agent on the VM, and import the Windows Server management pack
and the SQL Server management pack to monitor the VM operating system and the SQL application.
OMS includes Solutions that can collect and analyze data related to the following:
Active Directory Assessment Assesses configuration state (based on MS best practices)
and health of Active Directory.
Malware Assessment Status of antivirus and antimalware. The current version of the solution supports
collecting the status of Windows Defender and System Center Endpoint Protection (SCEP) real-time clients.
Backup View usage in Azure Backup vault, including total storage usage and number of executed jobs.
Capacity Planning Capacity planning and visibility into your private cloud. You can use this
solution to test what-if scenarios and identify over- or under-allocated virtual machines. This
solution can also be useful when planning compute and storage for your private cloud.
Note: This solution requires Operations Manager and Virtual Machine Manager in an integrated
configuration to provide the necessary performance data to feed the Capacity Planning solution.
Security and Audit Explores security-related data and helps find security risks. This solution also
collects and analyzes security logs. With this solution you can track activities in, for example,
Active Directory, such as failed logons or users added to groups.
SQL Assessment Assesses configuration state (based on MS best practices) and health of
SQL Server instances. With this solution you can, for example, see whether a reconfiguration is
recommended on your SQL servers, including an explanation of why the setting is recommended.
Wire Data This solution collects data about your network, such as networks and subnets. It also
collects network traffic from monitored servers. In the OMS portal you can then analyze the traffic on
your networks, such as the amount of data sent by a server and which protocols the server is using to
communicate. This solution can help you identify servers communicating in unexpected ways.
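The kind of aggregation Wire Data performs can be illustrated locally: group captured records by server, sum the bytes sent, and list the protocols each server uses. The records below are invented sample data, not the actual OMS schema.

```python
# Sketch: Wire-Data-style aggregation over invented traffic records:
# total bytes sent per server and the set of protocols it uses.
from collections import defaultdict

records = [
    {"server": "web01", "protocol": "HTTPS", "bytes": 52_000},
    {"server": "web01", "protocol": "SMB",   "bytes": 4_100},
    {"server": "sql01", "protocol": "TDS",   "bytes": 98_000},
]

bytes_sent = defaultdict(int)
protocols = defaultdict(set)
for r in records:
    bytes_sent[r["server"]] += r["bytes"]
    protocols[r["server"]].add(r["protocol"])

for server in sorted(bytes_sent):
    print(server, bytes_sent[server], sorted(protocols[server]))
```

A web server suddenly speaking SMB, as in this sample, is exactly the kind of unexpected communication pattern the solution is meant to surface.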
Alert Management Presents a summary of Operations Manager alert data and analyzes your Operations
Manager environment. With this solution, you can gain insights into trends in your Operations Manager
environment, such as the most common alerts or which management packs generate the most alerts.
Note: This solution requires a connection to an Operations Manager management group to function.
Automation Review and monitor an Azure Automation account. From the OMS dashboard, you
can see the number of runbook jobs executed.
Change Tracking The change tracking solution keeps track of changes made by Windows
Installer, such as whether an MSI package has been installed or uninstalled. The solution also keeps
track of changes to Windows Services.
System Update Assessment Identifies missing updates and servers not recently updated.
Azure Site Recovery Monitors replication status for Hyper-V and VMware VMs replicating to an Azure
Site Recovery vault. The current versions of the Azure Site Recovery and Azure Backup solutions
support monitoring only one backup vault and one site recovery vault.
Deficiencies and other error conditions identified in OMS can then be fed into Operations Manager
where alerts are generated, as shown in figure 13.1.3.2.
As you can see, there are many monitoring options for Azure VMs and related infrastructure, which can
be used together to meet a variety of monitoring needs.
13.2 Reporting
This section of the chapter will focus on reports, primarily usage reports. Cost is often a big business
driver for moving to Microsoft Azure, which makes Microsoft Azure billing and usage reports highly
important. When talking about reporting for VMs running in Microsoft Azure, there are two different
native alternatives. Which is best depends on how the organization is using Microsoft Azure. We will
discuss reporting in the Microsoft Azure management portal, as well as reporting with System Center
2012 R2 Service Manager (Service Manager).
Service Manager is the IT Service Management (ITSM) component and self-service user interface
described in Chapter 10, Automation. It also includes a reporting feature that can be
leveraged to provide basic showback reporting for Azure consumption.
Microsoft Azure will generate an invoice with all the compute hours that have been used for the
invoice time frame, such as last month. Figure 13.2.1 shows an invoice overview of all resources used
and the cost. The invoice shown in figure 13.2.1 includes a pre-paid amount of dollars to spend each
month. The invoice will also specify storage, network and data operations, such as read and write
transactions to the storage account. In the Azure management portal you can download usage details
in CSV file format. The format of these files is very raw, as shown in figure 13.2.2, and the data often
needs to be manipulated before being published.
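A few lines of scripting are usually enough to turn the raw CSV into something publishable. The sketch below sums usage per service; the column names ("Service", "Quantity") are placeholders, so check the header row of the CSV your subscription actually produces before adapting it.

```python
# Sketch: summarize an Azure usage-details CSV by service. The column
# names and sample rows are placeholders, not the real export schema.
import csv
import io
from collections import defaultdict

raw = io.StringIO(
    "Service,Quantity\n"
    "Virtual Machines,744\n"
    "Storage,120\n"
    "Virtual Machines,372\n"
)

totals = defaultdict(float)
for row in csv.DictReader(raw):
    totals[row["Service"]] += float(row["Quantity"])

for service, qty in sorted(totals.items()):
    print(f"{service}: {qty:g}")
```

In practice you would read the downloaded file with `open()` instead of the in-memory sample, and group by whatever cost-center or subscription columns your export contains.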
With System Center components in place, the flow of a self-service request would be:
1. A request is placed in the System Center Service Manager (Service Manager) self-service portal.
2. In Service Manager, the request generates a service request work item.
3. When the service request is approved, Service Manager invokes the automation platform, for
example Azure Automation.
4. Azure Automation builds the new VM in Microsoft Azure.
5. Azure Automation creates a new configuration item (CI) in the Service Manager CMDB.
6. The work item (the service request) is marked as completed in Service Manager.
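The six-step flow above can be sketched as a simple state machine, with one state per stage of the request. The state names below are illustrative, not the actual Service Manager or Azure Automation object model.

```python
# Sketch: the self-service request flow as a linear state machine.
# State names are invented for illustration.

FLOW = ["Submitted", "ServiceRequestCreated", "Approved",
        "RunbookInvoked", "VMProvisioned", "CIRecorded", "Completed"]

def advance(state):
    """Move a request to the next stage of the flow."""
    i = FLOW.index(state)
    return FLOW[min(i + 1, len(FLOW) - 1)]

state = "Submitted"
while state != "Completed":
    state = advance(state)
print(state)   # Completed
```

Modeling the flow this way makes the reporting point concrete: the CI is only recorded once the request reaches the CIRecorded stage, so every completed request is guaranteed to leave a record in the CMDB.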
The result will be a new VM in Microsoft Azure, added by Azure Automation to meet corporate IT policy
and compliance requirements, and a new CI is created in the CMDB for tracking the new VM. The CI in
the Service Manager CMDB is the key to reporting. With this CI, we can use Service Manager's powerful
reporting mechanism to generate reports. Figure 13.2.3 shows a custom report in Service Manager
that shows all VMs in Microsoft Azure grouped by different Microsoft Azure subscriptions. Figure 13.2.4
shows the same report with detailed information for the Infrastructure subscription. Even if the usage
report and billing information is sent from Microsoft to the different cost centers directly, there is value
for the IT organization in keeping track of VMs running in Microsoft Azure.
In the report shown in figures 13.2.3 and 13.2.4, we have an Expire Date column implemented as part
of the automated self-service configuration. This property is used for lifecycle management of the VMs,
as well as to forecast costs. When a tenant requests a VM, they must supply an expiration date. Before
the expiration date, the automation platform can be configured with a runbook to send the requestor/
owner of the VM an e-mail asking whether they want to extend the expiration date for the VM. If the VM
expires, the automation platform can then be configured to automate the deletion of the VM.
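The lifecycle decision described above boils down to a date comparison per VM. A minimal sketch, where the warning window and the sample dates are invented and the real implementation would be a scheduled Azure Automation runbook:

```python
# Sketch: decide what the lifecycle runbook should do for one VM,
# based on its Expire Date property. The 14-day warning window is an
# illustrative assumption.
from datetime import date, timedelta

def lifecycle_action(expire_date, today, warn_days=14):
    """Return the action for one VM: delete, notify-owner, or none."""
    if today > expire_date:
        return "delete"
    if expire_date - today <= timedelta(days=warn_days):
        return "notify-owner"
    return "none"

print(lifecycle_action(date(2016, 3, 1), date(2016, 2, 20)))  # notify-owner
print(lifecycle_action(date(2016, 3, 1), date(2016, 3, 5)))   # delete
```

Running this decision daily against the CMDB gives both the e-mail warnings and the cleanup behavior the chapter describes, and the same expiration dates feed directly into cost forecasts.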
How do we generate a performance report for a server running as a VM in Microsoft Azure? We do that the
same way as in our on-premises datacenter, with Operations Manager and OMS. As discussed earlier, Microsoft
Azure provides a VM, but it is up to you to manage it. If you install the Operations Manager agent on the VM
you can use all the different management packs in Operations Manager to monitor the VM, its applications
and generate reports. If you install the OMS agent on the VM, you can use solutions in OMS to provide light
monitoring of the VM. OMS can collect performance counters and display them in the OMS portal.
Availability reports can also be generated with Operations Manager. You can use VMs in Microsoft
Azure as monitoring gateways to check availability of your services. A benefit of using VMs running
in Microsoft Azure is that you can deploy VMs in different datacenters to run transactional availability
checks of your services from different parts of the world.
13.3 Summary
In this chapter we started by looking into different alternatives for monitoring of VMs running in
Microsoft Azure. We started with light monitoring in the Microsoft Azure management portal. We then
moved on to Operations Manager, which is part of Microsoft System Center and brings a great deal of
functionality for deep service monitoring. We also discussed the Operations Management Suite and
how it enhances native Operations Manager capabilities. Then we discussed reporting options for
Azure and how to generate reports for VMs and usage data. At the end of the chapter we touched on
performance and availability reports.
When working with reporting and monitoring of VMs running in Microsoft Azure, it is important
to remember that Microsoft Azure is responsible only for running the VMs. We still need to provide
system management in the same manner as we do with VMs running in our on-premises datacenter.
John McCabe works for Microsoft as a Senior Premier Field Engineer. In this
role he has worked with the largest customers around the world, supporting
and implementing cutting edge solutions on Microsoft Technologies.
In addition to his role at Microsoft, he is responsible for developing core training
for the Global Business Support Engineering Teams. John has been a
contributing author to several books including Mastering Windows Server
2012 R2 from Sybex. John has spoken at many conferences around Europe,
including delivering keynotes. Prior to Microsoft, John was an MVP in Unified
Communications with a consulting background of 15 years across many
different technologies including Network, Security and Architecture.